Scientists have long assumed that the DNA polymerases on the leading and lagging strands somehow coordinate with each other throughout the replication process, so that one does not get ahead of the other during the unravelling process and cause mutations.
But this new footage reveals that there’s no coordination at play here at all – somehow, each strand acts independently of the other, and still results in a perfect match each time.
(“DNA Replication Has Been Filmed For The First Time, And It’s Not What We Expected,” Science Alert)
- It’s a good thing that the Supreme Court’s conservative wing is opposed to judges making law, because if they added a new term like “bona fide relationship” to immigration law, it would be hugely confusing. A bona fide crisis for opponents of “judicial activism.”
- If you have an AT&T email account, Verizon is going to break your Flickr account.
- “Google Will No Longer Scan Gmail for Ad Targeting.” Does that mean the incremental ad revenue from learning more about people is not worth the effort of discussing privacy?
When I saw that Wired had created a list, “20 People Who Are Creating the Future,” I didn’t expect to see anyone in security on it.
I was proven wrong in a wonderful way — #1 on their list is Parisa Tabriz, under the headline “Put Humans First, Code Second.” A great choice, a well-deserved honor for Parisa, and a bit of a rebuke to those who want to focus on code vulnerabilities, and say “you can’t patch human stupidity.”
I was recently in a meeting at a client site where there was a lot of eye rolling going on. The eye rolling stemmed from not understanding the ground rules, and one of those ground rules is that security rarely flows downhill.
That is, if you’re in a stack like the one to the right, the application is vulnerable to the components underneath it. The components underneath should isolate themselves from (protect themselves against) things higher in the stack.
I can’t talk about the meeting in detail (client confidentiality), so I’m going to pick an article I saw that displays some of the same thinking. I’m using that article as a bit of a straw man; it was convenient, not unique. These points are more strongly made if they are grounded in real quotes, rather than ones I make up.
“The lack of key validation (i.e. the verification that public keys are not invalid) is therefore not a major security risk. But I believe that validating keys would make Signal even more secure and robust against maliciously or accidentally invalid keys,” the researchers explained.
In this farfetched example, the researchers explain, communications would be intentionally compromised by the sender. The goal could be to give the message recipient the appearance of secure communications, in the hope that they’d be comfortable sharing something they might not otherwise.
“People could also intentionally install malware on their own device, intentionally backdoor their own random number generator, intentionally publish their own private keys, or intentionally broadcast their own communication over a public loudspeaker. If someone intentionally wants to compromise their own communication, that’s not a vulnerability,” Marlinspike said. (I’m choosing to not link to the article, because, I don’t mean to call out the people making that argument.)
So here’s the rule: Security doesn’t flow downhill without extreme effort. If you are an app, it is hard to protect the device as a whole. It is hard to protect yourself if the user decides to compromise their device or mess up their RNG. And Moxie is right not to try to improve the security of Android or iOS against these attacks: it’s very difficult to do from where he sits. Security rarely flows downhill.
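The key validation the researchers ask for does have a concrete, minimal form for a curve like X25519: reject the all-zero shared secret that low-order (“invalid”) public keys produce. Here is a sketch of that check; it is my own illustration of the idea, not Signal’s code:

```python
# Sketch: reject an X25519 exchange whose shared secret is all zeros,
# which is what low-order ("invalid") peer public keys produce.
# Illustration of the key-validation idea only, not Signal's code.

def shared_secret_is_valid(shared_secret: bytes) -> bool:
    """Return False for the all-zero 32-byte shared secret produced by
    low-order X25519 public keys (and for malformed lengths)."""
    if len(shared_secret) != 32:
        return False
    # OR all bytes together; a production implementation would use a
    # vetted constant-time comparison instead.
    acc = 0
    for b in shared_secret:
        acc |= b
    return acc != 0

# A low-order peer key drives the shared secret to all zeros:
assert shared_secret_is_valid(b"\x00" * 32) is False
assert shared_secret_is_valid(b"\x01" + b"\x00" * 31) is True
```

As the quote above notes, this is cheap robustness rather than a fix for a major risk: it protects against malicious or accidental invalid keys, not against a sender who wants to compromise their own communication.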
There are exceptions. Companies like Good Technologies built complex crypto to protect corporate data on devices that might be compromised. As best I understand it, it worked by having a server send a set of keys to the device, and the outer layer of Good decrypted the real app and data with those keys, then got more keys. And they had some anti-debugging lawyers in there (oops, is that a typo?) so that the OS couldn’t easily steal the keys. And it was about the best you could do with the technology that phone providers were shipping. It is, netted out, a whole lot more fair than employers demanding the ability to wipe your personal device and your personal data.
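The staged key-release design described above can be sketched in miniature. This is my toy reconstruction of the pattern, not Good’s actual protocol, and the XOR “cipher” here is a stand-in for real authenticated encryption, never something to use in practice:

```python
# Toy sketch of staged key release: the server releases an outer key,
# the client unwraps one layer, and that layer yields the key material
# for the next. SHA-256-keystream XOR stands in for real authenticated
# encryption -- do NOT use this construction for anything real.
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: XOR data with a SHA-256-derived keystream."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Build two nested layers, innermost first.
inner_key = b"key-2"
secret_data = b"corporate mail database"
inner_blob = keystream_xor(inner_key, secret_data)

outer_key = b"key-1"                              # released by the server first
outer_blob = keystream_xor(outer_key, inner_key)  # outer layer wraps the next key

# Client side: unwrap the outer layer to learn the inner key, then the data.
recovered_inner_key = keystream_xor(outer_key, outer_blob)
assert keystream_xor(recovered_inner_key, inner_blob) == secret_data
```

The point of the staging is that a compromised OS never sees all the keys at once; each layer only exposes what is needed for the next step.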
So back to that meeting. A security advisor from a central security group was trying to convince a product team of something very much like “the app should protect itself from the OS.” He wasn’t winning, and he was trotting out arguments like “it’s not going to cost [you] anything.” But that was obviously not the case. The cost of anything is the foregone alternative, and these “stone soup improvements” to security (to borrow from Mordaxus) were going to come at the expense of other features. Even if there was agreement on what direction to go, it was going to take another few meetings to get these changes designed, and then it was going to cost a non-negligible number of programmer days to implement, and more to test and document.
That’s not the most important cost. Far more important than the cost of implementing the feature was the effort to get to agreement on these new features versus others.
But even that is not the most important cost. The real price was respect for the central security organization. Hearing those arguments made it that much less likely that those engineers were going to see the “security advisor,” or their team, as helpful.
As security engineers, we have to pick battles and priorities just like other engineers. We have to find the improvements that make the most sense for their cost, and we have to work those through the various challenges.
One of the complexities of consulting is that it can be trickier to interrupt in a large meeting, and you have less standing to speak for an organization. I’d love your advice on what a consultant should do when they watch someone at a client site provoking this sort of skepticism. Should I have stepped in at the time? How? (I did talk with a more senior person on the team who is working the issue.)
The Edge is an interesting site with in-depth interviews with smart folks. There’s a long interview with Ross Anderson, published recently.
It’s a big retrospective on the changes over thirty years, and there’s enough interesting bits that I’ll only quote one:
The next thing that’s happened is that over the past ten years or so, we’ve begun to realize that as systems became tougher and more difficult to penetrate technically, the bad guys have been turning to the users. The people who use systems tend to have relatively little say in them because they are a dispersed interest. And in the case of modern systems funded by advertising, they’re not even the customer, they’re the product.
Take the time to listen. Ross’s emphasis is a bit lost in the text.
Access to an account is access to an account. A lot of systems talk about “backup” authentication, but make that backup authentication available at all times. This has led to all sorts of problems, because the idea that the street you grew up on is a secret didn’t make sense even before Yahoo! “invalidated” it. Not to mention that even when answers to these questions are freeform, they tend to have only a few bits of entropy. Colors? First names? All have distributions. Then there are the ones who insist they already know your answers.
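The “few bits of entropy” point can be made concrete by computing the Shannon entropy of a skewed answer distribution. The favorite-color frequencies below are invented for illustration, not survey data:

```python
# How few bits of entropy does a "secret question" answer carry?
# Compute Shannon entropy of a skewed answer distribution and compare
# it with a random password. Frequencies are hypothetical.
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete answer distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical favorite-color answer frequencies (invented numbers):
favorite_color = [0.35, 0.25, 0.15, 0.10, 0.08, 0.07]
color_bits = shannon_entropy(favorite_color)   # roughly 2.3 bits

# Compare with a random 8-character lowercase password:
password_bits = 8 * math.log2(26)              # roughly 37.6 bits
```

A question whose answers carry a couple of bits of entropy is guessable in a handful of tries, which is why these schemes fail even when nobody has looked up where you grew up.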
One of the people who’s focused on really improving account recovery is Brad Hill, and at F8, Facebook announced some new tech which I think is a very useful new point in the design space.
As developers, we talk a lot about building experiences that people love. But there’s one experience that never fails to elicit a groan from people everywhere: recovering an account after forgetting your password.
Delegated Account Recovery helps people and businesses recover their accounts using the services that they trust. It is an open protocol that gives companies the ability to provide better and more secure options to their customers for regaining access to their accounts. Facebook — and other providers in the future — can help people verify who they are when they forget their password, lose their two-factor codes, or don’t want to answer security questions based on personal information. (“Delegated Account Recovery Now Available in Beta.”)
It’s worth checking out.
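For intuition, here is a heavily simplified sketch of the delegated-recovery idea. This is not the actual Delegated Account Recovery protocol (which defines its own token formats and signatures); it just illustrates the shape of the trust relationship, with HMACs standing in for the real cryptography:

```python
# Simplified illustration of delegated recovery (NOT the real protocol):
# the account provider saves an opaque, integrity-protected token with a
# recovery provider the user trusts, and later honors that token only if
# the recovery provider countersigns it.
import hashlib
import hmac
import os

ACCOUNT_KEY = os.urandom(32)   # held by the account provider
RECOVERY_KEY = os.urandom(32)  # held by the recovery provider

def issue_token(user_id: bytes) -> bytes:
    """Account provider mints a token; the recovery provider just stores it."""
    mac = hmac.new(ACCOUNT_KEY, user_id, hashlib.sha256).digest()
    return user_id + b"." + mac

def countersign(token: bytes) -> bytes:
    """Recovery provider attests: 'I authenticated this user and am
    returning their stored token.'"""
    return hmac.new(RECOVERY_KEY, token, hashlib.sha256).digest()

def recover(token: bytes, countersig: bytes) -> bool:
    """Account provider checks both its own token and the countersignature."""
    user_id, _, mac = token.partition(b".")
    ok_token = hmac.compare_digest(
        mac, hmac.new(ACCOUNT_KEY, user_id, hashlib.sha256).digest())
    ok_sig = hmac.compare_digest(
        countersig, hmac.new(RECOVERY_KEY, token, hashlib.sha256).digest())
    return ok_token and ok_sig

t = issue_token(b"alice")
assert recover(t, countersign(t))
assert not recover(t, b"\x00" * 32)
```

The appeal of the design is that neither party learns anything new about the user: the token is opaque to the recovery provider, and the account provider only learns that someone the user already trusts vouched for them.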
And not that I’m trying to make trouble for anyone, but at what point does relying on use of a “secret” question like “street you grew up on” become the sort of unfair trade practice that garners regulatory attention? My guess is that the availability of credible alternatives brings that day closer.
When I started blogging a dozen years ago, the world was different. Over time, I ended up with at least two main blogs (Emergent Chaos and New School), and guest posting at Dark Reading, IANS, various Microsoft blogs, and other places.
I decided it’s time to bring all that under a single masthead, and hey, get TLS finally. I’ve imported the EmergentChaos and New School archives, but not the others. For those others, I’ll post a link here as I post there.
If you subscribe to either or both, I suggest subscribing here; I’ll post reminders to those other blogs to move as well. If you maintain a link to either of the old blogs, please update it to point here.
I’m sure I’ve broken things in the imports, please let me know what they are.
In the near future, I’ll set up redirects from the old blogs to here.
So I’m curious: on what basis is the President of the United States able to issue orders to attack the armed forces of Syria?
It is not on the basis of the 2001 “Authorization for Use of Military Force,” cited in many instances, because there has been no claim that Syria was involved in the 9/11 attacks. (Bush and then Obama both stretched this basis incredibly, and worryingly, far. But both took care to trace back to an authorization.)
It is not on the basis of an emergency use of force because the United States was directly threatened.
Which leaves us with, as the NY Times reports:
Mr. Trump authorized the strike with no congressional approval for the use of force, an assertion of presidential authority that contrasts sharply with the protracted deliberations over the use of force by his predecessor, Barack Obama. (“Dozens of U.S. Missiles Hit Air Base in Syria.”)
Or, as Donald Trump himself tweeted in 2013: “The President must get Congressional approval before attacking Syria-big mistake if he does not!”
Seriously, what is the legal basis of this order?
Have we really arrived at a point where the President of the United States can simply order the military to strike anywhere, anytime, at his personal discretion?
This video is really amazingly inspiring:
Not only does it show more satellites than I’ve ever seen in a single frame of video, but the rocket that took them up was launched by the Indian Space Research Organisation, which not only launched the largest satellite constellation ever, but had room for a few more birds besides. It’s an impressive achievement, and it (visually) crystallizes a shift in how we approach space. Also, congratulations to the team at Planet, who now have the ability to image all of Earth’s landmass every day.
Launching a micro satellite into low Earth orbit is now accessible to hobbyists. Many readers of this blog could do it. That’s astounding. Stop and think about that for a moment. Our failure to have exciting follow-on missions after Apollo can obscure the fascinating things which are happening in space, as it gets cheap and almost boring to get to low Earth orbit. The Economist has a good summary. That’s not to say that there aren’t things happening further out. This is the year that contestants in the Google Lunar XPrize competition must launch. Two tourists have paid a deposit to fly around the moon.
But what’s happening close to the planet is where the economic changes will be most visible soon. That’s not to say it’s the only thing to watch, but the same engines will enable more complex and daring missions.
For more on what’s happening in India around space exploration and commercialization, this is a fascinating interview with Susmita Mohanty.
After the February, 2017 S3 incident, Amazon posted this:
We are making several changes as a result of this operational event. While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level. This will prevent an incorrect input from triggering a similar event in the future. We are also auditing our other operational tools to ensure we have similar safety checks. We will also make changes to improve the recovery time of key S3 subsystems. (“Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region“)
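The safeguard Amazon describes (removing capacity slowly, and never below a subsystem’s minimum) is easy to sketch. The function and numbers below are illustrative, not Amazon’s tooling:

```python
# Sketch of the guard Amazon describes: clamp a capacity-removal request
# so it is applied slowly and can never take a subsystem below its
# minimum required capacity. Names and numbers are illustrative.

def safe_removal(current: int, requested: int, minimum: int,
                 max_batch: int = 2) -> int:
    """Return how many units may actually be removed right now:
    never more than max_batch at once (remove capacity slowly), and
    never so many that current capacity drops below minimum."""
    if requested <= 0:
        return 0
    allowed = min(requested, max_batch, current - minimum)
    return max(allowed, 0)

# An operator typo asking for far too much gets clamped:
assert safe_removal(current=10, requested=9, minimum=6) == 2
# Already at the floor: nothing comes out.
assert safe_removal(current=6, requested=1, minimum=6) == 0
```

The interesting design point is that the guard lives in the tool, not in the runbook: the incorrect input still happens, but it can no longer trigger the outage.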
How often do you see public lessons like this in security?
“We have modified our email clients to not display URLs which have friendly text that differs meaningfully from the underlying anchor. Additionally, we re-write URLs, and route them through our gateway unless they meet certain criteria…”
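The hypothetical mail-client change quoted above might look something like this heuristic sketch (my illustration; real clients would need far more care around Unicode, subdomains, and redirectors):

```python
# Heuristic sketch: flag a link whose human-friendly display text names
# a different host than the actual anchor target. Illustration only.
from urllib.parse import urlparse

def looks_deceptive(display_text: str, href: str) -> bool:
    """Return True when display text looks like a URL for a host that
    differs from the link's real destination."""
    if "." not in display_text or " " in display_text.strip():
        return False  # plain words like "Click here" aren't URL-like
    shown = urlparse(display_text if "://" in display_text
                     else "http://" + display_text).hostname
    actual = urlparse(href).hostname
    if shown is None or actual is None:
        return False
    return shown.lower() != actual.lower()

assert looks_deceptive("www.paypal.com", "http://evil.example/login")
assert not looks_deceptive("example.com", "https://example.com/account")
```

Publishing a lesson like that, in the style of the S3 post-mortem, would let the rest of us learn from the incident rather than just gossip about it.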
…if a valve failed 75% of the time, would you get angry with the valve and simply continue to replace it? No, you might reconsider the design specs. You would try to figure out why the valve failed and solve the root cause of the problem. Maybe it is underspecified, maybe there shouldn’t be a valve there, maybe some change needs to be made in the systems that feed into the valve. Whatever the cause, you would find it and fix it. The same philosophy must apply to people.
(Thanks to Steve Bellovin for reminding me of the Norman essay recently.)