DNA Replicates, Filmed at 11.

Scientists have long assumed that the DNA polymerases on the leading and lagging strands somehow coordinate with each other throughout the replication process, so that one does not get ahead of the other during the unravelling process and cause mutations.

But this new footage reveals that there’s no coordination at play here at all – somehow, each strand acts independently of the other, and still results in a perfect match each time.
(“DNA Replication Has Been Filmed For The First Time, And It’s Not What We Expected,” Science Alert)

Paper: Independent and Stochastic Action of DNA Polymerases in the Replisome.

Links of Interest

  • It’s a good thing that the Supreme Court’s conservative wing is opposed to judges making law, because if they added a new term like “bona fide relationship” to immigration law, it would be hugely confusing. A bona fide crisis for opponents of “judicial activism.”
  • If you have an AT&T email account, Verizon is going to break your Flickr account.
  • Google Will No Longer Scan Gmail for Ad Targeting. Does that mean that the incremental ad revenue from learning more about people is not worth the effort to discuss privacy?

IoT Security Workshop (Seattle, August)

Jean Camp and Yoshi Kohno are organizing an interesting upcoming workshop at the University of Washington on “Best Practices In The IoT:”

Our agenda begins with a presentation on the Federal Government initiatives in the IoT. When collecting the reading materials for emerging standards, we found nearly a thousand pages once all governmental materials are brought together… The product of the workshop will be a summary document identifying (i) a consensus set of graduated best practices for security and privacy for IoT in the home, and (ii) any gaps where best practices cannot yet be identified.

(I believe that the workshop organizers might agree with me regarding the term “best practices,” but are driven by funders to use it.)

Also, they are searching for a few more sponsors if you can help in that department.

Bicycling and Threat Modeling

Bikeshare

The Economist reports on the rise of dockless bike sharing systems in China, along with the low tech ways that the system is getting hacked:

The dockless system is prone to abuse. Some riders hide the bikes in or near their homes to prevent others from using them. Another trick involves photographing a bike’s QR code and then scratching it off to stop others from scanning it. With the stored image, the rider can then monopolise the machine. But customers caught misbehaving can have points deducted from their accounts, making it more expensive for them to rent the bikes.

Gosh, you mean you give people access to expensive stuff and they ride off into the sunset?

Threat modeling is an umbrella for a set of practices that let an organization find these sorts of attacks early, while you have the greatest flexibility in choosing your response. There are lots of characteristics we could look for: practicality, cost-effectiveness, consistency, thoroughness, speed, et cetera, and different approaches will favor some over others. One of those characteristics is useful integration into the business.

You can look at thoroughness by comparing the bikes to the BMW carshare program I discussed in “The Ultimate Stopping Machine.” I think that ferries triggering an anti-theft mechanism is genuinely surprising, and I wouldn’t dismiss a threat modeling technique, or criticize a team too fiercely, for missing it. (That is, there’s nuance. I’d be more critical of a team in Seattle missing the ferry issue than I would be of a team in Boulder.)

In the case of the dockless bikes, however, I would be skeptical of a technique that missed “reserving” a bike for your ongoing use. That threat seems like an obvious one from several perspectives, including that the system is labelled “dockless,” so you have an obvious contrast with a docked system.

When you find these things early, and iterate around threats, requirements and mitigations, you find opportunities to balance and integrate security in better ways than when you have to bolt it on later. (I discuss that iteration here and here.)

For these bikes, perhaps the most useful answer is not to focus on punishing misbehavior, but to reward good behavior. The system wants bikes to be used, so why not reward people for leaving bikes where they’ll be picked up soon? (Alternately, perhaps make it expensive to check out the same bike more than N times in a row, where N is reasonably large, like 10 or 15.)
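That repeat-checkout idea is concrete enough to sketch as a pricing rule. Here is a minimal Python sketch; the base fare, the 1.5× escalation factor, and all the names are my inventions for illustration, not anything an actual bike-share operator does:

```python
from collections import defaultdict

BASE_FARE = 1.0         # hypothetical base price per ride
FREE_REPEATS = 10       # N: consecutive checkouts of one bike before surcharges
SURCHARGE_FACTOR = 1.5  # multiplier applied for each checkout beyond N

class FareCalculator:
    """Track each rider's streak of same-bike checkouts and price rides."""

    def __init__(self):
        # rider -> (last bike checked out, length of consecutive run)
        self._streak = defaultdict(lambda: (None, 0))

    def price(self, rider, bike):
        last_bike, run = self._streak[rider]
        run = run + 1 if bike == last_bike else 1
        self._streak[rider] = (bike, run)
        extra = max(0, run - FREE_REPEATS)
        return BASE_FARE * (SURCHARGE_FACTOR ** extra)
```

Under these made-up numbers, the first ten checkouts of the same bike cost the base fare, the eleventh costs 1.5×, the twelfth 2.25×, and switching to a different bike resets the streak, so someone “reserving” a bike via the scratched-QR-code trick pays steadily more while ordinary riders never notice the rule.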

Photo by Viktor Kern.

Bicycling and Risk

A study found that those who cycle have a net 41% lower risk of premature death. Now, when I read that headline my first thought was that it was 100 people over 6 months and a statistical fluke. But no, they followed a quarter million Britons for 5 years.

Bike commuter

Now, it’s not obvious that it’s causal. Perhaps those who are healthier choose to ride to work? But it seems reasonable to assume that getting a bunch of exercise, fresh air, and adrenaline rushes as distracted drivers read their timelines could lead to better health.

The paper is “Association between active commuting and incident cardiovascular disease, cancer, and mortality: prospective cohort study,” and a press discussion is at “Cycling to work may cut your risk of premature death by 40%.”

Photo by Jack Alexander.

Maintaining & Updating Software

In the aftermath of Wannacry, there’s a lot of discussion of organizations not updating their systems. There are two main reasons organizations don’t update the operating systems they run: compatibility and training. Training is the simpler one: you have to train people about the changes to the Start Menu to move them to Windows 8, and that’s expensive. (I sometimes worked with sales people when I was at Microsoft, and they could have managed this much better than they have.)

Compatibility is harder. In his excellent blog post on “Who Pays?,” Steve Bellovin discusses how “achieving a significant improvement in a product’s security generally requires a new architecture and a lot of changed code. It’s not a patch, it’s a new release.” There are substantial changes to the ways memory is managed and laid out between versions, including ASLR, DEP, CFG, etc. There are many such changes, and seeing how they impact real programs is hard. That’s part of the reason Microsoft released the Enhanced Mitigation Experience Toolkit.
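Those examples are C-level memory mitigations, but the same dynamic, a security hardening change invalidating assumptions that old code quietly made, shows up at every layer. As an illustrative sketch of my own (not an example from the post): Python 3.3 turned on string-hash randomization by default, and programs that had relied on stable hash values, or on iteration order derived from them, broke on upgrade, much as programs relying on fixed addresses broke under ASLR.

```python
import os
import subprocess
import sys

def hash_in_fresh_interpreter(seed: str) -> int:
    """Compute hash('example') in a new Python process with PYTHONHASHSEED=seed."""
    env = {**os.environ, "PYTHONHASHSEED": seed}
    out = subprocess.run(
        [sys.executable, "-c", "print(hash('example'))"],
        env=env, capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

# With a fixed seed, the hash is reproducible from run to run.
# With "random" (the default since Python 3.3), two runs almost
# certainly disagree, and any code that persisted such values
# worked for years, then broke when the platform hardened.
```

The point is the economics, not the mechanism: nothing in the program changed, yet re-testing on the new platform became mandatory, which is exactly the cost Bellovin is asking who will pay.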

Rusting car

This doesn’t just apply to platforms, it also applies to libraries. (For example, see Jon Callas, “Apple and OpenSSL.”)

Even when compatibility is generally very high, someone needs to test the code to see if it works, and that costs money. It costs a lot more money if you don’t have test code or test documentation (YAGNI!), or if, umm, your test code has dependencies on libraries that don’t work on the new platform… It is unlikely that re-certifying on a new platform is less than weeks of work, and for larger products it could easily extend to person-years of work, just to maintain software that’s already been sold. The costs are non-trivial, which brings me back to Steve Bellovin’s post:

There are, then, four basic choices. We can demand that vendors pay, even many years after the software has shipped. We can set up some sort of insurance system, whether run by the government or by the private sector. We can pay out of general revenues. If none of those work, we’ll pay, as a society, for security failures.

This is a fair summary, and I want to add two points.

First, while it remains fashionable to bash Microsoft for all the world’s security woes, there is a far bigger problem: open source software, which is usually released without any maintenance plan. (My friends at the Core Infrastructure Initiative are working on this problem.)

  • Code is speech. The United States rarely imposes liability on people for speaking, and it seems downright perverse to impose more of it on those who let others use their words.
  • There may not be an organization, or the author of the code may have explicitly disclaimed that they’re responsible. If there is one, and we as a society suddenly impose unexpected costs on it, that might inhibit future funding of open source. (As an example, the author of Postfix was paid by IBM for a while. Does IBM have responsibility for Postfix, now that he’s left and gone to Google?) How does the “releasing code” calculus change if you’re required to maintain it forever?
  • The Open Source Definition prohibits discrimination against fields of endeavor, and requires licenses be technology neutral. So it seems hard to release an open source library and forbid the use of code in long-lived consumer goods.
  • What if Bob makes a change to Alice’s code, and introduces a security bug in a subtle way? What if Alice didn’t document that the code was managing a security issue? Does she need to fix it?

Second, the costs to society will not be evenly distributed: they’re going to fall on sectors with less software acumen, and places where products are repaired more than they’re replaced, which tend to be the poorer places and countries.

[Update: Ross Anderson blogs about a new paper that he wrote with Éireann Leverett and Richard Clayton. The paper is more focused on the regulatory challenge that maintaining and updating software provokes than on the economics.]

Photo by Pawel Kadysz.