Cyber Security Hall of Fame

Congratulations to the 2016 winners!

  • Dan Geer, Chief Information Security Officer at In-Q-Tel;
  • Lance J. Hoffman, Distinguished Research Professor of Computer Science, The George Washington University;
  • Horst Feistel, Cryptographer and Inventor of the United States Data Encryption Standard (DES);
  • Paul Karger, High Assurance Architect, Prolific Writer and Creative Inventor;
  • Butler Lampson, Adjunct Professor at MIT, Turing Award and Draper Prize winner;
  • Leonard J. LaPadula, Co-author of the Bell-LaPadula Model of Computer Security; and
  • William Hugh Murray, Pioneer, Author and Founder of the Colloquium for Information Systems Security Education (CISSE).

In a world where influence seems to be measured in likes, re-tweets, and shares, the work of these seven fine people really stands the test of time. For some reason this showed up on LinkedIn as “Butler was mentioned in the news,” even though it’s a few years old. Again, test of time.

Keeping the Internet Secure

Today, a global coalition led by civil society and technology experts sent a letter asking the government of Australia to abandon plans to introduce legislation that would undermine strong encryption. The letter calls on government officials to become proponents of digital security and work collaboratively to help law enforcement adapt to the digital era.

In July 2017, Prime Minister Malcolm Turnbull held a press conference to announce that the government was drafting legislation that would compel device manufacturers to assist law enforcement in accessing encrypted information. In May of this year, Minister for Law Enforcement and Cybersecurity Angus Taylor restated the government’s priority to introduce legislation and traveled to the United States to speak with companies based there.

Today’s letter, signed by 76 organizations, companies, and individuals, asks leaders in the government “not to pursue legislation that would undermine tools, policies, and technologies critical to protecting individual rights, safeguarding the economy, and providing security both in Australia and around the world.” (Read the full announcement here.)

I’m pleased to have joined this effort by Access Now, and you can sign too at https://secureaustralia.org.au. Especially if you are Australian, I encourage you to do so.

Conway’s Law and Software Security

In “Conway’s Law: does your organization’s structure make software security even harder?,” Steve Lipner mixes history and wisdom:

As a result, the developers understood pretty quickly that product security was their job rather than ours. And instead of having twenty or thirty security engineers trying to “inspect (or test) security in” to the code, we had 30 or 40 thousand software engineers trying to create secure code. It made a big difference.

NTSB on Uber (Preliminary)

The NTSB has released “Preliminary Report Highway HWY18MH010,” on the Uber self-driving car which struck and killed a woman. I haven’t had a chance to read the report carefully.

Brad Templeton has excellent analysis of the report at “NTSB Report implies serious fault for Uber in fatality” (and Brad’s writing on the subject overall has been phenomenal).

A few important things to note, cribbed from Brad:

  • The driver was not looking at her phone, but at a screen displaying diagnostic information from the self-driving system.
  • The car detected the need to brake with roughly enough time to stop, had it applied the brakes automatically.
  • That automatic braking capability was turned off for a variety of reasons that look bad in hindsight (and probably could have been critiqued at the time).

My only comment right now is: wouldn’t it be nice to have this level of fact-finding in the world of cyber?

Also, it’s very clear that the vehicle was carefully preserved. Can anyone say how the NTSB and/or Uber preserved the data center, cloud or other remote parts of the computer systems involved, including the algorithms that were deployed that day (versus reconstructing them later)?

Just Culture and Information Security

Yesterday, Twitter revealed that they had accidentally stored plain-text passwords in some log files. There was no indication the data had been accessed, and users were advised to update their passwords. There was no known breach, but Twitter went public anyway, and was excoriated in the press and… on Twitter.
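
The underlying failure mode is mundane: a logger somewhere captures a field it should never see. As a minimal sketch of one backstop, here is a redaction filter using Python’s standard logging module; the field names and pattern are illustrative, and this is not a description of Twitter’s actual systems.

```python
import logging
import re

class RedactSecrets(logging.Filter):
    """Mask likely credential fields before a record is written anywhere."""
    PATTERN = re.compile(r"(password|passwd|token)=\S+", re.IGNORECASE)

    def filter(self, record: logging.LogRecord) -> bool:
        # Rewrite the message in place; returning True keeps the record.
        record.msg = self.PATTERN.sub(r"\1=<redacted>", str(record.msg))
        return True

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("auth")
logger.addFilter(RedactSecrets())

# Without the filter, the raw credential would land in the log file verbatim.
logger.info("login attempt user=alice password=hunter2")
# -> login attempt user=alice password=<redacted>
```

Redaction at the logging layer is only a backstop; the better fix is never handing the raw credential to anything that logs.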

This is a problem for our profession and industry. We get locked into a cycle where any public disclosure of a breach or security mistake results in…

Well, you can imagine what it results in, or you can go read “The Security Profession Needs to Adopt Just Culture” by Rich Mogull. It’s a very important article, and you should read it, and the links, and take the time to consider what it means. In that spirit, I want to reflect on something I said the other night. I was being intentionally provocative, and perhaps crossed the line away from being just. What I said was a password management company had one job, and if they expose your passwords, you should not use their password management software.

Someone else in the room, coming from a background of blameless post-mortems, challenged my use of the phrase ‘you had one job’ and praised the company for coming forward. I’ve been thinking about that, and my take is that a design where all the passwords sit at a single site is substantially and predictably worse than one where the passwords are distributed across local clients and local data storage. (There are tradeoffs. With a single site, you may be able to monitor for and respond to unusual access patterns rapidly, and you can upgrade all the software at once. There is an availability benefit. My assessment is that the single-store design is not worth it, because of the catastrophic failure modes.)
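
To make that architectural difference concrete, here is a minimal sketch of the local-first approach using the Python cryptography package: the vault is encrypted on the client with a key derived from the master password, so whatever holds the resulting blob (local disk or a sync service) never sees plaintext. The names and parameters are illustrative, not any particular product’s design.

```python
import base64
import json
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(master_password: str, salt: bytes) -> bytes:
    """Derive a symmetric key from the master password; only the salt is stored."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password.encode()))


def encrypt_vault(entries: dict, master_password: str) -> bytes:
    """Encrypt the vault client-side; the returned blob is safe to store or sync."""
    salt = os.urandom(16)
    token = Fernet(derive_key(master_password, salt)).encrypt(
        json.dumps(entries).encode())
    return salt + token


def decrypt_vault(blob: bytes, master_password: str) -> dict:
    salt, token = blob[:16], blob[16:]
    return json.loads(Fernet(derive_key(master_password, salt)).decrypt(token))


blob = encrypt_vault({"example.com": "hunter2"}, "correct horse battery staple")
assert decrypt_vault(blob, "correct horse battery staple") == {"example.com": "hunter2"}
```

A compromise of whatever holds the blob yields only a salt and ciphertext; the tradeoff, as above, is that a purely local design can’t watch for unusual access patterns or push fixes the way a central service can.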

It was a fair criticism. I’ve previously said “we live in an ‘outrage world’ where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution.” Did I fall into that trap myself? Possibly.

In “Just Culture: A Foundation for Balanced Accountability and Patient Safety,” which Rich links, there’s a table in Figure 2, headed “Choose the column that best describes the caregiver’s action.” In reading that table, I believe that a password manager with central storage falls into the reckless category, although perhaps it’s merely risky. In either case, the system leaders are supposed to share in accountability.

Could I have been more nuanced? Certainly. Would it have carried the same impact? No. Justified? I’d love to hear your thoughts!

Threat Model Thursday: Q&A

In a comment on “Threat Model Thursday: ARM’s Network Camera TMSA,” Dips asks:

Would it have been better if they had been more explicit with their graphics? I am a beginner in Threat Modelling and would have appreciated a detailed diagram denoting the trust boundaries. Do you think it would help? Or would it further complicate?

That’s a great question, and exactly what I hoped for when I thought about doing a series. The simplest answer is ‘probably!’ More explicit boundaries would be helpful. My second answer is ‘that’s a great exercise!’ Where could the boundaries be placed? What would enforce them there? Where else could you put them? What are the tradeoffs between the two placements?
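
As a sketch of that exercise, here is one way to make the boundaries explicit in code rather than in a drawing: list the elements, assign each to a trust zone, and flag every data flow that crosses zones, since those crossings are where you’d ask what enforces the boundary. The camera, hub, and cloud elements below are my own illustration, not Arm’s model.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Element:
    name: str
    zone: str  # the trust boundary this element sits inside


@dataclass(frozen=True)
class Flow:
    source: Element
    sink: Element
    data: str


# Illustrative elements only; not taken from the ARM TMSA document.
camera = Element("Network Camera", "Device")
hub = Element("Home Hub", "Local Network")
cloud = Element("Vendor Cloud", "Internet")

flows = [
    Flow(camera, hub, "video stream"),
    Flow(hub, cloud, "telemetry"),
    Flow(cloud, camera, "firmware update"),
]

for f in flows:
    if f.source.zone != f.sink.zone:
        # Each crossing is a place to ask: what enforces the boundary here?
        print(f"{f.data}: {f.source.zone} -> {f.sink.zone} crosses a boundary")
```

Even a model this crude makes the placement question concrete: move the hub into the Internet zone and the crossings, and therefore the controls you need, change with it.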

My third answer is to re-phrase the question. Rather than asking ‘would it help,’ let’s ask ‘who might be helped by better boundary demarcation,’ ‘when would it help them,’ and ‘is this the most productive thing to improve?’ I would love to hear everyone’s perspective.

Lastly, it would be reasonable to expect that Arm might produce a model that depends on the sorts of boundaries that their systems can help protect. It would be really interesting to see a model from a different perspective. If someone draws one or finds one, I’d be happy to look at it for the next article in the series.