“20 Ways to Make AppSec Move at the Speed of DevOps” is in CSO. It’s a good collection, and I’m quoted.
Congratulations to the 2016 winners!
- Dan Geer, Chief Information Security Officer at In-Q-Tel;
- Lance J. Hoffman, Distinguished Research Professor of Computer Science, The George Washington University;
- Horst Feistel, Cryptographer and Inventor of the United States Data Encryption Standard (DES);
- Paul Karger, High Assurance Architect, Prolific Writer and Creative Inventor;
- Butler Lampson, Adjunct Professor at MIT, Turing Award and Draper Prize winner;
- Leonard J. LaPadula, Co-author of the Bell-LaPadula Model of Computer Security; and
- William Hugh Murray, Pioneer, Author and Founder of the Colloquium for Information System Security Education (CISSE).
In a world where influence seems to be measured in likes, retweets and shares, the work of these seven fine people really stands the test of time. For some reason this showed up on LinkedIn as “Butler was mentioned in the news,” even though it’s a few years old. Again, test of time.
Today, a global coalition led by civil society and technology experts sent a letter asking the government of Australia to abandon plans to introduce legislation that would undermine strong encryption. The letter calls on government officials to become proponents of digital security and work collaboratively to help law enforcement adapt to the digital era.
In July 2017, Prime Minister Malcolm Turnbull held a press conference to announce that the government was drafting legislation that would compel device manufacturers to assist law enforcement in accessing encrypted information. In May of this year, Minister for Law Enforcement and Cybersecurity Angus Taylor restated the government’s priority to introduce legislation and traveled to the United States to speak with companies based there.
Today’s letter signed by 76 organizations, companies, and individuals, asks leaders in the government “not to pursue legislation that would undermine tools, policies, and technologies critical to protecting individual rights, safeguarding the economy, and providing security both in Australia and around the world.” (Read the full announcement here)
I’m pleased to have joined in this effort by Access Now, and you can sign, too, at https://secureaustralia.org.au. Especially if you are Australian, I encourage you to do so.
Emergynt has created the Emergynt Risk Deck, a set of 51 cards representing actors, vulnerabilities, targets, consequences and risks. It’s more a discussion tool than a game, but I have a weakness for the word “emergent,” and I’ve added it to my list of security games.
Also, Lancaster University has created an Agile Security Game.
In “Conway’s Law: does your organization’s structure make software security even harder?,” Steve Lipner mixes history and wisdom:
As a result, the developers understood pretty quickly that product security was their job rather than ours. And instead of having twenty or thirty security engineers trying to “inspect (or test) security in” to the code, we had 30 or 40 thousand software engineers trying to create secure code. It made a big difference.
Yesterday Twitter revealed that it had accidentally stored plaintext passwords in some log files. There was no indication the data had been accessed, and users were warned to update their passwords. There was no known breach, but Twitter went public anyway, and was excoriated in the press and… on Twitter.
This is a problem for our profession and industry. We get locked into a cycle where any public disclosure of a breach or security mistake results in…
Well, you can imagine what it results in, or you can go read “The Security Profession Needs to Adopt Just Culture” by Rich Mogull. It’s a very important article, and you should read it, and the links, and take the time to consider what it means. In that spirit, I want to reflect on something I said the other night. I was being intentionally provocative, and perhaps crossed the line away from being just. What I said was that a password management company has one job, and if it exposes your passwords, you should not use its password management software.
Someone else in the room, coming from a background where they have blameless post-mortems, challenged my use of the phrase ‘you had one job,’ and praised the company for coming forward. And I’ve been thinking about that, and my take is that a design where all the passwords are stored at a single site is substantially and predictably worse than a design where the passwords are distributed across local clients and local data storage. (There are tradeoffs. With a single site, you may be able to monitor for and respond to unusual access patterns rapidly, and you can upgrade all the software at once. There is also an availability benefit. My assessment is that the single-store design is not worth it, because of the catastrophic failure modes.)
It was a fair criticism. I’ve previously said “we live in an ‘outrage world’ where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution.” Did I fall into that trap myself? Possibly.
In “Just Culture: A Foundation for Balanced Accountability and Patient Safety,” which Rich links, there’s a table in Figure 2, headed “Choose the column that best describes the caregiver’s action.” In reading that table, I believe that a password manager with central storage falls into the reckless category, although perhaps it’s merely risky. In either case, the system leaders are supposed to share in accountability.
Could I have been more nuanced? Certainly. Would it have carried the same impact? No. Justified? I’d love to hear your thoughts!
“The remains of Yahoo just got hit with a $35 million fine because it didn’t tell investors about Russian hacking.” The headline says most of it, but importantly, “‘We do not second-guess good faith exercises of judgment about cyber-incident disclosure. But we have also cautioned that a company’s response to such an event could be so lacking that an enforcement action would be warranted. This is clearly such a case,’ said Steven Peikin, Co-Director of the SEC Enforcement Division.”
A lot of times, I hear people, including lawyers, get very focused on “it’s not material.” Those people should study the SEC’s statement carefully.
There’s a long story in the New York Times, “Where Countries Are Tinderboxes and Facebook Is a Match:”
A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.
These social media tools are dangerous, not just to our mental health, but to the health of our societies. They are actively being used to fragment, radicalize and undermine legitimacy. The techniques to drive outrage are developed and deployed at rates that are nearly impossible for normal people to understand or engage with. We, and these platforms, need to learn to create tools that preserve the good things we get from social media, while inhibiting the bad. And in that sense, I’m excited to read about “20 Projects Will Address The Spread Of Misinformation Through Knight Prototype Fund.”
We can usefully think of this as a type of threat modeling.
- What are we working on? Social technology.
- What can go wrong? Many things, including threats, defamation, and the spread of fake news. Each new system context brings with it new types of fail. We have to extend our existing models and create new ones to address those.
- What are we going to do about it? The Knight prototypes are an interesting exploration of possible answers.
- Did we do a good job? Not yet.
These emergent properties of the systems are not inherent. Different systems have different problems, and that means we can discover how design choices interact with these downsides. I would love to hear about other useful efforts to understand and respond to these emergent types of threats. How do we characterize the attacks? How do we think about defenses? What’s worked to minimize the attacks or their impacts on other systems? What “obvious” defenses, such as “real names,” tend to fail?
Image: Washington Post
I hadn’t seen “Integrating Security Into the DevSecOps Toolchain,” a Gartner piece that’s fairly comprehensive, grounded, and well thought through.
If you enjoyed my “Reasonable Software Security Engineering,” then this Gartner blog does a nice job of laying out important aspects which didn’t fit into that ISACA piece.
Thanks to Stephen de Vries of Continuum for drawing my attention to it.
There’s a long and important blog post from Matt Miller, “Mitigating speculative execution side channel hardware vulnerabilities.”
What makes it important is that it’s a model of these flaws, and helps us understand their context and how else they might appear. It’s also nicely organized along threat modeling lines.
What can go wrong? There’s a set of primitives (conditional branch misprediction, indirect branch misprediction, and exception delivery or deferral). These are composed into windowing gadgets and disclosure gadgets.
There are also models for the mitigations, including classes of ways to prevent speculative execution, to remove sensitive content from memory, and to remove observation channels.
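To make the primitives concrete, here’s a sketch of the widely cited Spectre variant 1 (conditional branch misprediction) pattern. This is my own illustrative example, not code from Miller’s post: the bounds-check branch can be mispredicted, so the two dependent loads can execute speculatively with an attacker-chosen out-of-bounds `x`, and which line of the probe array lands in the cache becomes the disclosure channel. The names (`victim_read`, `probe`) are hypothetical.

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative Spectre v1 (bounds check bypass) gadget.
 * Architecturally, out-of-bounds x returns 0. Microarchitecturally,
 * if the branch is mispredicted, both loads may run speculatively,
 * and the cache line of probe[] that gets touched encodes array1[x]. */

#define ARRAY1_SIZE 16

uint8_t array1[ARRAY1_SIZE];
uint8_t probe[256 * 512];   /* one cache line per possible byte value */

uint8_t victim_read(size_t x) {
    if (x < ARRAY1_SIZE) {              /* mispredictable branch */
        return probe[array1[x] * 512];  /* dependent loads: the disclosure gadget */
    }
    return 0;
}
```

The mitigation classes in the post map directly onto this sketch: preventing speculation stops the dependent loads from running under misprediction, removing sensitive content means `array1[x]` never holds a secret, and removing observation channels makes the `probe[]` cache footprint unmeasurable.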