Just Culture and Information Security

Yesterday, Twitter revealed that it had accidentally stored plain-text passwords in some log files. There was no indication the data had been accessed, and users were urged to update their passwords. There was no known breach, but Twitter went public anyway, and was excoriated in the press and… on Twitter.

This is a problem for our profession and industry. We get locked into a cycle where any public disclosure of a breach or security mistake results in…

Well, you can imagine what it results in, or you can go read “The Security Profession Needs to Adopt Just Culture” by Rich Mogull. It’s an important article; you should read it, follow the links, and take the time to consider what it means. In that spirit, I want to reflect on something I said the other night. I was being intentionally provocative, and perhaps crossed the line away from being just. What I said was that a password management company has one job, and if it exposes your passwords, you should not use its password management software.

Someone else in the room, coming from a background of blameless post-mortems, challenged my use of the phrase ‘you had one job,’ and praised the company for coming forward. I’ve been thinking about that, and my take is that a design where all the passwords are stored at a single site is substantially and predictably worse than a design where the passwords are distributed across local clients and local data storage. (There are tradeoffs. With a single site, you may be able to monitor for and respond to unusual access patterns rapidly, and you can upgrade all the software at once. There is an availability benefit. My assessment is that the single-store design is not worth it, because of its catastrophic failure modes.)
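To make that comparison concrete, here is a minimal sketch of the distributed design: a vault encrypted and stored entirely on the user’s machine. It uses the Python cryptography package; the file name, function names, and parameters are illustrative assumptions, not any vendor’s actual design.

```python
# Minimal sketch of a local-client password vault: secrets never leave
# the user's machine, so there is no central store to breach.
# Assumes the third-party "cryptography" package (pip install cryptography).
import base64
import json
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

VAULT_PATH = "vault.json"  # illustrative local file, not a real product's format


def _derive_key(master_password: bytes, salt: bytes) -> bytes:
    # Derive the encryption key from the master password; the password
    # itself is never stored, and nothing is transmitted anywhere.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=390_000)
    return base64.urlsafe_b64encode(kdf.derive(master_password))


def save_vault(master_password: bytes, entries: dict) -> None:
    salt = os.urandom(16)
    token = Fernet(_derive_key(master_password, salt)).encrypt(
        json.dumps(entries).encode())
    with open(VAULT_PATH, "w") as f:
        json.dump({"salt": base64.b64encode(salt).decode(),
                   "data": token.decode()}, f)


def load_vault(master_password: bytes) -> dict:
    with open(VAULT_PATH) as f:
        blob = json.load(f)
    salt = base64.b64decode(blob["salt"])
    return json.loads(
        Fernet(_derive_key(master_password, salt)).decrypt(blob["data"].encode()))


if __name__ == "__main__":
    save_vault(b"correct horse battery staple", {"example.com": "hunter2"})
    print(load_vault(b"correct horse battery staple"))
```

The failure mode here is per-machine: an attacker has to compromise each client (or its backups) individually, which is exactly the property the central store gives up in exchange for monitoring and easy upgrades.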

It was a fair criticism. I’ve previously said “we live in an ‘outrage world’ where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution.” Did I fall into that trap myself? Possibly.

In “Just Culture: A Foundation for Balanced Accountability and Patient Safety,” which Rich links, there’s a table in Figure 2, headed “Choose the column that best describes the caregiver’s action.” In reading that table, I believe that a password manager with central storage falls into the reckless category, although perhaps it’s merely risky. In either case, the system leaders are supposed to share in accountability.

Could I have been more nuanced? Certainly. Would it have carried the same impact? No. Justified? I’d love to hear your thoughts!

Threat Model Thursday: Q&A

In a comment on “Threat Model Thursday: ARM’s Network Camera TMSA,” Dips asks:

Would it have been better if they had been more explicit with their graphics? I am a beginner in Threat Modelling and would have appreciated a detailed diagram denoting the trust boundaries. Do you think it would help? Or would it further complicate?

That’s a great question, and exactly what I hoped for when I thought about a series. The simplest answer is ‘probably!’ More explicit boundaries would be helpful. My second answer is ‘that’s a great exercise!’ Where could the boundaries be placed? What would enforce them there? Where else could you put them? What are the tradeoffs between the two?
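One way into that exercise is to treat boundary placement as something you can change and re-examine cheaply. Below is a hypothetical sketch (the elements, zones, and flows are my inventions, not ARM’s) that marks each element with a trust zone and then lists the data flows that cross zones, which is where enforcement has to live:

```python
# A hypothetical sketch for experimenting with trust-boundary placement.
# The elements and boundaries below are illustrative, not taken from
# ARM's Network Camera TMSA.
from dataclasses import dataclass


@dataclass(frozen=True)
class Element:
    name: str
    boundary: str  # the trust zone this element sits in


@dataclass(frozen=True)
class Flow:
    source: Element
    sink: Element
    data: str


camera_fw = Element("camera firmware", boundary="device")
mobile_app = Element("mobile app", boundary="user device")
cloud_api = Element("cloud API", boundary="vendor cloud")

flows = [
    Flow(camera_fw, cloud_api, "video stream"),
    Flow(mobile_app, cloud_api, "credentials"),
    Flow(camera_fw, camera_fw, "config read"),  # stays inside one zone
]

# Flows that cross a boundary are where threats concentrate, and where
# some mechanism (TLS, authentication, signing) must enforce the boundary.
for f in flows:
    if f.source.boundary != f.sink.boundary:
        print(f"{f.data}: {f.source.boundary} -> {f.sink.boundary} "
              f"(crosses a trust boundary; what enforces it here?)")
```

Moving an element to a different zone, or splitting a zone in two, changes which flows get flagged, which makes the tradeoff questions above cheap to explore.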

My third answer is to re-phrase the question. Rather than asking ‘would it help,’ let’s ask ‘who might be helped by better boundary demarcation,’ ‘when would it help them,’ and ‘is this the most productive thing to improve?’ I would love to hear everyone’s perspective.

Lastly, it would be reasonable to expect that Arm might produce a model that depends on the sorts of boundaries that their systems can help protect. It would be really interesting to see a model from a different perspective. If someone draws one or finds one, I’d be happy to look at it for the next article in the series.

$35M for Covering Up a Breach

“The remains of Yahoo just got hit with a $35 million fine because it didn’t tell investors about Russian hacking.” The headline says most of it, but importantly: “‘We do not second-guess good faith exercises of judgment about cyber-incident disclosure. But we have also cautioned that a company’s response to such an event could be so lacking that an enforcement action would be warranted. This is clearly such a case,’ said Steven Peikin, Co-Director of the SEC Enforcement Division.”

I often hear people, including lawyers, get very focused on “it’s not material.” Those people should study the SEC’s statement carefully.

Designing for Good Social Systems

There’s a long story in the New York Times, “Where Countries Are Tinderboxes and Facebook Is a Match:”

A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.

I’ve written previously about the drama triangle, how social media drives engagement through dopamine and hatred, and a tool to help you breathe through such feelings.

These social media tools are dangerous, not just to our mental health, but to the health of our societies. They are actively being used to fragment, radicalize and undermine legitimacy. The techniques to drive outrage are developed and deployed at rates that are nearly impossible for normal people to understand or engage with. We, and these platforms, need to learn to create tools that preserve the good things we get from social media, while inhibiting the bad. And in that sense, I’m excited to read about “20 Projects Will Address The Spread Of Misinformation Through Knight Prototype Fund.”

We can usefully think of this as a type of threat modeling.

  • What are we working on? Social technology.
  • What can go wrong? Many things, including threats, defamation, and the spread of fake news. Each new system context brings new failure modes with it. We have to extend our existing models and create new ones to address those.
  • What are we going to do about it? The Knight prototypes are an interesting exploration of possible answers.
  • Did we do a good job? Not yet.

These harms are emergent properties of particular systems, not inherent to social technology. Different systems have different problems, and that means we can discover how design choices interact with these downsides. I would love to hear about other useful efforts to understand and respond to these emergent types of threats. How do we characterize the attacks? How do we think about defenses? What’s worked to minimize the attacks or their impacts on other systems? What “obvious” defenses, such as “real names,” tend to fail?

Image: Washington Post

346,000 Wuhan Citizens’ Secrets

“346,000 Wuhan Citizens’ Secrets” was an exhibition created with $800 worth of data by Deng Yufeng. From the New York Times:

Image: framed Chinese personal data from the exhibition.

Six months ago, Mr. Deng started buying people’s information, using the Chinese messaging app QQ to reach sellers. He said that the data was easy to find and that he paid a total of $800 for people’s names, genders, phone numbers, online shopping records, travel itineraries, license plate numbers — at a cost of just over a tenth of a penny per person.

That’s from “The Personal Data of 346,000 People, Hung on a Museum Wall,” by Sui-Lee Wee and Elsie Chen.

Threat Model Thursday: Talking, Dialogue and Review

As we head into RSA, I want to hold the technical TM Thursday post, and talk about how we talk to others in our organizations about particular threat models, and how we frame those conversations.

I’m a big fan of the whiteboard-driven dialogue part of threat modeling. That’s where we look at a design, find issues, and make tradeoffs together with developers, operations, and others. The core is the tradeoff: if we do this, it has this effect. Here I’m borrowing John Allspaw’s focus on the social nature of dialogue: coming together to explore ideas. It’s rare to have a consultant as an active participant in these dialogues, because a consultant does not have ‘skin in the game’: they do not carry responsibility for the tradeoffs. These conversations involve a lot of “what about?” and “what if?” statements, and active listening is common.

Let me contrast that with the “threat model review.” When reviews happen late in a cycle, they are unlikely to be dialogues about tradeoffs, because the big decisions have been made. At their best, they are validation that the work has been done appropriately. Unfortunately, they frequently devolve into tools for revisiting decisions that have been made, or into arguments for bringing security in earlier next time. Here, outside consultants can add a lot of value, because they’re less tied to the social aspects of the conversation, and can offer a “review” or “assessment.” These conversations involve a lot of “why” and “did you” questions. They often feel inquisitorial, investigatory and judgmental. Those being questioned often spend time explaining the tradeoffs that were made, even though recording those tradeoff discussions was rarely a priority when the decisions were being made.

These social frames interleave with the activities and deliverables involved in threat modeling. We can benefit from a bit more reductionism, taking ‘threat modeling’ down to smaller units so we can understand and experiment. For example, my colleagues at RISCS refer to “traditional threat modeling approaches,” and we can read that in lots of ways. At a technical level, was that an attacker-centric approach grounded in TARA? STRIDE-per-element? At a social level, was it a matter of security champs coming in late and offering their opinions on the threat modeling that had been done?
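For readers who haven’t met the term, STRIDE-per-element maps each type of diagram element to the STRIDE threat categories that typically apply to it. Here is a minimal sketch, following the commonly published mapping; the element list is an illustrative assumption:

```python
# A minimal sketch of STRIDE-per-element: each DFD element type gets
# the STRIDE categories that commonly apply to it. The mapping follows
# the commonly published chart; the elements are illustrative.
STRIDE_PER_ELEMENT = {
    "external entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"],
    "data flow": ["Tampering", "Information disclosure",
                  "Denial of service"],
    "data store": ["Tampering", "Repudiation",  # repudiation if it holds logs
                   "Information disclosure", "Denial of service"],
}

elements = [("user", "external entity"),
            ("web server", "process"),
            ("login request", "data flow"),
            ("audit log", "data store")]

for name, kind in elements:
    for threat in STRIDE_PER_ELEMENT[kind]:
        print(f"{name} ({kind}): consider {threat}")
```

Running it yields a per-element checklist of threats to consider, which is the kind of smaller, experimentable unit I mean.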

So I can read the discussion about the ThoughtWorks “Sensible Conversations” as a social shift from a review mode to a dialogue mode, in which case it seems very sensible to me, or I can read it as being about the technical shift to their attacker/asset cards. My first read is that their success is more about the social shift, which is the headline. The technical shift (or shifts) may be a part of enabling that by saying “hey, let’s try a different approach.”

Image: Štefan Štefančík. Thanks to FS & SW for feedback on the draft.