CyberDB was kind enough to include us in their “Best Cyber Security News Blogs 2018.” There are some standbys and some I wasn’t familiar with on the list. Thank you for including us!
In a comment on “Threat Model Thursday: ARM’s Network Camera TMSA”, Dips asks:
Would it have been better if they had been more explicit with their graphics? I am a beginner in Threat Modelling and would have appreciated a detailed diagram denoting the trust boundaries. Do you think it would help? Or it would further complicate?
That’s a great question, and exactly what I hoped for when I thought about a series. The simplest answer is ‘probably!’ More explicit boundaries would be helpful. My second answer is ‘that’s a great exercise!’ Where could the boundaries be placed? What would enforce them there? Where else could you put them? What are the tradeoffs between the two?
My third answer is to re-phrase the question. Rather than asking ‘would it help,’ let’s ask ‘who might be helped by better boundary demarcation,’ ‘when would it help them,’ and ‘is this the most productive thing to improve?’ I would love to hear everyone’s perspective.
Lastly, it would be reasonable to expect that Arm might produce a model that depends on the sorts of boundaries that their systems can help protect. It would be really interesting to see a model from a different perspective. If someone draws one or finds one, I’d be happy to look at it for the next article in the series.
“The remains of Yahoo just got hit with a $35 million fine because it didn’t tell investors about Russian hacking.” The headline says most of it, but importantly, “‘We do not second-guess good faith exercises of judgment about cyber-incident disclosure. But we have also cautioned that a company’s response to such an event could be so lacking that an enforcement action would be warranted. This is clearly such a case,’ said Steven Peikin, Co-Director of the SEC Enforcement Division.”
I often hear people, including lawyers, get very focused on “it’s not material.” Those people should study the SEC’s statement carefully.
There’s a long story in the New York Times, “Where Countries Are Tinderboxes and Facebook Is a Match:”
A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.
These social media tools are dangerous, not just to our mental health, but to the health of our societies. They are actively being used to fragment, radicalize and undermine legitimacy. The techniques to drive outrage are developed and deployed at rates that are nearly impossible for normal people to understand or engage with. We, and these platforms, need to learn to create tools that preserve the good things we get from social media, while inhibiting the bad. And in that sense, I’m excited to read about “20 Projects Will Address The Spread Of Misinformation Through Knight Prototype Fund.”
We can usefully think of this as a type of threat modeling.
- What are we working on? Social technology.
- What can go wrong? Many things, including threats, defamation, and the spread of fake news. Each new system context brings with it new types of fail. We have to extend our existing models and create new ones to address those.
- What are we going to do about it? The Knight prototypes are an interesting exploration of possible answers.
- Did we do a good job? Not yet.
These emergent properties of the systems are not inherent. Different systems have different problems, and that means we can discover how design choices interact with these downsides. I would love to hear about other useful efforts to understand and respond to these emergent types of threats. How do we characterize the attacks? How do we think about defenses? What’s worked to minimize the attacks or their impacts on other systems? What “obvious” defenses, such as “real names,” tend to fail?
Image: Washington Post
My friends at Continuum Security have some cool swag here at RSA. Go get some at South 2125 (the Spanish Pavilion). Details are in their “meet us” blog post.
“346,000 Wuhan Citizens’ Secrets” was an exhibition created with $800 worth of data by Deng Yufeng. From the New York Times:
Six months ago, Mr. Deng started buying people’s information, using the Chinese messaging app QQ to reach sellers. He said that the data was easy to find and that he paid a total of $800 for people’s names, genders, phone numbers, online shopping records, travel itineraries, license plate numbers — at a cost of just over a tenth of a penny per person.
“The Personal Data of 346,000 People, Hung on a Museum Wall,” by Sui-Lee Wee and Elsie Chen.
As we head into RSA, I want to hold the technical TM Thursday post, and talk about how we talk to others in our organizations about particular threat models, and how we frame those conversations.
I’m a big fan of the whiteboard-driven dialogue part of threat modeling. That’s where we look at a design, find issues, and make tradeoffs together with developers, operations, and others. The core is the tradeoff: if we do this, it has this effect. I’m borrowing here from John Allspaw’s focus on the social nature of dialogue: coming together to explore ideas. It’s rare to have a consultant as an active participant in these dialogues, because a consultant does not have ‘skin in the game’: they do not carry responsibility for the tradeoffs. These conversations involve a lot of “what about?” and “what if” statements, and active listening is common.
Let me contrast that with the “threat model review.” When reviews happen late in a cycle, they are unlikely to be dialogues about tradeoffs, because the big decisions have been made. At their best, they are validation that the work has been done appropriately. Unfortunately, they frequently devolve into tools for revisiting decisions that have already been made, or arguments for bringing security in next time. Here, outside consultants can add a lot of value, because they’re less tied to the social aspects of the conversation, and can offer a “review” or “assessment.” These conversations involve a lot of “why” and “did you” questions. They often feel inquisitorial, investigatory and judgmental. Those being questioned often spend time explaining the tradeoffs they made, and recording those tradeoff discussions was rarely a priority at the time.
These social frames interleave with the activities and deliverables involved in threat modeling. We can benefit from a bit more reductionism in taking ‘threat modeling’ down to smaller units so we can understand and experiment. For example, my colleagues at RISCS refer to “traditional threat modeling approaches,” and we can read that lots of ways. At a technical level, was that an attacker-centric approach grounded in TARA? STRIDE-per-element? At a social level, was it a matter of security champs coming in late and offering their opinions on the threat modeling that had been done?
So I can read the discussion of the ThoughtWorks “Sensible Conversations” as a social shift from review mode to dialogue mode, in which case it seems very sensible to me, or as a technical shift to their attacker/asset cards. My first read is that their success is more about the social shift, which is the headline. The technical shift (or shifts) may be a part of enabling that by saying “hey, let’s try a different approach.”
Image: Štefan Štefančík. Thanks to FS & SW for feedback on the draft.
Joseph Lorenzo Hall has a post at the Center for Democracy and Technology, “Taking the Pulse of Security Research.” One part of the post is an expert statement on security research, and I’m one of the experts who has signed on.
I fully support what CDT chose to include in the statement, and I want to go deeper. The back and forth of design and critique is not only a critical part of how an individual design gets better, but fields in which such criticism is the norm advance faster.
A quick search in Petroski’s Engineers of Dreams: Great Bridge Builders and the Spanning of America brings us the following. (The Roeblings built the Brooklyn Bridge, Lindenthal had proposed a concept for the crossing, which lost to Roebling’s, and he built many others.)
In Lindenthal’s case, he was so committed to the suspension concept for bridging the Hudson River that he turned the argument naturally and not unfairly to his use. Lindenthal admitted, for example, that it was “a popular assumption that suspension bridges cannot be well used for railroad purposes,” and further conceded that throughout the world there was only one suspension bridge then carrying railroad tracks, Roebling’s Niagara Gorge Bridge, completed in 1854, over which trains had to move slowly. However, rather than seeing this as scant evidence for his case, Lindenthal held up as a model the “greater moral courage and more abiding faith in the truth of constructive principles” that Roebling needed to build his bridge in the face of contemporary criticism by the “most eminent bridge engineers then living.” In Lindenthal’s time, three decades later, it was not merely a question of moral courage; “nowadays bridges are not built on faith,” and there was “not another field of applied mechanics where results can be predicted with so much precision as in bridges of iron and steel.” (“Engineers of Dreams: Great Bridge Builders and the Spanning of America,” Henry Petroski)
Importantly for the case which CDT is making, over the span of thirty years, we went from a single suspension bridge to “much precision” in their construction. That progress happened because criticism and questions are standard while a bridge is proposed, and because, when one fails, there are inquests and inquiries into why.
In his The Great Bridge: The Epic Story of the Building of the Brooklyn Bridge, David McCullough describes the prolonged public discussion of the engineering merits:
It had been said repeatedly by critics of the plan that a single span of such length was impossible, that the bridge trains would shake the structure to pieces and, more frequently, that no amount of calculations on paper could guarantee how it might hold up in heavy winds, but the odds were that the great river span would thrash and twist until it snapped in two and fell, the way the Wheeling Bridge had done (a spectacle some of his critics hoped to be on hand for, to judge by the tone of their attacks).
The process of debating plans for a bridge strengthens, not weakens, the resulting structure. Both books are worth reading as you think about how to advance the field of cybersecurity.
Image credit: Cleveland Electric, on their page about a fiber optic structural monitoring system which they retro-fitted onto the bridge in question.
I hadn’t seen “Integrating Security Into the DevSecOps Toolchain,” a Gartner piece that’s fairly comprehensive, grounded, and well thought through.
If you enjoyed my “Reasonable Software Security Engineering,” then this Gartner blog does a nice job of laying out important aspects which didn’t fit into that ISACA piece.
Thanks to Stephen de Vries of Continuum for drawing my attention to it.
Last week, I encouraged you to take a look at the ARM Network Camera Threat Model and Security Analysis, and consider:
First, how does it align with the 4-question frame (“what are we working on,” “what can go wrong,” “what are we going to do about it,” and “did we do a good job?”) Second, ask yourself who, what, why, and how.
Before I get into my answers, I want to re-iterate what I said in the first post: I’m doing this to learn and to illustrate, not to criticize. It’s a dangerous trap to think that “the way to threat model is.” And this model is a well-considered and structured analysis of networked camera security, and it goes a bit beyond
So let me start with the who, what, why and how of this model.