“The remains of Yahoo just got hit with a $35 million fine because it didn’t tell investors about Russian hacking.” The headline says most of it, but importantly, “‘We do not second-guess good faith exercises of judgment about cyber-incident disclosure. But we have also cautioned that a company’s response to such an event could be so lacking that an enforcement action would be warranted. This is clearly such a case,’ said Steven Peikin, Co-Director of the SEC Enforcement Division.”
I often hear people, including lawyers, get very focused on “it’s not material.” Those people should study the SEC’s statement carefully.
There’s a long story in the New York Times, “Where Countries Are Tinderboxes and Facebook Is a Match”:
A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.
I’ve written previously about the drama triangle, how social media drives engagement through dopamine and hatred, and a tool to help you breathe through such feelings.
These social media tools are dangerous, not just to our mental health, but to the health of our societies. They are actively being used to fragment, radicalize and undermine legitimacy. The techniques to drive outrage are developed and deployed at rates that are nearly impossible for normal people to understand or engage with. We, and these platforms, need to learn to create tools that preserve the good things we get from social media, while inhibiting the bad. And in that sense, I’m excited to read about “20 Projects Will Address The Spread Of Misinformation Through Knight Prototype Fund.”
We can usefully think of this as a type of threat modeling.
- What are we working on? Social technology.
- What can go wrong? Many things, including threats, defamation, and the spread of fake news. Each new system context brings with it new failure modes, and we have to extend our existing models and create new ones to address them.
- What are we going to do about it? The Knight prototypes are an interesting exploration of possible answers.
- Did we do a good job? Not yet.
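The four questions lend themselves to a simple working structure. A minimal Python sketch, applying them to the social-technology example above (the class, field names, and the crude "good job" check are mine, purely illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """Illustrative container for the four-question threat modeling frame."""
    working_on: str                                  # 1. What are we working on?
    what_can_go_wrong: list = field(default_factory=list)   # 2. What can go wrong?
    mitigations: list = field(default_factory=list)         # 3. What are we doing about it?

    def did_we_do_a_good_job(self) -> bool:
        # 4. Deliberately crude check: at least one mitigation per identified threat.
        return len(self.mitigations) >= len(self.what_can_go_wrong) > 0

model = ThreatModel(
    working_on="social technology",
    what_can_go_wrong=["threats", "defamation", "spread of fake news"],
    mitigations=["moderation tooling"],  # the Knight prototypes explore more
)
print(model.did_we_do_a_good_job())  # prints False: not yet
```

The point of the sketch is only that the questions form a loop: as answers to question two grow, question four keeps failing until question three catches up.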
These problems are emergent properties of the systems, not inherent ones. Different systems have different problems, which means we can discover how design choices interact with these downsides. I would love to hear about other useful efforts to understand and respond to these emergent types of threats. How do we characterize the attacks? How do we think about defenses? What’s worked to minimize the attacks or their impacts on other systems? What “obvious” defenses, such as “real names,” tend to fail?
Image: Washington Post
“346,000 Wuhan Citizens’ Secrets” was an exhibition created with $800 worth of data by Deng Yufeng. From the New York Times:
Six months ago, Mr. Deng started buying people’s information, using the Chinese messaging app QQ to reach sellers. He said that the data was easy to find and that he paid a total of $800 for people’s names, genders, phone numbers, online shopping records, travel itineraries, license plate numbers — at a cost of just over a tenth of a penny per person.
“The Personal Data of 346,000 People, Hung on a Museum Wall,” by Sui-Lee Wee and Elsie Chen.
I hadn’t seen “Integrating Security Into the DevSecOps Toolchain,” a Gartner piece that’s fairly comprehensive, grounded and well thought through.
If you enjoyed my “Reasonable Software Security Engineering,” then this Gartner blog does a nice job of laying out important aspects which didn’t fit into that ISACA piece.
Thanks to Stephen de Vries of Continuum for drawing my attention to it.
ISACA has released a podcast that we recorded to talk about the “Reasonable Software Security Engineering” perspectives article. You can download the podcast at ISACA.
Larry Greenblatt is releasing a series of videos titled “Passing the CISSP Exam with the help of Spock & Kirk.” I, of course, love this, because using stories to help people learn and remember is awesome, and it reminds me of my own “The Security Principles of Saltzer and Schroeder, illustrated with Star Wars.” See also my thoughts on Star Wars vs. Star Trek for these sorts of things.
I have a new Perspectives article at ISACA, Reasonable Software Security Engineering. It talks about how, why and where you need to ground a software security engineering program.
On Tuesday, I spoke at the Seattle Privacy/TechnoActivism 3rd Monday meeting, and shared some initial results from the Seattle Privacy Threat Model project.
Overall, I’m happy to say that the effort has been a success, and opens up a set of possibilities.
- Every participant learned about threats they hadn’t previously considered. This is surprising in and of itself: there are few better-educated sets of people than those willing to commit hours of their weekends to threat modeling privacy.
- We have a new way to contextualize the decisions we might make, evidence that we can generate these in a reasonable amount of time, and an example of that form.
- We learned about how long it would take (a few hours to generate a good list of threats, a few hours per category to understand defenses and tradeoffs), and how to accelerate that. (We spent a while getting really deep into threat scenarios in a way that didn’t help with the all-up models.)
- We saw how deeply, and in what complex ways, mobile phones and apps play into privacy.
- We got to some surprising results about privacy in your commute.
More at the Seattle Privacy Coalition blog, “Threat Modeling the Privacy of Seattle Residents,” including slides, whitepaper and spreadsheets full of data.
As a member of the BlackHat Review Board, I would love to see more work on Human Factors presented there. The 2018 call for papers is open and closes April 9th. Over the past few years, I think we’ve developed an interesting track with good material year over year.
I wrote a short blog post on what we look for.
The BlackHat CFP calls for work which has not been published elsewhere. We prefer fully original work, but will consider a new talk that explains, for the BlackHat audience, work you’ve done. Oftentimes, BlackHat does not count as “publication” in the view of academic program committees, and so you can present something at BlackHat that you plan to publish later. (You should of course check with the other venue, and disclose that you’re doing so to BlackHat.)
If you’re considering submitting, I encourage you to read all three recommendations posts at https://usa-briefings-cfp.blackhat.com/
There’s a fundraising campaign to “Keep the Bombe on the Bletchley Park Estate.”
The Bombe was a massive intellectual and engineering achievement at the British codebreaking center at Bletchley Park during the Second World War. The Bombes were all disassembled after the war, and the plans destroyed, making the reconstruction of the Bombe at Bletchley a second impressive achievement.
My photo is from the exhibit on the reconstruction.