BlackHat and Human Factors

As a member of the BlackHat Review Board, I would love to see more work on Human Factors presented there. The 2018 call for papers is open and closes April 9th. Over the past few years, I think we’ve developed an interesting track with good material year over year.

I wrote a short blog post on what we look for.

The BlackHat CFP calls for work that has not been published elsewhere. We prefer fully original work, but will consider a new talk that explains work you’ve done for the BlackHat audience. Oftentimes, BlackHat does not count as “publication” in the view of academic program committees, so you can present something at BlackHat that you plan to publish later. (You should, of course, check with the other venue, and disclose to BlackHat that you’re doing so.)

If you’re considering submitting, I encourage you to read all three recommendation posts at https://usa-briefings-cfp.blackhat.com/

Keep the Bombe on the Bletchley Park Estate

There’s a fundraising campaign to “Keep the Bombe on the Bletchley Park Estate.”

The Bombe was a massive intellectual and engineering achievement at the British codebreaking center at Bletchley Park during the Second World War. The Bombes were all disassembled after the war, and the plans destroyed, making the reconstruction of the Bombe at Bletchley a second impressive achievement.

My photo is from the exhibit on the reconstruction.

Doing Science With Near Misses

Last week at Art into Science, I presented “That was Close! Doing Science with Near Misses” (Google, pptx.)

The core idea is that we should borrow from aviation to learn from near misses, and learn to protect ourselves and our systems better. The longer form is in the draft “Voluntary Reporting of Cybersecurity ‘Near Misses’.”

The talk was super-well received, and I’m grateful to Sounil Yu and the participants in the philosophy track, who juggled so we could collaborate and brainstorm. If you’d like to help, by far the most helpful way would be to tell us about a near miss you’ve experienced using our form, and to give us feedback on the form. Since Thursday, I’ve added a space for that feedback and made a few other suggested adjustments that were easy to implement.

If you’ve had a chance to think about definitions for either near misses or accidents, I’d love to hear about those, in comments, in your blog (trackbacks should work), or whatever works for you. If you were at Art Into Science, there’s a #near-miss channel on the conference Slack, and I’ll be cleaning up the notes.

Image from the EHS Database, which has a set of near-miss safety posters.

Jonathan Marcil’s Threat Modeling Toolkit talk

There’s a lot of threat modeling content here at AppSec Cali, and sadly, I’m only here today. Jonathan Marcil has been a guest here on Adam & friends, and today he’s talking about his toolkit: data flow diagrams and attack trees.

His slot is very time-constrained, and it’s standing room only.

  • Threat modeling is an appsec activity: understand attackers and systems
  • For security practitioners and software engineers. A tool to help clarify what the system is for reviewers. Highlight ameliorations or requirements.
  • Help catch important things despite chaos.
  • Must be collaborative: communication is key
  • Being wrong is great: people get engaged to correct you!
  • Data flow diagrams vs. connection flow diagrams: visual overload. This is not an architectural doc, but an aid to security discussion. He suggests extending the system modeling approach to fit your needs, which is great, and is why I put my DFD3 definition on GitHub; let’s treat our tools as artifacts, like developers do.
  • An extended example of modeling Electrum.
  • The system model helps organize your own thoughts. Build a visual model of the things that matter to you, leave out the bits that don’t matter.
  • Found a real JSON-RPC vuln in the wallet because of investigations driven by the system model.
  • His models also have a “controls checklist”: “these are the controls I think we have.” Controls are tied by numbers to parts of the diagram. Green checklists are a great motivator.
  • Discussion of one line vs two; would another threat modeling expert be able to read this diagram? What would be a better approach for a SAML-based system? Do you need trust boundaries between the browser and the IDP? What’s going through your head as you build this?
  • Use attack trees to organize threat intelligence: roots are goals, leaves are routes to the goals. If the goal is to steal cryptocurrency, one route is to gain wallet access, by stealing the physical wallet or by gaining software access. (Sorry, I’m bad at taking photos as I blog.) He shows the attack tree growing in a nice succession of slides.
  • Attack trees are useful because they’re re-usable.
  • Uses PlantUML to draw trees with code, which brings the advantages of version control and automatically balanced trees. (A small sketch of this idea follows the list.)
  • Questions: How to collaborate with and around threat models? How to roll out to a group of developers? How to sell them on doing something beyond a napkin sketch?
  • Diagrams for architects versus diagrams for developers.
  • If we had an uber-tree, it wouldn’t be useful because you need to scope it and cut it. (Adam adds: perhaps scoping and cutting are easier than creating, if the tree isn’t overwhelming?)
  • Link attack tree to flow diagram; add the same numbered controls to the attack tree.
  • If you can sit in the threat modeling meeting and say nothing, you’ve won!
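Since Jonathan treats attack trees as code, here is a minimal sketch of the idea in Python rather than his actual PlantUML sources: the tree is a plain data structure that can live in version control, and a small helper emits PlantUML mindmap text from it. The Node class and emit_plantuml helper are hypothetical names chosen for illustration, not part of his toolkit.

    # A minimal sketch, not Jonathan's tooling: an attack tree as plain data,
    # rendered to PlantUML mindmap text so it can be versioned and diffed.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        label: str                              # a goal (root) or a route to it
        children: List["Node"] = field(default_factory=list)

    def emit_plantuml(root: Node) -> str:
        """Render the tree as a PlantUML mindmap, one node per line."""
        lines = ["@startmindmap"]

        def walk(node: Node, depth: int) -> None:
            lines.append("*" * depth + " " + node.label)
            for child in node.children:
                walk(child, depth + 1)

        walk(root, 1)
        lines.append("@endmindmap")
        return "\n".join(lines)

    # The example from the talk: the goal is to steal cryptocurrency;
    # one route is wallet access, reached physically or through software.
    tree = Node("Steal cryptocurrency", [
        Node("Gain wallet access", [
            Node("Steal the physical wallet"),
            Node("Gain software access to the wallet"),
        ]),
    ])

    print(emit_plantuml(tree))

Keeping the tree as data makes it diffable in code review and re-usable across models, which is the advantage he calls out.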

Lastly, Jonathan did a great job of live-tweeting his own talk.

AppSec Cali 2018: Izar Tarandach

I’m at the OWASP AppSec Cali event, and while there’ll be video, I’m taking notes:

Context for the talk

  • What fails during the development process? Incomplete requirements, non-secure design, lack of security mindset, leaky development
  • These failures are threats which can be mitigated. (eg, compliance and risk requirements address incomplete requirements)
  • We keep failing in the same way. How often are developers required to pass a security interview to get a job?
  • Story of Alice the manager, and Bob the developer who learns about a SQL injection in their legacy code. Bob is overwhelmed by security requirements.
  • “The problem with programmers is that you can never tell what a programmer is doing until it is too late.” — Seymour Cray
  • Security team objectives: be informed about product flow; help developers avoid writing and deploying security issues; stop being a bottleneck. So, focus secure development on the developer, not the security expert.

Notable Security Events

  • How to integrate security expertise into development in a more fluid way. Does this tie to “the spec”?
  • Developers don’t know that their changes are security relevant
  • Funny example of a training quiz that doesn’t have a learning goal
  • Noel Burch’s hierarchy of competence. From unconscious incompetence through unconscious competence.
  • Learning: step-by-step, instructions, theory; training: repetition, muscle memory; applying: real life doing.
  • Tie domains to notable events, use checklists for those notable events.
  • Specifically formed as “if you did X, do Y.” Each “Y” must be in the language of the developer, concise, testable, and supported by training. (A small sketch of such a mapping follows the list.)
  • Ran an experiment, got solid feedback.
  • Short training gets used more.
  • Crisply defined responsibilities by role.
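To make the “if you did X, do Y” structure concrete, here is a minimal sketch in Python of mapping notable security events to checklist items. The event names and checklist text are invented for illustration; they are not from the talk.

    # A minimal sketch, not the talk's tooling: notable security events mapped
    # to short, testable, developer-language checklist items ("if you did X, do Y").
    NOTABLE_EVENTS = {
        "added an HTTP endpoint": [
            "Add authentication and authorization checks, and tests for both.",
            "Validate inputs and encode outputs.",
        ],
        "changed how user data is stored": [
            "Confirm encryption at rest matches the data classification.",
            "Update the retention and deletion story.",
        ],
        "added a third-party dependency": [
            "Record the dependency and license; subscribe to its security advisories.",
        ],
    }

    def checklist_for(changes: list) -> list:
        """Return the combined checklist for the notable events in a change set."""
        items = []
        for event in changes:
            items.extend(NOTABLE_EVENTS.get(event, []))
        return items

    print(checklist_for(["added an HTTP endpoint", "added a third-party dependency"]))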

Star Trek’s Astromycologist

This is very cool: “Star Trek’s secret weapon: a scientist with a mushroom fetish bent on saving the planet.”

On Star Trek: Discovery, the character Lieutenant Paul Stamets is an “astromycologist” — a mushroom expert in outer space who is passionate about the power of fungi.

Stamets is actually named after a real U.S. scientist who spends his downtime tramping through the forests of B.C.’s Cortes Island.

The real Stamets has a few books. “Mycelium Running” is a fascinating read.

Fire and building codes

What’s more primordial than fire? It’s easy to think that fire is a static threat, and that defenses against it can be static. So it was surprising to see that changes in home design and contents are making fires spread much faster, and that the Canadian Commission on Building and Fire Codes is considering mandates for home sprinklers.

The CBC’s “Rise in fast-burning house fires heats up calls for sprinklers in homes” has a good discussion of the changing threat, the costs of mitigation, and the tradeoffs entailed.

The Resistance Has Infiltrated This Base!

In a memo issued Jan. 4 and rescinded about an hour later, Deputy Defense Secretary Pat Shanahan announced a new “Central Cloud Computing Program Office” — or “C3PO” — to “acquire the Joint Enterprise Defense Infrastructure (JEDI) Cloud.”

“C3PO is authorized to obligate funds as necessary in support of the JEDI Cloud,” Shanahan, a former Boeing Co. executive, wrote, managing to get a beloved droid from the space-themed movies and an equally popular fictional order of warriors into what otherwise would be a routine message in the Pentagon bureaucracy.

The memo was recalled because “it was issued in error,” according to Shanahan’s spokesman, Navy Captain Jeff Davis.

Thanks to MC for the story.

Not Bugs, but Features

[Mukhande Singh] said “real water” should expire after a few months. His does. “It stays most fresh within one lunar cycle of delivery,” he said. “If it sits around too long, it’ll turn green. People don’t even realize that because all their water’s dead, so they never see it turn green.”
(Unfiltered Fervor: The Rush to Get Off the Water Grid, Nellie Bowles, NYTimes, Dec 29, 2017.)
So those things turning the water green? Apparently, not bugs, but features. In unrelated “not understanding food science” news, don’t buy the Mellow sous vide machine. Features.