Category: Usability

Promoting Threat Modeling Work

Quick: are all the flowers the same species?

People regularly ask me to promote their threat modeling work, and I’m often happy to do so, even when I have questions about it. Before I do, though, I look for a few things, and I want to share them because I want to promote work that moves the field forward, so we all benefit. Some of the things I look for include:

  • Specifics. If you have a new threat modeling approach, that’s great. Describe the steps concisely and crisply. (If I can’t find a list in your slide deck or paper, it’s not concise and crisp.) If you have a new variant on a building block or a new way to answer one of the four questions, be clear about that, so that those seeing your work can easily put it into context, and know what’s different. The four question framework makes this easy. For example, “this is an extension of ‘what are we working on,’ and you can use any method to answer the other questions.” Such a sentence makes it easy for those thinking of picking up your tool to put it immediately in context.
  • Names. Name your work. We don’t discuss Guido’s programming language with a strange dependence on whitespace; we discuss Python. For others to understand it, your work needs a name, not an adjective. There are at least half a dozen distinct ‘awesome’ ways to threat model being promoted today, and their promoters don’t make it easy to figure out what’s different from the many other awesome approaches. These descriptors also carry an implication that only they are awesome, and the rest, by elimination, must suck. Lastly, I don’t believe that anyone is promoting The Awesome Threat Modeling Method — if you are, I apologize; I was looking for an illustrative name that avoids calling anyone out.

    (Microsoft cast a pall over the development of threat modeling by having at least four different things labeled ‘the Microsoft approach to threat modeling.’ Those included DFD+STRIDE, Asset-entry, Patterns & Practices, and TAM, plus variations on each.) Also, we discuss Python 2 versus Python 3, not ‘the way Guido talked about Python in 2014 in that video that got taken off YouTube because it used walk-on music.’

  • Respect. Be respectful of the work others have done, and the approaches they use. Threat modeling is a very big tent, and what doesn’t work for you may well work for others. This doesn’t mean ‘never criticize,’ but it does mean don’t throw shade. It’s fine to say ‘Threat modeling an entire system at once doesn’t work in agile teams at west coast software companies.’ It’s even better to say ‘Writing misuse cases got an NPS of -50 and Elevation of Privilege scored 15 at the same 6 west coast companies founded in the last 5 years.’
    I won’t promote work that tears down other work for the sake of tearing it down, or that dismisses it by saying ‘this doesn’t work’ without specifics of the situation in which it didn’t work. Similarly, it’s fine to say “it took too long” if you say how long it took to do what steps, and, ideally, quantify ‘too long.’

I admit that I have failed at each of these in the past, and endeavor to do better. Specifics, labels, and respectful conversation help us understand the field of flowers.

What else should we do better as we improve the ways we tackle threat modeling?

Photo by Stephanie Krist on Unsplash.

The White Box Essays (Book Review)

The White Box and its accompanying book, “The White Box Essays,” are a FANTASTIC resource, and I wish I’d had them available to me as I designed Elevation of Privilege and helped with Control-Alt-Hack.

The book is for people who want to make games, and it does a lovely job of teaching you how, including things like the relationship between story and mechanics, the role of luck, how the physical elements teach the players, and the tradeoffs that you as a designer make as you design, prototype, test, refine, and then get your game to market. On the go-to-market side, there are chapters on self-publishing, crowdfunding, and what needs to be on a box.

The Essays don’t tell you how to create a specific game; they show you how to think about the choices you can make, and their impact on the game. For example:

Consider these three examples of ways randomness might be used (or not) in a design (a toy simulation after the list makes the contrast concrete):

  • Skill without randomness (e.g., chess). With no random elements, skill is critical. The more skilled a player is, the better their odds of winning. The most skilled player will beat a new player close to 100% of the time.
  • Both skill and randomness (e.g., poker). Poker has many random elements, but a skilled player is better at choosing how to deal with those random elements than an unskilled one. The best poker player can play with new players and win most of the time, but the new players are almost certain to win a few big hands. (This is why the World Series of Poker is so much larger than the World Chess Championship — new players feel like they have a chance against the pros at poker. Since more players feel they have a shot at winning, more of them play, and the game is more popular.)
  • Randomness without skill (e.g., coin-flipping). There is no way to apply skill to coin-flipping and even the “best” coin flipper in the world can’t do better than 50/50, even against a new player.
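The book makes this point in prose; as a concrete (and entirely unofficial) illustration, here is a toy Monte Carlo sketch in Python. Each player’s score blends a fixed skill level with a random draw, and a skill_weight knob moves the game along the spectrum above. The skill levels and weights are my own illustrative assumptions, not anything from the book.

```python
# Toy simulation of the skill/randomness spectrum (illustrative only).
import random

def play_game(skill_a, skill_b, skill_weight):
    """One game: each player's score blends fixed skill with a random draw."""
    score_a = skill_weight * skill_a + (1 - skill_weight) * random.random()
    score_b = skill_weight * skill_b + (1 - skill_weight) * random.random()
    return score_a > score_b  # True when player A wins

def expert_win_rate(skill_weight, trials=100_000):
    expert, novice = 0.7, 0.3  # assumed skill levels
    wins = sum(play_game(expert, novice, skill_weight) for _ in range(trials))
    return wins / trials

for label, weight in [("chess-like (all skill)", 1.0),
                      ("poker-like (skill + luck)", 0.5),
                      ("coin flip (all luck)", 0.0)]:
    print(f"{label}: expert wins {expert_win_rate(weight):.0%} of games")
```

With these assumptions the expert wins every chess-like game, roughly 80% of the poker-like games, and half of the coin flips — the designer’s tradeoff in miniature.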

The chapter goes on to talk about how randomness lets players both claim credit and avoid blame, what happens when players make choices about die rolls and how that affects gameplay, and a host of other tradeoffs.

The writing is solid: it’s as long as it needs to be, and then moves along (like a good game). What do you need to do, and why? How do you structure your work? If you’ve ever thought about designing a game, you should buy this book. But more than the book, there’s a boxed set, with meeples, tokens, cubes, and disks for you to use as you prototype. (And in the book is a discussion of how to use them, and the impact of your choices on production costs.)

I cannot say enough good things about this. After I did my first game design work, I went and looked for a collection of knowledge like this, and it didn’t exist. I’m glad it now does.

Image from Atlas Games.

Incentives and Multifactor Authentication

It’s well known that adoption rates for multi-factor authentication are poor. For example, “Over 90 percent of Gmail users still don’t use two-factor authentication.”

Someone mentioned to me that some games offer in-game bonuses for enabling it: you get access to special rooms in Star Wars: The Old Republic, and there’s a special emote in Fortnite (above).

How well do these incentives work? Are there numbers out there?

Pivots and Payloads

SANS has announced a new boardgame, “Pivots and Payloads,” that “takes you through pen test methodology, tactics, and tools with many possible setbacks that defenders can utilize to hinder forward progress for a pen tester or attacker. The game helps you learn while you play. It’s also a great way to showcase to others what pen testers do and how they do it.”

If you register for their webinar, which is on Wednesday the 19th, they’ll send you poster versions that convert to boardgames.

If you’re interested in serious games for security, I maintain a list at https://adam.shostack.org/games.html.

John Harrison’s Struggle Continues

Today is John Harrison’s 325th birthday, and Google has a doodle to celebrate. Harrison was rescued from historical obscurity by Dava Sobel’s excellent book Longitude, which documented Harrison’s struggle to first build and then demonstrate the superiority of his clocks to the mathematical and astronomical solutions heralded by the leading scientists of the day. Their methods were complex, tedious, and hard to execute from the deck of a ship.

To celebrate, I’d like to share this photo I took at the Royal Museums Greenwich in 2017:

Harrison Worksheet framed

(A full-size version is on Flickr.)

As the placard says, “First produced in 1768, this worksheet gave navigators an easy process for calculating their longitude using new instruments and the Nautical Almanac. Each naval ship’s master was required to train with qualified teachers in London or Portsmouth in order to gain a certificate of navigational competence.” (Emphasis added.)

Human Factors at BlackHat

As a member of the BlackHat Review Board, I would love to see more work on Human Factors presented there. The 2018 call for papers is open and closes April 9th. Over the past few years, I think we’ve developed an interesting track with good material year over year.

I wrote a short blog post on what we look for.

The BlackHat CFP calls for work which has not been published elsewhere. We prefer fully original work, but will consider a new talk that explains, for the BlackHat audience, work you’ve done. Oftentimes, BlackHat does not count as “publication” in the view of academic program committees, and so you can present something at BlackHat that you plan to publish later. (You should, of course, check with the other venue, and disclose to BlackHat that you’re doing so.)

If you’re considering submitting, I encourage you to read all three of the recommendation posts at https://usa-briefings-cfp.blackhat.com/

The Readability of Scientific Texts

There’s an interesting new paper at bioRxiv, “The Readability Of Scientific Texts Is Decreasing Over Time.”

Lower readability is also a problem for specialists (22, 23, 24). This was explicitly shown by Hartley (22) who demonstrated that rewriting scientific abstracts, to improve their readability, increased academics’ ability to comprehend them. While science is complex, and some jargon is unavoidable (25), this does not justify the continuing trend that we have shown.

Ironically, the paper is released as a PDF, which is hard to read on a mobile phone. There’s a tool, pandoc, which can easily create HTML versions from their LaTeX source. I encourage everyone who cares about their work being read to create HTML and ebook versions.
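To show how little work that conversion can be, here’s a minimal sketch driving pandoc from Python. The filenames are hypothetical, and it assumes pandoc is installed and the paper was written in LaTeX; --standalone emits a complete document and --mathjax renders any math in the HTML output.

```python
# Convert a LaTeX source file to HTML and EPUB with pandoc
# (filenames are placeholders; requires pandoc on the PATH).
import subprocess

# HTML for reading in a browser or on a phone
subprocess.run(["pandoc", "--standalone", "--mathjax",
                "paper.tex", "-o", "paper.html"], check=True)

# EPUB for e-readers
subprocess.run(["pandoc", "--standalone",
                "paper.tex", "-o", "paper.epub"], check=True)
```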

“Comparing the Usability of Cryptographic APIs”

Obstacles Frame

(The abstract:) Potentially dangerous cryptography errors are well documented in many applications. Conventional wisdom suggests that many of these errors are caused by cryptographic Application Programming Interfaces (APIs) that are too complicated, have insecure defaults, or are poorly documented. To address this problem, researchers have created several cryptographic libraries that they claim are more usable; however, none of these libraries have been empirically evaluated for their ability to promote more secure development. This paper is the first to examine both how and why the design and resulting usability of different cryptographic libraries affects the security of code written with them, with the goal of understanding how to build effective future libraries. We conducted a controlled experiment in which 256 Python developers recruited from GitHub attempt common tasks involving symmetric and asymmetric cryptography using one of five different APIs. We examine their resulting code for functional correctness and security, and compare their results to their self-reported sentiment about their assigned library. Our results suggest that while APIs designed for simplicity can provide security benefits—reducing the decision space, as expected, prevents choice of insecure parameters—simplicity is not enough. Poor documentation, missing code examples, and a lack of auxiliary features such as secure key storage caused even participants assigned to simplified libraries to struggle with both basic functional correctness and security. Surprisingly, the availability of comprehensive documentation and easy-to-use code examples seems to compensate for more complicated APIs in terms of functionally correct results and participant reactions; however, this did not extend to security results. We find it particularly concerning that for about 20% of functionally correct tasks, across libraries, participants believed their code was secure when it was not. Our results suggest that while new cryptographic libraries that want to promote effective security should offer a simple, convenient interface, this is not enough: they should also, and perhaps more importantly, ensure support for a broad range of common tasks and provide accessible documentation with secure, easy-to-use code examples.

It’s interesting that even when developers took care to consider usability of their APIs, usability testing revealed serious issues. But it’s not surprising. The one constant of usability testing is that people surprise you.
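To make the “decision space” point concrete, here’s a hedged sketch (my own illustration, not one of the paper’s tasks) contrasting a high-level API, Fernet from the pyca/cryptography library, with the same library’s low-level primitives, where the developer must choose the mode and parameters themselves.

```python
# Illustrative contrast using the pyca/cryptography package
# (pip install cryptography); not code from the paper's study.

# High-level API: Fernet chooses the algorithm, mode, IV, and MAC,
# so there are no insecure parameters left to pick.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
token = Fernet(key).encrypt(b"attack at dawn")
assert Fernet(key).decrypt(token) == b"attack at dawn"

# Low-level primitives: every choice below is the developer's to get
# wrong. ECB is used deliberately as the classic insecure choice: it
# needs no IV, and identical plaintext blocks yield identical
# ciphertext blocks, leaking patterns.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

raw_key = os.urandom(32)
encryptor = Cipher(algorithms.AES(raw_key), modes.ECB()).encryptor()
ciphertext = encryptor.update(b"sixteen byte msg") + encryptor.finalize()
```

Both snippets “work,” which is exactly the paper’s concern: functional correctness tells you nothing about whether the parameter choices were safe.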

The paper is: “Comparing the Usability of Cryptographic APIs,” Yasemin Acar (CISPA, Saarland University), Michael Backes (CISPA, Saarland University & MPI-SWS), Sascha Fahl (CISPA, Saarland University), Simson Garfinkel (National Institute of Standards and Technology), Doowon Kim (University of Maryland), Michelle Mazurek (University of Maryland), Christian Stransky (CISPA, Saarland University), The Increasingly-misnamed Oakland Conference, 2017.

How Not to Design an Error Message

SC07 fire alarm

The voice shouts out: “Detector error, please see manual.” Just once, then again a few hours later. And when I did see the manual, I discovered that it means “Alarm has reached its End of Life.”

No, really. That’s how my fire alarm told me that it’s at its end of life: by telling me to read the manual. Why doesn’t it say “device has reached end of life”? That would be direct and to the point. But no. When you press the button, it says “please see manual.” Now, this was a 2009 device, so maybe, just maybe, there was a COGS issue in how much storage was needed.

But sheesh. Warning messages should be actionable, explanatory and tested. At least it was loud and annoying.
