Category: Usability

Passwords Advice

Bruce Marshall has put together a comparison of OWASP ASVS v3 and v4 password requirements: OWASP ASVS 3.0 & 4.0 Comparison. This is useful in and of itself, and it’s also the sort of thing that more standards bodies should do by default.

It’s all too common to have a new standard come out without clear diffs. It’s all too common for new standards to build closely on other standards without clearly saying what they’ve altered and why. This leaves the ‘what’s different’ analysis to each user of the standard, and it increases the probability of errors. Both failings drive cost and waste effort. We should judge standards on whether they deliver these important contextual documents.

Polymorphic Warnings On My Mind

There’s a fascinating paper, “Tuning Out Security Warnings: A Longitudinal Examination of Habituation Through fMRI, Eye Tracking, and Field Experiments.” (It came out about a year ago.)

The researchers examined what happens in people’s brains when they look at warnings, and they found that:

Research in the fields of information systems and human-computer interaction has shown that habituation—decreased response to repeated stimulation—is a serious threat to the effectiveness of security warnings. Although habituation is a neurobiological phenomenon that develops over time, past studies have only examined this problem cross-sectionally. Further, past studies have not examined how habituation influences actual security warning adherence in the field. For these reasons, the full extent of the problem of habituation is unknown.

We address these gaps by conducting two complementary longitudinal experiments. First, we performed an experiment collecting fMRI and eye-tracking data simultaneously to directly measure habituation to security warnings as it develops in the brain over a five-day workweek. Our results show not only a general decline of participants’ attention to warnings over time but also that attention recovers at least partially between workdays without exposure to the warnings. Further, we found that updating the appearance of a warning—that is, a polymorphic design—substantially reduced habituation of attention.

Second, we performed a three-week field experiment in which users were naturally exposed to privacy permission warnings as they installed apps on their mobile devices. Consistent with our fMRI results, users’ warning adherence substantially decreased over the three weeks. However, for users who received polymorphic permission warnings, adherence dropped at a substantially lower rate and remained high after three weeks, compared to users who received standard warnings. Together, these findings provide the most complete view yet of the problem of habituation to security warnings and demonstrate that polymorphic warnings can substantially improve adherence.

It’s not short, but it’s not hard reading. Worthwhile if you care about usable security.
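
To make “polymorphic” concrete, here’s a minimal sketch in Python. This is my own illustration, not anything from the paper: the variant fields and values are hypothetical. The idea is simply that the same warning is rendered with a different appearance on each exposure, never repeating the most recent variant.

    import random

    # Illustrative variant set for a polymorphic warning. The fields and
    # values here are hypothetical; they are not taken from the paper.
    VARIANTS = [
        {"border": "red",    "icon": "triangle", "layout": "banner"},
        {"border": "orange", "icon": "shield",   "layout": "modal"},
        {"border": "yellow", "icon": "lock",     "layout": "toast"},
    ]

    def next_warning_variant(history):
        """Pick an appearance, never repeating the most recently shown one."""
        candidates = [v for v in VARIANTS if not history or v != history[-1]]
        choice = random.choice(candidates)
        history.append(choice)
        return choice

    history = []
    for _ in range(5):
        print(next_warning_variant(history))

The point isn’t these particular fields; it’s that varying presentation across exposures is cheap to build, and the paper’s results suggest it measurably slows habituation.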

Promoting Threat Modeling Work

Quick: are all the flowers the same species?

People regularly ask me to promote their threat modeling work, and I’m often happy to do so, even when I have questions about it. There are a few things I look at before I do, and I want to share some of them, because I want to promote work that moves the field forward, so we all benefit from it. Some of the things I look for include:

  • Specifics. If you have a new threat modeling approach, that’s great. Describe the steps concisely and crisply. (If I can’t find a list in your slide deck or paper, it’s not concise and crisp.) If you have a new variant on a building block or a new way to answer one of the four questions, be clear about that, so that those seeing your work can easily put it into context, and know what’s different. The four question framework makes this easy. For example, “this is an extension of ‘what are we working on,’ and you can use any method to answer the other questions.” Such a sentence makes it easy for those thinking of picking up your tool to put it immediately in context.
  • Names. Name your work. We don’t discuss Guido’s programming language with a strange dependence on whitespace, we discuss Python. For others to understand it, your work needs a name, not an adjective. There are at least half a dozen distinct ‘awesome’ ways to threat model being promoted today. Their promoters don’t make it easy to figure out what’s different from the many other awesome approaches. These descriptors also carry an implication that only they are awesome, and the rest, by elimination, must suck. Lastly, I don’t believe that anyone is promoting The Awesome Threat Modeling Method — if you are, I apologize, I was looking for an illustrative name that avoids calling anyone out.

    (Microsoft cast a pall over the development of threat modeling by having at least four different things labeled ‘the Microsoft approach to threat modeling.’ Those included DFD+STRIDE, asset-entry, Patterns & Practices, and TAM, and variations on each.) Also, we discuss Python 2 versus Python 3, not ‘the way Guido talked about Python in 2014 in that video that got taken off YouTube because it used walk-on music.’

  • Respect. Be respectful of the work others have done, and the approaches they use. Threat modeling is a very big tent, and what doesn’t work for you may well work for others. This doesn’t mean ‘never criticize,’ but it does mean don’t cast shade. It’s fine to say ‘Threat modeling an entire system at once doesn’t work in agile teams at west coast software companies.’ It’s even better to say ‘Writing misuse cases got an NPS of -50 and Elevation of Privilege scored 15 at the same 6 west coast companies founded in the last 5 years.’
    I won’t promote work that tears down other work for the sake of tearing it down, or that dismisses it by saying ‘this doesn’t work’ without specifics of the situation in which it didn’t work. Similarly, it’s fine to say “it took too long” if you say how long it took to do which steps, and, ideally, quantify ‘too long.’

I admit that I have failed at each of these in the past, and endeavor to do better. Specifics, names, and respect help us understand the field of flowers.

What else should we do better as we improve the ways we tackle threat modeling?

Photo by Stephanie Krist on Unsplash.

The White Box Essays (Book Review)

The White Box and its accompanying book, “The White Box Essays,” are a FANTASTIC resource, and I wish I’d had them available to me as I designed Elevation of Privilege and helped with Control-Alt-Hack.

The book is for people who want to make games, and it does a lovely job of teaching you how, covering things like the relationship between story and mechanics, the role of luck, how the physical elements teach the players, and the tradeoffs you make as a designer as you design, prototype, test, refine, and then get your game to market. On the go-to-market side, there are chapters on self-publishing, crowdfunding, and what needs to be on a box.

The Essays don’t tell you how to create a specific game; they show you how to think about the choices you can make and their impact on the game. For example:

Consider these three examples of ways randomness might be used (or not) in a design:

  • Skill without randomness (e.g., chess). With no random elements, skill is critical. The more skilled a player is, the greater their odds to win. The most skilled player will beat a new player close to 100% of the time.
  • Both skill and randomness (e.g., poker). Poker has many random elements, but a skilled player is better at choosing how to deal with those random elements than an unskilled one. The best poker player can play with new players and win most of the time, but the new players are almost certain to win a few big hands. (This is why there is a larger World Series of Poker than World Chess Championship — new players feel like they have a chance against the pros at poker. Since more players feel they have a shot at winning, more of them play, and the game is more popular.)
  • Randomness without skill (e.g., coin-flipping). There is no way to apply skill to coin-flipping and even the “best” coin flipper in the world can’t do better than 50/50, even against a new player.

The chapter goes on to talk about how randomness lets players both claim credit and avoid blame, when players make choices about die rolls and their impact on gameplay, and a host of other tradeoffs.
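
If you want to see that skill-versus-randomness continuum in action, here’s a quick simulation. It’s my own sketch, not the book’s: the skill levels and the blending model are illustrative assumptions, where a match outcome mixes a fixed skill gap with a uniform luck draw.

    import random

    # Illustrative model (mine, not the book's): each player's score blends
    # a fixed skill level with a uniform luck draw, weighted by `luck`
    # (0.0 = pure skill, 1.0 = pure chance).
    def expert_win_rate(luck, trials=100_000, seed=0):
        rng = random.Random(seed)
        expert, novice = 0.6, 0.4  # hypothetical skill levels
        wins = 0
        for _ in range(trials):
            a = (1 - luck) * expert + luck * rng.random()
            b = (1 - luck) * novice + luck * rng.random()
            wins += a > b
        return wins / trials

    for luck, label in [(0.0, "chess-like"), (0.5, "poker-like"), (1.0, "coin flip")]:
        print(f"{label}: expert wins {expert_win_rate(luck):.0%} of matches")

With no luck the expert wins every match; at an even blend they win most but not all; at pure chance it’s 50/50, which is exactly the continuum the chapter describes.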

The writing is solid: it’s as long as it needs to be, and then moves along (like a good game). What do you need to do, and why? How do you structure your work? If you’ve ever thought about designing a game, you should buy this book. But more than the book, there’s a boxed set, with meeples, tokens, cubes, and disks for you to use as you prototype. (And in the book is a discussion of how to use them, and the impact of your choices on production costs.)

I cannot say enough good things about this. After I did my first game design work, I went and looked for a collection of knowledge like this, and it didn’t exist. I’m glad it now does.

Image from Atlas Games.

Incentives and Multifactor Authentication

It’s well known that adoption rates for multi-factor authentication are poor. For example, “Over 90 percent of Gmail users still don’t use two-factor authentication.”

Someone mentioned to me that some games offer bonuses for turning it on: you get access to special rooms in Star Wars: The Old Republic, and there’s a special emote in Fortnite (above).

How well do these incentives work? Are there numbers out there?

Pivots and Payloads

SANS has announced a new boardgame, “Pivots and Payloads,” that “takes you through pen test methodology, tactics, and tools with many possible setbacks that defenders can utilize to hinder forward progress for a pen tester or attacker. The game helps you learn while you play. It’s also a great way to showcase to others what pen testers do and how they do it.”

If you register for their webinar, which is on Wednesday the 19th, they’ll send you poster versions that convert to boardgames.

If you’re interested in serious games for security, I maintain a list at https://adam.shostack.org/games.html.

John Harrison’s Struggle Continues

Today is John Harrison’s 325th birthday, and Google has a doodle to celebrate. Harrison was rescued from historical obscurity by Dava Sobel’s excellent book Longitude, which documented Harrison’s struggle first to build, and then to demonstrate the superiority of, his clocks against the mathematical and astronomical solutions heralded by the leading scientists of the day. Their methods were complex, tedious, and hard to execute from the deck of a ship.

To celebrate, I’d like to share this photo I took at the Royal Museums Greenwich in 2017:

Harrison Worksheet framed

(A full-size version is on Flickr.)

As the placard says, “First produced in 1768, this worksheet gave navigators an easy process for calculating their longitude using new instruments and the Nautical Almanac. Each naval ship’s master was required to train with qualified teachers in London or Portsmouth in order to gain a certificate of navigational competence.” (Emphasis added.)

Human Factors at BlackHat

As a member of the BlackHat Review Board, I would love to see more work on Human Factors presented there. The 2018 call for papers is open and closes April 9th. Over the past few years, I think we’ve developed an interesting track with good material year over year.

I wrote a short blog post on what we look for.

The BlackHat CFP calls for work that has not been published elsewhere. We prefer fully original work, but will consider a new talk that explains, for the BlackHat audience, work you’ve done. Oftentimes, BlackHat does not count as “publication” in the view of academic program committees, so you can present something at BlackHat that you plan to publish later. (You should, of course, check with the other venue, and disclose to BlackHat that you’re doing so.)

If you’re considering submitting, I encourage you to read all three recommendations posts at https://usa-briefings-cfp.blackhat.com/

The Readability of Scientific Texts

There’s an interesting new paper at bioRxiv, “The readability of scientific texts is decreasing over time.”

Lower readability is also a problem for specialists (22, 23, 24). This was explicitly shown by Hartley (22) who demonstrated that rewriting scientific abstracts, to improve their readability, increased academics’ ability to comprehend them. While science is complex, and some jargon is unavoidable (25), this does not justify the continuing trend that we have shown.

Ironically, the paper is released as a PDF, which is hard to read on a mobile phone. There’s a tool, pandoc, which can easily create HTML versions from their LaTeX source. I encourage everyone who cares about their work being read to create HTML and ebook versions.
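
The whole conversion is short enough to show. This is a minimal sketch, driven from Python, assuming pandoc is installed and that the LaTeX source is named paper.tex (a hypothetical filename, as is the title metadata):

    import subprocess

    # Convert a LaTeX source to standalone HTML and to EPUB with pandoc.
    # "paper.tex" and the title value are placeholders for your own files.
    for out in ["paper.html", "paper.epub"]:
        subprocess.run(
            ["pandoc", "paper.tex", "--standalone",
             "--metadata", "title=Paper", "-o", out],
            check=True,
        )

pandoc infers the output format from the extension of the file passed to -o, so the same invocation produces both versions.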
