Author: adam

DNS Security

I’m happy to say that some new research by Jay Jacobs, Wade Baker, and me is now available, thanks to the Global Cyber Alliance.

They asked us to look at the value of DNS security, such as when your DNS provider uses threat intelligence to block malicious sites. It’s surprisingly effective for a tool that’s so easy to deploy. (Just point your resolver at a DNS server like 9.9.9.9.)
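On a typical Linux machine, for instance, that can be as little as one line of resolver configuration. (9.9.9.9 is Quad9’s public resolver; the exact file and mechanism vary by OS and by whether a network manager controls DNS on your system.)

```
# /etc/resolv.conf
nameserver 9.9.9.9
```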


The report is available from GCA’s site: Learn About How DNS Security Can Mitigate One-Third of Cyber Incidents

When security goes off the rails

New at Dark Reading is my article “When Security Goes Off the Rails.” Cybersecurity can learn a lot from the highly regulated world of rail travel. The most important lesson: the value of impartial analysis.

(As I watch the competing stories, “Baltimore City leaders blame NSA for ransomware attack,” and “N.S.A. Denies Its Cyberweapon Was Used in Baltimore Attack, Congressman Says,” I’d like to see an investigations capability that can give us facts.)

Polymorphic Warnings On My Mind

There’s a fascinating paper, “Tuning Out Security Warnings: A Longitudinal Examination of Habituation Through fMRI, Eye Tracking, and Field Experiments.” (It came out about a year ago.)

The researchers examined what happens in people’s brains when they look at warnings, and they found that:

Research in the fields of information systems and human-computer interaction has shown that habituation—decreased response to repeated stimulation—is a serious threat to the effectiveness of security warnings. Although habituation is a neurobiological phenomenon that develops over time, past studies have only examined this problem cross-sectionally. Further, past studies have not examined how habituation influences actual security warning adherence in the field. For these reasons, the full extent of the problem of habituation is unknown.

We address these gaps by conducting two complementary longitudinal experiments. First, we performed an experiment collecting fMRI and eye-tracking data simultaneously to directly measure habituation to security warnings as it develops in the brain over a five-day workweek. Our results show not only a general decline of participants’ attention to warnings over time but also that attention recovers at least partially between workdays without exposure to the warnings. Further, we found that updating the appearance of a warning—that is, a polymorphic design—substantially reduced habituation of attention.

Second, we performed a three-week field experiment in which users were naturally exposed to privacy permission warnings as they installed apps on their mobile devices. Consistent with our fMRI results, users’ warning adherence substantially decreased over the three weeks. However, for users who received polymorphic permission warnings, adherence dropped at a substantially lower rate and remained high after three weeks, compared to users who received standard warnings. Together, these findings provide the most complete view yet of the problem of habituation to security warnings and demonstrate that polymorphic warnings can substantially improve adherence.

It’s not short, but it’s not hard reading. Worthwhile if you care about usable security.
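The core idea of a polymorphic warning is simple enough to sketch in a few lines: vary the warning’s presentation on each exposure so that repeated viewings are never identical. This is my toy illustration, not the paper’s implementation, and the variant texts are made up:

```python
import itertools

# Hypothetical warning variants; a real design would also vary color,
# layout, and iconography, not just wording.
WARNING_VARIANTS = [
    "Warning: this site's certificate could not be verified.",
    "STOP: the identity of this site cannot be confirmed.",
    "Certificate error! Continuing may expose your data.",
]

# Cycle through the variants so each exposure looks different.
_variants = itertools.cycle(WARNING_VARIANTS)

def next_warning() -> str:
    """Return the next warning variant, changing on each exposure."""
    return next(_variants)
```

The point of the rotation is exactly what the fMRI data shows: habituation is driven by repetition of an identical stimulus, so breaking the repetition slows the decline in attention.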

Promoting Threat Modeling Work

Quick: are all the flowers the same species?

People regularly ask me to promote their threat modeling work, and I’m often happy to do so, even when I have questions about it. There are a few things I look at before I do, and I want to share them, because I want to promote work that moves the field forward so we all benefit. Some of the things I look for include:

  • Specifics. If you have a new threat modeling approach, that’s great. Describe the steps concisely and crisply. (If I can’t find a list in your slide deck or paper, it’s not concise and crisp.) If you have a new variant on a building block, or a new way to answer one of the four questions, be clear about that, so that those seeing your work can easily put it into context and know what’s different. The four-question framework makes this easy. For example, “this is an extension of ‘what are we working on,’ and you can use any method to answer the other questions.” Such a sentence makes it easy for those thinking of picking up your tool to put it immediately in context.
  • Names. Name your work. We don’t discuss Guido’s programming language with a strange dependence on whitespace, we discuss Python. For others to understand it, your work needs a name, not an adjective. There are at least half a dozen distinct ‘awesome’ ways to threat model being promoted today. Their promoters don’t make it easy to figure out what’s different from the many other awesome approaches. These descriptors also carry an implication that only they are awesome, and the rest, by elimination, must suck. Lastly, I don’t believe that anyone is promoting The Awesome Threat Modeling Method — if you are, I apologize, I was looking for an illustrative name that avoids calling anyone out.

    (Microsoft cast a pall over the development of threat modeling by having at least four different things labeled ‘the Microsoft approach to threat modeling.’ Those included DFD+STRIDE, Asset-entry, Patterns & Practices, and TAM, and variations on each.) Also, we discuss Python 2 versus Python 3, not ‘the way Guido talked about Python in 2014 in that video that got taken off YouTube because it used walk-on music.’

  • Respect. Be respectful of the work others have done, and the approaches they use. Threat modeling is a very big tent, and what doesn’t work for you may well work for others. This doesn’t mean ‘never criticize,’ but it does mean don’t cast shade. It’s fine to say ‘Threat modeling an entire system at once doesn’t work in agile teams at west coast software companies.’ It’s even better to say ‘Writing misuse cases got an NPS of -50 and Elevation of Privilege scored 15 at the same 6 west coast companies founded in the last 5 years.’
    I won’t promote work that tears down other work for the sake of tearing it down, or that says ‘this doesn’t work’ without specifics of the situation in which it didn’t work. Similarly, it’s fine to say “it took too long” if you say how long it took to do which steps, and, ideally, quantify ‘too long.’

I admit that I have failed at each of these in the past, and endeavor to do better. Specifics, names, and respectful conversation help us understand the field of flowers.

What else should we do better as we improve the ways we tackle threat modeling?

Photo by Stephanie Krist on Unsplash.

Testing Building Blocks

There are a couple of new, short (four-page), interesting papers from a team at KU Leuven.

What makes these interesting is that they are digging into better-formed building blocks of threat modeling, comparing them to requirements, and analyzing how they stack up.

The work is centered on threat modeling for privacy and data protection, but what they look at includes STRIDE, CAPEC, and CWE. What’s interesting is not just the results of the comparison, but that they compare and contrast between techniques (DFD variants vs. CARiSMA extended; STRIDE vs. CAPEC or OWASP). Comparing building blocks at a granular level lets us ask “what went wrong in that threat modeling project?” and tweak one part of it, rather than throwing out threat modeling or trying to train people in an entire method.

Episode 9 Spoilers

Today is the last Star Wars Day before Episode 9 comes out and brings the Skywalker saga to its end.

Film critics have long talked about how Star Wars is about Luke’s Hero’s Journey, or how the core trilogy is about his relationship to his father, but they’re wrong. Also, I regularly say that Star Wars is fundamentally the story of information disclosure: from the opening shot of Princess Leia’s ship being pursued through the climactic destruction of the Death Star, it’s an information security metaphor. But I too am wrong.

Star Wars is a story of how power corrupts.

The prophecy, that someone will bring (or restore) balance to the Force, was never precisely stated in the films*; there were only allusions. Variously, the one expected to fulfill it was Anakin, and then Luke, and then everyone who’d heard of the prophecy was either its presumptive subject or dead. But the Force is not out of balance in a way that a Skywalker can fix. The Force is out of balance because of the Skywalkers, and it is only through the ending of their line that balance can be restored.

Justifying that claim requires some of the story from outside the movies. The story starts with a Sith, Darth Plagueis. He was interested in extending life through control of the Force. He was also master to Darth Sidious, who later became the Emperor.

The virgin birth of Anakin Skywalker was not just a cheesy adaptation of Christian symbology; it was a massive head-fake that, without ever being explicit, got people treating Anakin as if he were a savior figure who died to answer for the sins of the world. But that’s not the reason for his fatherless birth.

It was the experiments Plagueis did that led to the creation of Anakin Skywalker, and it was Plagueis who set the saga in motion. Those actions unbalanced the Force, and the prophecy speaks of one who will bring back the balance.

The extreme and exceptional power of the Skywalkers breaks both the Jedi and the Sith. This is a side effect of the Force being out of balance. The way to restore balance to the Force is to end their line, and that is what Rey will do by killing Kylo Ren, son of Leia Skywalker.

Star Wars is a story of how power corrupts, and how heroic quests for justice can both restore the world and cause tremendous damage along the way.

As for the final film’s title, it’s either a final head-fake or a reference to Skywalker as a *title*: those who quest for justice in the galaxy.


* It was retconned last month; older versions are tracked in this Wiki.

Also, I want to acknowledge that Emily Asher-Perrin first put forth the explanation that Skywalker is a title, in her post “Hey, Star Wars: Episode IX — Don’t Retcon Rey Into a Skywalker.”

If you like this, I have plenty more geeky Star Wars content.

The White Box Essays (Book Review)

The White Box, and its accompanying book, “The White Box Essays” are a FANTASTIC resource, and I wish I’d had them available to me as I designed Elevation of Privilege and helped with Control-Alt-Hack.

The book is for people who want to make games, and it does a lovely job of teaching you how, including things like the relationship between story and mechanics, the role of luck, how the physical elements teach the players, and the tradeoffs that you as a designer make as you design, prototype, test, refine, and then get your game to market. On the go-to-market side, there are chapters on self-publishing, crowdfunding, and what needs to go on the box.

The Essays don’t tell you how to create a specific game; they show you how to think about the choices you can make and their impact on the game. For example:

Consider these three examples of ways randomness might be used (or not) in a design:

  • Skill without randomness (e.g., chess). With no random elements, skill is critical. The more skilled a player is, the greater their odds of winning. The most skilled player will beat a new player close to 100% of the time.
  • Both skill and randomness (e.g., poker). Poker has many random elements, but a skilled player is better at choosing how to deal with them than an unskilled one. The best poker player can play with new players and win most of the time, but the new players are almost certain to win a few big hands. (This is why the World Series of Poker is larger than the World Chess Championship: new players feel like they have a chance against the pros at poker. Since more players feel they have a shot at winning, more of them play, and the game is more popular.)
  • Randomness without skill (e.g., coin-flipping). There is no way to apply skill to coin-flipping and even the “best” coin flipper in the world can’t do better than 50/50, even against a new player.

The chapter goes on to talk about how randomness allows players to both claim credit and avoid blame, when players make choices about die rolls and the impact on gameplay, and a host of other tradeoffs.
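The skill-versus-randomness spectrum in those three examples is easy to see in a quick simulation. This is my own sketch, not from the book: each player’s score is a fixed “skill” number plus a random “luck” term, and turning the luck term up or down moves the game from chess toward coin-flipping:

```python
import random

def play_game(skill_a: float, skill_b: float, luck: float,
              rng: random.Random) -> str:
    """One game: each score is the player's skill plus a random luck term.

    luck=0 is pure skill (chess); a large luck term approaches coin-flipping.
    """
    score_a = skill_a + luck * rng.random()
    score_b = skill_b + luck * rng.random()
    return "A" if score_a >= score_b else "B"

def win_rate(skill_a: float, skill_b: float, luck: float,
             trials: int = 10_000, seed: int = 0) -> float:
    """Fraction of games player A wins over many trials."""
    rng = random.Random(seed)
    wins = sum(play_game(skill_a, skill_b, luck, rng) == "A"
               for _ in range(trials))
    return wins / trials
```

With `luck=0`, a 0.7-skill player beats a 0.3-skill player every single game; with `luck=1.0` the same matchup lands around an 82% win rate, so the weaker player gets real wins; and with equal skills any nonzero luck term makes it a coin flip. That middle regime is the poker sweet spot the book describes.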

The writing is solid: it’s as long as it needs to be, and then moves along (like a good game). What do you need to do, and why? How do you structure your work? If you’ve ever thought about designing a game, you should buy this book. But more than the book, there’s a boxed set, with meeples, tokens, cubes, and disks for you to use as you prototype. (And in the book is a discussion of how to use them, and the impact of your choices on production costs.)

I cannot say enough good things about this. After I did my first game design work, I went and looked for a collection of knowledge like this, and it didn’t exist. I’m glad it now does.

Image from Atlas Games.

‘No need’ to tell the public(?!?)

When Andrew and I wrote The New School, and talked about the need to learn from other professions, we didn’t mean for doctors to learn from ‘cybersecurity thought leaders’ about hiding their problems:

…Only one organism grew back. C. auris.

It was spreading, but word of it was not. The hospital, a specialty lung and heart center that draws wealthy patients from the Middle East and around Europe, alerted the British government and told infected patients, but made no public announcement.

“There was no need to put out a news release during the outbreak,” said Oliver Wilkinson, a spokesman for the hospital.

This hushed panic is playing out in hospitals around the world. Individual institutions and national, state and local governments have been reluctant to publicize outbreaks of resistant infections, arguing there is no point in scaring patients — or prospective ones…

Dr. Silke Schelenz, Royal Brompton’s infectious disease specialist, found the lack of urgency from the government and hospital in the early stages of the outbreak “very, very frustrating.”

“They obviously didn’t want to lose reputation,” Dr. Schelenz said. “It hadn’t impacted our surgical outcomes.” (“A Mysterious Infection, Spanning the Globe in a Climate of Secrecy“, NYTimes April 6, 2019)

This is the wrong way to think about the problem. Mr. Wilkinson (as quoted) is wrong. There is a fiduciary duty to tell patients that they are at increased risk of C. auris if they go to his hospital.

Moreover, there is a need to tell the public about these problems. Our choices, as a society, kill people. We kill people when we allow antibiotics to be used to make fatter cows or when we allow antifungals to be used on crops.

We can adjust those choices, but only if we know the consequences we are accepting. Hiding outcomes hinders cybersecurity, and it’s a bad model for medicine or public policy.

(Picture courtesy of Clinical Advisor. I am somewhat sorry for my use of such a picture here, where it’s unexpected.)
