Category: Security

Actionable Followups from the Capital One Breach

Alexandre Sieira has some very interesting and actionable advice from looking at the Capital One breach in “Learning from the July 2019 Capital One Breach.”

Alex starts by saying “The first thing I want to make clear is that I sympathize with the Capital One security and operations teams at this difficult time. Capital One is a well-known innovator in cloud security, has very competent people dedicated to this and has even developed high quality open source solutions such as Cloud Custodian that benefit the entire community.” I share that perspective – I’ve spent a lot of time at OWASP, DevSecCon and other events talking with the smart folks at Capital One.

One thing I’ll add to his post is that the advice to “Avoid using * like the plague” is easy to implement with static analysis, by which I mean grep or diff in a commit hook. Similarly, if you want to block the grant of ListBuckets, you can look for that specific string.

Over time, you can evolve to check that the permissions are from a small subset of permissions you agree should be granted. One of the nice things about the agile approach to security is that you can start tomorrow, and then evolve.
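To make that concrete, here’s a minimal sketch of such a commit-hook check in Python. Everything specific in it is an illustrative assumption rather than anything from Alex’s post or Capital One’s environment: that policies are committed as JSON files, that “s3:ListBuckets” is the exact string you want to block (it’s the post’s example, not necessarily the real AWS action name), and that the allowlist is the small agreed subset a team converges on.

```python
#!/usr/bin/env python3
"""Minimal sketch of a commit-hook check for IAM policy JSON files."""
import json
import sys

# Step 1: specific grants to reject outright, per the post's example.
# (Illustrative string; adjust to the real action names you care about.)
BLOCKED_ACTIONS = {"s3:ListBuckets"}

# Step 2 (the evolution): actions must come from an agreed allowlist.
# Hypothetical contents -- the "small subset of permissions" a team agrees on.
ALLOWED_ACTIONS = {"s3:GetObject", "s3:PutObject"}


def check_policy(path):
    """Return a list of human-readable problems found in one policy file."""
    problems = []
    with open(path) as f:
        policy = json.load(f)
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement is valid JSON too
        statements = [statements]
    for stmt in statements:
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        for action in actions:
            if "*" in action:  # avoid * like the plague
                problems.append(f"{path}: wildcard action {action!r}")
            elif action in BLOCKED_ACTIONS:
                problems.append(f"{path}: blocked action {action!r}")
            elif action not in ALLOWED_ACTIONS:
                problems.append(f"{path}: action {action!r} not in allowlist")
    return problems


if __name__ == "__main__":
    found = [p for path in sys.argv[1:] for p in check_policy(path)]
    for problem in found:
        print(problem)
    sys.exit(1 if found else 0)
```

Wired into a pre-commit hook or a CI job that runs on each diff, a non-zero exit blocks the change; tightening the allowlist over time is the “start tomorrow, and then evolve” part.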

At Black Hat next week, Dino Dai Zovi will be talking about how “Every Security Team is a Software Team Now.” Part of that thinking is asking how we can take advice like Alex’s and turn it into code that enforces our goals.

As we learn from breaches and share the code we build to address these problems, we’ll see fewer and fewer incidents like these.

Valuing CyberSecurity Research Datasets

There was a really interesting paper at the Workshop on the Economics of Information Security (WEIS): “Valuing CyberSecurity Research Datasets.”

The paper focuses on the value of the IMPACT data sharing platform at DHS, and how the availability of data shapes the research that’s done.

On its way to that valuation, a very useful contribution of the paper is its analysis of the types of research data that exist, and the purposes for which they can be used:

Note that there has been considerable attention paid to information sharing among operators through organizations such as ISACs. In contrast, we examine data provisioning done primarily for research purposes. Cybersecurity data resides on a use spectrum – some research data is relevant for operations and vice versa. Yet, as difficult as it can be to make the case for data sharing among operators, it’s even harder for researchers. Data sharing for research is generally not deemed as important as for operations. Outcomes are not immediately quantifiable. Bridging the gap between operators and researchers, rather than between operators alone, is further wrought with coordination and value challenges. Finally, research data is often a public good, which means it will likely be undervalued by the parties involved.

The paper enumerates benefits of research, including advancing scientific understanding, enabling infrastructure, creating parity in access to ground truth(s) for academics, technology developers, and others who don’t directly gather data. It also enumerates a set of barriers to research, including legal and ethical risk, costs, value uncertainty, and incentives.

These issues were highly resonant for me: our near-miss work certainly encounters value uncertainty and cost as we consider how to move beyond the operational data sharing that ISACs enable.

I’m very glad to see the challenges crystallized in this way, and that’s before we even reach the main goal of the paper, which is to assess how much value we get from sharing data.

In related coverage, Robert Lemos has a story at Dark Reading, and Ross Anderson liveblogged the WEIS conference.

Safety and Security in Automated Driving

“Safety First For Automated Driving” is a big, over-arching whitepaper from a dozen automotive manufacturers and suppliers.

One way to read it is that those disciplines have strongly developed safety cultures, which generally do not consider cybersecurity problems. This paper is the cybersecurity specialists making the argument that cybersecurity will fit into safety, and showing how to do so.

In a sense, this white paper captures a strategic threat model. What are we working on? Autonomous vehicles. What can go wrong? Security issues of all types. What are we going to do? Integrate with and extend the existing safety discipline. Give specific threat information and mitigation strategies to component designers.

I find some parts of it surprising. (I would find it more surprising if I were to look at a 150 page document and not find anything surprising.)

Contrary to the commonly used definition of a [minimal risk condition (MRC)], which describes only a standstill, this publication expands the definition to also include degraded operation and takeovers by the vehicle operator. Final MRCs refer to MRCs that allow complete deactivation of the automated driving system, e.g. standstill or takeover by the vehicle operator.

One of the “minimal risk” maneuvers listed (Table 4) is an emergency stop. And while an emergency stop may certainly be a risk-minimizing action in some circumstances, describing it as such is surprising, especially when presented in contrast to a “safe stop” maneuver.

It’s important to remember that driving is incredibly dangerous. In the United States in 2018, an estimated 40,000 people lost their lives in car crashes, and 4.5 million people were seriously injured. (I’ve seen elsewhere that a million of those are hospitalized.) A great many of those injuries are caused by either drunk or distracted drivers, and autonomous vehicles could save many lives, even if imperfect.

Which brings me to a part I really like: the ‘three dimensions of risk treatment’ figure (Figure 8, shown). Words like “risk” and “risk management” encompass a lot, and this figure is a nice side contribution of the paper.

I also like Figures 27 & 28 (shown), showing risks associated with a generic architecture. Having this work available lets systems builders consider the risks to the various components they’re working on, lets us have a conversation about the systematic risks that exist, and also allows security experts to ask “is this the right set of risks for systems builders to think about?”

A chart of system components in an autonomous vehicle

Password Advice

Bruce Marshall has put together a comparison of OWASP ASVS v3 and v4 password requirements: OWASP ASVS 3.0 & 4.0 Comparison. This is useful in and of itself, and is also the sort of thing that more standards bodies should do by default.

It’s all too common for a new standard to come out without clear diffs, and all too common for new standards to build closely on other standards without clearly saying what they’ve altered and why. That leaves the analysis of ‘what’s different’ to each user of the standard, and it increases the probability of errors; both drive cost and waste effort. We should judge standards on their delivery of these important contextual documents.
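As a small illustration of how cheap this is to provide, here’s a sketch of the kind of diff a standards body could publish alongside a new release, using Python’s standard difflib. The requirement strings are made-up stand-ins, not actual ASVS text:

```python
# Sketch: a unified diff of requirement text between two releases.
# The requirement strings below are illustrative stand-ins only.
import difflib

v3_requirements = [
    "Verify passwords are at least 8 characters long.",
    "Verify that password composition rules are enforced.",
]
v4_requirements = [
    "Verify passwords are at least 12 characters long.",
    "Verify that no password composition rules are required.",
]

# Print a conventional unified diff, labeled by standard version.
for line in difflib.unified_diff(
    v3_requirements, v4_requirements,
    fromfile="ASVS v3", tofile="ASVS v4", lineterm="",
):
    print(line)
```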

DNS Security

I’m happy to say that some new research by Jay Jacobs, Wade Baker, and me is now available, thanks to the Global Cyber Alliance.

They asked us to look at the value of DNS security, such as when your DNS provider uses threat intel to block malicious sites. It’s surprising how effective it is for a tool that’s so easy to deploy. (Just point your resolver at a DNS server like 9.9.9.9.)
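The “point at a DNS server” step really is that small. Here’s a sketch using the third-party dnspython package (my choice of library, not something from the report) to route a lookup through Quad9; a filtering resolver can answer NXDOMAIN for known-bad names. The domain below is illustrative:

```python
# Minimal sketch: send DNS queries through a filtering resolver such
# as Quad9 (9.9.9.9). Requires dnspython (pip install dnspython).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)  # ignore /etc/resolv.conf
resolver.nameservers = ["9.9.9.9"]  # Quad9's threat-blocking resolver

try:
    answer = resolver.resolve("example.com", "A")  # illustrative domain
    for record in answer:
        print(record.address)
except dns.resolver.NXDOMAIN:
    # A filtering resolver can return NXDOMAIN for known-malicious names.
    print("domain blocked or nonexistent")
```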

The report is available from GCA’s site: Learn About How DNS Security Can Mitigate One-Third of Cyber Incidents

Polymorphic Warnings On My Mind

There’s a fascinating paper, “Tuning Out Security Warnings: A Longitudinal Examination of Habituation Through fMRI, Eye Tracking, and Field Experiments.” (It came out about a year ago.)

The researchers examined what happens in people’s brains when they look at warnings, and they found that:

Research in the fields of information systems and human-computer interaction has shown that habituation—decreased response to repeated stimulation—is a serious threat to the effectiveness of security warnings. Although habituation is a neurobiological phenomenon that develops over time, past studies have only examined this problem cross-sectionally. Further, past studies have not examined how habituation influences actual security warning adherence in the field. For these reasons, the full extent of the problem of habituation is unknown.

We address these gaps by conducting two complementary longitudinal experiments. First, we performed an experiment collecting fMRI and eye-tracking data simultaneously to directly measure habituation to security warnings as it develops in the brain over a five-day workweek. Our results show not only a general decline of participants’ attention to warnings over time but also that attention recovers at least partially between workdays without exposure to the warnings. Further, we found that updating the appearance of a warning—that is, a polymorphic design—substantially reduced habituation of attention.

Second, we performed a three-week field experiment in which users were naturally exposed to privacy permission warnings as they installed apps on their mobile devices. Consistent with our fMRI results, users’ warning adherence substantially decreased over the three weeks. However, for users who received polymorphic permission warnings, adherence dropped at a substantially lower rate and remained high after three weeks, compared to users who received standard warnings. Together, these findings provide the most complete view yet of the problem of habituation to security warnings and demonstrate that polymorphic warnings can substantially improve adherence.
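To make “polymorphic design” concrete, here’s a toy sketch of the idea: vary the warning’s presentation on each exposure so attention habituates more slowly. The variants and the text rendering are my illustrative inventions, not the paper’s actual stimuli:

```python
# Toy sketch of a polymorphic warning: rotate visual variants so the
# warning never looks identical on consecutive exposures. The variants
# and rendering below are illustrative, not the paper's stimuli.
import random

VARIANTS = [
    {"border": "red", "icon": "(!)", "layout": "banner"},
    {"border": "orange", "icon": "<?>", "layout": "dialog"},
    {"border": "yellow", "icon": ">>", "layout": "toast"},
]


def polymorphic_warnings():
    """Yield a differently styled warning variant on each request."""
    variants = list(VARIANTS)
    while True:
        random.shuffle(variants)  # avoid a predictable rotation
        yield from variants


_styles = polymorphic_warnings()


def show_warning(message):
    """Render one warning using the next style variant."""
    style = next(_styles)
    print(f"[{style['layout']}|{style['border']}] {style['icon']} {message}")


for _ in range(4):
    show_warning("This app requests access to your contacts.")
```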

It’s not short, but it’s not hard reading. Worthwhile if you care about usable security.

The Queen of the Skies and Innovation

The Seattle Times has a story today about how “50 years ago today, the first 747 took off and changed aviation.” It’s true. The 747 was a marvel of engineering and luxury. Joe Sutter’s book, “747: Creating the World’s First Jumbo Jet and Other Adventures from a Life in Aviation,” is a great story of engineering leadership. For an upcoming flight, I paid extra to reserve an upper-deck seat before the last of the passenger-carrying Queens of the Skies retires.

And in a way, the 747 represents a pinnacle of aviation engineering. It was fast, long range, and comfortable. There is no arguing that today’s planes are lighter and quieter, with better air, in-seat power, and entertainment, but I’m still happy to be flying on one, and there are still a few left to be delivered as cargo airplanes until 2022. (You can get lost in the Wikipedia article.)

And I want to talk a little not about the amazing aircraft, but about the regulatory tradeoffs made for aircraft and for computers.

As mentioned, the 50-year-old design, with a great many improvements, remains in production. Also pictured is what’s probably a 1960s-era Bell System model 500 telephone (note the integrated handset cord). Now, if 747s crashed at the rate of computers running Windows, there wouldn’t be any left. Regulation has made aviation safe, but the rate of innovation is low. (Brad Templeton has some thoughts on this in “Tons of new ideas in aviation. Will regulation stop them?”)

In contrast, innovation in phones, computers and networks has transformed roughly every aspect of life over the last 25 years. The iPhone turned the phone into a computer full of apps.

This has security costs. It is nearly impossible to function in society without a mobile phone. Your location is tracked constantly. A vulnerability in your phone leads to compromise of astounding amounts of personal data. These security costs scale when someone finds a vulnerability. Bruce Schneier has written recently about how this all comes together and leads him to say that even bad regulation is probably better than no regulation.

An iPhone replaces lots of things

In some ways, we’re already accepting these controls: see “15 Controversial Apps That Were Banned From Apple’s App Store,” or “Google has ‘banned’ these 14 apps from Play Store.” Controls imposed by one of the two companies wealthy enough to compete in mobile phone operating systems are importantly different from government controls, except, of course, when those companies remove apps at the behest of governments.

I don’t know how to write regulation that allows for permissionless innovation at the pace we’re used to and balances that with security and privacy. Something’s likely to give, and we need to think about how to make the societal tradeoffs well. Does anyone?

(Lastly, speaking of that upper-deck reservation, I want to give a shout-out to TProphet’s Award Cat, who drew my attention to the aircraft type and opportunity.)

Incentives and Multifactor Authentication

It’s well known that adoption rates for multi-factor authentication are poor. For example, “Over 90 percent of Gmail users still don’t use two-factor authentication.”

Someone mentioned to me that some games offer bonuses for turning it on: you get access to special rooms in Star Wars: The Old Republic, and there’s a special emote in Fortnite. (Above.)

How well do these incentives work? Are there numbers out there?
