Category: Doing it Differently

‘No need’ to tell the public(?!?)

When Andrew and I wrote The New School, and talked about the need to learn from other professions, we didn’t mean for doctors to learn from ‘cybersecurity thought leaders’ about hiding their problems:

…Only one organism grew back. C. auris.

It was spreading, but word of it was not. The hospital, a specialty lung and heart center that draws wealthy patients from the Middle East and around Europe, alerted the British government and told infected patients, but made no public announcement.

“There was no need to put out a news release during the outbreak,” said Oliver Wilkinson, a spokesman for the hospital.

This hushed panic is playing out in hospitals around the world. Individual institutions and national, state and local governments have been reluctant to publicize outbreaks of resistant infections, arguing there is no point in scaring patients — or prospective ones…

Dr. Silke Schelenz, Royal Brompton’s infectious disease specialist, found the lack of urgency from the government and hospital in the early stages of the outbreak “very, very frustrating.”

“They obviously didn’t want to lose reputation,” Dr. Schelenz said. “It hadn’t impacted our surgical outcomes.” (“A Mysterious Infection, Spanning the Globe in a Climate of Secrecy,” New York Times, April 6, 2019)

This is the wrong way to think about the problem. Mr. Wilkinson (as quoted) is wrong. There is a fiduciary duty to tell patients that they are at increased risk of C. auris if they go to his hospital.

Moreover, there is a need to tell the public about these problems. Our choices, as a society, kill people. We kill people when we allow antibiotics to be used to make fatter cows or when we allow antifungals to be used on crops.

We can adjust those choices, but only if we know the consequences we are accepting. Hiding outcomes hinders cybersecurity, and it’s a bad model for medicine or public policy.

(Picture courtesy of Clinical Advisor. I am somewhat sorry for my use of such a picture here, where it’s unexpected.)

Leave Those Numbers for April 1st

“90% of attacks start with phishing!*” “Cyber attacks will cost the world $6 trillion by 2020!”

We’ve all seen these sorts of numbers from vendors, and in a sense they’re April Fools’ Day numbers: you’d have to be a fool to believe them. But vendors quote insane numbers because there’s no downside and plenty of upside. We need to create a downside, and the road there lies through lost sales.

We need to call vendors on these numbers and say, “I’m sorry, but if you’d lie to me about that, why should I trust the numbers you’re claiming that are harder to verify? The door is to your left.”

If we want to change the behavior, we have to change the impact of the behavior. We need to tell vendors that there’s no place for made-up, debunked, or unsupported numbers in our buying processes. If those numbers are in their sales and marketing material, they’re going to lose business for it.

* This one seems to trace back to analysis that 90% of APT attacks in the Verizon DBIR started with phishing; but APT and non-APT attacks are clearly different, so the figure doesn’t generalize to attacks overall.
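
To see why the conditional matters, here is a toy calculation with made-up numbers; they are purely illustrative, not drawn from the DBIR or any real dataset:

```python
# Toy numbers, purely illustrative -- not drawn from the DBIR or any real dataset.
apt_attacks = 100        # hypothetical APT incidents in a dataset
apt_phishing = 90        # 90% of those began with phishing
other_attacks = 900      # hypothetical non-APT incidents
other_phishing = 270     # suppose only 30% of those began with phishing

total_attacks = apt_attacks + other_attacks
total_phishing = apt_phishing + other_phishing
print(f"Share of ALL attacks starting with phishing: {total_phishing / total_attacks:.0%}")
# Prints 36%, even though the APT-only figure is 90%.
```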

Pivots and Payloads

SANS has announced a new boardgame, “Pivots and Payloads,” that “takes you through pen test methodology, tactics, and tools with many possible setbacks that defenders can utilize to hinder forward progress for a pen tester or attacker. The game helps you learn while you play. It’s also a great way to showcase to others what pen testers do and how they do it.”

If you register for their webinar, which is on Wednesday the 19th, they’ll send you some poster versions that convert to boardgames.

If you’re interested in serious games for security, I maintain a list at https://adam.shostack.org/games.html.

Yesterday Twitter revealed that they had accidentally stored plain-text passwords in some log files. There was no indication the data had been accessed, and users were advised to change their passwords. There was no known breach, but Twitter went public anyway, and was excoriated in the press and… on Twitter.

This is a problem for our profession and industry. We get locked into a cycle where any public disclosure of a breach or security mistake results in…

Well, you can imagine what it results in, or you can go read “The Security Profession Needs to Adopt Just Culture” by Rich Mogull. It’s a very important article, and you should read it, and the links, and take the time to consider what it means. In that spirit, I want to reflect on something I said the other night. I was being intentionally provocative, and perhaps crossed the line away from being just. What I said was a password management company had one job, and if they expose your passwords, you should not use their password management software.

Someone else in the room, coming from a background where they have blameless post-mortems, challenged my use of the phrase ‘you had one job,’ and praised the company for coming forward. I’ve been thinking about that, and my take is that a design where all the passwords are stored at a single site is substantially and predictably worse than one where passwords are kept in local clients and local data storage. (There are tradeoffs. With a single site, you may be able to monitor for and respond to unusual access patterns rapidly, and you can upgrade all the software at once. There is an availability benefit. My assessment is that the single-store design is not worth it, because of its catastrophic failure modes.)
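
To make that distinction concrete, here is a minimal sketch of the local-storage design, in which the vault is encrypted on the client with a key derived from the master password, so any sync server would only ever hold ciphertext. It assumes the Python cryptography package and is an illustration of the design choice, not any vendor’s actual implementation:

```python
# A minimal sketch of the local-storage design: the vault is encrypted with a
# key derived from the master password on the client, so a sync server (if
# any) only ever sees ciphertext. Illustrative only.
import base64
import hashlib
import json
import os

from cryptography.fernet import Fernet  # pip install cryptography


def vault_key(master_password: str, salt: bytes) -> bytes:
    # Derive a 32-byte key client-side; Fernet expects it urlsafe-base64 encoded.
    raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 600_000)
    return base64.urlsafe_b64encode(raw)


def save_vault(path: str, master_password: str, entries: dict) -> None:
    salt = os.urandom(16)
    token = Fernet(vault_key(master_password, salt)).encrypt(json.dumps(entries).encode())
    with open(path, "wb") as f:
        f.write(salt + token)  # the salt is not secret; it just has to be stored


def load_vault(path: str, master_password: str) -> dict:
    with open(path, "rb") as f:
        blob = f.read()
    salt, token = blob[:16], blob[16:]
    return json.loads(Fernet(vault_key(master_password, salt)).decrypt(token))
```

In this shape, the catastrophic failure point is an individual client and its master password, rather than a single shared store holding everyone’s secrets.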

It was a fair criticism. I’ve previously said “we live in an ‘outrage world’ where it’s easier to point fingers and giggle in 140 characters and hurt people’s lives or careers than it is to make a positive contribution.” Did I fall into that trap myself? Possibly.

In “Just Culture: A Foundation for Balanced Accountability and Patient Safety,” which Rich links, there’s a table in Figure 2, headed “Choose the column that best describes the caregiver’s action.” In reading that table, I believe that a password manager with central storage falls into the reckless category, although perhaps it’s merely risky. In either case, the system leaders are supposed to share in accountability.

Could I have been more nuanced? Certainly. Would it have carried the same impact? No. Justified? I’d love to hear your thoughts!

Threat Model Thursday: ARM’s Network Camera TMSA

Last week, I encouraged you to take a look at the ARM Network Camera Threat Model and Security Analysis, and consider:

First, how does it align with the 4-question frame (“what are we working on,” “what can go wrong,” “what are we going to do about it,” and “did we do a good job?”)? Second, ask yourself who, what, why, and how.
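
(If it helps to see the frame in miniature, here is a purely illustrative set of answers for a generic networked camera, sketched as structured notes; this is my toy example, not a summary of ARM’s analysis.)

```python
# Purely illustrative notes for a generic networked camera -- a toy sketch,
# not a summary of ARM's TMSA.
four_question_frame = {
    "what are we working on": "IP camera: sensor, firmware, local storage, cloud video service",
    "what can go wrong": [
        "firmware tampering in the update path",
        "video stream interception on the local network",
        "default or hard-coded credentials",
    ],
    "what are we going to do about it": [
        "signed and verified firmware updates",
        "TLS for streams and management traffic",
        "forced credential change on first boot",
    ],
    "did we do a good job": "review whether mitigations cover each flow; test the update path",
}
```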

Before I get into my answers, I want to reiterate what I said in the first post: I’m doing this to learn and to illustrate, not to criticize. It’s a dangerous trap to think that “the way to threat model is…” And this model is a well-considered and structured analysis of networked camera security, and it goes a bit beyond…

So let me start with the who, what, why and how of this model.

Doing Science With Near Misses

[Update: The final article is available as “That Was Close! Reward Reporting of Cybersecurity ‘Near Misses’” at the Colorado Technology Law Journal.]

Last week at Art into Science, I presented “That was Close! Doing Science with Near Misses” (Slides as web page, or download the pptx.)

The core idea is that we should borrow from aviation to learn from near misses, and learn to protect ourselves and our systems better. The longer form is in the draft “That Was Close! Reward Reporting of Cybersecurity ‘Near Misses.’”

The talk was super-well received, and I’m grateful to Sounil Yu and the participants in the philosophy track, who juggled the schedule so we could collaborate and brainstorm. If you’d like to help, by far the most helpful way would be to tell us about a near miss you’ve experienced using our form, and to give us feedback on the form. Since Thursday, I’ve added a space for that feedback, and made a few other suggested adjustments which were easy to implement.

If you’ve had a chance to think about definitions for either near misses or accidents, I’d love to hear about those, in comments, in your blog (trackbacks should work), or whatever works for you. If you were at Art Into Science, there’s a #near-miss channel on the conference Slack, and I’ll be cleaning up the notes.

Image from the EHS Database, who have a set of near miss safety posters.

Learning from Near Misses

[Update: Steve Bellovin has a blog post]

One of the major pillars of science is the collection of data to disprove arguments. That data gathering can include experiments, observations, and, in engineering, investigations into failures. One of the issues that makes security hard is that we have little data about large scale systems. (I believe that this is more important than our clever adversaries.) The work I want to share with you today has two main antecedents.

First, in the nearly ten years since Andrew Stewart and I wrote The New School of Information Security, and called for more learning from breaches, we’ve seen a dramatic shift in how people talk about breaches. Unfortunately, we’re still not learning as much as we could. There are structural reasons for that, primarily fear of lawsuits.

Second, last year marked 25 years of calls for an “NTSB for infosec.” Steve Bellovin and I wrote a short note asking why that was. We’ve spent the last year asking what else we might do. We’ve learned a lot about aviation safety programs beyond the NTSB, and think some of those models may be better fits for our needs and constraints in the security realm.

Much of that investigation has been a collaboration with Blake Reid, Jonathan Bair, and Andrew Manley of the University of Colorado Law School, and together we have a new draft paper on SSRN, “Voluntary Reporting of Cybersecurity Incidents.”

A good deal of my own motivation in this work is to engineer a way to learn more. The focus of this work, on incidents rather than breaches, and on voluntary reporting and incentives, reflects lessons learned as we try to find ways to measure real world security. The writing and abstract reflect the goal of influencing those outside security to help us learn better:

The proliferation of connected devices and technology provides consumers immeasurable amounts of convenience, but also creates great vulnerability. In recent years, we have seen explosive growth in the number of damaging cyber-attacks. 2017 alone has seen WannaCry, Petya, NotPetya, Bad Rabbit, and of course the historic Equifax breach, among many others. Currently, there is no mechanism in place to facilitate understanding of these threats, or their commonalities. While information regarding the causes of major breaches may become public after the fact, what is lacking is an aggregated data set, which could be analyzed for research purposes. This research could then provide clues as to trends in both attacks and avoidable mistakes made on the part of operators, among other valuable data.

One possible regime for gathering such information would be to require disclosure of events, as well as investigations into these events. Mandatory reporting and investigations would result in better data collection. This regime would also cause firms to internalize, at least to some extent, the externalities of security. However, mandatory reporting faces challenges that would make this regime difficult to implement, and possibly more costly than beneficial. An alternative is a voluntary reporting scheme, modeled on the Aviation Safety Reporting System housed within NASA, and possibly combined with an incentive scheme. Under it, organizations that were the victims of hacks or “near misses” would report the incident, providing important details, to some neutral party. This database could then be used both by researchers and by industry as a whole. People could learn what does work, what does not work, and where the weak spots are.
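
To make the kind of record such a database might hold concrete, here is a hypothetical sketch; the fields are my own illustrative assumptions, not the ASRS schema or the schema of our draft form:

```python
# Hypothetical shape of a near-miss report -- the fields are illustrative
# assumptions, not the ASRS schema or the draft paper's reporting form.
from dataclasses import dataclass, field
from typing import List


@dataclass
class NearMissReport:
    sector: str                      # e.g. "healthcare"; kept coarse to aid anonymity
    summary: str                     # free-text narrative of what almost went wrong
    detection: str                   # how it was noticed: alert, audit, luck, customer report
    contributing_factors: List[str] = field(default_factory=list)   # e.g. ["expired cert", "no MFA"]
    controls_that_worked: List[str] = field(default_factory=list)   # what kept it from becoming a breach
    reported_to: str = "neutral third party"                        # the ASRS-style intake point
```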

Please, take a look at the paper. I’m eager to hear your feedback.
