Journal of Terrorism and Cyber Insurance

At the RMS blog, we learn they are “Launching a New Journal for Terrorism and Cyber Insurance:”

Natural hazard science is commonly studied at college, and to some level in the insurance industry’s further education and training courses. But this is not the case with terrorism risk. Even if insurance professionals learn about terrorism in the course of their daily business, as they move into other positions, their successors may begin with hardly any technical familiarity with terrorism risk. It is not surprising therefore that, even fifteen years after 9/11, knowledge and understanding of terrorism insurance risk modeling across the industry is still relatively low.

There is no shortage of literature on terrorism, but much has a qualitative geopolitical and international relations focus, and little is directly relevant to terrorism insurance underwriting or risk management.

This is particularly exciting as Gordon Woo was recommended to me as the person to read on insurance math in new fields. His Calculating Catastrophe is comprehensive and deep.

It will be interesting to see who they bring aboard to complement the very strong terrorism risk team on the cyber side.

What Boards Want in Security Reporting

[Figure: a sub-optimal security dashboard]

Recently, some of my friends were talking about a report by Bay Dynamics, “How Boards of Directors Really Feel About Cyber Security Reports.” In that report, we see things like:

More than three in five board members say they are both significantly or very “satisfied” (64%) and “inspired” (65%) after the typical presentation by IT and security executives about the company’s cyber risk, yet the majority (85%) of board members believe that IT and security executives need to improve the way they report to the board.

Only one-third of IT and security executives believe the board comprehends the cyber security information provided to them (versus) 70% of board members surveyed report that they understand everything they’re being told by IT and security executives in their presentations.

Some of this may be poor survey design or reporting: it’s hard to survey someone to see if they don’t understand, and the questions aren’t listed in the survey.

But that may be taking the easy way out. Perhaps what we’re being told is consistent. Security leaders don’t think the boards are getting the nuance, while the boards are getting the big picture just fine. Perhaps boards really do want better reporting, and, having nothing useful to suggest, consider themselves “satisfied.”

They ask for numbers, but not because they really want numbers. I’ve come to believe that the reason they ask for numbers is that they lack a feel for the risks of cyber. They understand risks in things like product launches or moving manufacturing to China, or making the wrong hire for VP of social media. They are hopeful that in asking for numbers, they’ll learn useful things about the state of what they’re governing.

So what do boards want in security reporting? They want concrete, understandable and actionable reports. They want to know if they have the right hands on the rudder, and if those hands are reasonably resourced. (Boards also know that no one who reports to them is ever really satisfied with their budget.)

(Lastly, the graphic? Overly complex, not actionable, lacks explicit recommendations or requests. It’s what boards don’t want.)

FBI says their warnings were ignored

There are two major parts to the DNC/FBI/Russia story. The first part is the really fascinating evolution of public disclosures over the DNC hack. We know the DNC was hacked, and that someone gave a set of emails to Wikileaks. There are accusations that it was Russia, and then someone leaked an NSA toolkit and threatened to leak more. (See Nick Weaver’s “NSA and the No Good, Very Bad Monday,” and Ellen Nakashima’s “Powerful NSA hacking tools have been revealed online,” where several NSA folks confirm that the tool dump is real. See also Snowden’s comments “on Twitter:” “What’s new? NSA malware staging servers getting hacked by a rival is not new. A rival publicly demonstrating they have done so is.”) That’s not the part I want to talk about.

The second part is what the FBI knew, how they knew it, who they told, and how. In particular, I want to look at the claims in “FBI took months to warn Democrats[…]” at Reuters:

In its initial contact with the DNC last fall, the FBI instructed DNC personnel to look for signs of unusual activity on the group’s computer network, one person familiar with the matter said. DNC staff examined their logs and files without finding anything suspicious, that person said.

When DNC staffers requested further information from the FBI to help them track the incursion, they said the agency declined to provide it.
[…]
“There is a fine line between warning people or companies or even other government agencies that they’re being hacked – especially if the intrusions are ongoing – and protecting intelligence operations that concern national security,” said the official, who spoke on condition of anonymity.

Let me repeat that: the FBI had evidence that the DNC was being hacked by the Russians, and they said “look around for ‘unusual activity.'”

Shockingly, their warning did not enable the DNC to find anything.

When Rob Reeder, Ellen Cram Kowalczyk and I did work on usability of warnings, we recommended they be explanatory, actionable and tested. This warning fails on all those counts.

There may be a line, or really, a balancing act, around disclosing what the FBI knows, and ensuring that how they know it is protected. (I’m going to treat the FBI as the assigned mouthpiece, and move to discussing the US government as a whole, because otherwise we may rat hole on authorities, US vs. non-US activity, etc., which are a distraction). Fundamentally, we can create a simple model of how the US government learns about these hacks:

  • Network monitoring
  • Kill chain-driven forensics
  • Agents working at the attacker
  • “Fifth party take” where they’ve broken into a spy server and are reading what those spies take.*

*This “fifth party take”, to use the NSA’s jargon, is what makes the NSA server takeover so interesting and relevant. Is the release of the NSA files a comment that the GRU knows that the NSA knows about their hack because the GRU has owned additional operational servers?

Now, we can ask: if the FBI says “look for connections to Twitter when there’s no one logged into Alice’s computer,” does it allow the attacker to distinguish between those four methods?

No.

Now, it does disclose that that C&C pathway is known, and if the attacker has multiple paths, then it might be interesting to know that only one was detected. But there’s another tradeoff, which is that as long as the penetration is active, the US government can continue to find indicators, and use them to find other break-ins. That’s undeniably useful to the FBI, at the cost of the legitimacy of our electoral processes. That’s a bad tradeoff.
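To make that concrete, here’s a minimal sketch of the kind of host-side check that a hypothetical warning like the one above describes. The suspect hostname is purely illustrative (nothing here comes from any actual FBI notice), and it leans on the psutil library for connection and session data:

```python
# Minimal sketch: flag outbound connections to a suspected C&C host while no
# interactive user is logged in. The domain below is a hypothetical indicator,
# not one from any real warning.
import socket
import psutil

SUSPECT_HOSTS = ["c2.example.com"]  # hypothetical indicator


def suspect_ips(hosts):
    """Resolve indicator hostnames to the IPs we can match on."""
    ips = set()
    for host in hosts:
        try:
            ips.update(socket.gethostbyname_ex(host)[2])
        except socket.gaierror:
            pass
    return ips


def unattended_c2_connections():
    """Return remote endpoints matching an indicator when nobody is logged in."""
    if psutil.users():  # someone has an interactive session; not "unattended"
        return []
    bad = suspect_ips(SUSPECT_HOSTS)
    return [c.raddr for c in psutil.net_connections(kind="inet")
            if c.raddr and c.raddr.ip in bad]


if __name__ == "__main__":
    for raddr in unattended_c2_connections():
        print(f"Unattended connection to suspect host: {raddr.ip}:{raddr.port}")
```

Even a toy rule like this is explanatory and actionable in a way that “look around for unusual activity” is not.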

We have to think about and discuss priorities and tradeoffs. We need to talk about the policy which the FBI is implementing, which seems to be to provide un-actionable, useless warnings. Perhaps that’s sufficient in some eyes.

We are not having a policy discussion about these tradeoffs, and that’s a shame.

Here are some questions that we can think about:

  • Is the model presented above of how attacks are detected reasonable?
  • Is there anything classified which changes the general debate? (No, we learned that from the CRISIS report.)
  • What should a government warning include? A single IOC? Some fraction in a range (say 25-35%)? All known IOCs? (Using a range is interesting because it reduces information leakage back to an attacker who’s compromised a source.)
  • How do we get IOCs to be bulk declassified so they can be used at organizations whose IT staff do not have clearances, cannot get clearances rapidly, and post-OPM ain’t likely to?

That’s a start. What other questions should we be asking so we can move from “Congressional leaders were briefed a year ago on hacking of Democrats” to “hackers were rebuffed from interfering in our elections” or, better, “hackers don’t even bother trying to attack elections”?

[Update: In “AS FBI WARNS ELECTION SITES GOT HACKED, ALL EYES ARE ON RUSSIA“, Wired links to an FBI Flash, which has an explicit set of indicators, including IPs and httpd log entries, along with explicit recommendations such as “Search logs for commands often passed during SQL injection.” This is far more detail than was in these documents a few years ago, and far more detail than I expected when I wrote the above.]
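For a sense of what acting on that last recommendation might look like, here’s a minimal sketch that searches an httpd access log for strings commonly passed during SQL injection. The patterns and the default log path are my illustrative assumptions, not the indicator list from the Flash itself:

```python
# Minimal sketch of "search logs for commands often passed during SQL injection."
# The patterns and default log path are illustrative assumptions.
import re
import sys

SQLI_PATTERNS = [
    r"union(\s|\+|%20)+select",
    r"information_schema",
    r"xp_cmdshell",
    r"(\s|\+|%20)or(\s|\+|%20)+1=1",
    r"sleep\(\d+\)",
]
SQLI_RE = re.compile("|".join(SQLI_PATTERNS), re.IGNORECASE)


def scan(log_path):
    """Print access-log lines whose request strings match a SQLi pattern."""
    with open(log_path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if SQLI_RE.search(line):
                print(f"{log_path}:{lineno}: {line.rstrip()}")


if __name__ == "__main__":
    for path in sys.argv[1:] or ["/var/log/httpd/access_log"]:
        scan(path)
```

That level of specificity is something an IT team without clearances can act on, which is exactly what the earlier warnings lacked.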

What does the MS Secure Boot Issue teach us about key escrow?

Nothing.

No, seriously. Articles like “Microsoft Secure Boot key debacle causes security panic” and “Bungling Microsoft singlehandedly proves that golden backdoor keys are a terrible idea” draw on words in an advisory to say that this is all about golden keys and secure boot. This post is not intended to attack anyone (researchers, journalists, or Microsoft), but to address a rather inflammatory claim that’s being repeated.

Based on my read of an advisory copy (which I made because I cannot read words on an animated background (yes, I’m a grumpy old man (who uses too many parentheticals (especially when I’m sick)))), this is a nice discovery of an authorization failure.

What they found is:

The “supplemental” policy contains new elements, for the merging conditions. These conditions are (well, at one time) unchecked by bootmgr when loading a legacy policy. And bootmgr of win10 v1511 and earlier certainly doesn’t know about them. To those bootmgrs, it has just loaded in a perfectly valid, signed policy. The “supplemental” policy does NOT contain a DeviceID. And, because they were meant to be merged into a base policy, they don’t contain any BCD rules either, which means that if they are loaded, you can enable testsigning.

That’s a fine discovery and a nice vuln. There are ways Microsoft might have designed this better; I’m going to leave those for another day.
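To illustrate the shape of the authorization failure (and only the shape: this is not Microsoft’s code, and the field names and structure are my assumptions based on the advisory excerpt above), here’s a conceptual sketch of how a v1511-or-earlier bootmgr might accept a signed supplemental policy as if it were a complete one:

```python
# Conceptual sketch of the authorization failure described above. NOT Microsoft's
# code: the field names and structure are assumptions. The point is that the
# signature check passes while nothing binds a supplemental policy to a device
# or restricts what it allows.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class BootPolicy:
    signed: bool
    device_id: Optional[str] = None                # supplemental policies carry none
    bcd_rules: dict = field(default_factory=dict)  # ...nor any BCD restrictions
    merge_conditions: Optional[dict] = None        # a legacy bootmgr ignores this field


def legacy_load_policy(policy: BootPolicy, this_device: str) -> dict:
    """Roughly how a v1511-or-earlier loader might apply a policy it trusts."""
    if not policy.signed:
        raise ValueError("unsigned policy rejected")
    # The flaw: a signed *supplemental* policy is accepted as a complete policy.
    # With no DeviceID it binds to every machine, and with no BCD rules nothing
    # below forbids enabling testsigning.
    if policy.device_id and policy.device_id != this_device:
        raise ValueError("policy is bound to a different device")
    return policy.bcd_rules  # an empty dict means "no restrictions"


supplemental = BootPolicy(signed=True)  # what the researchers describe
rules = legacy_load_policy(supplemental, "PC-1234")
print("testsigning permitted?", "testsigning" not in rules)  # True
```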

Where the post goes off the rails, in my view, is this:

About the FBI: are you reading this? If you are, then this is a perfect real world example about why your idea of backdooring cryptosystems with a “secure golden key” is very bad! Smarter people than me have been telling this to you for so long, it seems you have your fingers in your ears. You seriously don’t understand still? Microsoft implemented a “secure golden key” system.[1] And the golden keys got released from MS own stupidity.[2] Now, what happens if you tell everyone to make a “secure golden key” system? [3] (Bracketed numbers added – Adam)

So, [1], no they did not. [2] No it didn’t. [3] Even a stopped clock …

You could design a system in which there’s a master key, and accidentally release that key. Based on the advisory, Microsoft has not done that. (I have not talked to anyone at MS about this issue; I might have talked to people about the overall design, but don’t recall having done so.) What this is is an authorization system with a design flaw. As far as I can tell, no keys have been released.

Look, there are excellent reasons to not design a “golden key” system. I talked about them at a fundamental engineering level in my threat modeling book, and posted the excerpt in “Threat Modeling Crypto Back Doors.”

The typical way the phrase “golden key” is used (albeit fuzzily) is that there is a golden key which unlocks communications. That is a bad idea. This is not that, and we as engineers or advocates should not undercut our position on that bad idea by referring to this research as if it really impacts that “debate.”

Consultants Say Their Cyber Warnings Were Ignored

Back in October 2014, I discussed a pattern of “Employees Say Company Left Data Vulnerable,” and it’s a pattern that we’ve seen often since. Today, I want to discuss the consultants’ variation on the story. This is less common, because generally smart consultants don’t comment on the security of their clients. In this case, it doesn’t seem like the consultants’ report was leaked, but people are discussing it after a high-profile issue.

In brief, the DNC was hacked, probably by Russian intelligence, and emails were given to Wikileaks. Wikileaks published them without redacting things like credit card numbers or social security numbers. The head of the DNC has stepped down. (This is an unusual instance of someone losing their job, which is rare post-breach. However, she did not lose her job because of the breach, she lost it because the breach included information about how her organization tilted the playing field, and how she lied about doing so.)

This story captures a set of archetypes. I want to use this story as a foil for those archetypes, not to critique any of the parties. I’ll follow the pattern from “employees vs. company” and present those three sections: “I told you so,” “potential spending,” and “how to do better.” I’ll also comment on preventability and “shame.”

Was it preventable?

Computer security consultants hired by the DNC made dozens of recommendations after a two-month review, the people said. Following the advice, which would typically include having specialists hunt for intruders on the network, might have alerted party officials that hackers had been lurking in their network for weeks… (“Democrats Ignored Cybersecurity Warnings Before Theft,” Michael Riley, Bloomberg.)

People are talking about this as if the DNC was ever likely to stop Russian intelligence from breaking into their computers. That’s a very, very challenging goal, one at which both US and British intelligence have failed. (And as I write this, an FBI agent has been charged with espionage on behalf of China.) There’s a lot of “might,” “could have,” and other words that say “possible” without assessing “probable.”

I told you so!

The report included “dozens of recommendations,” some of which, such as “taking special precautions to protect any financial information related to donors,” might be a larger project than a PCI compliance initiative. (The logic is that financial information collected by a party is more than just card numbers; there seem to be a lot of SSNs in the data as well.) If one recommendation is “get PCI compliant,” then “dozens of recommendations” might be a Sisyphean task, or perhaps the Augean stables are a better analogy. In either case, only in mythology can the actions be completed.

Missing from the discussion I’ve seen so far is any statement of what was done. Did the organization do the top-5 things the consultants said to do? (Did they even break things out into a top-5?)

Potential Spending

The review found problems ranging from an out-of-date firewall to a lack of advanced malware detection technology on individual computers, according to two of the people familiar with the matter.

It sounds like “advanced malware detection technology” would be helpful here, right? Hindsight is 20/20. An out-of-date firewall? Was it missing security updates (which would be worrisome, but less worrisome than one might think, depending on what those updates fix), or was it just not the latest revision? If it’s not the latest revision, it can probably still do its job. In carefully reading the article, I saw no evidence that any single recommendation, or even implementing all of them, would have prevented the breach.

The DNC is a small organization. They were working with a rapidly shifting set of campaign staff for the Sanders and Clinton campaigns. I presume they’re also working on a great many state races, and with the organizations those politicians are setting up.

I do not believe that doing everything in the consultant’s report could reasonably be expected to prevent a breakin by a determined mid-sized intelligence agency.

Shame

“Shame on them. It looks like they just did the review to check a box but didn’t do anything with it,” said Ann Barron-DiCamillo, who was director of US-Cert, the primary agency protecting U.S. government networks, until last February. “If they had acted last fall, instead of those thousands of e-mails exposed it might have been much less.”

Via Meredith Patterson, I saw “The Left’s Self-Destructive Obsession with Shame,” and there’s an infosec analog. Perhaps they would have found the attackers if they’d followed the advice, perhaps not. Does adding shame work to improve the cybers? If it did, it should have done so by now.

How to do better

I stand by what I said last time. The organization has paid the PR price, and we have learned nothing. What a waste. We should talk about exactly what happened at a technical level.

We should stop pointing fingers and giggling. It isn’t helping us. In many ways, the DNC is not so different from thousands of other small or mid-size organizations who hold sensitive information. Where is the list of effective practice for them to follow? How different is the set of recommendations in this instance from other instances? Where’s the list of “security 101” that the organization should have followed before hiring these consultants? (My list is at “Security 101: Show Your List!.”)

We should define the “smoke alarms and sprinklers” of cyber. Really, why does an organization like the DNC need to pay $60,000 for a cybersecurity policy? It’s a little hard to make sense of, but I think that the net operating expenditures of $5.1m is a decent proxy for the size of the organization, and (if that’s right) 1% of their net operating expenses went to a policy review. Was that a good use of money? How else should they have spent that? What concrete things do we, as a profession or as a community, say they should have done? Is there a reference architecture, or a system in a box which they could have run that we expect would resist Russian intelligence?

We cannot expect every small org to re-invent this wheel. We have to do a better job of helping them.