[Update, Feb 20 2017: More reading: Trump and the ‘Society of the Spectacle’.]
“We’ll have more guards. We’re going to try to have a ‘goat guarantee’ the first weekend,” deputy council chief Helene Åkerlind, representing the local branch of the Liberal Party, told newspaper Gefle Dagblad.
“It is really important that it stays standing in its 50th year,” she added to Arbetarbladet.
Gävle Council has decided to allocate an extra 850,000 kronor ($98,908) to the goat’s grand birthday party, bringing the town’s Christmas celebrations budget up to 2.3 million kronor this year. (“Swedes rally to protect arson-prone yule goat”)
Obviously, what you need to free up that budget is more burning goats. Or perhaps it’s a credible plan for how spending the money will reduce risk. I’m never quite sure.
Image: The goat’s mortal remains, immortalized in 2011 by Lasse Halvarsson.
There is a frequent claim that stock markets are somehow irrational, unable to properly price the impact of cyber incidents. (That’s not usually precisely how people phrase it.) I like this chart of one of the largest credit card breaches in history:
It provides useful context as we consider this quote:
On the other hand, frequent disclosure of insignificant cyberincidents could overwhelm investors and harm a company’s stock price, said Eric Cernak, cyberpractice leader at the U.S. division of German insurer Munich Re. “If every time there’s unauthorized access, you’re filing that with the SEC, there’s going to be a lot of noise,” he said.
(Corporate Judgment Call: When to Disclose You’ve Been Hacked, Tatyana Shumsky, WSJ)
Now, perhaps Mr. Cernak’s words have been taken out of context. After all, it’s a single sentence in a long article, and the lead-in, which is a paraphrase, may confuse the issue.
I am surprised that an insurer would be opposed to having more data from which they can try to tease out causative factors.
Image from The Langner group. I do wish it showed the S&P 500.
Steve Bellovin and I provided some “Input to the Commission on Enhancing National Cybersecurity.” It opens:
We are writing after 25 years of calls for a “NTSB for Security” have failed to result in action. As early as 1991, a National Research Council report called for “build[ing] a repository of incident data” and said “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.”  The calls for more data about incidents have continued, including by us [2, 3].
The lack of a repository of incident data impacts our ability to answer or assess many of your questions, and our key recommendation is that the failure to establish such a repository is, in and of itself, worthy of study. There are many factors in the realm of folklore as to why we do not have a repository, but no rigorous answer. Thus, our answer to your question 4 (“What can or should be done now or within the next 1-2 years to better address the challenges?”) is to study what factors have inhibited the creation of a repository of incident data, and our answer to question 5 (“what should be done over a decade?”) is to establish one. Commercial air travel is so incredibly safe today precisely because of decades of accident investigations, investigations that have helped plane manufacturers, airlines, and pilots learn from previous failures.
When I think about how to threat model well, one of the elements that is most important is how much people need to keep in their heads, the cognitive load, if you will.
In reading Charlie Stross’s blog post, “Writer, Interrupted” this paragraph really jumped out at me:
One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.
One of the reasons that I’m fond of diagrams is that they allow the threat modelers to migrate information out of their heads into a diagram, making room for thinking about threats.
Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good, that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, whether in a hobby like knitting or in a chef’s careful preparation of a fine meal.
I think a lot about where drawing diagrams on a whiteboard falls. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.
That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)
So what are the advantages and disadvantages of each?
- Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
- Ease of use. A whiteboard is still easier than just about any other drawing tool.
- Discovery of different perspective/belief. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may contain a critique of someone else’s work; it may create conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
- Storytelling. It is easier to tell a story standing next to a whiteboard than with any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
- Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, making things more tedious for everyone around.
- Automatic analysis. Tools like the Microsoft Threat Modeling tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?”
- Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
- Photographs of whiteboards are hard to archive and search without further processing.
- Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough,” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
- Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.
(Of course, there are apps that help you take images from a whiteboard and improve them, for example, Best iOS OCR Scanning Apps, which I’m ignoring for purposes of teasing things out a bit. Operationally, probably worth digging into.)
I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?
Recently, some of my friends were talking about a report by Bay Dynamics, “How Boards of Directors Really Feel About Cyber Security Reports.” In that report, we see things like:
More than three in five board members say they are both significantly or very “satisfied” (64%) and “inspired” (65%) after the typical presentation by IT and security executives about the company’s cyber risk, yet the majority (85%) of board members believe that IT and security executives need to improve the way they report to the board.
Only one-third of IT and security executives believe the board comprehends the cyber security information provided to them, versus 70% of board members surveyed who report that they understand everything they’re being told by IT and security executives in their presentations.
Some of this may be poor survey design or reporting: it’s hard to survey someone to see if they don’t understand, and the questions aren’t listed in the survey.
But that may be taking the easy way out. Perhaps what we’re being told is consistent. Security leaders don’t think the boards are getting the nuance, while the boards are getting the big picture just fine. Perhaps boards really do want better reporting, and, having nothing useful to suggest, consider themselves “satisfied.”
They ask for numbers, but not because they really want numbers. I’ve come to believe that the reason they ask for numbers is that they lack a feel for the risks of cyber. They understand risks in things like product launches or moving manufacturing to China, or making the wrong hire for VP of social media. They are hopeful that in asking for numbers, they’ll learn useful things about the state of what they’re governing.
So what do boards want in security reporting? They want concrete, understandable and actionable reports. They want to know if they have the right hands on the rudder, and if those hands are reasonably resourced. (Boards also know that no one who reports to them is ever really satisfied with their budget.)
(Lastly, the graphic? Overly complex, not actionable, lacks explicit recommendations or requests. It’s what boards don’t want.)
There are two major parts to the DNC/FBI/Russia story. The first part is the really fascinating evolution of public disclosures over the DNC hack. We know the DNC was hacked, and that someone gave a set of emails to Wikileaks. There are accusations that it was Russia, and then someone leaked an NSA toolkit and threatened to leak more. (See Nick Weaver’s “NSA and the No Good, Very Bad Monday,” and Ellen Nakashima’s “Powerful NSA hacking tools have been revealed online,” where several NSA folks confirm that the tool dump is real. See also Snowden’s comments on Twitter: “What’s new? NSA malware staging servers getting hacked by a rival is not new. A rival publicly demonstrating they have done so is.”) That’s not the part I want to talk about.
The second part is what the FBI knew, how they knew it, who they told, and how. In particular, I want to look at the claims in “FBI took months to warn Democrats[…]” at Reuters:
In its initial contact with the DNC last fall, the FBI instructed DNC personnel to look for signs of unusual activity on the group’s computer network, one person familiar with the matter said. DNC staff examined their logs and files without finding anything suspicious, that person said.
When DNC staffers requested further information from the FBI to help them track the incursion, they said the agency declined to provide it.
“There is a fine line between warning people or companies or even other government agencies that they’re being hacked – especially if the intrusions are ongoing – and protecting intelligence operations that concern national security,” said the official, who spoke on condition of anonymity.
Let me repeat that: the FBI had evidence that the DNC was being hacked by the Russians, and they said “look around for ‘unusual activity.'”
Shockingly, their warning did not enable the DNC to find anything.
When Rob Reeder, Ellen Cram Kowalczyk and I did work on usability of warnings, we recommended they be explanatory, actionable and tested. This warning fails on all those counts.
There may be a line, or really, a balancing act, around disclosing what the FBI knows, and ensuring that how they know it is protected. (I’m going to treat the FBI as the assigned mouthpiece, and move to discussing the US government as a whole, because otherwise we may rathole on authorities, US vs non-US activity, etc., which are a distraction.) Fundamentally, we can create a simple model of how the US government learns about these hacks:
- Network monitoring
- Kill chain-driven forensics
- Agents working at the attacker
- “Fifth party take” where they’ve broken into a spy server and are reading what those spies take.*
*This “fifth party take”, to use the NSA’s jargon, is what makes the NSA server takeover so interesting and relevant. Is the release of the NSA files a comment that the GRU knows that the NSA knows about their hack because the GRU has owned additional operational servers?
Now, we can ask, if the FBI says “look for connections to Twitter when there’s no one logged into Alice’s computer,” does it allow the attacker to distinguish between those four methods?
Now, it does disclose that that C&C pathway is known, and if the attacker has multiple paths, then it might be interesting to know that only one was detected. But there’s another tradeoff, which is that as long as the penetration is active, the US government can continue to find indicators, and use them to find other break-ins. That’s undeniably useful to the FBI, at the cost of the legitimacy of our electoral processes. That’s a bad tradeoff.
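To make the hypothetical indicator concrete, here’s a toy sketch of what acting on it might look like: correlate outbound connections against interactive login sessions and flag traffic to a watched host when nobody is logged in. Every name, timestamp and log shape below is invented for illustration; real detection would parse actual auth and proxy logs.

```python
from datetime import datetime

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

# Intervals when a user was interactively logged in: (start, end).
login_sessions = [
    (parse("2016-06-01 09:00"), parse("2016-06-01 17:30")),
]

# Outbound connections observed: (timestamp, destination host).
connections = [
    (parse("2016-06-01 10:15"), "twitter.com"),  # during a login session
    (parse("2016-06-01 03:42"), "twitter.com"),  # 3am, nobody logged in
]

def suspicious(connections, sessions, watched="twitter.com"):
    """Flag connections to the watched host outside any login session."""
    hits = []
    for ts, host in connections:
        if host != watched:
            continue
        if not any(start <= ts <= end for start, end in sessions):
            hits.append((ts, host))
    return hits

hits = suspicious(connections, login_sessions)
```

The point of the sketch is the tradeoff in the text: publishing even this simple rule tells the attacker which C&C pathway was noticed.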
We have to think about and discuss priorities and tradeoffs. We need to talk about the policy which the FBI is implementing, which seems to be to provide un-actionable, useless warnings. Perhaps that’s sufficient in some eyes.
We are not having a policy discussion about these tradeoffs, and that’s a shame.
Here are some questions that we can think about:
- Is the model presented above of how attacks are detected reasonable?
- Is there anything classified which changes the general debate? (No, we learned that from the CRISIS report.)
- What should a government warning include? A single IOC? Some fraction in a range (say 25-35%)? All known IOCs? (Using a range is interesting because it reduces information leakage back to an attacker who’s compromised a source.)
- How do we get IOCs to be bulk declassified so they can be used at organizations whose IT staff do not have clearances, cannot get clearances rapidly, and post-OPM ain’t likely to?
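The fractional-disclosure idea above (release some 25-35% of known IOCs, so the warning leaks less back to an attacker who has compromised a source) can be sketched in a few lines. The indicator values and the sampling scheme are my own invented stand-ins, not anything any agency actually does:

```python
import random

# 40 hypothetical indicators of compromise (documentation-range IPs).
iocs = [f"198.51.100.{i}" for i in range(1, 41)]

def disclose_fraction(iocs, low=0.25, high=0.35, rng=random):
    """Release a random subset of IOCs; the fraction itself is randomized
    within [low, high], so the recipient (and the attacker) cannot infer
    how many indicators are known in total."""
    fraction = rng.uniform(low, high)
    k = max(1, round(len(iocs) * fraction))
    return rng.sample(iocs, k)

released = disclose_fraction(iocs)
```

With 40 indicators, each warning releases between 10 and 14 of them, chosen at random.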
That’s a start. What other questions should we be asking so we can move from “Congressional leaders were briefed a year ago on hacking of Democrats” to “hackers were rebuffed from interfering in our elections” or, “hackers don’t even bother trying to attack elections”?
[Update: In “AS FBI WARNS ELECTION SITES GOT HACKED, ALL EYES ARE ON RUSSIA,” Wired links to an FBI Flash, which has an explicit set of indicators, including IPs and httpd log entries, along with explicit recommendations such as “Search logs for commands often passed during SQL injection.” This is far more detail than was in these documents a few years ago, and far more detail than I expected when I wrote the above.]
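As a rough illustration of what “search logs for commands often passed during SQL injection” can look like in practice, here is a minimal scan of httpd access log lines for common SQLi keywords. The patterns and log lines are illustrative stand-ins, not the FBI Flash’s actual indicator set, and a real ruleset would be far larger:

```python
import re

# A few strings that commonly show up in SQL injection attempts,
# including the URL-encoded forms (%20 = space, %27 = single quote).
SQLI_PATTERN = re.compile(
    r"(union(\s|%20)+select|information_schema|['\"]\s*or\s+1=1|%27|xp_cmdshell)",
    re.IGNORECASE,
)

log_lines = [
    '10.0.0.5 - - [28/Aug/2016] "GET /index.php?id=7 HTTP/1.1" 200 512',
    '10.0.0.9 - - [28/Aug/2016] "GET /index.php?id=7%20UNION%20SELECT%20user,pass HTTP/1.1" 200 9041',
]

def flag_sqli(lines):
    """Return log lines whose request matches a known-bad pattern."""
    return [line for line in lines if SQLI_PATTERN.search(line)]

flagged = flag_sqli(log_lines)
```

This is exactly the kind of explicit, actionable guidance the post argues warnings should carry: an admin can run it against their own logs today.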
No, seriously. Articles like “Microsoft Secure Boot key debacle causes security panic” and “Bungling Microsoft singlehandedly proves that golden backdoor keys are a terrible idea” draw on words in an advisory to say that this is all about golden keys and secure boot. This post is not intended to attack anyone, researchers, journalists or Microsoft, but to address a rather inflammatory claim that’s being repeated.
Based on my read of an advisory copy (which I made because I cannot read words on an animated background (yes, I’m a grumpy old man (who uses too many parentheticals (especially when I’m sick)))), this is a nice discovery of an authorization failure.
What they found is:
The “supplemental” policy contains new elements, for the merging conditions. These conditions are (well, at one time) unchecked by bootmgr when loading a legacy policy. And bootmgr of win10 v1511 and earlier certainly doesn’t know about them. To those bootmgrs, it has just loaded in a perfectly valid, signed policy. The “supplemental” policy does NOT contain a DeviceID. And, because they were meant to be merged into a base policy, they don’t contain any BCD rules either, which means that if they are loaded, you can enable testsigning.
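The class of bug being described can be made concrete with a deliberately stripped-down sketch: a loader that validates a signature but never checks what kind of policy it loaded. All names, fields and logic here are invented; this is not Microsoft’s code or data format.

```python
def legacy_load_policy(policy):
    """Older loader, as sketched in the advisory: it checks the
    signature, not the policy kind."""
    if not policy.get("signed"):
        raise ValueError("unsigned policy rejected")
    # BUG: no check that this is a base policy with a DeviceID and
    # BCD rules. A signed "supplemental" policy sails through, and
    # with no BCD rules loaded, nothing forbids testsigning.
    return {"testsigning_allowed": "bcd_rules" not in policy}

base = {"signed": True, "kind": "base",
        "device_id": "example-device", "bcd_rules": ["forbid-testsigning"]}
supplemental = {"signed": True, "kind": "supplemental"}  # no DeviceID, no rules

state_base = legacy_load_policy(base)
state_supp = legacy_load_policy(supplemental)
```

Note that no key is exposed anywhere in this failure mode: the signature check succeeds honestly, and the flaw is entirely in what the loader authorizes afterward.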
That’s a fine discovery and a nice vuln. There are ways Microsoft might have designed this better; I’m going to leave those for another day.
Where the post goes off the rails, in my view, is this:
About the FBI: are you reading this? If you are, then this is a perfect real world example about why your idea of backdooring cryptosystems with a “secure golden key” is very bad! Smarter people than me have been telling this to you for so long, it seems you have your fingers in your ears. You seriously don’t understand still? Microsoft implemented a “secure golden key” system. And the golden keys got released from MS own stupidity. Now, what happens if you tell everyone to make a “secure golden key” system?
So, no, they did not. No, it didn’t. Even a stopped clock …
You could design a system in which there’s a master key, and accidentally release that key. Based on the advisory, Microsoft has not done that. (I have not talked to anyone at MS about this issue; I might have talked to people about the overall design, but don’t recall having done so.) What this is is an authorization system with a design flaw. As far as I can tell, no keys have been released.
Look, there are excellent reasons to not design a “golden key” system. I talked about them at a fundamental engineering level in my threat modeling book, and posted the excerpt in “Threat Modeling Crypto Back Doors.”
The typical way the phrase “golden key” is used (albeit fuzzily) is that there is a golden key which unlocks communications. That is a bad idea. This is not that, and we as engineers or advocates should not undercut our position on that bad idea by referring to this research as if it really bears on that “debate.”
Back in October, 2014, I discussed a pattern of “Employees Say Company Left Data Vulnerable,” and it’s a pattern that we’ve seen often since. Today, I want to discuss the consultant’s variation on the story. This is less common, because generally smart consultants don’t comment on the security of their consultees. In this case, it doesn’t seem like the consultant’s report was leaked, but people are discussing it after a high-profile issue.
In brief, the DNC was hacked, probably by Russian intelligence, and emails were given to Wikileaks. Wikileaks published them without redacting things like credit card numbers or social security numbers. The head of the DNC has stepped down. (This is an unusual instance of someone losing their job, which is rare post-breach. However, she did not lose her job because of the breach, she lost it because the breach included information about how her organization tilted the playing field, and how she lied about doing so.)
This story captures a set of archetypes. I want to use this story as a foil for those archetypes, not to critique any of the parties. I’ll follow the pattern from “employees vs company” and present those three sections: “I told you so”, “potential spending”, and “how to do better.” I’ll also comment on preventability and “shame.”
Was it preventable?
Computer security consultants hired by the DNC made dozens of recommendations after a two-month review, the people said. Following the advice, which would typically include having specialists hunt for intruders on the network, might have alerted party officials that hackers had been lurking in their network for weeks… (“Democrats Ignored Cybersecurity Warnings Before Theft,” Michael Riley, Bloomberg.)
People are talking about this as if the DNC was ever likely to stop Russian intelligence from breaking into their computers. That’s a very, very challenging goal, one at which both US and British intelligence have failed. (And as I write this, an FBI agent has been charged with espionage on behalf of China.) There’s a lot of “might,” “could have,” and other words that say “possible” without assessing “probable.”
I told you so!
The report included “dozens of recommendations,” some of which, such as “taking special precautions to protect any financial information related to donors,” might be a larger project than a PCI compliance initiative. (The logic is that financial information collected by a party is more than just card numbers; there seem to be a lot of SSNs in the data as well.) If one recommendation is “get PCI compliant,” then “dozens of recommendations” might be a Sisyphean task, or perhaps the Augean Stables are a better analogy. In either case, only in mythology can the actions be completed.
Missing from the discussion I’ve seen so far is any statement of what was done. Did the organization do the top-5 things the consultants said to do? (Did they even break things out into a top-5?)
The review found problems ranging from an out-of-date firewall to a lack of advanced malware detection technology on individual computers, according to two of the people familiar with the matter.
It sounds like “advanced malware detection technology” would be helpful here, right? Hindsight is 20/20. An out-of-date firewall? Was it missing security updates (which would be worrisome, but less worrisome than one might think, depending on what those updates fix), or was it just not the latest revision? If it’s not the latest revision, it can probably still do its job. In carefully reading the article, I saw no evidence that any single recommendation, or even implementing all of them, would have prevented the breach.
The DNC is a small organization. They were working with a rapidly shifting set of campaign workers working for the Sanders and Clinton campaigns. I presume they’re also working on a great many state races, and the organizations those politicians are setting up.
I do not believe that doing everything in the consultant’s report could reasonably be expected to prevent a breakin by a determined mid-sized intelligence agency.
“Shame on them. It looks like they just did the review to check a box but didn’t do anything with it,” said Ann Barron-DiCamillo, who was director of US-Cert, the primary agency protecting U.S. government networks, until last February. “If they had acted last fall, instead of those thousands of e-mails exposed it might have been much less.”
Via Meredith Patterson, I saw “The Left’s Self-Destructive Obsession with Shame,” and there’s an infosec analog. Perhaps they would have found the attackers if they’d followed the advice, perhaps not. Does adding shame work to improve the cybers? If it did, it should have done so by now.
How to do better
I stand by what I said last time. The organization has paid the PR price, and we have learned nothing. What a waste. We should talk about exactly what happened at a technical level.
We should stop pointing fingers and giggling. It isn’t helping us. In many ways, the DNC is not so different from thousands of other small or mid-size organizations who hold sensitive information. Where is the list of effective practice for them to follow? How different is the set of recommendations in this instance from other instances? Where’s the list of “security 101” that the organization should have followed before hiring these consultants? (My list is at “Security 101: Show Your List!.”)
We should define the “smoke alarms and sprinklers” of cyber. Really, why does an organization like the DNC need to pay $60,000 for a cybersecurity policy? It’s a little hard to make sense of, but I think that the net operating expenditures of $5.1m is a decent proxy for the size of the organization, and (if that’s right) 1% of their net operating expenses went to a policy review. Was that a good use of money? How else should they have spent that? What concrete things do we, as a profession or as a community, say they should have done? Is there a reference architecture, or a system in a box which they could have run that we expect would resist Russian intelligence?
We cannot expect every small org to re-invent this wheel. We have to help them do better.
“Better safe than sorry” are the closing words in a NYT story, “A Colorado Town Tests Positive for Marijuana (in Its Water).”
Now, I’m in favor of safety, and there’s a tradeoff being made. Shutting down a well reduces safety by limiting the supply of water, and in this case, they closed a pool, which makes it harder to stay cool in 95 degree weather.
At Wired, Nick Stockton does some math, and says “IT WOULD TAKE A LOT OF THC TO CONTAMINATE A WATER SUPPLY.” (Shouting theirs.)
High-potency THC extract is pretty expensive. One hundred dollars for a gram of the stuff is not an unreasonable price. If this was an accident, it was an expensive one. If this was a prank, it was financed by Bill Gates…Remember, the highest concentration of THC you can physically get in a liter of water is 3 milligrams.
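Stockton’s arithmetic is easy to reproduce. Using the article’s two numbers, plus a made-up tank size for illustration, the cost works out like this:

```python
# Back-of-the-envelope version of the Wired math, with assumed inputs:
max_thc_mg_per_liter = 3      # rough solubility ceiling, per the article
extract_cost_per_gram = 100   # dollars per gram, per the article
tank_liters = 1_000_000       # hypothetical small municipal tank

thc_grams = tank_liters * max_thc_mg_per_liter / 1000  # mg -> grams
cost = thc_grams * extract_cost_per_gram
```

Saturating even a modest million-liter tank takes kilograms of extract and hundreds of thousands of dollars, which is the article’s point.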
Better safe than sorry is a tradeoff, and we should talk about it as such.
Even without drinking the, ummm, kool-aid, this doesn’t pass the giggle test.