20 Year Software: Engineering and Updates

Twenty years ago, Windows 95 was the most common operating system. Yahoo and Altavista were our gateways to the internet. Steve Jobs just returned to Apple. Google didn’t exist yet. America Online had just launched their Instant Messenger. IPv6 was coming soon. That’s part of the state of software in 1997, twenty years ago. We need to figure out what engineering software looks like for a twenty year lifespan, and part of that will be really doing such engineering, because theory will only take us to the limits of our imaginations.

Today, companies are selling devices that will last twenty years in the home, such as refrigerators and speakers, and making them with network connectivity. That network connectivity is both full internet connectivity, that is, Internet Protocol stacks, and also local network connectivity, such as Bluetooth and Zigbee.

We have very little idea how to make software that can survive as long as AOL IM did. (It’s going away in December, if you missed the story.)

Recently, there was a news cycle around Sonos updating their privacy policy. I don’t want to pick on Sonos, because I think what they’re experiencing is just a bit of the future, unevenly distributed. Responses such as this were common: “Hardware maker: Give up your privacy and let us record what you say in your home, or we’ll destroy your property”:

“The customer can choose to acknowledge the policy, or can accept that over time their product may cease to function,” the Sonos spokesperson said, specifically.

Or, as the Consumerist, part of Consumer Reports, puts it in “Sonos Holds Software Updates Hostage If You Don’t Sign New Privacy Agreement”:

Sonos hasn’t specified what functionality might no longer work in the future if customers don’t accept the new privacy policy.

There are some real challenges here, both technical and economic. Twenty years ago, we didn’t understand double-free or format string vulnerabilities. Twenty years of software updates aren’t going to be cheap. (I wrote about the economics in “Maintaining & Updating Software.”)

My read is that Sonos tried to write a privacy policy that balanced their anticipated changes, including Alexa support, with a bit of future-proofing. I think that the reason they haven’t specified what might not work is because they really don’t know, because nobody knows.

The image at the top is the sole notification that I’ve gotten that Office 2011 is no longer getting security updates. (Sadly, it’s only shown once per computer, or perhaps once per user of the computer.) Microsoft, like all tech companies, will cut functionality that it can’t support, like Visual Basic for Mac (https://www.macworld.com/article/1154785/business/welcomebackvisualbasic.html), and also “end of lifes” its products. They do so on a published timeline, but it seems wrong to apply that to a refrigerator, end-of-lifeing your food supply.

There’s probably a clash coming between what’s allowable and what’s economically feasible. If your contract says you can update your device at any time, it still may be beyond “the corners of the contract” to shut it off entirely. And beyond being economically challenging, it may not even be technically feasible to update the system. Perhaps the chip is too small, or its power budget too meager, to always connect over TLS4.2, needed to address the SILLYLOGO attack.

What we need might include:

  • A Dead Software Foundation, dedicated to maintaining the software which underlies IoT devices for twenty years. This is not only the Linux kernel, but things like busybox and openssl. Such a foundation could be funded by organizations shipping IoT devices, or even by governments, concerned about the externalities, what Sean Smith called “the Cyber Love Canal” in The Internet of Risky Things. The Love Canal analogy is apt; in theory, the government cleans up after the firms that polluted are gone. (The practice is far more complex.)
  • Model architectures that show how to engineer devices, such as an internet speaker, so that they can effectively be taken offline when the time comes; a minimal sketch of the idea follows this list. (There’s work in the mobile app space on making apps work offline, which is related, but carries the expectation that the app will eventually reconnect.)
  • Conceptualization of the legal limits of what you can sign away in the fine print. (This may be everything; between severability and arbitration clauses, the courts have let contract law tilt very far towards the contract authors, but Congress did step in to write the Consumer Review Fairness Act.) The FTC has commented on issues of device longevity, but not (afaik) on limits of contracts.
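
On that second point, here is a minimal sketch, in Python, of what “designed to be taken offline” could look like. Everything here is hypothetical (the feature names, the modes, the idea of tagging features by cloud dependence); it is one way a device could degrade gracefully rather than brick when the service behind it, or the agreement covering it, goes away.

```python
# Hypothetical sketch: features declare whether they need the vendor's cloud,
# and the device drops to a local-only mode instead of bricking when that
# cloud goes away (or the owner declines a new agreement).
from enum import Enum

class Mode(Enum):
    CONNECTED = "connected"    # vendor service reachable and still supported
    LOCAL_ONLY = "local_only"  # service gone; keep everything that works locally

# Invented feature names for a speaker, purely for illustration.
FEATURES = {
    "play_from_line_in":   {"needs_cloud": False},
    "play_from_local_nas": {"needs_cloud": False},
    "voice_assistant":     {"needs_cloud": True},
    "streaming_service":   {"needs_cloud": True},
}

def available_features(mode: Mode) -> list[str]:
    """Everything in CONNECTED mode; only cloud-independent features otherwise."""
    return [name for name, f in FEATURES.items()
            if mode is Mode.CONNECTED or not f["needs_cloud"]]

# After end of life, the speaker still plays music; only cloud features go away.
print(available_features(Mode.LOCAL_ONLY))  # ['play_from_line_in', 'play_from_local_nas']
```

The design choice is simply to make “the cloud is gone” a first-class state from day one, so end-of-life behavior is decided by engineers up front rather than by an outage, or a lawyer, later.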

What else do we need to build software that survives for twenty years?

It’s Not The Crime, It’s The Coverup or the Chaos

Well, Richard Smith has “resigned” from Equifax.

The CEO being fired is a rare outcome of a breach, and so I want to discuss what’s going on and put it into context, which includes the failures at DHS and the Deloitte breach. I usually aim to follow the advice to praise specifically and criticize in general, but I’m breaking that pattern here because we can learn so much from the specifics of these cases, and in so learning, do better.

Smith was not fired because of the breach. Breaches happen. Executives know this. Boards know this. The breach is outside of their control. Smith was fired because of the post-breach chaos. Systems that didn’t work. Tweeting links to a scam site for two weeks. PINs that were recoverable. Weeks of systems saying “you may have been a victim.” Headlines like “Why the Equifax Breach Stings So Bad” in the NYTimes. Smith was fired in part because of the post-breach chaos, which was something he was supposed to control.

But it wasn’t just the chaos. It was that Equifax displayed so much self-centeredness after the breach. They had the chutzpah to offer up their own product as a remedy. And that self-dealing comes from the company seeing itself as a victim. From failing to understand how the breach would be seen by the rest of the world. And that’s a very similar motive to the one that leads to coverups.

In The New School Andrew and I discussed how fear of firing was one reason that companies don’t disclose breaches. We also discussed how, once you agree that “security issues” are things which should remain secret or shared with a small group, you can spend all your energy on rules for information sharing, and have no energy left for actual information sharing.

And I think that’s the root cause of “U.S. Tells 21 States That Hackers Targeted Their Voting Systems” a full year after finding out:

The notification came roughly a year after officials with the United States Department of Homeland Security first said states were targeted by hacking efforts possibly connected to Russia.

A year.

A year.

A year after states were first targeted. A year in which “Obama personally warned Mark Zuckerberg to take the threats of fake news ‘seriously.’” (Of course, the two issues may not have been provably linkable at the time.) But. A year.

I do not know what the people responsible for getting that message to the states were doing during that time, but we have every reason to believe that it probably had to do with (and here, I am using not my sarcastic font, but my scornful one) “rules of engagement,” “traffic light protocols,” “sources and methods” and other things which are at odds with addressing the issue. (End scornful font.) I understand the need for these things. I understand protecting sources is a key role of an intelligence service which wants to recruit more sources. And I also believe that there’s a time to risk those things. Or we might end up with a President who has more harsh words for Australia than the Philippines. More time for Russia than Germany.

In part, we have such a President because we value secrecy over disclosure. We accept these delays and view them as reasonable. Of course, the election didn’t turn entirely on these issues, but on our electoral college system, which I discussed at some length, including ways to fix it.

All of which brings me to the Deloitte breach, “Deloitte hit by cyber-attack revealing clients’ secret emails.” Deloitte, along with the others who make up the big four audit firms, gets access to its clients’ deepest secrets, and so you might expect that the response to the breach would be similar levels of outrage. And I suspect a lot of partners are making a lot of hat-in-hand visits to boardrooms, and contritely trying to answer questions like “what the flock were you people doing?” and “why the flock weren’t we told?” I expect that there are going to be some very small bonuses this year. But, unlike our relationship with Equifax, boards do not feel powerless in relation to their auditors. They can pick and swap. Boards do not feel that the system is opaque and unfair. (They sometimes feel that the rules are unfair, but that’s a different failing.) The extended reporting time will likely be attributed to the deep analysis that Deloitte did so it could bring facts to its customers, and that might even be reasonable. After all, a breach is tolerable; chaos afterwards may not be.

The two biggest predictors of public outrage are chaos and coverups. No, that’s not quite right. The biggest causes are chaos and coverups. (Those intersect poorly with data brokerages, but are not limited to them.) And both are avoidable.

So what should you do to avoid them? There’s important work in preparing for a breach, and in preventing one.

  • First, run tabletop response exercises to understand what you’d do in various breach scenarios. Then re-run those scenarios with the principals (CEO, General Counsel) so they can practice, too.
  • To reduce the odds of a breach, realize that you need continuous and integrated security as part of your operational cycles. Move from focusing on pen tests, red teams and bug bounties to a focus on threat modeling, so you can find problems systematically and early.

I’d love to hear what other steps you think organizations often miss out on.

Diagrams in Threat Modeling

When I think about how to threat model well, one of the elements that is most important is how much people need to keep in their heads, the cognitive load if you will.

In reading Charlie Stross’s blog post, “Writer, Interrupted” this paragraph really jumped out at me:

One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.

One of the reasons that I’m fond of diagrams is that they allow the threat modelers to migrate information out of their heads into a diagram, making room for thinking about threats.

Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good, that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, the way hand-knitting is worth keeping as a hobby, or the way a careful chef’s preparation of a fine meal is.

I think a lot about where drawing diagrams on a whiteboard falls. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.

That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)

So what are the advantages and disadvantages of each?

Whiteboard

  • Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
  • Ease of use. A whiteboard is still easier than just about any other drawing tool.
  • Discovery of different perspective/belief. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may contain a critique of someone else’s work, it may be a conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
  • Storytelling. It is easier to tell a story standing next to a whiteboard than any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
  • Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, making things more tedious for everyone around.

Software Tools

  • Automatic analysis. Tools like the Microsoft Threat Modeling tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?” (There’s a small sketch of this idea just after the list.)
  • Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
  • Photographs of whiteboards are hard to archive and search without further processing.
  • Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough,” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
  • Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.
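
To make the automatic-analysis point concrete, here is a small, hypothetical sketch of how structure helps answer “what could go wrong?”: represent the diagram as data, then apply simple per-element rules (loosely STRIDE-per-element) to generate a baseline threat list. This is an illustration of the idea, not the Microsoft tool’s actual rule engine, and the element names are invented.

```python
# Hypothetical sketch: a diagram represented as data, plus simple per-element
# rules that generate a baseline threat list for people to refine.
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    kind: str                          # "process", "datastore", "external", "dataflow"
    crosses_trust_boundary: bool = False

# Loosely STRIDE-per-element: which threat categories apply to which element kinds.
BASELINE = {
    "process":   ["Spoofing", "Tampering", "Repudiation", "Information disclosure",
                  "Denial of service", "Elevation of privilege"],
    "external":  ["Spoofing", "Repudiation"],
    "datastore": ["Tampering", "Information disclosure", "Denial of service"],
    "dataflow":  ["Tampering", "Information disclosure", "Denial of service"],
}

def baseline_threats(model):
    """Yield one candidate threat per (element, applicable category) pair."""
    for element in model:
        for category in BASELINE.get(element.kind, []):
            note = " (crosses trust boundary)" if element.crosses_trust_boundary else ""
            yield f"{category} against {element.name}{note}"

# Invented example model.
model = [
    Element("Web client", "external"),
    Element("Login request", "dataflow", crosses_trust_boundary=True),
    Element("Auth service", "process"),
    Element("User DB", "datastore"),
]
for threat in baseline_threats(model):
    print(threat)
```

The point is that the structure does the remembering: each element gets its checklist of categories, and the people in the room spend their energy refining and prioritizing threats rather than recalling them.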

(Of course, there are apps that help you take images from a whiteboard and improve them, for example, Best iOS OCR Scanning Apps, which I’m ignoring for purposes of teasing things out a bit. Operationally, probably worth digging into.)

I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?

Journal of Terrorism and Cyber Insurance

At the RMS blog, we learn they are “Launching a New Journal for Terrorism and Cyber Insurance:”

Natural hazard science is commonly studied at college, and to some level in the insurance industry’s further education and training courses. But this is not the case with terrorism risk. Even if insurance professionals learn about terrorism in the course of their daily business, as they move into other positions, their successors may begin with hardly any technical familiarity with terrorism risk. It is not surprising therefore that, even fifteen years after 9/11, knowledge and understanding of terrorism insurance risk modeling across the industry is still relatively low.

There is no shortage of literature on terrorism, but much has a qualitative geopolitical and international relations focus, and little is directly relevant to terrorism insurance underwriting or risk management.

This is particularly exciting as Gordon Woo was recommended to me as the person to read on insurance math in new fields. His Calculating Catastrophe is comprehensive and deep.

It will be interesting to see who they bring aboard on the cyber side to complement the very strong terrorism risk team.

Open Letters to Security Vendors

John Masserini has a set of “open letters to security vendors” on Security Current.

Everyone involved in product or sales at a security startup should read them. John provides insight into what it’s like to be pitched by too many startups, and provides a level of transparency that’s sadly hard to find. Personally, I learned a great deal about what happens when you’re pitched while I was at a large company, and I can vouch for the realities he puts forth. The sooner you understand those realities and incorporate them into your thinking, the more successful we’ll all be.

After meeting with dozens of startups at Black Hat a few weeks ago, I’ve realized that the vast majority of the leaders of these new companies struggle to articulate the value their solutions bring to the enterprise.

Why does John’s advice make us all more successful? Because each organization that follows it moves towards a more efficient state, for themselves and for the folks who they’re pitching.

Getting more efficient means you waste less time per prospect. When you focus on qualified leads who care about the problem you’re working on, you get more sales per unit of time. What’s more, by not wasting the time of those who won’t buy, you free up their time for talking to those who might have something to provide them. (One banker I know said “I could hire someone full-time to reject startup pitches.” Think about what that means for your sales cycle for a moment.)

Go read “An Open Letter to Security Vendors” along with part 2 (why sales takes longer) and part 3 (the technology challenges most startups ignore).

The Evolution of Secure Things

One of the most interesting security books I’ve read in a while barely mentions computers or security. The book is Petroski’s The Evolution of Useful Things.


As the subtitle explains, the book discusses “How Everyday Artifacts – From Forks and Pins to Paper Clips and Zippers – Came to be as They are.”

The chapter on the fork is a fine example of the construction of the book. It traces the fork’s evolution from a two-tined tool useful for holding meat as it was cut to the four tines we have today. Petroski documents the many variants of forks which were created, and how each was created with reference to the perceived failings of previous designs. The first designs were useful for holding meat as you cut it, before transferring it to your mouth with the knife. But those designs were unable to hold peas, extract an oyster, cut pastry, or meet a variety of other goals that diners had. Those goals acted as evolutionary pressures, and drove innovators to create new forms of the fork.

Not speaking of the fork, but rather of newer devices, Petroski writes:

Why designers do not get things right the first time may be more understandable than excusable. Whether electronics designers pay less attention to how their devices will be operated, or whether their familiarity with the electronic guts of their own little monsters hardens them against these monsters’ facial expressions, there is a consensus among consumers and reflective critics like Donald Norman, who has characterized “usable design” as the “next competitive frontier,” that things seldom live up to their promise. Norman states flatly, “Warning labels and large instruction manuals are signs of failures, attempts to patch up problems that should have been avoided by proper design in the first place.” He is correct, of course, but how is it that designers have, almost to a person, been so myopic?

So what does this have to do with security?

(No, it’s not “stick a fork in it, it’s done fer.”)

It’s a matter of the pressures brought to bear on the designs of even what we now see as the very simplest technologies. It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion, in both superficial and deep ways.

In security, we ask for perfection against an ill-defined and ever-growing list of hard-to-understand properties, such as “double-free safety.”

Computer security is in a process of moving from expressing “security” to expressing more precise goals, and the evolution of useful tools for finding, naming, and discussing vulnerabilities will help us express what we want in secure software.

The various manifestations of failure, as have been articulated in case studies throughout this book, provide the conceptual underpinning for understanding the evolving form of artifacts and the fabric of technology into which they are inextricably woven. It is clearly the perception of failure in existing technology that drives inventors, designers, and engineers to modify what others may find perfectly adequate, or at least usable. What constitutes failure and what improvement is not totally objective, for in the final analysis a considerable list of criteria, ranging from the functional to the aesthetic, from the economic to the moral, can come into play. Nevertheless, each criterion must be judged in a context of failure, which, though perhaps much easier than success to quantify, will always retain an aspect of subjectivity. The spectrum of subjectivity may appear to narrow to a band of objectivity within the confines of disciplinary discussion, but when a diversity of individuals and groups comes together to discuss criteria of success and failure, consensus can be an elusive state.

Even if you’ve previously read it, re-reading it from an infosec perspective is worthwhile. Highly recommended.

[As I was writing this, Ben Hughes wrote a closely related post on the practical importance of tradeoffs, “A Dockery of a Sham.”]

Phishing and Clearances

Apparently, the CISO of US Homeland Security, one Paul Beckman, said that:

“Someone who fails every single phishing campaign in the world should not be holding a TS SCI [top secret, sensitive compartmentalized information—the highest level of security clearance] with the federal government” (Paul Beckman, quoted in Ars Technica)

Now, I’m sure being in the government and trying to defend against phishing attacks is a hard problem, and I don’t want to ignore that real frustration. At the same time, GAO found that the government is having trouble hiring cybersecurity experts, and that was before the SF-86 leak.

Removing people’s clearances is one response. It’s not clear from the text if these are phishing attacks (strictly defined, an attempt to get usernames and passwords), or malware attached to the emails.

In each case, there are other fixes. The first would be multi-factor authentication for government logins. This was the subject of a push, and if agencies aren’t at 100%, maybe getting there is better than punitive action. Another fix could be to use an email client which makes phishing emails easier to spot. For example, an email client could display the RFC-822 sender address (e.g., “<account.management@gmail.com>”), rather than the friendly text, for any address the client hasn’t previously sent email to. They could provide password management software with built-in anti-phishing (checking the domain before submitting the password). They could, I presume, do other things which minimize the demands on the human being.
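
As a sketch of that last pair of ideas (and not any particular mail client’s or password manager’s actual behavior), the core check is a domain comparison: only offer a saved credential when the registrable domain of the page asking for it matches the domain it was saved for. A real implementation would use the Public Suffix List rather than the crude two-label approximation below.

```python
# Hypothetical sketch of "check the domain before submitting the password."
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    # Crude approximation: last two labels of the hostname. A real tool
    # would consult the Public Suffix List.
    host = (urlparse(url).hostname or "").lower()
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def should_autofill(saved_for_url: str, current_url: str) -> bool:
    # The software makes the judgment, so the human never has to eyeball
    # "paypa1.com" versus "paypal.com" under pressure.
    return registrable_domain(saved_for_url) == registrable_domain(current_url)

assert should_autofill("https://accounts.example.com/login", "https://www.example.com/signin")
assert not should_autofill("https://accounts.example.com/login", "https://example.com.evil.net/login")
```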

When Rob Reeder, Ellen Cram Kowalczyk and I created the “NEAT” guidance for usable security, we didn’t make “Necessary” first just because the acronym is neater that way; we put it first because the poor person is usually overwhelmed, and they deserve to have software make the decisions that software can make. Angela Sasse called this the ‘compliance budget,’ and it’s not a departmental budget, it’s a human one. My understanding is that those who work for the government already have enough things drawing on that budget. Making people anxious that they’ll lose their clearance and have to take a higher-paying private sector job should not be one of them.

Survey for How to Measure Anything In Cybersecurity Risk

This is a survey from Doug Hubbard, author of How To Measure Anything. He is currently writing another book with Richard Seiersen (GM of Cyber Security at GE Healthcare) titled How to Measure Anything in Cybersecurity Risk. As part of the research for this book, they are asking for your assistance as an information security professional by taking the following survey. It is estimated to take 10 to 25 minutes to complete. In return for participating in this survey, you will be invited to attend a webinar for a sneak peek at the book’s content. You will also be given a summary report of this survey. The survey also includes requests for feedback on the survey itself.

https://www.surveymonkey.com/r/VMHDQ7N

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I’m a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker post “Luke in the Sky with Diamonds,” talking about threat intelligence, and he gets bonus points for the crossover title. And I think it’s important that we look at a hole in his argument, and see to fixing it.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.

Most of my issues boil down to two questions. The first is how could any analysis of the Battle of Yavin fail to mention the crucial role played by Obi Wan Kenobi, and second, what the heck do you do with the data? (And a third, about the Diamond Model itself — how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is, okay, we have a model of what went wrong, and what do we do with it? The Death Star has been destroyed, what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common — so common that the military has a phenomenon it calls ‘fighting the last war.’ This analysis focuses in on a limited set of actions, the ones which succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.

What I don’t know about the Diamond Model is how it does a better job at avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor, I’m saying I don’t know and would like to see some empirical work on the subject.)

What Happened At OPM?

I want to discuss some elements of the OPM breach and what we know and what we don’t. Before I do, I want to acknowledge the tremendous and justified distress that those who’ve filled out the SF-86 form are experiencing. I also want to acknowledge the tremendous concern that those who employ those with clearances must be feeling. The form is designed as an (inverted) roadmap to suborning people, and now all that data is in the hands of a foreign intelligence service.

The National Journal published “A Timeline of Government Data Breaches.” [Image: OPM data breach timeline]

I asked after the root cause, and Rich Bejtlich responded “The root cause is a focus on locking doors and windows while intruders are still in the house” with a pointer to his “Continuous Diagnostic Monitoring Does Not Detect Hackers.”

And while I agree with Richard’s point in that post, I don’t think that’s the root cause. When I think about root cause, I think about approaches like Five Whys or Ishikawa. If we apply this sort of approach then we can ask, “Why were foreigners able to download the OPM database?” There are numerous paths that we might take, for example:

  1. Because of a lack of two-factor authentication (2FA)
  2. Why? Because some critical systems at OPM don’t support 2FA.
  3. Why? Because of a lack of budget for upgrades & testing (etc)

Alternately, we might go down a variety of paths based on the Inspector General Report. We might consider Richard’s point:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because someone there knows how to lock doors and windows.
  3. Why? Because lots of organizations hire out of government agencies.
  4. Why? Because they pay better
  5. [Alternate] Employees don’t like the clearance process

But we can go down alternate paths:

  1. A focus on locking doors and windows while intruders are still in the house.
  2. Why? Because finding intruders in the house is hard, and people often miss those stealthy attackers.
  3. Why? Because networks are chaotic and change frequently
  4. [Alternate] Because not enough people publish lists of IoCs, so defenders don’t know what to look for.

What I’d really like to see are specific technical facts laid out. (Or heck, if the facts are unknowable because logs rotated or attackers deleted them, or we don’t even know why we can’t know, let’s talk about that, and learn from it.)

OPM and Katherine Archuleta have already been penalized. Let’s learn things beyond dates. Let’s put the facts out there, or, as I quoted in my last post we “should declare the causes which impel them to the separation”, or “let Facts be submitted to a candid world.” Once we have facts about the causes, we can perform deeper root cause analysis.

I don’t think that the OIG report contains those causes. Each of those audit failings might play one of several roles. The failing might have been causal, and fixing it would have stopped the attack. The failing might have been causal, but the attacker would have worked around it. The failing might be irrelevant (for example, I’ve rarely seen an authorization to operate prevent an attack, unless you fold it up very small and glue it into a USB port). The failings might even have been distracting, taking attention away from other work that might have prevented the attack.

A collection of public facts would enable us to have a discussion about those possibilities. We could have a meta-conversation about those categorizations of failings, and whether there are others which make more sense.

Alternately, we can keep going the way we’re going.

So. What happened at OPM?