$35M for Covering Up a Breach

“The remains of Yahoo just got hit with a $35 million fine because it didn’t tell investors about Russian hacking.” The headline says most of it, but importantly, “‘We do not second-guess good faith exercises of judgment about cyber-incident disclosure. But we have also cautioned that a company’s response to such an event could be so lacking that an enforcement action would be warranted. This is clearly such a case,’ said Steven Peikin, Co-Director of the SEC Enforcement Division.”

I often hear people, including lawyers, get very focused on “it’s not material.” Those people should study the SEC’s statement carefully.

Designing for Good Social Systems

There’s a long story in the New York Times, “Where Countries Are Tinderboxes and Facebook Is a Match:”

A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing. Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.

I’ve written previously about the drama triangle, how social media drives engagement through dopamine and hatred, and a tool to help you breathe through such feelings.

These social media tools are dangerous, not just to our mental health, but to the health of our societies. They are actively being used to fragment, radicalize and undermine legitimacy. The techniques to drive outrage are developed and deployed at rates that are nearly impossible for normal people to understand or engage with. We, and these platforms, need to learn to create tools that preserve the good things we get from social media, while inhibiting the bad. And in that sense, I’m excited to read about “20 Projects Will Address The Spread Of Misinformation Through Knight Prototype Fund.”

We can usefully think of this as a type of threat modeling.

  • What are we working on? Social technology.
  • What can go wrong? Many things, including threats, defamation, and the spread of fake news. Each new system context brings with it new types of failure, and we have to extend our existing models and create new ones to address them.
  • What are we going to do about it? The Knight prototypes are an interesting exploration of possible answers.
  • Did we do a good job? Not yet.

These emergent properties of the systems are not inherent. Different systems have different problems, and that means we can discover how design choices interact with these downsides. I would love to hear about other useful efforts to understand and respond to these emergent types of threats. How do we characterize the attacks? How do we think about defenses? What’s worked to minimize the attacks or their impacts on other systems? What “obvious” defenses, such as “real names,” tend to fail?

Image: Washington Post

Gartner on DevSecOps Toolchain

I hadn’t seen “Integrating Security Into the DevSecOps Toolchain,” a Gartner piece that’s fairly comprehensive, grounded, and well thought through.

If you enjoyed my “Reasonable Software Security Engineering,” then this Gartner blog does a nice job of laying out important aspects which didn’t fit into that ISACA piece.

Thanks to Stephen de Vries of Continuum for drawing my attention to it.

Speculative Execution Threat Model

There’s a long and important blog post from Matt Miller, “Mitigating speculative execution side channel hardware vulnerabilities.”

What makes it important is that it’s a model of these flaws, and helps us understand their context and how else they might appear. It’s also nicely organized along threat modeling lines.

What can go wrong? There’s a set of primitives (conditional branch misprediction, indirect branch misprediction, and exception delivery or deferral). These primitives are combined into gadgets: windowing gadgets and disclosure gadgets.
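
To make the “conditional branch misprediction” and “disclosure gadget” ideas concrete, here’s a minimal sketch in the shape of the widely published Spectre variant 1 example. The identifiers (array1, array2, array1_size) follow that public proof-of-concept pattern; this is an illustration, not code from Miller’s post.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: the classic Spectre v1 (bounds check bypass) shape. */
uint8_t array1[16];
size_t  array1_size = 16;
uint8_t array2[256 * 64];   /* probe array: one cache line per byte value */

uint8_t victim_function(size_t x) {
    /* The bounds check below can be mispredicted. During the speculation
     * window, x may exceed array1_size, so array1[x] can read an
     * out-of-bounds (potentially secret) byte. The dependent load from
     * array2 then leaves a cache footprint an attacker can measure later
     * (for example, with FLUSH+RELOAD); that is the disclosure step. */
    if (x < array1_size) {
        return array2[array1[x] * 64];
    }
    return 0;
}
```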

There are also models for mitigations, including classes of ways to prevent speculative execution, remove sensitive content from memory, and remove observation channels.
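
As a rough illustration of the “prevent speculative execution” class, here are two common patterns applied to the gadget sketched above. These are my sketches of well-known techniques (a serializing barrier and index masking), not code taken from the post.

```c
#include <stdint.h>
#include <stddef.h>
#include <emmintrin.h>   /* _mm_lfence() on x86 with SSE2 */

extern uint8_t array1[16];
extern size_t  array1_size;
extern uint8_t array2[256 * 64];

/* Pattern 1: a serializing barrier after the bounds check closes the
 * speculation window before the out-of-bounds load can execute. */
uint8_t victim_with_barrier(size_t x) {
    if (x < array1_size) {
        _mm_lfence();
        return array2[array1[x] * 64];
    }
    return 0;
}

/* Pattern 2: index masking clamps the index without a branch, so even a
 * mispredicted path cannot read outside array1. (The 0x0F mask works
 * because array1 has 16 entries; real code derives the mask generically.) */
uint8_t victim_with_masking(size_t x) {
    if (x < array1_size) {
        x &= 0x0F;
        return array2[array1[x] * 64];
    }
    return 0;
}
```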

Pen Testing The Empire

[Updated with a leaked copy of the response from Imperial Security.]

To: Grand Moff Tarkin
Re: “The Pentesters Strike Back” memo
Classification: Imperial Secret/Attorney Directed Work Product

Sir,

We have received and analyzed the “Pentesters Strike Back” video, created by Kessel Cyber Security Consulting, in support of their report 05.25.1977. This memo analyzes the video, presents internal analysis, and offers strategies for response to the Trade Federation.

In short, this is typical pen test slagging of our operational security investments, which meet or exceed all best practices. It is likely just a negotiating tactic, albeit one with catchy music.

Finding 1.3: “Endpoints unprotected against spoofing.” This is true, from a certain point of view. Following the execution of Order 66, standing policy has been “The Jedi are extinct. Their fire has gone out of the universe.” As such, Stormtrooper training has been optimized to improve small arms accuracy, which has been a perennial issue identified in after-action reports.

Finding 2.1: “Network Segmentation inadequate.” This has been raised repeatedly by internal audit; perhaps this would be a good “area for improvement” in response to this memo.

Finding 4.2: “Data at rest not encrypted.” This is inaccurate. The GalactiCAD server in question was accessed from an authorized endpoint. As such, it decrypted the data, and sent it over an encrypted tunnel to the endpoint. The pen testers misunderstand our network architecture, again.

Finding 5.1: “Physical access not controlled.” Frankly, sir, this battle station is the ultimate power in the universe. It has multiple layers of physical access control, including the screening units of Star Destroyers and Super SDs, Tie Fighters, Storm Trooper squadrons in each landing bay, [Top Secret-1], and [Top Secret-2]. Again, the pen testers ignore facts to present “findings” to their clients.

Finding 5.2: “Unauthorized mobile devices allow network access.” This is flat-out wrong. In the clip presented, TK-427 is clearly heard authorizing the droids in question. An audit of our records indicates that both droids presented authorization certificates signed by Lord Vader’s certificate authority. As you know, this CA has been the source of some dispute over time, but the finding presented is, again, simply wrong.

Finding 8.3: “Legacy intruder-tracking system inadequately concealed.” Again, this claim simply has no basis in fact. The intruder-tracking system worked perfectly, allowing the Imperial Fleet to track the freighter to Yavin. In analyzing the video, we suspect that General Organa’s intuition was “Force”-aided.

In summary, there are a few minor issues identified which require attention. However, the bulk of the report presents misunderstandings and unreasonable expectations, and focuses heavily on a set of assumptions that just don’t stand up to scrutiny. We are in effective compliance with PCI-DSS, this test did not reveal a single credit card number, and the deal with the Trade Federation should not be impeded.

Via Bruce Schneier.

Portfolio Thinking: AppSec Radar

At DevSecCon London, I met Michelle Embleton, who is doing some really interesting work around what she calls an AppSec Radar. The idea is to visually show what technologies, platforms, et cetera are being evaluated, adopted and in use, along with what’s headed out of use.

Surprise technology deployments always make for painful conversations.

This strikes me as a potentially quite powerful way to improve communication between security and other teams, and worth some experimentation in 2018.

Vulnerabilities Equities Process and Threat Modeling

[Update: More at DarkReading, “The Critical Difference Between Vulnerabilities Equities & Threat Equities.”]

The Vulnerabilities Equities Process (VEP) is how the US Government decides if they’ll disclose a vulnerability to the manufacturer for fixing. The process has come under a great deal of criticism, because it’s never been clear what’s being disclosed, what fraction of vulnerabilities are disclosed, if the process is working, or how anyone without a clearance is supposed to evaluate that beyond “we’re from the government, we’re here to help,” or perhaps “I know people who managed this process, they’re good folks.” Neither of those is satisfactory.

So it’s a very positive step that on Wednesday, White House Cybersecurity Coordinator Rob Joyce published “Improving and Making the Vulnerability Equities Process Transparent is the Right Thing to Do,” along with the process. Schneier says “I am less [pleased]; it looks to me like the same old policy with some new transparency measures — which I’m not sure I trust. The devil is in the details, and we don’t know the details — and it has giant loopholes.”

I have two overall questions, and an observation.

The first question is, was the published policy written when we had commitments to international leadership and being a fair dealer, or was it created or revised with an “America First” agenda?

The second question relates to the four equities to be considered. These are the “major factors” that senior government officials are supposed to consider in exercising their judgment. But, surprisingly, there’s an “additional” consideration. (“At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound.”) Does that imply that those officials are not required to weigh public desire for resilient and safe systems? What does it mean that the “additionally” sentence is not an equity being considered?

Lastly, the observation is that the VEP is all about vulnerabilities, not about flaws or design tradeoffs. From the charter, pages 9-10:

The following will not be considered to be part of the vulnerability evaluation process:

  • Misconfiguration or poor configuration of a device that sacrifices security in lieu of availability, ease of use or operational resiliency.
  • Misuse of available device features that enables non-standard operation.
  • Misuse of engineering and configuration tools, techniques and scripts that increase/decrease functionality of the device for possible nefarious operations.
  • Stating/discovering that a device/system has no inherent security features by design.

Threat modeling is the umbrella term for the security engineering that discovers and deals with these issues. It’s what I spend my days on, because I see that the tremendous effort in dealing with vulnerabilities is paying off: we see fewer of them in well-engineered systems.

In October, I wrote about the fact we’re getting better at dealing with vulnerabilities, and need to think about design issues. I closed:

In summary, we’re doing a great job at finding and squishing bugs, and that’s opening up new and exciting opportunities to think more deeply about design issues. (Emergent Design Issues)

Here, I’m going to disagree with Bruce, because I think that this disclosure shows us an important detail that we didn’t previously know. Publication exposes it, and lets us talk about it.

So, I’m going to double-down on what I wrote in October, and say that we need the VEP to expand to cover those issues. I’m not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources and methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure. As Bill Gates wrote 15 years ago, we need systems that people “will always be able to rely on, [] to be available and to secure their information. Trustworthy Computing is computing that is as available, reliable and secure as electricity, water services and telephony.”

We cannot achieve that goal with the VEP being narrowly scoped. It must evolve to deal with the sorts of flaws and design tradeoffs that threat modeling helps us find.

Photo by David Clode on Unsplash.

The Fights We Have to Fight: Fixing Bugs

One of the recurring lessons from Petroski is how great engineers overcome not only the challenges of physical engineering, such as calculating loads and determining build orders, but also the real-world challenges to their ideas, including financial and political ones. For example:

Many a wonderful concept, beautifully drawn by an inspired structural artist, has never risen off the paper because its cost could not be justified. Most of the great bridges of the nineteenth century, which served to define bridge building and other technological achievements for the twentieth century, were financed by private enterprise, often led by the expanding railroads. Engineers acting as entrepreneurs frequently put together the prospectuses, and in some cases almost single-handedly promoted their dreams to the realists. […] Debates over how to pay for them were common. (Engineers of Dreams: Great Bridge Builders and the Spanning of America, Henry Petroski)

Many security professionals have a hobby of griping that products get rushed to market, maybe to be secured later. We have learned to be more effective at building security in, and in doing so to reduce product costs and improve on-time delivery. But some products were built before we knew how to do that, and others are going to get built by companies that choose not to. And in that sense, Collin Greene’s retrospective, “Fixing Security Bugs,” is very much worth your time. It’s a retrospective on the Vista security program from a pen-test perspective.

Hacking: Exciting.
Finding bugs: Exciting.
Fixing those bugs: Not exciting.
The thing is, the finish line for our job in security is getting bugs fixed¹, not just found and filed. Doing this effectively is not a technology problem. It is a communications, organizational² and psychology problem.

I joined Microsoft while the Vista pen test was finishing up, so my perspective is complementary. I’d like to add a few perspectives to his points.

First, he asks “is prioritization correct?” After Vista, the SDL team created security bug bars, and later refined them to align with the MSRC update priorities. That alignment with the MSRC priorities was golden: it made it super-clear that if you didn’t fix something before ship, you were going to have to ship an update later. As a security engineer, you need to align your prioritization with the overall delivery priorities. Having everything be “extremely critical,” “very critical,” or “moderately critical” means you don’t know what matters, and so nothing does.

Second, “why security matters” was still a fight to be fought in Vista. By Windows 7, security had completed its “move left.” The spec form contained sections for security and privacy. Threat model review was a gate for start of coding. These process changes happened while developers were “rebelling” against Vista’s “overweight” engineering process. They telegraphed that security mattered to management and executives. As a security engineer, you need to get management to spend time talking about how security is balanced with other priorities.

Third, he points out that escalating to a manager can feel bad, but he’s right: “Often the manager has the most context on priorities.” Management saying “get this fixed” is an expression of prioritization. If you’ve succeeded in your work on “why security matters,” then management will know that they need to reinforce that message. Bringing the issues to them, responsibly, helps them get their job done. If it feels bad to escalate, it’s worth asking whether you have full buy-in on security.

Now, I’m talking about security as if it matters to management. More and more, that’s the case. Something in the news causes leadership to say “we have to do better,” and they believe that there are things that they can do. In part that belief is because very large companies have been talking about how to make it work. But when that belief isn’t there, it’s your job as an engineer to, as Petroski says, single-handedly promote your dreams to the realists. Again, Greene’s post is full of good ideas.

Lastly, not everything is a bug. I discussed vulnerabilities versus design recently in “Emergent Design Issues.”

(Photo: https://www.pexels.com/photo/black-and-brown-insect-37733/)