Pen Testing The Empire

[Updated with a leaked copy of the response from Imperial Security.]

To: Grand Moff Tarkin
Re: “The Pentesters Strike Back” memo
Classification: Imperial Secret/Attorney Directed Work Product

Sir,

We have received and analyzed the “Pentesters Strike Back” video, created by Kessel Cyber Security Consulting in support of their report dated 05.25.1977. This memo analyzes the video, summarizes our internal findings, and offers strategies for responding to the Trade Federation.

In short, this is typical pen test slagging of our operational security investments, which meet or exceed all best practices. It is likely just a negotiating tactic, albeit one with catchy music.

Finding 1.3: “Endpoints unprotected against spoofing.” This is true, from a certain point of view. Following the execution of Order 66, standing policy has been “The Jedi are extinct. Their fire has gone out of the universe.” As such, Stormtrooper training has been optimized to improve small-arms accuracy, which has been a perennial issue identified in after-action reports.

Finding 2.1: “Network Segmentation inadequate.” This has been raised repeatedly by internal audit; perhaps this would be a good “area for improvement” to concede in our response to this memo.

Finding 4.2: “Data at rest not encrypted.” This is inaccurate. The GalactiCAD server in question was accessed from an authorized endpoint. As such, it decrypted the data, and sent it over an encrypted tunnel to the endpoint. The pen testers misunderstand our network architecture, again.

Finding 5.1: “Physical access not controlled.” Frankly, sir, this battle station is the ultimate power in the universe. It has multiple layers of physical access control, including the screening units of Star Destroyers and Super Star Destroyers, TIE fighters, Stormtrooper squadrons in each landing bay, [Top Secret-1], and [Top Secret-2]. Again, the pen testers ignore facts to present “findings” to their clients.

Finding 5.2: “Unauthorized mobile devices allow network access.” This is flat-out wrong. In the clip presented, TK-427 is clearly heard authorizing the droids in question. An audit of our records indicates that both droids presented authorization certificates signed by Lord Vader’s certificate authority. As you know, this CA has been the source of some dispute over time, but the finding presented is, again, simply wrong.

Finding 8.3: “Legacy intruder-tracking system inadequately concealed.” Again, this claim simply has no basis in fact. The intruder-tracking system worked perfectly, allowing the Imperial Fleet to track the freighter to Yavin. In analyzing the video, we suspect that General Organa’s intuition was “Force”-aided.

In summary, the report identifies a few minor issues which require attention. However, the bulk of it presents misunderstandings and unreasonable expectations, and rests on a set of assumptions that simply don’t stand up to scrutiny. We are in effective compliance with PCI-DSS, this test did not reveal a single credit card number, and the deal with the Trade Federation should not be impeded.

Via Bruce Schneier.

Threat Modeling Tooling from 2017

As I reflect back on 2017, I think it was a tremendously exciting year for threat modeling tooling. Some of the highlights for me include:

  • OWASP Threat Dragon is a web-based tool, much like the Microsoft Threat Modeling Tool; it’s explained in Open Source Threat Modeling, and the code is at https://github.com/mike-goodwin/owasp-threat-dragon. What’s exciting is not that it’s open source, but that it’s web-based, which enables the modern, collaborative way of working that’s rapidly replacing emailing documents around.
  • Tutamen is an exciting tool because its simplicity forced me to re-think what threat modeling tooling could be. Right now, you upload a Visio diagram, and you get back a threat list in Excel, covering OWASP, STRIDE, CWE, and CAPEC. If Threat Dragon is an IDE, Tutamen is a compiler (see the sketch after this list).
  • We’re seeing real action in security languages. Fraser Scott is driving an OWASP Cloud Security project to create structured stories about threats and controls. If Tutamen is a compiler, this project lets us think about different include files. (The two are not yet, and may never be, integrated.) Closely related, Continuum Security has a BDD-Security project.
  • Continuum’s also doing interesting work with IriusRisk, which they describe as “a single integrated console to manage application security risk throughout the software development process.” If the tools above are about depth, IriusRisk is about helping large organizations with breadth.
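
To make the “compiler” analogy concrete, here’s a minimal sketch of the core transformation such a tool performs: walk the diagram’s elements and emit candidate threats, in this case via the classic STRIDE-per-element mapping. The element types and mapping here are my illustrative assumptions, not Tutamen’s actual internals (which consume Visio and emit Excel):

```python
# A toy "threat modeling compiler": take a parsed diagram (here a
# hard-coded list of elements) and emit a threat list using the
# standard STRIDE-per-element mapping.

STRIDE_PER_ELEMENT = {
    "external_entity": ["Spoofing", "Repudiation"],
    "process": ["Spoofing", "Tampering", "Repudiation",
                "Information Disclosure", "Denial of Service",
                "Elevation of Privilege"],
    "data_store": ["Tampering", "Information Disclosure",
                   "Denial of Service"],
    "data_flow": ["Tampering", "Information Disclosure",
                  "Denial of Service"],
}

def compile_threats(elements):
    """Map each diagram element to its candidate STRIDE threats."""
    return [(name, threat)
            for name, kind in elements
            for threat in STRIDE_PER_ELEMENT.get(kind, [])]

if __name__ == "__main__":
    diagram = [("Browser", "external_entity"),
               ("Web App", "process"),
               ("Orders DB", "data_store"),
               ("App to DB", "data_flow")]
    for element, threat in compile_threats(diagram):
        print(f"{element}: {threat}")
```

A real tool layers diagram parsing, rule libraries like CWE and CAPEC, and report generation on top, but the “compile” step in the middle is roughly this simple.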

Did you see anything that was exciting that I missed? Please let me know in the comments!

Portfolio Thinking: AppSec Radar

At DevSecCon London, I met Michelle Embleton, who is doing some really interesting work around what she calls an AppSec Radar. The idea is to visually show what technologies, platforms, et cetera are being evaluated, adopted and in use, along with what’s headed out of use.

Surprise technology deployments always make for painful conversations.

This strikes me as a potentially quite powerful way to improve communication between security and other teams, and worth some experimentation in 2018.
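
Since I haven’t seen the radar’s internals, here’s a purely hypothetical sketch of the minimal data model such a radar implies; the ring names and fields are my assumptions, not Michelle’s format:

```python
# A hypothetical minimal data model for an AppSec radar. Ring names
# and fields are illustrative assumptions, not Michelle Embleton's.
from dataclasses import dataclass

RINGS = ("evaluating", "adopting", "in use", "retiring")

@dataclass
class RadarEntry:
    name: str   # technology or platform, e.g. "Kubernetes"
    ring: str   # one of RINGS: where it sits in the lifecycle
    owner: str  # owning team, so security knows whom to talk to

def by_ring(entries):
    """Group entries by lifecycle ring, ready to render or review."""
    grouped = {ring: [] for ring in RINGS}
    for entry in entries:
        grouped[entry.ring].append(entry.name)
    return grouped

if __name__ == "__main__":
    radar = [RadarEntry("Kubernetes", "adopting", "platform team"),
             RadarEntry("Jenkins", "retiring", "build team")]
    print(by_ring(radar))
```

Even a structure this small answers the question behind those painful conversations: what’s coming, and who owns it.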

Learning from Near Misses

[Update: Steve Bellovin has a blog post.]

One of the major pillars of science is the collection of data to disprove arguments. That data gathering can include experiments, observations, and, in engineering, investigations into failures. One of the issues that makes security hard is that we have little data about large scale systems. (I believe that this is more important than our clever adversaries.) The work I want to share with you today has two main antecedents.

First, in the nearly ten years since Andrew Stewart and I wrote The New School of Information Security, and called for more learning from breaches, we’ve seen a dramatic shift in how people talk about breaches. Unfortunately, we’re still not learning as much as we could. There are structural reasons for that, primarily fear of lawsuits.

Second, last year marked 25 years of calls for an “NTSB for infosec.” Steve Bellovin and I wrote a short note asking why that hasn’t happened. We’ve spent the last year asking what else we might do. We’ve learned a lot about other aviation safety programs, and think there are other models that may be better fits for our needs and constraints in the security realm.

Much of that investigation has been a collaboration with Blake Reid, Jonathan Bair, and Andrew Manley of the University of Colorado Law School, and together we have a new draft paper on SSRN, “Voluntary Reporting of Cybersecurity Incidents.”

A good deal of my own motivation in this work is to engineer a way to learn more. The focus of this work, on incidents rather than breaches, and on voluntary reporting and incentives, reflects lessons learned as we try to find ways to measure real world security. The writing and abstract reflect the goal of influencing those outside security to help us learn better:

The proliferation of connected devices and technology provides consumers immeasurable amounts of convenience, but also creates great vulnerability. In recent years, we have seen explosive growth in the number of damaging cyber-attacks. 2017 alone has seen the WannaCry, Petya, NotPetya, and Bad Rabbit attacks, and of course the historic Equifax breach, among many others. Currently, there is no mechanism in place to facilitate understanding of these threats, or their commonalities. While information regarding the causes of major breaches may become public after the fact, what is lacking is an aggregated data set, which could be analyzed for research purposes. This research could then provide clues as to trends in both attacks and avoidable mistakes made on the part of operators, among other valuable data.

One possible regime for gathering such information would be to require disclosure of events, as well as investigations into these events. Mandatory reporting and investigations would result in better data collection. This regime would also cause firms to internalize, at least to some extent, the externalities of security. However, mandatory reporting faces challenges that would make this regime difficult to implement, and possibly more costly than beneficial. An alternative is a voluntary reporting scheme, modeled on the Aviation Safety Reporting System housed within NASA, and possibly combined with an incentive scheme. Under it, organizations that were the victims of hacks or “near misses” would report the incident, providing important details, to some neutral party. This database could then be used both by researchers and by industry as a whole. People could learn what does work, what does not work, and where the weak spots are.
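
To make the reporting scheme concrete, here’s a hypothetical sketch of the kind of minimal, de-identified record a neutral party might collect. The fields are my assumptions, loosely inspired by ASRS-style aviation reports, not anything the paper specifies:

```python
# A hypothetical minimal schema for a voluntary incident or near-miss
# report, loosely modeled on ASRS-style aviation reports. Fields are
# illustrative assumptions, not the paper's proposal.
from dataclasses import dataclass, field
from typing import List

@dataclass
class IncidentReport:
    sector: str             # e.g. "retail"; kept coarse to de-identify
    incident_type: str      # e.g. "phishing", "ransomware", "near miss"
    initial_access: str     # how the attacker got in, or nearly did
    controls_that_worked: List[str] = field(default_factory=list)
    controls_that_failed: List[str] = field(default_factory=list)
    narrative: str = ""     # free text, scrubbed of identifying detail

def trend(reports, key):
    """Count reports by one field: the aggregate view researchers need."""
    counts = {}
    for report in reports:
        value = getattr(report, key)
        counts[value] = counts.get(value, 0) + 1
    return counts
```

The interesting design problems are in the incentives and the scrubbing, not the schema, but even this much structure would support the trend analysis the abstract describes.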

Please, take a look at the paper. I’m eager to hear your feedback.

The Carpenter Case

On Wednesday, the supreme court will consider whether the government must obtain a warrant before accessing the rich trove of data that cellphone providers collect about cellphone users’ movements. Among scholars and campaigners, there is broad agreement that the case could yield the most consequential privacy ruling in a generation. (“Supreme court cellphone case puts free speech – not just privacy – at risk.”)

Bruce Schneier has an article in the Washington Post, “How the Supreme Court could keep police from using your cellphone to spy on you,” as does Stephen Sachs:

The Supreme Court will hear arguments this Wednesday in Carpenter v. United States, a criminal case testing the scope of the Fourth Amendment’s right to privacy in the digital age. The government seeks to uphold Timothy Carpenter’s conviction and will rely, as did the lower court, on the court’s 1979 decision in Smith v. Maryland, a case I know well.

I argued and won Smith v. Maryland when I was Maryland’s attorney general. I believe it was correctly decided. But I also believe it has long since outlived its suitability as precedent. (“The Supreme Court’s privacy precedent is outdated.”)

I am pleased to have been able to help with an amicus brief in the case, and hope that the Supreme Court uses this opportunity to protect all of our privacy. Good luck to the litigants!

Averting the Drift into Failure

This is a fascinating video from the Devops Enterprise Summit:

“the airline that reports more incidents has a lower passenger mortality rate. Now what’s fascinating about this … we see this data replicated across various domains, construction, retail, and we see that there is this inverse correlation between the number of incidents reported, the honesty, the willingness to take on that conversation about what might go wrong, and things actually going wrong.”

The speaker’s website is sidneydekker.com/, and there’s some really interesting material there.

Vulnerabilities Equities Process and Threat Modeling

[Update: More at DarkReading, “The Critical Difference Between Vulnerabilities Equities & Threat Equities.”]

The Vulnerabilities Equities Process (VEP) is how the US Government decides if they’ll disclose a vulnerability to the manufacturer for fixing. The process has come under a great deal of criticism, because it’s never been clear what’s being disclosed, what fraction of vulnerabilities are disclosed, if the process is working, or how anyone without a clearance is supposed to evaluate that beyond “we’re from the government, we’re here to help,” or perhaps “I know people who managed this process, they’re good folks.” Neither of those is satisfactory.

So it’s a very positive step that on Wednesday, White House Cybersecurity Coordinator Rob Joyce published “Improving and Making the Vulnerability Equities Process Transparent is the Right Thing to Do,” along with the process. Schneier says “I am less [pleased]; it looks to me like the same old policy with some new transparency measures — which I’m not sure I trust. The devil is in the details, and we don’t know the details — and it has giant loopholes.”

I have two overall questions, and an observation.

The first question is, was the published policy written when we had commitments to international leadership and being a fair dealer, or was it created or revised with an “America First” agenda?

The second question relates to there being four equities to be considered. These are the “major factors” that senior government officials are supposed to consider in exercising their judgement. But, surprisingly, there’s an “additional” consideration. (“At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound.”) Does that imply that those officials are not required to weigh public desire for resilient and safe systems? What does it mean that the “additionally” sentence is not an equity being considered?

Lastly, the observation is that the VEP is all about vulnerabilities, not about flaws or design tradeoffs. From the charter, pages 9-10:

The following will not be considered to be part of the vulnerability evaluation process:

  • Misconfiguration or poor configuration of a device that sacrifices security in lieu of availability, ease of use or operational resiliency.
  • Misuse of available device features that enables non-standard operation.
  • Misuse of engineering and configuration tools, techniques and scripts that increase/decrease functionality of the device for possible nefarious operations.
  • Stating/discovering that a device/system has no inherent security features by design.

Threat Modeling is the umbrella term for the security engineering that discovers and deals with these issues. It’s what I spend my days on, because the tremendous effort that has gone into dealing with vulnerabilities is paying off, and we see fewer of them in well-engineered systems.
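
For a concrete example of the gap: disabling certificate verification to “make things work” is exactly the kind of configuration choice the charter excludes. No one will ever assign it a CVE, but a threat model review flags it immediately. A minimal illustration (the requests calls are real; the scenario and hostnames are hypothetical):

```python
import requests

# Hypothetical scenario: an internal service has a self-signed
# certificate, and a developer "fixes" the resulting TLS error by
# turning verification off. This trades security for ease of use --
# out of scope for the VEP, but it enables man-in-the-middle attacks.
resp = requests.get("https://internal.example.com/api", verify=False)

# What a threat model review would push for instead: trust the internal
# CA explicitly, keeping the connection both available and authenticated.
resp = requests.get("https://internal.example.com/api",
                    verify="/etc/ssl/internal-ca.pem")
```

There is no vulnerability here for the VEP to weigh; there is only a design tradeoff, which is precisely why a vulnerabilities-only process misses it.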

In October, I wrote about the fact that we’re getting better at dealing with vulnerabilities, and need to think about design issues. I closed:

In summary, we’re doing a great job at finding and squishing bugs, and that’s opening up new and exciting opportunities to think more deeply about design issues. (Emergent Design Issues)

Here, I’m going to disagree with Bruce, because I think that this disclosure shows us an important detail that we didn’t previously know. Publication exposes it, and lets us talk about it.

So, I’m going to double down on what I wrote in October, and say that we need the VEP to expand to cover those issues. I’m not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources-and-methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure. As Bill Gates wrote 15 years ago, we need systems that people “will always be able to rely on, […] to be available and to secure their information. Trustworthy Computing is computing that is as available, reliable and secure as electricity, water services and telephony.”

We cannot achieve that goal with the VEP being narrowly scoped. It must evolve to deal with the sorts of flaws and design tradeoffs that threat modeling helps us find.

Photo by David Clode on Unsplash.