Averting the Drift into Failure

This is a fascinating video from the DevOps Enterprise Summit:

“the airline that reports more incidents has a lower passenger mortality rate. Now what’s fascinating about this … we see this replicated this data across various domains, construction, retail, and we see that there is this inverse correlation between the number of incidents reported, the honesty, the willingness to take on that conversation about what might go wrong and things actually going wrong.”

The speaker’s website is sidneydekker.com/, and there’s some really interesting material there.

Vulnerabilities Equities Process and Threat Modeling

The Vulnerabilities Equities Process (VEP) is how the US Government decides whether it will disclose a vulnerability to the manufacturer for fixing. The process has come under a great deal of criticism, because it’s never been clear what’s being disclosed, what fraction of vulnerabilities are disclosed, whether the process is working, or how anyone without a clearance is supposed to evaluate it beyond “we’re from the government, we’re here to help,” or perhaps “I know people who managed this process, they’re good folks.” Neither of those is satisfactory.

So it’s a very positive step that on Wednesday, White House Cybersecurity Coordinator Rob Joyce published “Improving and Making the Vulnerability Equities Process Transparent is the Right Thing to Do,” along with the process. Schneier says “I am less [pleased]; it looks to me like the same old policy with some new transparency measures — which I’m not sure I trust. The devil is in the details, and we don’t know the details — and it has giant loopholes.”

I have two overall questions, and an observation.

The first question is, was the published policy written when we had commitments to international leadership and being a fair dealer, or was it created or revised with an “America First” agenda?

The second question relates to there being four equities to be considered. These are the “major factors” that senior government officials are supposed to consider in exercising their judgement. But, surprisingly, there’s an “additional” consideration. (“At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound.”) Does that imply that those officials are not required to weigh public desire for resilient and safe systems? What does it mean that the “additionally” sentence is not an equity being considered?

Lastly, the observation is that the VEP is all about vulnerabilities, not about flaws or design tradeoffs. From the charter, pages 9-10:

The following will not be considered to be part of the vulnerability evaluation process:

  • Misconfiguration or poor configuration of a device that sacrifices security in lieu of availability, ease of use or operational resiliency.
  • Misuse of available device features that enables non-standard operation.
  • Misuse of engineering and configuration tools, techniques and scripts that increase/decrease functionality of the device for possible nefarious operations.
  • Stating/discovering that a device/system has no inherent security features by design.

Threat Modeling is the umbrella term for security engineering to discover and deal with these issues. It’s what I spend my days on, because I see that the tremendous effort put into dealing with vulnerabilities is paying off, and we see fewer of them in well-engineered systems.

In October, I wrote about the fact that we’re getting better at dealing with vulnerabilities, and need to think about design issues. I closed:

In summary, we’re doing a great job at finding and squishing bugs, and that’s opening up new and exciting opportunities to think more deeply about design issues. (Emergent Design Issues)

Here, I’m going to disagree with Bruce, because I think that this disclosure shows us an important detail that we didn’t previously know. Publication exposes it, and lets us talk about it.

So, I’m going to double down on what I wrote in October, and say that we need the VEP to expand to cover those issues. I’m not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources-and-methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure. As Bill Gates wrote 15 years ago, we need systems that people “will always be able to rely on, [] to be available and to secure their information. Trustworthy Computing is computing that is as available, reliable and secure as electricity, water services and telephony.”

We cannot achieve that goal with the VEP being narrowly scoped. It must evolve to deal with the sorts of flaws and design tradeoffs that threat modeling helps us find.

Photo by David Clode on Unsplash.

The Fights We Have to Fight: Fixing Bugs

One of the recurring lessons from Petroski is how great engineers overcome not only the challenges of physical engineering (calculating loads, determining build orders), but also the real-world challenges to their ideas, including financial and political ones. For example:

Many a wonderful concept, beautifully drawn by an inspired structural artist, has never risen off the paper because its cost could not be justified. Most of the great bridges of the nineteenth century, which served to define bridge building and other technological achievements for the twentieth century, were financed by private enterprise, often led by the expanding railroads. Engineers acting as entrepreneurs frequently put together the prospectuses, and in some cases almost single-handedly promoted their dreams to the realists. […] Debates over how to pay for them were common. (Engineers of Dreams: Great Bridge Builders and the Spanning of America, Henry Petroski)

Many security professionals have a hobby of griping that products get rushed to market, maybe to be secured later. We have learned to be more effective at building security in, and in doing so, reduce product costs and increase on-time delivery. But some products were built before we knew how to do that, and others are going to be built by companies that choose not to. In that sense, Collin Greene’s “Fixing Security Bugs” is very much worth your time: it’s a retrospective on the Vista security program from a pen-test perspective.

Hacking: Exciting.
Finding bugs: Exciting.
Fixing those bugs: Not exciting.
The thing is, the finish line for our job in security is getting bugs fixed¹, not just found and filed. Doing this effectively is not a technology problem. It is a communications, organizational² and psychology problem.

I joined Microsoft while the Vista pen test was finishing up, and so my perspective is complementary. I’d like to add a few perspectives to his points.

First, he asks “is prioritization correct?” After Vista, the SDL team created security bug bars, and later refined them to align with the MSRC update priorities. That alignment with the MSRC priorities was golden: it made it super-clear that if you didn’t fix a bug before ship, you were going to have to ship an update later. As a security engineer, you need to align your prioritization with the all-up delivery priorities. Having everything be “extremely critical,” “very critical,” or “moderately critical” means you don’t know what matters, and so nothing does.
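To make that concrete, here’s a minimal sketch (in Python) of what an aligned bug bar buys you: every finding maps to an unambiguous shipping decision. The levels, servicing categories, and “blocks ship” rule below are illustrative assumptions on my part, not the actual SDL bug bar or MSRC definitions.

```python
# Hypothetical bug bar: each level maps to a servicing priority and a
# clear answer to "does this block ship?" (illustrative values only).
BUG_BAR = {
    "critical":  ("out-of-band update", True),
    "important": ("next scheduled update", True),
    "moderate":  ("next scheduled update", False),
    "low":       ("fix opportunistically", False),
}

def triage(bar_level: str) -> str:
    """Turn a bar level into an unambiguous shipping decision."""
    priority, blocks_ship = BUG_BAR[bar_level]
    if blocks_ship:
        return f"fix before ship (an escape means shipping an update: {priority})"
    return f"does not block ship; service via: {priority}"

if __name__ == "__main__":
    print("critical:", triage("critical"))
    print("moderate:", triage("moderate"))
```

The point isn’t this particular table; it’s that the decision falls out mechanically once the bug bar and the servicing priorities agree.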

Second, “why security matters” was still a fight to be fought in Vista. By Windows 7, security had completed its “move left.” The spec form contained sections for security and privacy. Threat model review was a gate for start of coding. These process changes happened while developers were “rebelling” against Vista’s “overweight” engineering process. They telegraphed that security mattered to management and executives. As a security engineer, you need to get management to spend time talking about how security is balanced with other priorities.

Third, he points out that escalating to a manager can feel bad, but he’s right: “Often the manager has the most context on priorities.” Management saying “get this fixed” is an expression of prioritization. If you’ve succeeded in your work on “why security matters,” then management will know they need to reinforce that message. Bringing the issues to them, responsibly, helps them get their job done. If it feels bad to escalate, it’s worth asking whether you have full buy-in on security.

Now, I’m talking about security as if it matters to management. More and more, that’s the case. Something in the news causes leadership to say “we have to do better,” and they believe that there are things that they can do. In part that belief is because very large companies have been talking about how to make it work. But when that belief isn’t there, it’s your job as an engineer to, as Petroski says, single-handedly promote your dreams to the realists. Again, Greene’s post is full of good ideas.

Lastly, not everything is a bug. I discussed vulnerabilities versus design recently in “Emergent Design Issues.”

(Photo: https://www.pexels.com/photo/black-and-brown-insect-37733/)

Data Flow Diagrams 3.0

In the Brakesec podcast, I used a new analogy for why we need to name our work. When we talk about cooking, we have very specific recipes that we talk about: Julia Child’s beef bourguignon. Paul Prudhomme’s blackened fish. We hope that new cooks will follow the recipes until they get a feel for them, and that they can then start adapting and modifying them, as they generate mental models of what they’re doing.

But when we talk about threat modeling, we don’t label our recipes. We say “this is how to threat model,” as if that’s not as broad as “this is how to cook.”

And in that podcast, I realized that I’ve been guilty of definition drift in how I talk about data flow diagrams. Data flow diagrams (DFDs) are also called ‘threat model diagrams’ because they’re so closely associated with threat modeling. And as I’ve used them over the course of a decade, there have been many questions:

  • Do you start with a context diagram?
  • What’s a multi-process, and when should I use one?
  • Do I really need to draw single-headed arrows? They make my diagram hard to read!
  • Is this process inside this arc? Is an arc the best way to show a trust boundary?
  • Should I color things?

In response to those questions, I’ve initiated changes, such as showing a process as a rounded rectangle (versus a circle), eliminating rules such as “all arrows are uni-directional,” and advocating for trust boundaries as labeled boxes.

What I have not done is been crisp about what these changes are in a way that lets a team say “we use v3 DFDs” the way they might say “we use Python 3.” (ok, no one says either, I know!)

I’m going to retroactively label all of these changes as DFD3.0. DFD v1 was a 1970s construct. DFD2 was the critical addition of trust boundaries. And a version 3 DFD is defined as follows:

  1. It uses 5 symbols. A rectangle represents an external entity, a person or code outside your control. A rounded rectangle represents a process. They’re connected by arrows, which can be single or double headed. Data stores are represented by parallel lines. A trust boundary is a closed shape, usually a box. All lines are solid, except those used for trust boundaries, which are dashed or dotted. (There is no “multi-process” symbol in DFD3.)
  2. It must not* depend on the use of color, but can use color for additional information.
  3. All elements should have a label.
  4. You may have a context diagram if the system is complex. One is not required.

* “Must,” “must not,” “should,” and “should not” are used per IETF norms (RFC 2119).
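To make the notation concrete, here’s a minimal sketch that emits a DFD3-style diagram as Graphviz DOT from Python. The example system (a browser, a web app, and a database inside one trust boundary) is hypothetical, and since DOT has no true parallel-lines shape, the data store is approximated with a double-bordered box.

```python
# A hypothetical three-element system drawn with the DFD3 symbols,
# emitted as Graphviz DOT text (render with: dot -Tpng dfd3.dot -o dfd3.png).
DFD3_EXAMPLE = """
digraph dfd3 {
    rankdir=LR;

    // External entity: rectangle
    browser [label="Browser", shape=box];

    // Trust boundary: a closed, dashed shape (here, a box)
    subgraph cluster_server {
        label="Server trust boundary";
        style=dashed;

        // Process: rounded rectangle
        webapp [label="Web App", shape=box, style=rounded];

        // Data store: parallel lines, approximated here with a double border
        db [label="Orders DB", shape=box, peripheries=2];
    }

    // Data flows: solid lines, single- or double-headed arrows, all labeled
    browser -> webapp [label="HTTPS", dir=both];
    webapp -> db [label="SQL"];
}
"""

with open("dfd3.dot", "w") as f:
    f.write(DFD3_EXAMPLE)
```

Note how the definition shows up in the sketch: every element has a label, color carries no required meaning, and there’s no multi-process symbol to reach for.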

This also allows us to talk about what might be in a DFD3.1. I know that I usually draw disks with the “drum” symbol, and I see a lot of people using that. It seems like a reasonable addition.


Using specific naming also allows us to fork. If you want to define a different type of DFD, have at it. If we have a bunch, we can figure out how to keep things clear. Oh, and speaking of forking, I put this on GitHub: DFD3.

Using specific naming allows us to talk about testing and maturity, in the sense of “this is in alpha test” or “this has been used for several years; we took feedback, adjusted, and now it’s release quality.” I think that DFD3 is release quality, but it probably needs some beta testing for the definitions.

Similarly, DREAD has a bunch of problems, including a lack of definition. I use mention of DREAD as a way to see whether people are threat modeling well. And one challenge there is that people silently redefine DREAD to mean something other than what it meant when Michael Howard and David LeBlanc talked about it in Writing Secure Code (2nd ed., 2003). If you want to build something new, your customers and users need to understand that it’s new, so they don’t get confused by it. Therefore, you need to give your new thing a new name. You could call it DREAD2, DRE4D, or DRECK; I don’t really care. What I care about is that it’s easily distinguished, and the first step towards that is a new name.

[Update: What’s most important is not the choices that I’ve made for what’s in DFD3, but the grouping of those choices into DFD3, so that you can make your own choices and our tools can compete in the market.]

Why is “Reply” Not the Strongest Signal?

So apparently my “friends” at outlook.com are marking my email as junk today, with no explanation. They’re doing this to people who have sent me dozens of emails over the course of months or years.

Why does no spam filter seem to take repeated conversational turns into account? Is there a stronger signal that I want to engage with someone than…repeatedly engaging?
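For what it’s worth, the signal is easy to sketch. Here’s a toy Python illustration of the idea: count prior conversational turns from the Sent folder, and let that history discount a message’s spam score. The mailbox representation, the halving rule, and the threshold are all made-up assumptions, not how any real filter works.

```python
from collections import Counter
from typing import Iterable, Tuple

def reply_counts(sent_mail: Iterable[Tuple[str, str]]) -> Counter:
    """Count how many times the user has replied to each address.

    `sent_mail` is a hypothetical (recipient, subject) view of the
    user's own Sent folder.
    """
    counts = Counter()
    for recipient, subject in sent_mail:
        if subject.lower().startswith("re:"):
            counts[recipient] += 1
    return counts

def looks_like_junk(sender: str, base_spam_score: float,
                    counts: Counter, threshold: float = 0.8) -> bool:
    """Repeated engagement heavily discounts the spam score (toy model)."""
    discounted = base_spam_score * (0.5 ** counts[sender])
    return discounted >= threshold

sent = [("alice@example.com", "Re: draft"),
        ("alice@example.com", "Re: Re: draft")]
counts = reply_counts(sent)
print(looks_like_junk("alice@example.com", 0.9, counts))    # False: we keep replying
print(looks_like_junk("stranger@example.net", 0.9, counts)) # True: no history
```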