Category: Software Engineering

Actionable Followups from the Capital One Breach

Alexandre Sieira has some very interesting and actionable advice from looking at the Capital One breach in “Learning from the July 2019 Capital One Breach.”

Alex starts by saying “The first thing I want to make clear is that I sympathize with the Capital One security and operations teams at this difficult time. Capital One is a well-known innovator in cloud security, has very competent people dedicated to this and has even developed and released high quality open source solutions such as Cloud Custodian that benefit the entire community.” I share that perspective – I’ve spent a lot of time at OWASP, DevSecCon and other events talking with the smart folks at Capital One.

One thing I’ll add to his post is that the advice to “Avoid using * like the plague” is easy to implement with static analysis, by which I mean grep or diff in a commit hook. Similarly, if you want to block the grant of ListBuckets, you can look for that specific string.
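To make that concrete, here’s a minimal sketch of such a check in Python, the sort of thing a pre-commit hook could run. The policies/ directory layout and the blocked-action strings are illustrative assumptions, not anyone’s actual setup:

    #!/usr/bin/env python3
    """Sketch of a commit-hook check for risky IAM policy grants.
    Assumes policies live as JSON files under policies/; the layout and
    the blocked-action strings below are illustrative examples."""
    import json
    import pathlib
    import sys

    # Action strings we refuse to see in an Allow statement.
    BLOCKED_ACTIONS = {"*", "s3:*", "s3:ListBuckets"}

    def risky_grants(policy):
        """Yield any blocked action granted by an Allow statement."""
        for stmt in policy.get("Statement", []):
            if stmt.get("Effect") != "Allow":
                continue
            actions = stmt.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            for action in actions:
                if action in BLOCKED_ACTIONS:
                    yield action

    def main():
        failed = False
        for path in pathlib.Path("policies").glob("**/*.json"):
            for action in risky_grants(json.loads(path.read_text())):
                print(f"{path}: grants blocked action {action}")
                failed = True
        return 1 if failed else 0

    if __name__ == "__main__":
        sys.exit(main())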

Over time, you can evolve that check so that permissions must come from a small subset you agree should be granted. One of the nice things about the agile approach to security is that you can start tomorrow, and then evolve.
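As a sketch of that evolution, the same hook can flip from a denylist to an allowlist; the permitted actions below are placeholders for whatever small set your team agrees on:

    # Evolution of the same hook: rather than blocking known-bad strings,
    # require every granted action to come from an agreed allowlist.
    # The allowlist contents are placeholders, not a recommendation.
    ALLOWED_ACTIONS = {
        "s3:GetObject",
        "s3:PutObject",
        "sqs:SendMessage",
    }

    def disallowed_grants(policy):
        """Yield any granted action that is not on the agreed allowlist."""
        for stmt in policy.get("Statement", []):
            if stmt.get("Effect") != "Allow":
                continue
            actions = stmt.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            for action in actions:
                if action not in ALLOWED_ACTIONS:
                    yield action

Same structure as before, with the test inverted: the hook no longer has to enumerate every bad grant, only the good ones.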

At Black Hat next week, Dino Dai Zovi will be talking about how “Every Security Team is a Software Team Now.” Part of that thinking is asking how we can take advice like Alex’s and turn it into code that enforces our goals.

As we learn from breaches, as we share the code we build to address these problems, we’ll see fewer and fewer incidents like these.

Safety and Security in Automated Driving

“Safety First For Automated Driving” is a big, over-arching whitepaper from a dozen automotive manufacturers and suppliers.

One way to read it is that the automotive disciplines have strongly developed safety cultures, which generally do not consider cybersecurity problems. This paper is the cybersecurity specialists making the argument that cyber will fit into safety, and showing how to do so.

In a sense, this white paper captures a strategic threat model. What are we working on? Autonomous vehicles. What can go wrong? Security issues of all types. What are we going to do? Integrate with and extend the existing safety discipline. Give specific threat information and mitigation strategies to component designers.

I find some parts of it surprising. (I would find it more surprising if I were to look at a 150 page document and not find anything surprising.)

Contrary to the commonly used definition of a minimal risk condition (MRC), which describes only a standstill, this publication expands the definition to also include degraded operation and takeovers by the vehicle operator. Final MRCs refer to MRCs that allow complete deactivation of the automated driving system, e.g. standstill or takeover by the vehicle operator.

One of the “minimal risk” maneuvers listed (Table 4) is an emergency stop. And while an emergency stop may certainly be a risk-minimizing action in some circumstances, describing it as such is surprising, especially when presented in contrast to a “safe stop” maneuver.

It’s important to remember that driving is incredibly dangerous. In the United States in 2018, an estimated 40,000 people lost their lives in car crashes, and 4.5 million people were seriously injured. (I’ve seen elsewhere that a million of those are hospitalized.) A great many of those injuries are caused by either drunk or distracted drivers, and autonomous vehicles could save many lives, even if imperfect.

Which brings me to a part that I really like, which is the ‘three dimensions of risk treatment’ figure (Figure 8, shown). Words like “risk” and “risk management” encompass a lot, and this figure is a nice side contribution of the paper.

I also like Figures 27 & 28 (shown), showing risks associated with a generic architecture. Having this work available allows systems builders to consider the risks to the various components they’re working on, lets us have a conversation about the systematic risks that exist, and also allows security experts to ask “is this the right set of risks for systems builders to think about?”

[Figure: a chart of system components in an autonomous vehicle]

The Road to Mediocrity

Google Docs has chosen to red-underline the word “feasible,” which is in its dictionary, to suggest “possible.” “Possible,” possibly, was not the word I selected, because it means something different.

Good writing is direct. Good writing respects the reader. Good writing doesn’t tax the reader accidentally. It uses simple words when possible, effectively utilizing, no wait, utilize means you’re attempting to make your writing sound fancier than it need be. Never use “utilize” when it’s feasible to say “use.”

Good writing tools are unobtrusive. They don’t randomize the writer away from what they’re working on to try to figure out why in holy hell it’s wrong to be using the word feasible and why it needs to be replaced.

The road to mediocre writing is paved with over-simplification and distraction.

My current go-to is Pinker’s The Sense of Style. What else helps you think about writing?

Threat Modeling as Code

Omer Levi Hevroni has a very interesting post exploring ways to represent threat models as code.

The closer threat modeling practices are to engineering practices already in place, the more impactful they will be, and the more they will be a standard part of delivery.

There’s interesting work in both transforming threat modeling thinking into code, and using code to reduce the amount of thinking required for a project. These are importantly different. Going from analysis to code is work, and selecting the right code to represent your project is work. Both, like writing tests, are an investment of effort now to increase productivity later.
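As a generic illustration of the first kind of work (and only an illustration; Omer’s post surveys actual tools and formats, and this is not his approach), a threat model might be captured as data next to the code, with a test that fails the build when a threat lacks a mitigation. The threats and mitigations here are hypothetical:

    # Illustrative only: one generic way to express a threat model as code.
    # The threats and mitigations are hypothetical examples.
    from dataclasses import dataclass, field

    @dataclass
    class Threat:
        identifier: str
        description: str
        mitigations: list = field(default_factory=list)  # names of controls

    THREAT_MODEL = [
        Threat("T1", "Spoofed client credentials", mitigations=["mutual TLS"]),
        Threat("T2", "Tampering with queued messages", mitigations=["message signing"]),
        Threat("T3", "Repudiation of admin actions"),  # no mitigation yet
    ]

    def test_every_threat_is_mitigated():
        """Runs with the rest of the test suite; fails while any threat is open."""
        unmitigated = [t.identifier for t in THREAT_MODEL if not t.mitigations]
        assert not unmitigated, f"Unmitigated threats: {unmitigated}"

Because the model lives with the code and the check runs like any other test, the threat model gets reviewed, versioned and enforced the way the rest of the delivery pipeline is.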

It’s absolutely worth exploring ways to reduce the unique thinking that a project requires, and I’m glad to see this work being done.

Structures, Engineering and Security

J.E. Gordon’s Structures, or Why Things Don’t Fall Down is a fascinating and accessible book. Why don’t things fall down? It turns out this is a simple question with some very deep answers. Buildings don’t fall down because they’re engineered from a set of materials to meet the goals of carrying appropriate loads. Those materials have very different properties than the ones you, I, and everything from grass to trees have evolved to keep standing. Some of these structures are rigid, while others, like tires, are flexible.

The meat of the book, that is, the part that animates the structural elements, really starts with Robert Hooke, and an example of a simple suspension structure, a brick hanging by a string. Gordon provides lively and entertaining explanations of what’s happening, and progresses fluidly through the reality of distortion, stress and strain. From there he discusses theories of safety including the delightful dualism of factors of safety versus factors of ignorance, and the dangers (both physical and economic) of the approach.

Structures is entertaining, educational and a fine read that is worth your time. But it’s not really the subject of this post.

To introduce the real subject, I shall quote:

We cannot get away from the fact that every branch of technology must be concerned, to a greater or lesser extent, with questions of strength and deflection.

The ‘design’ of plants and animals and of the traditional artefacts did not just happen. As a rule, both the shape and the materials of any structure which has evolved over a long period of time in a competitive world represent an optimization with regard to the loads which it has to carry and to the financial and metabolic cost. We should like to achieve this sort of optimization in modern technology; but we are not always very good at it.

The real subject of this post is engineering cybersecurity. If every branch of technology includes cybersecurity, and if one takes the author seriously, then we ought to be concerned with questions of strength and deflection, and, per the second quote, we are not very good at it.

We might take some solace from the fact that describing the laws of nature took from Hooke, in the 1600s, until today. Or far longer, if we include the troubles that the ancient Greeks had in making roofs that didn’t collapse.

But our troubles in describing the forces at work in security, or the nature or measure of the defenses that we seek to employ, are fundamental. If we really wish to optimize defenses, we cannot layer this on that, and hope that our safety factor, or factor of ignorance, will suffice. We need ways to measure stress and strain, and to understand how cracks develop and spread. Our technological systems are like ancient Greek roofs: we know that they are fragile, we cannot describe why, and we do not know what to do.

Perhaps it will take us hundreds of years, and software will continue to fail in surprising ways. Perhaps we will learn from our engineering peers and get better at it faster.

The journey to an understanding of structures, or why they do not fall down, is inspiring, instructive, and depressing. Nevertheless, recommended.

Congratulations to the 2016 winners!

  • Dan Geer, Chief Information Security Officer at In-Q-Tel;
  • Lance J. Hoffman, Distinguished Research Professor of Computer Science, The George Washington University;
  • Horst Feistel, Cryptographer and Inventor of the United States Data Encryption Standard (DES);
  • Paul Karger, High Assurance Architect, Prolific Writer and Creative Inventor;
  • Butler Lampson, Adjunct Professor at MIT, Turing Award and Draper Prize winner;
  • Leonard J. LaPadula, Co-author of the Bell-LaPadula Model of Computer Security; and
  • William Hugh Murray, Pioneer, Author and Founder of the Colloquium for Information System Security Education (CISSE)

In a world where influence seems to be measured in likes, re-tweets and shares, the work by these seven fine people really stands the test of time. For some reason this showed up on LinkedIn as “Butler was mentioned in the news,” even though it’s a few years old. Again, test of time.
