Star Wars, Star Trek and Getting Root on a Star Ship

It’s time for some Friday Star Wars blogging!

Reverend Robert Ballecer, SJ tweeted: “as a child I learned a few switches & 4 numbers gives you remote code ex on a 23rd century starship.” I responded, asking “When attackers are on the bridge and can flip switches, how long a password do you think is appropriate?”

It went from there, but I’d like to take this opportunity to propose a partial threat model for 23rd century starships.

First, a few assumptions:

  • Sometimes, officers and crewmembers of starships die, are taken prisoner, or are otherwise unable to complete their duties.
  • It is important that the crew can control the starship, including its software and computer hardware.
  • Unrestricted physical access to the bridge means you control the ship (with possible special cases, and of course the Holodeck, because, lord forgive me, they need to shoot a show every week. Scalzi managed to get a surprisingly large amount from this line of inquiry in Redshirts. But I digress.)

I’ll also go so far as to say that as a derivative of the assumptions, the crew may need a rapid way to assign “Captain” privileges to someone else, and starship designers should be careful to design for that use case.

So the competing threats here are denial of service (and possibly denial of future service) and elevation of privilege. There’s a tension between designing for availability (anyone on the bridge can assume command relatively easily) and proper authorization. My take was that the attackers on the bridge are already close to winning, and so defenses which impede replacing command authority are a mistake.

Now, in responding, I thought that “flipping switches” meant physically being there, because I don’t recall the episode he’s discussing. But in further conversation, it became clear that the switches can be flipped remotely, which dramatically alters the need for a defense.

It’s not clear what non-dramatic requirement such remote switch flipping serves, and so on balance, it’s easy to declare that the added risk is high and we should not have remote switch flipping. It is always easy to declare that the risk is high, but here I have the advantage that there’s no real product designer in the room arguing for the feature. If there were, we would clarify the requirement, and then probably engineer some appropriate defenses, such as exponential backoff for remote connections. Of course, in the future, with layers of virtualization, what counts as a remote connection may be tricky to determine in software.
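To make that defense concrete, here’s a minimal sketch of exponential backoff on failed remote command attempts. The function names and parameters are hypothetical, invented for illustration, not drawn from any real (or fictional) starship codebase.

```python
import time

# Hypothetical sketch: the delay doubles with each consecutive failure, so
# remotely guessing a short code quickly becomes impractical, while a single
# honest mistake from the bridge costs almost nothing.
def backoff_delay(failed_attempts: int, base: float = 1.0, cap: float = 3600.0) -> float:
    """Seconds to wait before accepting another remote attempt."""
    return min(cap, base * (2 ** failed_attempts))

def handle_remote_attempt(code: str, expected: str, failed_attempts: int) -> tuple[bool, int]:
    """Check a remote command code, pausing according to the backoff schedule."""
    time.sleep(backoff_delay(failed_attempts))
    if code == expected:
        return True, 0                      # success resets the counter
    return False, failed_attempts + 1       # failure increases the next delay
```

Note that a physically present officer could be given a much gentler schedule than a remote connection, which is exactly the availability-versus-authorization tension described above.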

Which brings me to another tweet, by Hongyi Hu, who said he was “disappointed that they still use passwords for authentication in the 23rd century. I hope the long tail isn’t that long! 😛” What can I say but, “we’ll always have passwords.” We’ll just use them for less.

As I’ve discussed, the reason I use Star Wars over Star Trek in my teaching and examples is that no one is confused about the story in the core movies. Here, I made precisely that mistake.

Image: The Spaceship Discovery, rendered by Trekkie5000. Alert readers will recall issues that could have been discovered with better threat modeling.

Threat Modeling and Star Wars

IANS members should have access today to a new faculty report I wrote, entitled “Threat Modeling in An Agile World.” Because it’s May the Fourth, I thought I’d share the opening:

As Star Wars reaches its climax, an aide approaches Grand Moff Tarkin to say, “We’ve analyzed their attack pattern, and there is a danger.” In one of the final decisions he makes, Tarkin brushes aside those concerns. Likewise, in Rogue One, we hear Galen Erso proclaim that the flaw is too subtle to be found. But that’s bunk. There are clearly no blow-out sections or baffles around the reactor: if there’s a problem, the station is toast. A first-year engineering student could catch it.

You don’t have to be building a Death Star to think about what might go wrong before you complete the project. The way we do that is by “threat modeling,” an umbrella term for anticipating and addressing the problems that a system might experience. Unfortunately, a lot of the advice you’ll hear about threat modeling makes it seem a little bit like the multi-year process of building a Death Star.

Kyber Crystal and the Death Star

Death Star construction

This post has spoilers for Rogue One, and also Return of the Jedi.

We learn in Rogue One that the Death Star’s main gun is powered by Kyber crystal. We know from various sources that it’s rare.

Then the Death Star is tested, destroying Jedha, where they’re mining the crystals. Note that both times it’s fired, they give the order “single reactor ignition.” Are they testing the reactors and power systems, or conserving kyber crystal?

Really, how much “ammo” did the original Death Star have on board? How many times could they fire the main gun?

Were ten or fifteen shots considered sufficient because, after a demonstration, fear would keep the local systems in line? Where did they find enough kyber crystal for the second Death Star?

Rogue One: The Best Star Wars Yet?

Someone once asked me why I like Star Wars more than Star Trek. I was a bit taken aback, and he assumed that since I use it so much, I obviously prefer it. The real reason I use Star Wars is not that it’s better, but that there’s a small canon, and I don’t have to interrupt the flow of a talk to explain the scene where Darth Vader is strangling someone. But let’s face it, Star Trek was often better as science fiction. There are four or five bright lights that rank up there as some of the very best storytelling of the last few decades.

Trek at its most poignant was a transparent mirror to the world. The original series commented on Vietnam and race repeatedly in ways which let people see another way of looking at a situation. Moral nuance is easier to see when the ox being gored isn’t yours.

Rogue One is the first Star Wars with moral complexity. If you haven’t seen it, I find your lack of faith…disturbing. But when there’s a guy who cost you your limbs and your children, and threw the galaxy into civil war, throwing him in the reactor core isn’t a very complex choice. In fact, the whole “dark side” is a bit of a giveaway. In case you miss that, the Jedi were guardians of peace and justice throughout the galaxy. Are we clear yet? No? How about the Nazi uniforms? I could go on, but we’re gonna get to spoilers. My point is, the first four films were great action movies. Maybe we’ll see some moral complexity when someone finally gets around to filming the tragic fall of Anakin Skywalker, reputedly the core story of I-III. But I’m betting they’ll be action movies with talking teddy bears for the kids.

Speaking of morality, if you’re just now noticing that your political world resembles the Empire’s, or if you’re angry that the script seems to mock your party…maybe you should look at your world through that mirror and ask if you’re on the right side of morality or history. After all, that’s what makes for great science fiction: the opportunity to see the world through a new lens. And the fact is, the story was not substantially re-written. “Rogue One’s Discarded Dialog” and “See 46 shots that were cut from Rogue One” show a story with a little less character, a little more army, but not a sympathetic, racially and species-diverse Empire. The movie wasn’t re-written as a commentary on 2016.

Structurally, Rogue One is a war story, not an action story. It’s not about the hero’s journey, or Luke growing up. It’s a story about the chaos that follows a civil war, and it’s messy and has characters who make choices from a set of bad options.

When Cassian shoots the fellow so he can escape at the start? Galen Erso’s decision to work on the Death Star, delay it, and insert a flaw (or two?) These are perhaps the wrong choices in bad situations. We don’t see why Saw Gerrera and the Rebel Alliance split. We see the Rebellion at its worst — unable to take action in the face of imminent destruction, and then impulsively chasing Rogue One into battle. (What Rogue One Teaches Us About the Rebel Alliance’s Military Chops is a great dissection of this.)

But we can look to Galen Erso’s decision to work on the Death Star, and have a conversation about what he should have done. Gone to a labor camp and let someone else build it with a better reactor core? What if that someone else had put more shielding over the thermal exhaust ports? (Speaking of which, don’t miss “The Death Star Architect Speaks Out,” and perhaps even my commentary, “Governance Lessons from the Death Star Architect.” I think the governance questions would be even more interesting if the Empire were to conduct a blameless post-mortem, but we know they don’t.) We can use that decision to talk, abstractly, about taking a job in the Trump Administration with less of the horrible emotional weight that that carries.

That mirror on the world is what great science fiction offers us, and that’s what makes Rogue One the best Star Wars yet.

Security Lessons from C-3PO

C3PO telling Han Solo the odds

C-3PO: Sir, the possibility of successfully navigating an asteroid field is approximately 3,720 to 1.

Han Solo: Never tell me the odds.

I was planning to start this with a C-3PO quote, and then move to a discussion of risk and risk taking. But I had forgotten just how rich a vein George Lucas tapped into with 3PO’s lines in The Empire Strikes Back. So I’m going to talk about his performance prior to being encouraged to find a non-front-line role with the Rebellion.

In case you need a refresher on the plot, having just about run out of options, Han Solo decides to take the known, high risk of flying into an asteroid field. When he does, 3PO interrupts to provide absolutely useless information. There’s nothing about how to mitigate the risk (except surrendering to the Empire). There’s nothing about alternatives. Then 3PO pipes up to inform people that he was previously wrong:

C-3PO: Artoo says that the chances of survival are 725 to 1. Actually Artoo has been known to make mistakes… from time to time… Oh dear…

I have to ask: How useless is that? “My first estimate was off by a factor of 5, you should trust this new one?”
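For what it’s worth, the “factor of 5” checks out; the snippet below is just the standard conversion of “N to 1 against” odds into a probability, included purely as illustration.

```python
# "N to 1 against" corresponds to a survival probability of 1 / (N + 1).
first_estimate  = 1 / (3720 + 1)   # ~0.027% chance of surviving the asteroid field
second_estimate = 1 / (725 + 1)    # ~0.138% chance
print(second_estimate / first_estimate)   # ~5.1: the estimates differ by a factor of five
```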

C-3PO: I really don’t see how that is going to help! Surrender is a perfectly acceptable alternative in extreme circumstances! The Empire may be gracious enough to… [Han signals to Leia, who shuts 3PO down.]

Most of the time, being shut down in a meeting isn’t this extreme. But there’s a point in a discussion, especially in high-pressure situations, where the best contribution is silence. There’s a point at which talking about the wrong thing at the wrong time can cost credibility that you’d need later. And while the echo in the dialogue is for comic effect, the response certainly contains a lesson for us all:

C-3PO: The odds of successfully surviving an attack on an Imperial Star Destroyer are approximately…

Leia: Shut up!

And the eventual outcome:

C-3PO: Sir, If I may venture an opinion…

Han Solo: I’m not really interested in your opinion 3PO.

Does C-3PO resemble any CSOs you know? Any meetings you’ve been in? Successful business people are excellent at thinking about risk. Everything from launching a business to hiring employees to launching a new product line involves risk tradeoffs. The good business people either balance those risks well, or they transfer them away in some way (ideally, ethically). What they don’t want or need is some squeaky-voiced robot telling them that they don’t understand risk.

So don’t be C-3PO. Provide useful input at useful times, with useful options.

Originally appeared in Dark Reading, “Security Lessons from C-3PO, Former CSO of the Millennium Falcon,” as part of a series I’m doing there, “Security Lessons From…”. So far in the series, lessons from: my car mechanic, my doctor, The Gluten Lie, and my stock broker.

Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department

In “The Galactic Empire Has Terrible Cybersecurity,” Alex Grigsby looks at a number of high-profile failures, covered in “A New Hope” and the rest of the Star Wars canon.

Unfortunately, the approach he takes to the Galactic Empire obscures the larger, more dangerous issue: its cybersecurity culture. There are two errors in Grigsby’s analysis, and they are worth examining. As Yoda once said, “Much to learn you still have.”

Grigsby’s first assumption is that more controls lead to better security. But controls need to be deployed judiciously to allow operations to flow. For example, when you have Stormtroopers patrolling the Death Star, adding layers of access controls may in fact hamper operations. The shuttle with outdated keys in Return of the Jedi shows that security issues are rampant, and officers are used to escalations. Security processes that are full of routine escalations desensitize people. They get accustomed to saying OK, and are thus unlikely to give their full attention to each escalation.

The second issue is that Grigsby focuses on a few flaws that have massive impact. The lack of encryption and problematic location of the Death Star’s exhaust port matter not so much as one-offs, but rather reveal the larger security culture at play in the Empire.

There is a singular cause for these failures: Darth Vader and his habit of force choking those who have failed him. The culture of terror he fosters prevents those under his command from learning from their mistakes and ensures that opportunities for learning will be missed; finger-pointing and blame-passing will rule the day. Complaints to the Empire’s human resources department will go unanswered, and those who made the complaints will probably go missing.

This is the precise opposite of the culture created by Etsy—the online marketplace for handmade and vintage items (including these Star Wars cufflinks). Etsy’s engineers engage in what they call “Blameless Post-Mortems and Just Culture,” where people feel safe coming clean about making mistakes so that they can learn from them. After a problem, engineers are encouraged to write up what happened, why it happened, what they learned, and share that knowledge widely. Executives are committed to not placing blame or finger pointing.
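To sketch what that write-up discipline can look like, here is an illustrative, hypothetical post-mortem record; the fields paraphrase the blameless write-up described above and are not Etsy’s actual template.

```python
from dataclasses import dataclass

# Hypothetical post-mortem structure: the emphasis is on timeline, causes,
# and lessons, never on naming a culprit.
@dataclass
class PostMortem:
    incident: str
    timeline: list[str]              # what happened, in order
    contributing_factors: list[str]  # why it happened, stated blamelessly
    lessons: list[str]               # what we learned
    remediations: list[str]          # what we will change
    shared_with: str = "everyone"    # publish widely so others can learn too
```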

The Empire needs a better way to deal with its mistakes, and so do we. Fortunately, we don’t have to fear Lord Vader and can learn from things that have gone wrong.

For example, the DatalossDB, a project of the non-profit Open Security Foundation, has tracked thousands of incidents that involve the loss, theft, or exposure of personally identifiable information since 2008. The Mercatus Center has analyzed Government Accountability Office data, and found upwards of 60,000 incidents per year for the last two years. Sadly, while we know of these incidents, including what sorts of data were taken and how many victims there were, in many of them we do not know what happened to a degree of detail that allows us to address the problem. In the first years of public breach reporting (roughly starting in 2004), there was a raft of breaches associated with stolen computers, most of them laptops. As a result, all commercial operating systems now ship with full disk encryption software. But that may be the only lesson broadly learned so far.

It’s easy to focus on spectacular incidents like the destruction of a Death Star. It’s easy to look to the mythic aspects of the story. It’s harder to understand what went wrong. Was there an architect who brought up the unshielded thermal exhaust port vulnerability? What happened to the engineering change request? What can we learn from that? Did an intrusion detection analyst notice that unauthorized devices were plugged into the network? Were they overwhelmed by a rash of new devices as the new facility was staffed up?

Even given the very largest breaches, there is often a paucity of information about what went wrong. Sometimes, no one wants to know. Sometimes, it devolves into finger-pointing. Sometimes, whatever went wrong happened long enough ago that there are no logs. The practice of “Five Whys” analysis is rare.
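To make “Five Whys” concrete, here is a hypothetical chain for the exhaust port incident. The answers are invented for illustration (they are not canon); the point is how each answer becomes the subject of the next question, driving past the proximate technical flaw toward the organizational cause.

```python
# A hypothetical "Five Whys" chain for the thermal exhaust port incident.
five_whys = [
    ("Why was the station destroyed?",
     "A proton torpedo reached the main reactor through the exhaust port."),
    ("Why could a torpedo reach the reactor?",
     "The port was unshielded and led directly to the reactor."),
    ("Why was the port unshielded?",
     "The shielding change request was never approved."),
    ("Why was the change request never approved?",
     "Raising schedule or design concerns to Lord Vader was considered career-limiting."),
    ("Why was raising concerns career-limiting?",
     "The command culture punishes the bearer of bad news."),
]
for why, because in five_whys:
    print(f"{why}\n  -> {because}")
```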

And when, against all odds, an organization digs in and asks what happened, the lawyers are often there to announce that under no circumstances should it be shown to anyone. After all, there will be lawsuits. (While I am not a lawyer, it seems to me that such lawsuits happen regardless of the existence or availability of a post-mortem report, and a good analysis of what went wrong might be seen as evidence of a mature, learning practice.)

What does not happen, given our fear of lawsuits and other phantom menaces, is learning from mistakes. And so R2-D2 plugs into every USB port in sight, and does so for more than twenty years.

We know from a variety of fields including aircraft safety, nuclear safety, and medical safety that high degrees of safety and security are an outcome of just culture, and willingness to discuss what’s gone wrong. Attention to “near misses” allows organizations to learn faster.

This is what the National Transportation Safety Board does when a plane crashes or a train derails.

We need to get better at post-mortems for cybersecurity. We need to publish them so we can learn the analysis methods others are developing. We need to publish them so we can assess if the conclusions are credible. We need to publish them so we can perform statistical analyses. We need to publish them so that we can do science.

There are many reasons to prevaricate. The First Order — the bad guys in The Force Awakens — can’t afford another Death Star, and we cannot afford to keep doing what we’ve been doing and hoping it will magically get better.

It’s not our only hope, but it certainly would be a new hope.

(Originally appeared on the Council on Foreign Relations Net Politics blog.)

Governance Lessons from the Death Star Architect

I had not seen this excellent presentation by the engineer who built the Death Star’s exhaust system.

In it, he discusses the need to disperse energy from a battle station with the power draw to destroy planets, and the engineering goals he had to balance.

I’m reminded again of “The Evolution of Useful Things” and how it applies to security. Security engineering involves making tradeoffs, and those tradeoffs sometimes have unfortunate results. Threat modeling is a family of techniques for thinking about the tradeoffs and what’s likely to go wrong. Doing it well means that things are less likely to go wrong, not that nothing ever will.

It’s easy, after the fact, to point out the problem with the exhaust ports. But as your risk management governance improves, you get to the point of asking “what did we know when we made these decisions?” and “could we have made these decisions better?”

At the engineering level, you want to develop a cybersecurity culture that’s open to discussing failures, not one in which you have to fear being force-choked. (More on that topic in my guest post at the Council on Foreign Relations, “Cybersecurity Lessons from Star Wars: Blame Vader, Not the IT Department.”)

More broadly, organizational leadership needs to focus on questions about appropriate policy and governance being in place. That sounds jargony, so let me unpack it a little. Policy is what you intend to do: for example, perform risk analysis that lets executives make good risk management decisions about the competing aspects of the business. Is a PHP vuln acceptable? If it happened to be in the Force Awakens site this week, taking that site down would have been an expensive choice. It’s tempting to ask what geek would do more than add a comment. But that gets into questions of attacker motivation, and it’s easy to get it wrong. Even Star Wars has critics (one minute video, worth sharing for the reveal at the end).

If policy is about knowing what you intend to do in a way that lets people do it, governance is about making sure it happens properly. There are all sorts of reasons why it’s hard to map technology risk to business risk. Tech risk involves the bad things that might happen, and the interesting ways technologies are tightly interwoven make it hard to say, a priori, that a technical issue with an exhaust port, or a bad password on an HVAC system, might lead to a bad business impact.

For example, an analysis might conclude that exhaust is likely to generate turbulence in an exhaust shaft, and that such turbulence will act as a compensating control for a lack of port shielding. That is, whatever substrate carries heat will do so unevenly, and in a shaft the size of a womp rat, that turbulence will batter any projectile into exploding somewhere less harmful.
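For fun, here is a hedged back-of-the-envelope version of that turbulence argument. All the numbers are assumptions I made up for a roughly two-meter port; none come from any canonical source. The only non-invented parts are the standard Reynolds number formula, Re = ρvD/μ, and the rule of thumb that pipe flow above roughly Re ≈ 4000 is turbulent.

```python
# Assumed values for a hot, fast exhaust stream in a ~2 m wide shaft.
velocity  = 100.0   # m/s, assumed exhaust speed
diameter  = 2.0     # m, the oft-quoted port width
density   = 0.5     # kg/m^3, hot, thin gas (assumed)
viscosity = 4e-5    # Pa*s, assumed dynamic viscosity at high temperature

reynolds = density * velocity * diameter / viscosity
print(reynolds)     # ~2.5e6, far beyond ~4000, so the flow would be strongly turbulent
```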

A good policy will ask for such analysis; a good governance process will ask whether it happened and, after a failure, whether the failure is likely to happen again. We need to help executives form the questions, and we need to do a better job at supplying answers.

What Good is Threat Intelligence Going to do Against That?

As you may be aware, I’m a fan of using Star Wars for security lessons, such as threat modeling or Saltzer and Schroeder. So I was pretty excited to see Wade Baker post “Luke in the Sky with Diamonds,” talking about threat intelligence, and he gets bonus points for the crossover title. But I think it’s important that we fix a hole in his argument.

So…Pardon me for asking, but what good is threat intelligence going to do against that?

In many ways, the diamond that Wade’s built shows a good understanding of the incident. (It may focus overmuch on Jedi Panda, to the practical exclusion of R2-D2, who we all know is the driving force through the movies.) The facts are laid out, they’re organized using the model, and all is well.

Most of my issues boil down to two questions. The first is how could any analysis of the Battle of Yavin fail to mention the crucial role played by Obi Wan Kenobi, and second, what the heck do you do with the data? (And a third, about the Diamond Model itself — how does this model work? Why is a lightsaber a capability, and an X-Wing a bit of infrastructure? Why is The Force counted as a capability, not an adversary to the Dark Side?)
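For readers who haven’t seen it, the Diamond Model describes each intrusion event in terms of four core features: adversary, capability, infrastructure, and victim. Here is a minimal, hypothetical sketch of the Battle of Yavin in that shape, with values paraphrased from Wade’s analysis as discussed above; it is meant only to show the structure those questions are poking at.

```python
from dataclasses import dataclass

# The Diamond Model's four core features, populated for the Battle of Yavin.
# The assignments (lightsaber as capability, X-Wing as infrastructure) mirror
# the ones questioned in the text; they are illustrative, not definitive.
@dataclass
class DiamondEvent:
    adversary: str
    capabilities: list[str]
    infrastructure: list[str]
    victim: str

yavin = DiamondEvent(
    adversary="Jedi Panda (Luke Skywalker)",
    capabilities=["lightsaber", "The Force"],
    infrastructure=["X-Wing", "the Fortressa"],
    victim="Death Star",
)
```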

To the first question, that of General Kenobi. As everyone knows, General Kenobi had infiltrated and sabotaged the Death Star that very day. The public breach reports state that “a sophisticated actor” was only able to sabotage a tractor beam controller before being caught, but how do we know that’s all he did? He was on board the station for hours, and could easily have disabled tractor beams that worked in the trenches, or other defenses that have not been revealed. We know that his associate, Yoda, was able to see into the future. We have to assume that they used this ability, and, in using it, created for themselves a set of potential outcomes, only one of which is modeled.

The second question is, okay, we have a model of what went wrong, and what do we do with it? The Death Star has been destroyed, what does all that modeling tell us about the Jedi Panda? About the Fortressa? (Which, I’ll note, is mentioned as infrastructure, but not in the textual analysis.) How do we turn data into action?

Depending on where you stand, it appears that Wade falls into several traps in this post. They are:

  • Adversary modeling and missing something. The analysis misses Ben Kenobi, and it barely touches on the fact that the Rebel Alliance exists. Getting all personal might lead an Imperial Commander to be overly focused on Skywalker, and miss the threat from Lando Calrissian, or other actors, to a second Death Star. Another element which is missed is the relationship between Vader and Skywalker. And while I don’t want to get choked for this, there’s a real issue that the Empire doesn’t handle failure well.
  • Hindsight biases are common — so common that the military has a phenomenon it calls ‘fighting the last war.’ This analysis focuses in on a limited set of actions, the ones which succeeded, but it’s not clear that they’re the ones most worth focusing on.
  • Actionability. This is a real problem for a lot of organizations which get interesting information, but do not yet have the organizational discipline to integrate it into operations effectively.

The issues here are not new. I discussed them in “Modeling Attackers and their Motives,” and I’ll quote myself to close:

Let me lay it out for you: the “sophisticated” attackers are using phishing to get a foothold, then dropping malware which talks to C&C servers in various ways. The phishing has three important variants you need to protect against: links to exploit web pages, documents containing exploits, and executables disguised as documents. If you can’t reliably prevent those things, detect them when you’ve missed, and respond when you discover you’ve missed, then digging into the motivations of your attackers may not be the best use of your time.
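As a tiny, hypothetical illustration of just the third variant, here is a filter that flags executables disguised as documents by their double extensions. Real mail-filtering pipelines do far more; this only shows how mundane the needed controls are compared with adversary profiling.

```python
# Flag attachments whose final extension is executable but whose name is
# dressed up with a document-looking extension, e.g. "plans.pdf.exe".
DOC_EXTS = {"pdf", "doc", "docx", "xls", "xlsx", "ppt", "pptx", "txt"}
EXE_EXTS = {"exe", "scr", "com", "js", "vbs", "bat", "cmd"}

def looks_disguised(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False
    _, inner, outer = parts
    return inner in DOC_EXTS and outer in EXE_EXTS

assert looks_disguised("death_star_plans.pdf.exe")
assert not looks_disguised("death_star_plans.pdf")
```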

What I don’t know about the Diamond Model is how it does a better job at avoiding the traps and helping those who use it do better than other models. (I’m not saying it’s poor, I’m saying I don’t know and would like to see some empirical work on the subject.)