The Dope Cycle and a Deep Breath

Back in January, I wrote about “The Dope Cycle and the Two Minutes Hate.” In that post, I talked about:

Not kidding: even when you know you’re being manipulated into wanting it, you want it. And you are being manipulated, make no mistake. Site designers are working to make your use of their site as pleasurable as possible, as emotionally engaging as possible. They’re caught up in a Red Queen Race, where they must engage faster and faster just to stay in place. And when you’re in such a race, it helps to steal as much as you can from millions of years of evolution. [Edit: I should add that this is not a moral judgement on the companies or the people, but rather an observation on what they must do to survive.] That’s dopamine, that’s adrenaline, that’s every hormone that’s been covered in Popular Psychology. It’s a dope cycle, and you can read that in every sense of the word dope.

I just discovered a fascinating tool from a company called Dopamine Labs. Dopamine Labs is a company that helps their corporate customers drive engagement: “Apps use advanced software tools that shape and control user behavior. We know because [we sell] it to them.” They’ve released a tool called Space: “Space uses neuroscience and AI to help you kick app addiction. No shame. No sponsors. Just a little breathing room to help you take back control.” As they say: “It’s the same math that we use to get people addicted to apps, just run backwards.”

Space app
There are some fascinating ethical questions involved in selling both windows and bricks. I’m going to say that participants in a Red Queen Race might as well learn what countermeasures to their techniques look like by building them. Space works as a Chrome plugin and as an iOS and Android app. I’ve installed it, and I like it more than another tool I’ve been using, Dayboard. (I really like Dayboard’s todo list, but feel that it cuts me off in the midst of time wasting, rather than walking me away.)

The app is at http://youjustneedspace.com/.

As we go into big conferences, it might be worth installing. (Also as we head into conferences, be excellent to each other. Know and respect your limits and those of others. Assume good intent. Avoid getting pulled into a “Drama Triangle.”)

Threat Modeling Password Managers

There was a bit of a complex debate last week over 1Password. I think the best article may be Glenn Fleishman’s “AgileBits Isn’t Forcing 1Password Data to Live in the Cloud,” but also worth reading are Ken White’s “Who moved my cheese, 1Password?,” and “Why We Love 1Password Memberships,” by 1Password maker AgileBits. I’ve recommended 1Password in the past, and I’m not sure if I agree with AgileBits that “1Password memberships are… the best way to use 1Password.” This post isn’t intended to attack anyone, but to try to sort out what’s at play.

This is a complex situation, and you’ll be shocked, shocked to discover that I think a bit of threat modeling can help. Here’s my model of what we’re working on:

Password manager

Let me walk you through this: There’s a password manager, which talks to a website. Those are in different trust boundaries, but for simplicity, I’m not drawing those boundaries. The two boundaries displayed are where the data and the “password manager.exe” live. Of course, this might not be an exe; it might be a .app, or it might be JavaScript. Regardless, that code lives somewhere, and where it lives is important. Similarly, the passwords are stored somewhere, and there’s a boundary around that.

What can go wrong?

If password storage is local, there is not a fat target at AgileBits. Even assuming the passwords are stored well (say, 10K iterations of PBKDF2), they’re more vulnerable if they’re stolen, and they’re easier to steal en masse than if they’re on your computer. (Someone might argue that you, as a home user, are less likely to detect an intruder than AgileBits is. That might be true, but that’s a question of detection; the first question is how likely an attacker is to break in. They’ll succeed against you and they’ll succeed against AgileBits, and they’ll get a boatload more from breaking into AgileBits. This is not intended as a slam of AgileBits; it’s an outgrowth of ‘assume breach.’) I believe AgileBits has a simpler operation than Dropbox, and fewer skilled staff in security operations than Dropbox. The simpler operation probably means there are fewer use cases, plugins, partners, etc., and means AgileBits is more likely to notice some attacks. To me, this nets out as neutral. Fleishman promises to explain “how AgileBits’s approach to zero-knowledge encryption… may be less risky and less exposed in some ways than using Dropbox to sync vaults.” I literally don’t see his argument; perhaps it was lost in the complexity of writing a long article? [Update: see also Jeffrey Goldberg’s comment about how they encrypt the passwords. I think of what they’ve done as a very strong mitigation, with the probably reasonable assumption that they haven’t bolluxed their key generation. See this 1Password Security Design white paper.]
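
As an aside, here is a minimal sketch of the kind of password stretching that “10K iterations of PBKDF2” refers to, using Python’s standard library. The iteration count and salt size are illustrative, not AgileBits’s actual parameters.

```python
import hashlib
import os

def derive_vault_key(master_password: str, salt: bytes = None,
                     iterations: int = 10_000):
    """Stretch a master password into a vault key with PBKDF2-HMAC-SHA256.

    Parameters here are illustrative; a real password manager chooses
    (and periodically raises) its own iteration count.
    """
    if salt is None:
        salt = os.urandom(16)  # unique per vault, stored alongside the ciphertext
    key = hashlib.pbkdf2_hmac("sha256",
                              master_password.encode("utf-8"),
                              salt,
                              iterations)
    return key, salt
```

The point of the iteration count is to make each guess against a stolen vault cost the attacker thousands of hash computations instead of one.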

To net it out: local storage is more secure. If your computer is compromised, your passwords are compromised with any architecture. If your computer is not compromised, and your passwords are nowhere else, then you’re safe. Not so if your passwords are somewhere else and that somewhere else is compromised.

The next issue is where’s the code? If the password manager executable is stored on your device, then to replace it, the attacker either needs to compromise your device or to install new code on it. An attacker who can install new code on your computer wins, which is why secure updates matter so much. An attacker who can’t get new code onto your computer must compromise the password store, discussed above. When the code is not on your computer but on a website, then the ease of replacing it goes way up. There are two modes of attack. Either you can break into one of the web server(s) and replace the .js files with new ones, or you can MITM a connection to the site and tamper with the data in transit. As an added bonus, either of those attacks scales. (I’ll assume that 1Password uses certificate pinning, but did not chase down where their JS is served.)
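
To make the tampered-.js threat concrete, here is a toy sketch of one countermeasure: a client that refuses to run fetched code unless it matches a digest it already knows. The URL and digest are placeholders, and this is not a description of how 1Password actually delivers its code.

```python
import hashlib
import urllib.request

# Placeholder values, for illustration only.
CODE_URL = "https://example.com/app/manager.js"
PINNED_SHA256 = "replace-with-a-known-good-hex-digest"

def fetch_pinned_code(url: str = CODE_URL,
                      expected_sha256: str = PINNED_SHA256) -> bytes:
    """Fetch code and refuse to use it unless it matches a pinned digest.

    TLS protects the channel, but a compromised web server or a MITM with a
    mis-issued certificate can still serve modified code; a pinned digest
    (or signature) check catches that.
    """
    with urllib.request.urlopen(url) as response:
        body = response.read()
    if hashlib.sha256(body).hexdigest() != expected_sha256:
        raise ValueError("fetched code does not match the pinned digest; refusing to run it")
    return body
```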

Netted out, getting code from a website each time you run is a substantial drop in security.

What should we do about it?

So this is where it gets tricky. There are usability advantages to having passwords everywhere. (Typing a 20 character random password from your phone into something else is painful.) In their blog post, AgileBits lists more usability and reliability wins, and those are not to be scoffed at. There are also important business advantages to subscription revenue, and not losing your passwords to a password manager going out of business is important.

Each 1Password user needs to make a decision about what the right tradeoff is for them. This is made complicated by family and team features. Can little Bobby move your retirement account tables to the cloud for you? Can a manager control where you store a team vault?

This decision is complicated by walls of text descriptions. My wish is that AgileBits would do a better job of crisply and cleanly laying out the choice that their customers can make, and the advantages and disadvantages of each. (I suggest a feature chart like this one as a good form, and the data should also be in each app as you set things up.) That’s not to say that AgileBits can’t continue to choose and recommend a default.

Does this help?

After years of working in these forms, I think it’s helpful as a way to break out these issues. I’m curious: does it help you? If not, where could it be better?

Secure updates: A threat model

Software updates

Post-Petya there have been a number of alarming articles on insecure update practices. The essence of these stories is that tax software, mandated by the government of Ukraine, was used to distribute the first Petya, and that this can happen elsewhere. Some of these stories are a little alarmist, with claims that unnamed “other” software has also been used in this way. Sometimes the attack is easy because updates are unsigned; other times it’s because they’re also sent over a channel with no security.

The right answer to these stories is to fix the damned update software before people get more scared of updating. That fear will survive long after the threat is addressed. So let me tell you, [as a software publisher] how to do secure updates, in a nutshell.

The goals of an update system are to:

  1. Know what updates are available
  2. Install authentic updates that haven’t been tampered with
  3. Strongly tie updates to the organization whose software is being updated. (Done right, this can also enable whitelisting software.)

Let me elaborate on those requirements. First, know what updates are available — the threat here is that an attacker stores your message “Version 3.1 is the latest revision, get it here” and sends it to a target after you’ve shipped version 3.2. Second, the attacker may try to replace your update package with a new one, possibly using your keys to sign it. If you’re using TLS for channel security, your TLS keys are only as secure as your web server, which is to say, not very. You want to have a signing key that you protect.

So that’s a basic threat model, which leads to a system like this:

  1. Update messages are signed, dated, and sequenced (see the sketch after this list). The code which parses them carefully verifies the signatures on both messages, checks that the date is later than that of the previous message, and checks that the sequence number is higher. If and only if all are true does it…
  2. Get the software package. I like doing this over torrents. Not only does that save you money and improve availability, but it protects you against the “Oh hello there Mr. Snowden” attack. Of course, sometimes a belief that torrents have the “evil bit” set leads to blockages, and so you need a fallback. [Note this originally called the belief “foolish,” but Francois politely pointed out that that was me being foolish.]
  3. Once you have the software package, you need to check that it’s signed with the same key as before.
    Better to sign the update and the update message with a key you keep offline on a machine that has no internet connectivity.

  4. Since all of the verification can be done by software, and the signing can be done with a checklist, PGP/GPG are a fine choice. It’s standard, which means people can run additional checks outside your software, and it’s been analyzed heavily by cryptographers.
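
Here is a minimal sketch of the checks in steps 1 and 3, in Python. The post recommends PGP/GPG; purely to keep the sketch self-contained, it uses an Ed25519 signature from the `cryptography` package instead, and the manifest format and field names are made up.

```python
import json
from datetime import datetime

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_update_message(manifest_bytes: bytes, signature: bytes,
                          publisher_key: Ed25519PublicKey,
                          last_seen_date: datetime,
                          last_seen_sequence: int) -> dict:
    """Accept an update announcement only if it is signed, newer, and higher-sequenced.

    The manifest format and the choice of Ed25519 are illustrative; the post
    itself suggests PGP/GPG, which plays the same role.
    """
    try:
        publisher_key.verify(signature, manifest_bytes)
    except InvalidSignature:
        raise ValueError("bad signature on update message")

    manifest = json.loads(manifest_bytes)
    if datetime.fromisoformat(manifest["date"]) <= last_seen_date:
        raise ValueError("stale or replayed update message")
    if manifest["sequence"] <= last_seen_sequence:
        raise ValueError("sequence number did not increase")

    # Only now is it safe to fetch manifest["package_url"] (over a torrent or a
    # fallback channel) and check that the package itself is signed with the
    # same publisher key as before.
    return manifest
```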

What’s above follows the four-question framework for threat modeling: what are we working on? (Delivering updates securely); what can go wrong? (spoofing, tampering, denial of service); what are we going to do about it? (signatures and torrents). The remaining question is “did we do a good job?” Please help us assess that! (I wrote this quickly on a Sunday morning. Are there attacks that this design misses? Defenses that should be in place?)

Bicycling and Threat Modeling

Bikeshare

The Economist reports on the rise of dockless bike sharing systems in China, along with the low tech ways that the system is getting hacked:

The dockless system is prone to abuse. Some riders hide the bikes in or near their homes to prevent others from using them. Another trick involves photographing a bike’s QR code and then scratching it off to stop others from scanning it. With the stored image, the rider can then monopolise the machine. But customers caught misbehaving can have points deducted from their accounts, making it more expensive for them to rent the bikes.

Gosh, you mean you give people access to expensive stuff and they ride off into the sunset?

Threat modeling is an umbrella for a set of practices that let an organization find these sorts of attacks early, while you have the greatest flexibility in choosing your response. There are lots of characteristics we could look for: practicality, cost-effectiveness, consistency, thoroughness, speed, et cetera, and different approaches will favor one or the other. One of those characteristics is useful integration into business.

You can look at thoroughness by comparing bikes to the BMW carshare program I discussed in “The Ultimate Stopping Machine.” (I think that ferries triggering an anti-theft mechanism is somewhat surprising, and I wouldn’t dismiss a threat modeling technique, or criticize a team too fiercely, for missing it. That is, there’s nuance. I’d be more critical of a team in Seattle missing the ferry issue than I would be of a team in Boulder.)

In the case of the dockless bikes, however, I would be skeptical of a technique that missed “reserving” a bike for your ongoing use. That threat seems like an obvious one from several perspectives, including that the system is labelled “dockless,” so you have an obvious contrast with a docked system.

When you find these things early, and iterate around threats, requirements and mitigations, you find opportunities to balance and integrate security in better ways than when you have to bolt it on later. (I discuss that iteration here and here.)

For these bikes, perhaps the most useful answer is not to focus on misbehavior, but to reward good behavior. The system wants bikes to be used, so why not reward people for leaving the bikes in a place where they’re picked up soon? (Alternately, perhaps make it expensive to check out the same bike more than N times in a row, where N is reasonably large, like 10 or 15.)
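
As a toy sketch of that alternative, with made-up numbers: the fee stays flat until the same rider has taken the same bike N times in a row, then climbs with each additional checkout.

```python
def ride_fee(base_fee: float, consecutive_checkouts_of_same_bike: int,
             n_allowed: int = 10, surcharge_per_extra: float = 0.50) -> float:
    """Make "reserving" a bike by repeated checkouts progressively expensive.

    All parameters are illustrative; a real system would tune N and the
    surcharge against how much bike-hoarding it actually sees.
    """
    extra_checkouts = max(0, consecutive_checkouts_of_same_bike - n_allowed)
    return base_fee + extra_checkouts * surcharge_per_extra
```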

Photo by Viktor Kern.

The Ultimate Stopping Machine?

A metal immobilizer around a tire

Security is hard in the real world. There’s an interesting story on Geekwire, “BMW’s ReachNow investigating cases of cars getting stuck on Washington State Ferries.” The story:

a ReachNow customer was forced to spend four hours on the Whidbey Island ferry this weekend because his vehicle’s wheels were locked, making the vehicle immovable unless dragged. The state ferry system won’t let passengers abandon a car on the ferry because of security concerns.

BMW’s response:

We believe that the issue is related to a security feature built into the vehicles that kicks in when the car is moving but the engine is turned off and the doors are closed.

I first encountered these immobilizing devices on a friend’s expensive car in 1999 or so. The threat is thieves equipped with a tow truck. It’s not super-surprising to discover that a service like ReachNow, where “random” people can get into a car and drive it away, will have tracking devices in those cars. It’s a little more surprising that there are immobilizers in them.

Note the competing definitions of security (emphasis added in both quotes above):

  • BMW is worried about theft.
  • The state ferry system is worried about car bombs.
  • Passengers might worry about being detained next to a broken car, or about bugs in the immobilization technology. What if that kicks in on the highway because “a wire gets loose”?

In “The Evolution of Secure Things,” I wrote:

It’s about the constant imperfection of products, and how engineering is a response to perceived imperfections. It’s about the chaotic real world from which progress emerges. In a sense, products are never perfected, but express tradeoffs between many pressures, like manufacturing techniques, available materials, and fashion in both superficial and deep ways.

Surprise! There’s a way to move a vehicle a long distance with the engine off, and it’s not a tow truck!

Real products, introduced into the real world, will often involve surprises like this. One characteristic of a good security architecture is that there’s the right degree of adjustability in the product, and judging that is still a matter of engineering experience.

Similarly, one of the lessons of entrepreneurship is that the problems you experience are often surprising. Investors look for flexibility in the leaders they back because they know that they’ll be surprised along the way.

Threat Modeling & IoT

Threat modeling internet-enabled things is similar to threat modeling other computers, with a few special tensions that come up over and over again. You can start threat modeling IoT with the four question framework:

  1. What are you building?
  2. What can go wrong?
  3. What are you going to do about it?
  4. Did we do a good job?

But there are specifics to IoT, and those specifics influence how you think about each of those questions. I’m helping a number of companies who are shipping devices, and I would love to fully agree that “consumers shouldn’t have to care about the device’s security model. It should just be secure. End of story.” I agree with Don Bailey on the sentiment, but frequently the tensions between requirements mean that what’s secure is not obvious, or that “security” conflicts with “security.” (I model requirements as part of ‘what are you building.’)

When I train people to threat model, I use this diagram to talk about the interaction between threats, mitigations, and requirements:

Threats mitigations requirements

The interaction takes on a particular flavor when working with internet-enabled things, and that flavor changes from device to device. Still, there are some important commonalities.

When looking at what you’re building, IoT devices typically lack sophisticated input devices like keyboards or even buttons, and sometimes their local output is a single LED. One solution is to put a listening web server on the device, and to pay for a sticker with a unique admin password, which then drives customer support costs. Another solution is to have the device not listen but reach out to your cloud service, and let customers register their devices to their cloud account. This has security, privacy, and COGS downsides. [Update: I said downsides, but it’s more that a different set of attack vectors becomes relevant in security. COGS is an ongoing commitment to operations; privacy is dependent on what’s sent or stored.]
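
As a concrete illustration of the second option, here is a minimal sketch of a device phoning home so the customer can claim it from their cloud account. The endpoint, field names, and flow are hypothetical, not any particular vendor’s API.

```python
import json
import urllib.request

REGISTRATION_URL = "https://cloud.example.com/api/register"  # hypothetical endpoint

def phone_home(serial_number: str, device_secret: str) -> None:
    """Device reaches out to the vendor's cloud rather than listening locally.

    The customer later claims `serial_number` from their own cloud account
    (for instance by typing a code printed on the box). Everything here is
    illustrative: endpoint, fields, and flow are made up.
    """
    body = json.dumps({"serial": serial_number, "secret": device_secret}).encode("utf-8")
    request = urllib.request.Request(REGISTRATION_URL, data=body,
                                     headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request)  # from here on, the device polls for commands
```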

When asking what can go wrong, your answers might include “a dependency has a vulnerability,” or “an attacker installs their own software.” An example of security being in tension with itself is the ability to patch your device yourself. If I want to be able to recompile the code for my device, or put a safe version of zlib on there, I ought to be able to do so. Except that if I can update the device, so can attackers building a botnet, and 99.n% of typical consumers of a smart lightbulb are not going to patch it themselves. So we get companies requiring signed updates. Then we get to the reality that most consumer devices last longer than most Silicon Valley companies. So we want to see a plan to release the key if the company is unable to deliver updates. And that plan runs into the realities of bankruptcy law, which are that the signing key is an asset, that it’s hard to value, and that bankruptcy trustees are unlikely to give away your assets. There’s a decent pattern (allegedly from the world of GPU overclocking), which is that you can intentionally make your device patchable by downloading special software and moving a jumper (sketched below). This requires a case that can be opened and reclosed, and a jumper or other DFU hardware input, and can be tricky on inexpensive or margin-strained devices.
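
A minimal sketch of that jumper pattern, where `vendor_key_verify` and `read_jumper_pin` are hypothetical hooks standing in for whatever the device’s firmware actually provides:

```python
def accept_firmware(image: bytes, signature: bytes,
                    vendor_key_verify, read_jumper_pin) -> bool:
    """Accept an update if the vendor signed it, OR the owner set the jumper.

    vendor_key_verify(image, signature) -> bool and read_jumper_pin() -> bool
    are stand-ins for real firmware facilities; this illustrates the pattern,
    not any particular device's update logic.
    """
    if vendor_key_verify(image, signature):
        return True   # the normal, signed update path
    if read_jumper_pin():
        return True   # owner opened the case and physically opted in
    return False      # unsigned update arriving over the network: reject
```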

That COGS (cost of goods sold) downside is not restricted to security, but has real security implications, which brings us to the question of what are you going to do about it. Consumers are not used to subscribing to their stoves, nor are farmers used to subscribing to their tractors. Generally, both have better things to do with their time than to understand the new models. But without subscription revenue, it’s very hard to make a case for ongoing security maintenance. And so security comes into conflict with consumer mental models of purchase.

In the IoT world, the question of “did we do a good job?” becomes “have we done a good enough job?” Companies believe that there is a first mover advantage, and this ties to points that Ross Anderson made long ago about the tension between security and economics. Good threat modeling helps companies get to those answers faster. Sharing the tensions helps us understand what the tradeoffs look like, and with those tensions, organizations can better define their requirements and get to a consistent level of security faster.


I would love to hear your experiences about other issues unique to threat modeling IoT, or where issues weigh differently because of the IoT nature of a system!

(Incidentally, this came from a question on Twitter; threading on Twitter is now worse than it was in 2010 or 2013, and since I have largely abandoned the platform, I can’t figure out who’s responding to what.)

How Not to Design an Error Message

SC07FireAlarm

The voice shouts out: “Detector error, please see manual.” Just once, then again a few hours later. And when I did see the manual, I discovered that it means “Alarm has reached its End of Life.”

No, really. That’s how my fire alarm told me that it’s at its end of life: by telling me to read the manual. Why doesn’t it say “device has reached end of life”? That would be direct and to the point. But no. When you press the button, it says “please see manual.” Now, this was a 2009 device, so maybe, just maybe, there was a COGS issue in how much storage was needed.

But sheesh. Warning messages should be actionable, explanatory and tested. At least it was loud and annoying.

2017 and Tidal Forces

There are two great blog posts at Securosis to kick off the new year:

Both are deep and important and worth pondering. I want to riff on something that Rich said:

On the security professional side I have trained hundreds of practitioners on cloud security, while working with dozens of organizations to secure cloud deployments. It can take years to fully update skills, and even longer to re-engineer enterprise operations, even without battling internal friction from large chunks of the workforce…

It’s worse than that. Recently on Emergent Chaos, I talked about Red Queen Races, where you have to work harder and harder just to keep up.

In the pre-cloud world, you could fully update your skills. You could be an expert on Active Directory 2003, or Checkpoint’s Firewall-1. You could generate friction over moving to AD2012. You no longer have that luxury. Just this morning, Amazon launched a new rev of something. Google is pushing a new rev of its G-Suite to 5% of customers. Your skillset with the prior release is now out of date. (I have no idea if either really did this, but they could have.) Your skillset can no longer be a locked-in set of skills and knowledge. You need the meta-skills of modeling and learning. You need to understand what your model of AWS is, and you need to allocate time and energy to consciously learning about it.

That’s not just a change for individuals. It’s a change for how organizations plan for training, and it’s a change for how we should design training, as people will need lots more “what’s new in AWS in Q1 2017” training to augment “intro to AWS.”

Tidal forces, indeed.

The Dope Cycle and the Two Minutes Hate

[Updated with extra links at the bottom.]

There’s a cycle that happens as you engage on the internet. You post something, and wait, hoping, for the likes, the favorites, the shares, the kind comments to come in. You hit reload incessantly even though the site doesn’t need it, hoping to get that hit, that jolt, even a little sooner. That dopamine release.

A Vicious cycle of pain, cravings, more drugs, and guilt

Site designers refer to this by benign names, like engagement or gamification, and it doesn’t just happen on “social media” sites like Twitter or Instagram. It is fundamental to the structure of LinkedIn, of Medium, of StackExchange, of Flickr. We are told how popular the things we observe are, and we are told to want that popularity. Excuse me, I mean that influence. That reach. And that brings me to the point of today’s post: seven tips to increase your social media impactfulness. Just kidding.

Not kidding: even when you know you’re being manipulated into wanting it, you want it. And you are being manipulated, make no mistake. Site designers are working to make your use of their site as pleasurable as possible, as emotionally engaging as possible. They’re caught up in a Red Queen Race, where they must engage faster and faster just to stay in place. And when you’re in such a race, it helps to steal as much as you can from millions of years of evolution. [Edit: I should add that this is not a moral judgement on the companies or the people, but rather an observation on what they must do to survive.] That’s dopamine, that’s adrenaline, that’s every hormone that’s been covered in Popular Psychology. It’s a dope cycle, and you can read that in every sense of the word dope.

This wanting is not innocent or harmless. Outrage, generating a stronger response, wins. Sexy, generating a stronger response, wins. Cuteness, in the forms of awwws, wins. We are awash in messages crafted to generate strong emotion. More, we are awash in messages crafted to generate stronger emotion than the preceding or following message. This is not new. What is new is that the analytic tools available to its creators are so strong that the Red Queen Race is accelerating (by the way, that’s bait for outraged readers to insist I misunderstand the Red Queen Race, generating views for this post). The tools of 20th century outrage are crude and ineffective. Today’s outrage cycle over the House cancelling its cancellation of its ethics office is over, replaced by outrage over … well, it’s not yet clear what will replace it, but expect it to be replaced.

When Orwell wrote of the Two Minutes Hate, he wrote:

The horrible thing about the Two Minutes Hate was not that one was obliged to act a part, but that it was impossible to avoid joining in. Within thirty seconds any pretense was always unnecessary. A hideous ecstasy of fear and vindictiveness, a desire to kill, to torture, to smash faces in with a sledge hammer, seemed to flow through the whole group of people like an electric current, turning one even against one’s will into a grimacing, screaming lunatic. And yet the rage that one felt was an abstract, undirected emotion which could be switched from one object to another like the flame of a blowlamp.

I am reminded of Hoder’s article, “The Web We Have to Save” (4.4K hearts, 165 balloons, and no easy way to see on Medium how many sites link to it). Also of related interest is Good-bye to All That Twitter and “Seattle author Lindy West leaves Twitter, calls it unusable for ‘anyone but trolls, robots and dictators’” but I don’t think Twitter, per se, is the problem. Twitter has a number of aspects which make trolling (especially around gender and race issues, but not limited to them) especially emotionally challenging. Those are likely closely tied to the anticipation of positivity in “mentions”, fulfilled by hate. But the issues are made worse by site design that successfully increases engagement.

I don’t know what to do with this observation. I have tried to reduce use of sites that use the structures of engagement: removing them from my reading in the morning, taking their apps off my phone. But I find myself typing their URLs when I’m task switching. I am reluctant to orient around addiction, as it drags with it a great deal of baggage around free will and ineffective regulation.

But removing myself from Twitter doesn’t really address the problem of the two minutes hate, nor of the red queen race of dope cycles. I’d love to hear your thoughts on what to do about them.


[Update: Related, “Hacking the Attention Economy,” by danah boyd.]

[Update (8 Feb): Hunter Walk writes “Why Many Companies Mistakingly Think Trolls & Harassment Are Good for Business,” and I’d missed Tim Wu writing on “The Attention Merchants.”]

Diagrams in Threat Modeling

When I think about how to threat model well, one of the elements that is most important is how much people need to keep in their heads, the cognitive load if you will.

In reading Charlie Stross’s blog post, “Writer, Interrupted,” this paragraph really jumped out at me:

One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.

One of the reasons that I’m fond of diagrams is that they allow the threat modelers to migrate information out of their heads into a diagram, making room for thinking about threats.

Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good, that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, either as a hobby like knitting or in the careful chef preparing a fine meal.

I think a lot about where drawing diagrams on a whiteboard falls. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.

That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)

So what are the advantages and disadvantages of each?

Whiteboard

  • Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
  • Ease of use. A whiteboard is still easier than just about any other drawing tool.
  • Discovery of different perspective/belief. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may contain a critique of someone else’s work, it may be a conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
  • Storytelling. It is easier to tell a story standing next to a whiteboard than any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
  • Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, making things more tedious for everyone around.

Software Tools

  • Automatic analysis. Tools like the Microsoft Threat Modeling tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?”
  • Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
  • Photographs of whiteboards are hard to archive and search without further processing.
  • Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough,” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
  • Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.

(Of course, there are apps that help you take images from a whiteboard and improve them, for example, Best iOS OCR Scanning Apps, which I’m ignoring for purposes of teasing things out a bit. Operationally, probably worth digging into.)

I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?