Last week I did a podcast with Dennis Fisher. In it, we touched on what I might change in the book. Take a listen at: “Adam Shostack on Methods of Compromise, the New School and Learning”
Imagine if the US government, with no notice or warning, raided a small but popular magazine’s offices over a Thanksgiving weekend, seized the company’s printing presses, and told the world that the magazine was a criminal enterprise with a giant banner on their building. Then imagine that it never arrested anyone, never let a trial happen, and filed everything about the case under seal, not even letting the magazine’s lawyers talk to the judge presiding over the case. And it continued to deny any due process at all for over a year, before finally just handing everything back to the magazine and pretending nothing happened. I expect most people would be outraged. I expect that nearly all of you would say that’s a classic case of prior restraint, a massive First Amendment violation, and exactly the kind of thing that does not, or should not, happen in the United States.
But, in a story that’s been in the making for over a year, and which we’re exposing to the public for the first time now, this is exactly the scenario that has played out over the past year — with the only difference being that, rather than “a printing press” and a “magazine,” the story involved “a domain” and a “blog.”
Read the whole thing at “Breaking News: Feds Falsely Censor Popular Blog For Over A Year, Deny All Due Process, Hide All Details…”
I think we agree on most things, but I sense a little semantic disconnect in some things that she says:
The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.
I consider the word “bug” to refer to an error or unintended functionality in the existing code, not a potential vulnerability in what is (hopefully) still a theoretical design. So if you’re doing whiteboard threat modeling, the output should be “things not to do going forward.”
As a result, you’re stuck with something to mitigate, probably by putting in extra security controls that you otherwise wouldn’t have needed. I consider this a to-do list, not a bug list.
(“That’s not a bug, it’s a creature.”, Wendy Nather)
I don’t disagree here, but want to take it one step further. I see a list of “things not to do going forward” and a “to-do list” as an excellent start for a set of tests to confirm that those things happen or don’t. So you file bugs, and those bugs get tracked and triaged and ideally closed as resolved or fixed when you have a test that confirms that they ain’t happening. If you want to call this something else, that’s fine; tracking and managing bugs can be too much work. The key to me is that the “things not to do” sink in, and the to-do list gets managed in some good way.
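To make that concrete, here’s a minimal sketch of turning a threat-model finding into a regression test. The finding, the function, and all names here are hypothetical stand-ins, not anyone’s real code: the point is only that the bug stays closed exactly as long as the test keeps passing.

```python
# Hypothetical sketch: a threat-model "thing not to do" (accepting
# unauthenticated requests on an admin interface) captured as a test.
# handle_admin_request stands in for whatever code path the finding covers.

def handle_admin_request(headers):
    """Reject admin requests that carry no Authorization header."""
    if "Authorization" not in headers:
        return 401  # the mitigation the threat model called for
    return 200

def test_admin_requires_auth():
    # The bug stays "resolved" only while these assertions hold.
    assert handle_admin_request({}) == 401
    assert handle_admin_request({"Authorization": "Bearer t0ken"}) == 200

test_admin_requires_auth()
```

If that test ever fails, the tracker gives you a natural place to reopen the discussion, which is the whole point of ending threat modeling in bugs rather than documents.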
And again, I agree with her points about probability, and her point that it’s lurking in people’s minds is an excellent one, worth repeating:
the conversation with the project manager, business executives, and developers is always, always going to be about probability, even as a subtext. Even if they don’t come out and say, “But who would want to do that?” or “Come on, we’re not a bank or anything,” they’ll be thinking it when they estimate the cost of fixing the bug or putting in the mitigations.
I simply think the more you focus threat modeling on the “what will go wrong” question, the better. Of course, there’s an element of balance: you don’t usually want to be movie plotting or worrying about Chinese spies replacing the hard drive before you worry about the lack of authentication in your network connections.
I really like Gunnar Peterson’s post on “Top 5 Security Influencers:”
It’s December and so it’s the season for lists. Here is my list of Top 5 Security Influencers; this is the list with the people who have the biggest (good and/or bad) influence on your company’s and users’ security:
My list is slightly different:
- The Person Coding Your App
- Your DBA
- Your Testers
- Your Ops team
- The person with the data
- Uma Thurman
That’s right, without data to argue an effective case for investing in security, you have less influence than Uma Thurman. And even if you have more influence than her, if you want to be in the top 5, you better be the person bringing the data.
As long as we’re hiding everything that might allow us to judge comparative effectiveness, we’re going to continue making no progress.
Ahh, but which Uma?
Update: Chris Hoff asks “But WHICH Uma? Kill Bill Uma or Pulp Fiction Uma?” and sadly, I have to answer: The Truth About Cats and Dogs Uma. You remember. Silly romantic comedy where a guy falls in love with radio veterinarian Janeane Garofalo, who’s embarrassed about her looks? And Uma plays her gorgeous but vapid neighbor? That’s the Uma with more influence than you. The one who spends time trying to not be bubbly when her audition for a newscaster job leads off with “hundreds of people feared dead in a nuclear accident?” Yeah. That Uma. Because at least she’s nice to look at while going on about stuff no one cares about. But you know? If you show up with some chops and some useful data to back your claims, you can do better than that.
On the downside, you’re unlikely to ever be as influential as Kill Bill Uma. Because, you know, she has a sword, and a demonstrated willingness to slice the heads off of people who argue with her, and a don’t-care attitude about jail. It’s hard to top that for short term influence. Just ask the 3rd guy trying to code your app, and hoping it doesn’t crash. He’s got eyes for no one not carrying that sword.
There are semi-regular suggestions to allow people to copyright facts about themselves as a way to fix privacy problems. At Prawfsblog, Brooklyn Law School Associate Professor Derek Bambauer responds in “Copyright and your face.”
One proposal raised was to provide people with copyright in their faceprints or facial features. This idea has two demerits: it is unconstitutional, and it is insane. Otherwise, it seems fine.
As an aside, Bambauer is incorrect. The idea has a third important problem, which he also points out in his post: “It’s also stupid.”
Read the whole thing here.
- RT @daveaitel Tests Show Most Store Honey Isn't Honey http://t.co/2oI3O6RK << Will anyone go to jail for fraud? #
- RT @jdp23 Look at the list of the FTC complaints — huge issues. And basically no consequences to FB. So why should they change? #privchat #
- RT @threatpost $56 Billion Later and Airport #Security Is Still Junk – http://t.co/mUtNiMc7 – via @wired << a touching story! #
- White House unveils cybersec strategy incl "Inducing Change", "Developing Science Foundations" http://t.co/UN1usMVf (Could be New School-y) #
- New White House cybersec strategy "Inducing Change", "Developing Science Foundations" seems fairly New School http://t.co/TIIYNVok #
- New blog "Podtrac.com and listener privacy" http://t.co/KAZTGtg7 #
- "In Major Gaffe, Obama Forgets To Dumb It Down" http://t.co/bINRJom1 #
- This shows the success of our current "information sharing" "infrastructures": Data gathering for ssh attacks: http://t.co/nHe0hICE #
- MT @alexhutton I'll be on a webinar w/ @joshcorman in an 1/2 hour. It will only be fun if you join us http://t.co/mGUFOvMN #
- RT @nickm_tor The key idea from our field that we need most to export to other system designers is not IMO anonymity but unlinkability. #
- RT @alexhutton Best security/risk job description I've ever seen: http://t.co/yfRg0UX1 << this link is dangerous! (Dangerously New School.) #
- RT @_nomap Proud to announce that I'm officially a PhD. Now looking forward to new challenges! << Congratulations! #
- RT @techdirt Breaking News: Feds Falsely Censor Popular Blog For Over A Year, Deny All Due Process, Hide All Details.. http://t.co/SCJvhGAT #
- RT @marciahofmann Join me in fighting for the users! Become an @EFF member today and your donation will get a 4x match http://t.co/ISaGGamW #
- New blog: Outrage of the Day: Police Violence http://t.co/KJPoubWA #
- MT @josephmenn Very good books from many perspectives: Important 2011 Tech Policy & Law Books http://t.co/u5g41hF8 < Nothing new in infosec? #
- For bloggers – "Conbis" is stealing New School & other blog content. Their "about us" is Lorem Ipsum. http://t.co/SvYlU5f4 #
- RT @ehasbrouck I'd rather pay for Twitter than pay w/wasted time, distracting features that earn them $, & have them monetize info about me #
- New blog: "Threat Modeling & Risk Assessment" follows up on conversation with @451wendy http://t.co/iFCRCJW3 #
- Wow. IEEE won't even give me a citation without an account. http://t.co/l8AKCBEx Why do people let IEEE lock up their work? #
Powered by Twitter Tools
Yesterday, I got into a bit of a back and forth with Wendy Nather on threat modeling and the role of risk management, and I wanted to respond more fully.
So first, what was said:
(Wendy) As much as I love Elevation of Privilege, I don’t think any threat modeling is complete without considering probability too.
(me) Thanks! I’m not advocating against risk, but asking when. Do you evaluate bugs 2x? Once in threat model & once in bug triage?
(Wendy) Yes, because I see TM as being important in design, when the bugs haven’t been written in yet. 🙂
I think Wendy and I are in agreement that threat modeling should happen early, and that probability is important. My issue is that I think issues discovered by threat modeling are, in reality, dealt with by only a few of Gunnar’s top 5 influencers.
I think there are two good reasons to consider threat modeling as an activity that produces a bug list, rather than a prioritized list. First is that bugs are a great exit point for the activity, and second, bugs are going to get triaged again anyway.
First, bugs are a great end point. An important part of my perspective on threat modeling is that it works best when there’s a clear entry and exit point, that is, when developers know when the threat modeling activity is done. (Window Snyder, who knows a thing or two about threat modeling, raised this as the first thing that needed fixing when I took my job at Microsoft to improve threat modeling.) Developers are familiar with bugs. If you end a strange activity, such as threat modeling, with a familiar one, such as filing bugs, developers feel empowered to take a next step. They know what they need to do next.
And that’s my second point: developers and development organizations triage bugs. Any good development organization has a way to deal with bugs. The only two real outputs I’ve ever seen from threat modeling are bugs and threat model documents. I’ve seen bugs work far better than documents in almost every case.
So if you expect that bugs will work better, then you’re left with the important question that Wendy is raising: when do you consider probability? That’s going to happen in bug triage anyway, so why bother including it in threat modeling? You might prune the list and avoid entering silly bugs. That’s a win. But if you capture your risk assessment process and expertise within threat modeling, then what happens in bug triage? Will the security expert be in the room? Do you have a process for comparing security priority to other priorities? (At Microsoft, we use security bug bars for this, and a sample is here.)
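A bug bar can be as simple as a lookup from a threat’s characteristics to a severity label that the normal triage process already understands. Here’s an illustrative-only sketch; the two axes and the cutoffs are invented for this example and are not Microsoft’s actual bars:

```python
# Illustrative-only bug bar: map a threat's characteristics to a
# severity label that regular bug triage understands. The categories
# and cutoffs here are invented, not any real organization's bars.

BUG_BAR = {
    # (attacker is remote?, authentication required?): severity
    (True,  False): "Critical",   # remote, pre-auth: worst case
    (True,  True):  "Important",  # remote, but attacker needs creds
    (False, False): "Important",  # local, pre-auth
    (False, True):  "Moderate",   # local, post-auth
}

def severity(remote: bool, auth_required: bool) -> str:
    """Look up the triage severity for a reported threat."""
    return BUG_BAR[(remote, auth_required)]
```

The value of encoding it this way is that triage doesn’t need a security expert in the room for every bug: the expertise is baked into the table, and the experts only get pulled in for cases the table doesn’t cover.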
My concern, and the reason I got into a back and forth, is that I suspect putting risk assessment into threat modeling keeps organizations from ensuring that expertise is in bug triage, and that’s risky.
(As usual, these opinions are mine, and may differ from those of my employer.)
[Updated to correct editing issues.]
When the LAPD finally began arresting those of us interlocked around the symbolic tent, we were all ordered by the LAPD to unlink from each other (in order to facilitate the arrests). Each seated, nonviolent protester beside me who refused to cooperate by unlinking his arms had the following done to him: an LAPD officer would forcibly extend the protestor’s legs, grab his left foot, twist it all the way around and then stomp his boot on the insole, pinning the protestor’s left foot to the pavement, twisted backwards. Then the LAPD officer would grab the protestor’s right foot and twist it all the way the other direction until the non-violent protestor, in incredible agony, would shriek in pain and unlink from his neighbor.
It was horrible to watch, and apparently designed to terrorize the rest of us. At least I was sufficiently terrorized. I unlinked my arms voluntarily and informed the LAPD officers that I would go peacefully and cooperatively. I stood as instructed, and then I had my arms wrenched behind my back, and an officer hyperextended my wrists into my inner arms. It was super violent, it hurt really really bad, and he was doing it on purpose. When I involuntarily recoiled from the pain, the LAPD officer threw me face-first to the pavement. He had my hands behind my back, so I landed right on my face. The officer dropped with his knee on my back and ground my face into the pavement. It really, really hurt and my face started bleeding and I was very scared. I begged for mercy and I promised that I was honestly not resisting and would not resist.
From Keith Weinbaum, Director of Information Security of Quicken Loans Inc.
From the job posting:
WARNING: If you believe in implementing security only for the sake of security or only for the sake of checking a box, then this is not the job for you. ALSO, if your primary method of justifying security solutions is to sell FUD to decision makers, then we STRONGLY suggest that you close this page right now as it’s POSSIBLE that reading this job posting will infect your computer with a worm, virus, trojan, nasty bacterium, and/or bovine spongiform encephalopathy OH MY!!! In fact, you should just stop using the scary interwebs all together!
Kudos, Keith. You’ve made the Alex Hutton Personal Hall of Fame with this one.
It turns out that it’s very hard to subscribe to many podcasts without talking to Podtrac.com servers. (Technical details in the full post, below.) So I took a look at their privacy statement:
Podtrac provides free services to podcasters whereby Podtrac gathers data specific to individual podcasts (e.g. audience survey data, content ratings, measurement data, etc). This podcast data is not considered personally identifiable information and may be shared by Podtrac with member advertisers. (“Podtrac Client Privacy Statement,” undated, unversioned.)
It’s not clear to me who doesn’t consider what they collect to be personal data, because of the annoying use of the passive voice. So I’ll ask: precisely what data is collected? And under what set of laws or even perspectives is the data they’re collecting not considered personally identifiable? For example, are they collecting IP addresses, which I understand are PII in the EU?
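For anyone curious how the tracking shows up in practice: Podtrac measurement typically works by routing a feed’s enclosure URLs through a Podtrac redirect host, so your podcast client talks to Podtrac before it ever reaches the publisher’s server. Here’s a rough sketch of checking a feed for that pattern; the sample feed and the function name are made up for illustration:

```python
# Sketch: flag enclosure URLs in an RSS feed that route through a
# Podtrac redirect host. The sample feed below is invented.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

def podtrac_enclosures(feed_xml: str) -> list:
    """Return enclosure URLs whose hostname is podtrac.com or under it."""
    root = ET.fromstring(feed_xml)
    hits = []
    for enc in root.iter("enclosure"):
        url = enc.get("url", "")
        host = urlparse(url).hostname or ""
        if host == "podtrac.com" or host.endswith(".podtrac.com"):
            hits.append(url)
    return hits

SAMPLE_FEED = """<rss><channel><item>
  <enclosure url="https://dts.podtrac.com/redirect.mp3/example.com/ep1.mp3"/>
</item></channel></rss>"""

print(podtrac_enclosures(SAMPLE_FEED))
# → ['https://dts.podtrac.com/redirect.mp3/example.com/ep1.mp3']
```

Running something like this against your own subscriptions is a quick way to see which feeds quietly hand your IP address and user agent to a third party on every download.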
Enquiring minds with access to privacy officials might want to ask those officials.