I was inspired to develop and share my thoughts after Adam’s previous post (magical approaches to threat modeling) regarding the selection of threats and predictions. Since a 140-character limit quickly annoys me, Adam gave me an opportunity to contribute on his blog; thanks to him, I can now explain how I believe in magic during threat modeling.
I have noticed that most of what I do, because it is timeboxed due to carbon-based-lifeform constraints, needs to be a finite selection from what appears to me as an infinite array of possibilities. I also enjoy pulling computer-related magic tricks, or guesses, because it’s amusing and more engaging than reading a checklist. Magic, in this case, is either pure luck or some skill the spectators can’t see. I like it when I think I have both.
During the selection phase of what to do, a few trade-offs have been proposed, such as coverage, time, and skills required. Those are attack-based and come from knowledge of what an attacker can do. While I think they effectively describe the selection of granular technical efforts, I prefer to look at what the attacker’s motivations are rather than the constraints they’ll face. And for all that, I have a way of organizing it and showing it.
When I think about the actual threats to a system, I don’t see a list, but rather a tree. That tree has the ultimate goals on top, then descends into sub-goals that break down how you get there. It finally ends in leaves that are the vulnerabilities to be exploited.
Here’s an unfinished example for an unnamed application:
A fun thing to do with a tree is to apply a weight to a branch. In this case the number represents attacker-made trade-offs and is totally arbitrary.
If you keep it relatively consistent with itself, you end up with an appropriate weighting system. For this example, let’s say it’s the amount of effort you estimate it takes. You can sum the branches in the tree and get sub-goal weights without having to think about them.
And from that we can get a sum for the root goals:
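That branch-and-root summation can be sketched in a few lines of code. The goals, leaves, and weights below are hypothetical placeholders, not the actual tree from the diagrams above:

```python
# A minimal sketch of an attack tree with arbitrary effort weights.
# Goal, sub-goal, and leaf names here are made-up examples.
attack_tree = {
    "Steal user data": {                   # root goal
        "Exfiltrate the database": {       # sub-goal
            "SQL injection in search": 2,  # leaf: estimated effort
            "Backup left world-readable": 1,
        },
        "Hijack a session": {
            "XSS in comment field": 3,
        },
    },
    "Deny service": {
        "Exhaust resources": {
            "Slow API request": 5,
        },
    },
}

def weight(node):
    """Sum leaf weights: a leaf is a number, a branch is a dict."""
    if isinstance(node, dict):
        return sum(weight(child) for child in node.values())
    return node

# Root-goal weights fall out of the same recursion:
for goal, subtree in attack_tree.items():
    print(goal, weight(subtree))
# → Steal user data 6
# → Deny service 5
```

The recursion means you never compute a sub-goal’s weight by hand: estimate only the leaves, and every level above them is derived.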
But then how do I choose to prioritize or just work on something?
I could just say: well, I’m going to do the easiest things first. Maybe finding an SQLi in the application is easier than finding a slow API request, so better start looking at that.
But when it comes to deciding, I often opt for the most common human behavior: just not doing it myself.
With the help of the tree, I just let the actual business reality do the selection of which root goals to pick. By that I mean the literal definition of reality, although nowadays people seem to forget what it really means:
“reality · noun · 1. the world or the state of things as they actually exist, as opposed to an idealistic or notional idea of them.”
– Google Almighty
I never ask the business line if they think they’ll have SQLi, but rather whether they worry more about denial of service or information stealing.
One advantage of that is that those decisions are made at the root goals. The tree is a hierarchy; the higher up you are, the bigger impact you’ll have. Like spinning a big cog wheel versus a smaller one:
If you were to pick at each vulnerability one at a time, you’d spin your working wheel a lot while only really advancing the root goal a bit. Do the selection at the root goals instead, and you’ll see that its impact is far greater for about the same amount of time. That’s efficiency to me.
And that’s how I turn magic into engineering 😀
Of course, in order for it to be proper engineering, the next step would be to QA it. At that point, you can fetch all the checklists or threat repositories you can find and verify that your tree covers everything. Simply add what you missed, and then bask in the glory of perceived completeness.
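That verification step boils down to a set difference between a checklist and the tree’s leaves. Both the tree and the checklist entries below are made-up examples:

```python
def leaves(node):
    """Collect leaf names from a nested-dict attack tree."""
    out = set()
    for name, child in node.items():
        if isinstance(child, dict):
            out |= leaves(child)
        else:
            out.add(name)
    return out

# Hypothetical tree and external checklist for illustration.
tree = {
    "Steal user data": {
        "SQL injection in search": 2,
        "XSS in comment field": 3,
    },
}
checklist = {
    "SQL injection in search",
    "XSS in comment field",
    "CSRF on settings form",
}

missing = checklist - leaves(tree)
print(missing)  # → {'CSRF on settings form'}
```

Whatever lands in `missing` is what you add to the tree before declaring it complete.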
For the curious practitioners, I’ve used PlantUML to generate the tree examples seen above. The tool lets you define the tree textually using simple markup and auto-balances it for you as you update it. A more detailed example can be found in my Threat Modeling Toolkit presentation.
Also published on Medium as an experiment.