Security Prediction Markets?
In our first open thread, Michael Cloppert asked:
Considering the contributors to this blog often discuss security in
terms of economics, I’m curious what you (and any readers educated on
the topic) think about the utility of using prediction markets to forecast
compromises.
So, I’m generally a big fan of markets. As Hayek pointed out, markets are a great way to extract information from systems. Prediction markets function by rewarding those who can make better predictions. So would this work for security, and for predicting compromises?
I don’t think so, despite being a huge fan of the value of the chaos that emerges from markets.
Allow me to explain. There are two reasons why it won’t work. Let’s take Alice and Bob, market speculators. Both work in banks. Alice thinks her bank has great security (“oh, those password rules!”). So she bets that her bank has a low likelihood of breach. Bob, in contrast, thinks his bank has rotten security (“oh, those password rules!”). So he bets against it. Perhaps their models are more sophisticated, and I’ll return to that point.
As Alice buys, the price of breach futures in her bank rises. As Bob sells, the price of his futures falls. (Assuming a fixed number of contracts, and that they’re not working for the same bank.)
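The price movement described here can be sketched with Hanson’s logarithmic market scoring rule (LMSR), a standard automated market maker for prediction markets. The contract framing and liquidity parameter below are illustrative assumptions, not anything from the post:

```python
import math

def lmsr_price(q_yes: float, q_no: float, b: float = 100.0) -> float:
    """Instantaneous price of the YES ('breach occurs') contract under
    Hanson's logarithmic market scoring rule. b controls liquidity:
    larger b means trades move the price less."""
    e_yes = math.exp(q_yes / b)
    e_no = math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

# No positions yet: the market starts at 0.50.
p0 = lmsr_price(0, 0)

# Bob buys 50 YES contracts (he thinks his bank will be breached):
# the price of breach futures rises.
p1 = lmsr_price(50, 0)

# Alice buys 50 NO contracts (she trusts her bank's security):
# the price falls back.
p2 = lmsr_price(50, 50)

print(p0, p1, p2)  # 0.5, then above 0.5, then back to 0.5
```

The point of the sketch is that prices only aggregate information if the traders have some; with uninformed Alices and Bobs, the number the market emits is noise dressed up as a probability.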
But what do Alice and Bob really know? How much experience does either have to make accurate assessments of their employers’ security? We don’t talk about security failures. We don’t learn from each other’s failures, and so failure strikes arbitrarily.
So I’m not sure who the skilled predictors would be who would make money by entering the market. Without such skilled predictors, or people with better information, the market can’t extract the information.
Now, there may be information which is purely negative which could be usefully extracted. I doubt it, absent baselines that Alice and Bob can use to objectively assess what they see.
There may well be more sophisticated models, where people with more or better information could bet. Setting aside ethical or professional standards, auditors of various sorts might be able to play the market.
I don’t know that there are enough of them to trade effectively. A thinly traded security doesn’t offer up as much information as one that’s being heavily traded.
So I’m skeptical.
I agree that security prediction markets will fail, but I disagree with why.
A true prediction market is predicting something that hasn’t happened yet. A horse hasn’t won a race until it has run it. In security, a piece of code has a vuln in it _right now_, or I have 0wned your bank _right now_. There is a correct answer at all points in time.
For this reason, “security” is fundamentally incompatible with the prediction market model.
Adam, if it’s only Alice and Bob, or they truly know nothing, the market will of course fail. If there are enough people who have diverse knowledge (even if potentially only-partially-accurate) and information sources, and there is liquidity, then they are likely to outperform experts. Even if there aren’t any experts in the market there are likely to be some participants who take it seriously enough that they follow the experts.
One potential challenge is breadth of information sources. If everybody’s getting their info from reading Bruce Schneier, Slashdot, and Emergent Chaos [or any small number of sources] or the echo chamber that TechMeme creates, then the market’s not likely to function well. This is why I thought the Industry Standard all-male bloglist was so funny: it’s a huge lack of diversity in information sources. From what I could tell, the participants in their market similarly suffer from a lack of diversity.
Dan, there are ways to construct the questions in the market to reflect your concerns; for example, futures in the number of patches Microsoft/Apple/Ubuntu will issue this month, next three months, next year. Or it’d be great to do one in conjunction with pwn2own: which system will fall first, and how long until each system falls. So I don’t see it as fundamentally incompatible with security at all.
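A contract on observable patch counts, as suggested above, might settle like this minimal sketch. The vendor, threshold, and counts are all hypothetical:

```python
def settle_patch_contract(observed_patches: int, threshold: int,
                          position: int, price_paid: float) -> float:
    """Settle a binary contract: 'the vendor issues more than `threshold`
    patches this period'. Each contract pays $1 if the event happens,
    $0 otherwise; returns the profit or loss on `position` contracts."""
    payout = 1.0 if observed_patches > threshold else 0.0
    return position * (payout - price_paid)

# Hypothetical trade: 10 contracts bought at $0.40 on "more than 8
# patches next Patch Tuesday"; 11 patches ship, so each pays $1.
pnl = settle_patch_contract(observed_patches=11, threshold=8,
                            position=10, price_paid=0.40)
print(pnl)  # 6.0
```

Settling against a public, verifiable count like a vendor’s patch tally sidesteps Dan’s objection that someone already knows the “right answer”: nobody has certain knowledge of next quarter’s patch count.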
@Dan,
I don’t think we’d be betting on *whether* a vulnerability exists to be exploited (all organizations have some degree of vulnerability), but *when* the existing vulnerabilities would be exploited to produce an incident.
—
I’m skeptical about a security prediction market working for a number of reasons, including what Adam has posted here. Heck, two years ago someone pretty smart I know was all about prediction markets, and the greater InfoSec world thought he was *nuts*. We’re just not mature enough for a prediction market.
I’d have to agree that prediction markets in security would fail to predict breaches. In other markets, the investors/consumers have a greater amount of information to base their decisions upon. Processes are somewhat open, policies on how to handle goods are somewhat open, and news of snags in the process spreads very quickly. With security, especially in industries such as banking, there is very little information available to base one’s decision upon.
As mentioned above, for the prediction market to work, we need a wide and diverse population, and they need to have a large base of information to make their judgements upon.
One point people may not be considering is a prediction market’s utility at exposing information that is already known, but under the radar. So, to Dan’s point, there IS a failure point in the code, but no one may be willing to talk about it because of office politics, etc. Or it may be that a team was under extreme pressure to succeed, and the old hands know there are inevitably going to be failure points; how many, and in what areas, would depend on how the project was conducted.

For example, asking about possible failure points at the beginning of a project and watching the probabilities fluctuate as the project proceeds would yield pretty interesting information for the company. Involving people beyond the team (employees or business partners) who have tangential knowledge or have worked on these types of projects before would diversify the trading pool and also help expose information that might not otherwise have been available. The ROI, of course: if you can avoid even ONE failure point before the system goes into production, you’ve probably just paid for the marketplace 10-fold.
I have been debating the utility of using prediction markets for risk assessments, or more specifically, whether prediction markets could be used instead of the likelihood x impact calculation. Really, you are asking the same thing when you ask people to rank either of those factors.
So, if you had a list of risks that you are trying to rank, rather than existence of a vulnerability, could you use prediction markets to ‘stack rank’ the risks?
I think so.
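As a rough sketch of that idea, here is how a conventional likelihood × impact ranking might be compared against a ranking derived from hypothetical market prices on “this risk materializes within a year” contracts. Every risk name and number here is invented for illustration:

```python
risks = {
    # risk: (likelihood 0-1, impact in $, hypothetical market price)
    "phishing-led account takeover": (0.6, 200_000, 0.70),
    "SQL injection in web app":      (0.3, 500_000, 0.45),
    "insider data exfiltration":     (0.1, 2_000_000, 0.05),
}

# Classic risk-assessment ranking: likelihood x impact (expected loss).
by_expected_loss = sorted(risks, key=lambda r: risks[r][0] * risks[r][1],
                          reverse=True)

# Market ranking: the contract price, read as the crowd's probability
# estimate that the risk materializes.
by_market_price = sorted(risks, key=lambda r: risks[r][2], reverse=True)

print(by_expected_loss)
print(by_market_price)
# Divergence between the two orderings flags risks worth a closer look.
```

Note the two mechanisms rank along different axes (expected loss vs. bare probability), so disagreement between them is expected and, as the next comment suggests, is itself useful signal.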
It’s also interesting to look at using prediction markets in conjunction with other approaches. Situations where the different mechanisms predict different results are especially interesting.
One of the reasons that prediction markets have such a hard time in practice is that experts all have a vested interest in prediction markets failing (or not getting tried). After all, a successful prediction market means that they, the expert, will be out-predicted by a diverse crowd — which includes a whole bunch of people who don’t even know what a buffer overrun is. That drives experts crazy.
Are there any examples of unsuccessful prediction markets in the security space?
Jon,
That’s a really interesting point. I think the closest thing to a security prediction market would be Crispin Cowan’s Sardonix. Or perhaps Poindexter’s terrorism markets. I did try to be careful to say I’m skeptical, rather than “don’t try this.” That may have been the implied message, but I’m pretty far (even here on this blog) from saying “that’s silly.”
Adam of the 11:55 comment
I thought about that, and don’t know that there’s enough information on the relation between flaws and breaches to make good calls. On the other hand, maybe that would emerge from the chaos of a market. PS: could you use a different name for the comments here? As the bandleader, I often comment as Adam, and would prefer to avoid confusion about who’s saying what.
It’s Dan again,
@jon – I’m sure that betting on futures will work great when I have a stack of 10 0days in Firefox and release them one at a time, betting on a future for a patch each time. Or when I’m a Firefox developer myself and work on the code in question. I’m also sure that betting on the results of pwn2own will work when I’m sitting on an Apple 0day for 3 months before the contest. Or when one has been passed around non-publicly in the underground before the contract settles on the market.
In security, SOMEONE knows the RIGHT answer. It is not indeterminate, the code is out there, your network is accessible to me and so on. There’s none of this wishy-washy risk stuff.
@Alex – You are getting closer. Predicting whether an incident will occur is one level removed from predicting the discovery of security flaws, but I still see the same problems with it. Let’s take this one from a different angle: I work at corp ABC in the security department. My coworkers and I have been tracking an intrusion into our network for the last 8 weeks. We know the answer to “has corp ABC been breached?” and you don’t. So do our auditors. Game over.
@Nathaniel – It is NOT about having a diverse group of opinions. They will fail because you need to make security information “protected” the same way that other banking information is if you want to develop a market for it. If I know corp ABC has been breached, I shouldn’t be able to play in the market with that information, so you need the law to keep my mouth shut. Same for security bugs/patches/exploits.
@Adam – I think you get it.
Until I see something really novel about the way to construct a security prediction market, I don’t think they will work.
A few thoughts about prediction markets: Banks have a vested interest in securing systems from data theft, corruption, and ensuring availability. Internal prediction markets for large organizations might provide insight to help prioritize tools and initiatives used to secure programs and systems.
1) To start, the discussion should be reframed around internal prediction markets. The feedback of a public prediction market would be suspect because of the firmly held belief that the masses do not know more than experts. Nonetheless, this doesn’t wholly invalidate @jon’s framework for looking at the problem. From a slightly different angle, a large pool of experts would behave like any large group. I believe there’s a physics analogy here, but I’ll leave it alone. So to reposition the argument, we’re interested in knowing what programmers, network engineers, and DB administrators know about “structural” flaws and their likely impact on events (the scale of a problem increases both likelihood and impact), versus application team managers, versus the managers (who sometimes aren’t much different from Alice and Bob, due to competing concerns) who establish security strategy and direction.
2) The utility in knowing the difference between perceptions of risk is well documented in the failure of the decision support system at NASA prior to the Challenger catastrophe. Management has an inherent perception bias, and there is often dissonance between perceived risk and actual risk. To illustrate, just ask someone about their fear of dying in an airplane crash, then look up the probability that it will actually happen.
3) Google’s internal prediction market makers suggest that, in terms of diversity, regional location trumps department affiliation. So to achieve diversity of opinion you not only need participants from a range of disciplines, you also need broad geographic participation.
4) With the correct incentives and market rules in place, knowledgeable insiders can be contained and shouldn’t be able to game the system. Granted, this is very difficult to address, and it may be the point of failure for internal prediction markets in large IT organizations.
I’ve been following this stream and commented earlier (I’m the adam who got yelled at about using his first name 🙂) about the utility of prediction markets. I’m the co-founder of a prediction market platform company, Inkling. We’d be happy to facilitate an experiment if you’d like to try out some of the theories bantered around here, as I think this is an interesting area that has not really been explored in the space. If someone (the original Adam) wants to email me, we can talk more about getting something set up. (adam [at] inklingmarkets [dot] com)