Babylonian Trigonometry

A fresh look at a 3700-year-old clay tablet suggests that Babylonian mathematicians not only developed the first trig table, beating the Greeks to the punch by more than 1000 years, but that they also figured out an entirely new way to look at the subject. However, other experts on the clay tablet, known as Plimpton 322 (P322), say the new work is speculative at best. (“This ancient Babylonian tablet may contain the first evidence of trigonometry.”)

The paper, “Plimpton 322 is Babylonian exact sexagesimal trigonometry,” is short and open access, and also contains this gem:

If this interpretation is correct, then P322 replaces Hipparchus’ ‘table of chords’ as the world’s oldest trigonometric table — but it is additionally unique because of its exact nature, which would make it the world’s only completely accurate trigonometric table. These insights expose an entirely new level of sophistication for OB mathematics.
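
To give a sense of what “exact” means here: the tablet’s rows are built from Pythagorean triples, so the ratios between the sides are exact fractions rather than rounded approximations. Here’s a minimal sketch using the triple (119, 120, 169), a common reading of the tablet’s first row; treating the squared diagonal-to-side ratio as the quantity of interest follows the paper’s interpretation and is purely illustrative:

```python
from fractions import Fraction

# The first row of Plimpton 322 is commonly read as the Pythagorean triple
# (119, 120, 169). Because the sides are whole numbers, every ratio between
# them is exact -- no rounding, which is the sense of "completely accurate" above.
short, long_side, diagonal = 119, 120, 169
assert short**2 + long_side**2 == diagonal**2   # the triple really is Pythagorean

ratio = Fraction(diagonal, long_side) ** 2      # (169/120)^2 = 28561/14400, exactly
print(ratio, "=", float(ratio))                 # 28561/14400 = 1.98340277...
```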

Do Games Teach Security?

There’s a new paper from Mark Thompson and Hassan Takabi of the University of North Texas. The title captures the question:
Effectiveness Of Using Card Games To Teach Threat Modeling For Secure Web Application Developments

Gamification of classroom assignments and online tools has grown significantly in recent years. There have been a number of card games designed for teaching various cybersecurity concepts. However, effectiveness of these card games is unknown for the most part and there is no study on evaluating their effectiveness. In this paper, we evaluate effectiveness of one such game, namely the OWASP Cornucopia card game which is designed to assist software development teams identify security requirements in Agile, conventional and formal development processes. We performed an experiment where sections of graduate students and undergraduate students in a security related course at our university were split into two groups, one of which played the Cornucopia card game, and one of which did not. Quizzes were administered both before and after the activity, and a survey was taken to measure student attitudes toward the exercise. The results show that while students found the activity useful and would like to see this activity and more similar exercises integrated into the classroom, the game was not easy to understand. We need to spend enough time to familiarize the students with the game and prepare them for the exercises using the game to get the best results.

I’m very glad to see games like Cornucopia evaluated. If we’re going to push the use of Cornucopia (or Elevation of Privilege) for teaching, then we ought to be thinking about how well they work in comparison to other techniques. We have anecdotes, but to improve, we must test and measure.

What Boards Want in Security Reporting

[Image: a sub-optimal security dashboard]

Recently, some of my friends were talking about a report by Bay Dynamics, “How Boards of Directors Really Feel About Cyber Security Reports.” In that report, we see things like:

More than three in five board members say they are both significantly or very “satisfied” (64%) and “inspired” (65%) after the typical presentation by IT and security executives about the company’s cyber risk, yet the majority (85%) of board members believe that IT and security executives need to improve the way they report to the board.

Only one-third of IT and security executives believe the board comprehends the cyber security information provided to them, versus 70% of board members surveyed who report that they understand everything they’re being told by IT and security executives in their presentations.

Some of this may be poor survey design or reporting: it’s hard to survey someone to see if they don’t understand, and the questions aren’t listed in the survey.

But that may be taking the easy way out. Perhaps what we’re being told is consistent. Security leaders don’t think the boards are getting the nuance, while the boards are getting the big picture just fine. Perhaps boards really do want better reporting, and, having nothing useful to suggest, consider themselves “satisfied.”

They ask for numbers, but not because they really want numbers. I’ve come to believe that the reason they ask for numbers is that they lack a feel for the risks of cyber. They understand risks in things like product launches or moving manufacturing to China, or making the wrong hire for VP of social media. They are hopeful that in asking for numbers, they’ll learn useful things about the state of what they’re governing.

So what do boards want in security reporting? They want concrete, understandable and actionable reports. They want to know if they have the right hands on the rudder, and if those hands are reasonably resourced. (Boards also know that no one who reports to them is ever really satisfied with their budget.)

(Lastly, the graphic? Overly complex, not actionable, lacks explicit recommendations or requests. It’s what boards don’t want.)

"Cyber" Insurance and an Opportunity

There’s a fascinating article on PropertyCasualty360, “As Cyber Coverage Soars, Opportunity Clicks” (thanks to Jake Kouns and Chris Walsh for the pointer). I don’t have a huge amount to add, but wanted to draw attention to a few excerpts that caught my eye:

Parisi observes that pricing has also become more consistent over the past 12 months. “The delta of the pricing on an individual risk has gotten smaller. We used to see pricing differences that would range anywhere from 50-100 percent among competing carriers in prior years,” he says.

I’m not quite sure how that pricing claim lines up with this:

“The guys that have been in the business the longest—for example, Ace, Beazley, Hiscox and AIG—their books are now so large that they handle several claims a week,” says Mark Greisiger, president of NetDiligence. Their claims-handling history presumably means these veteran players can now apply a lot of data intelligence to their risk selection and pricing.

But the claim that there are several breaches a week impacting individual insurers gives us a way to put a lower bound on the breaches that are occurring. It depends somewhat on what you mean by “several,” but generally I put “several” above “a couple,” which means at least 3 breaches per week, or roughly 150 per insurer per year, which is 600 across Ace, Beazley, Hiscox and AIG.
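
As a back-of-the-envelope check (the figure of 3 per week and the choice of four carriers are my reading of “several,” not numbers from the article), the arithmetic looks like this:

```python
# Rough lower bound on breaches generating insurance claims, based on the
# quote above. Assumptions (mine): "several" means at least 3 claims per week,
# and each of the four named carriers sees that volume.
claims_per_week = 3
weeks_per_year = 52
carriers = ["Ace", "Beazley", "Hiscox", "AIG"]

per_carrier = claims_per_week * weeks_per_year   # 156, rounded down to 150 in the text
total = 150 * len(carriers)                      # 600 claims per year across the four

print(f"Per carrier: ~{per_carrier} claims/year")
print(f"Across {len(carriers)} carriers: ~{total} claims/year")
```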

Then there’s this:

Despite a competitive market and significant capacity, underwriting appetite for high-risk classes varies widely. For instance, schools have significant PII exposure and are frequent targets of attacks, such as the October 2012 “ProjectWestWind” action by “hacktivist” group Anonymous to release personal records from more than 100 top universities.

So schools can be hard risks to place. While some U.S. carriers—such as Ace, Chartis and CNA—report being a market for this business class, Kiln currently has no appetite for educational institutions, with Randles citing factors such as schools’ lack of technology controls across multiple campuses, lack of IT budgets and extensive population of users who regularly access data.

Lastly, I’ll add that an insurance company that wants to market itself could easily leap to the front of mind for its prospective customers the way Verizon did. Think back five years, to when Verizon launched their DBIR. At the time, I wrote:

Sharing data gets your voice out there. Verizon has just catapulted themselves into position as a player who can shape security.

That’s because of their willingness to provide data. I was going to say give away, but they’re really not giving the data away. They’re trading it for respect and credibility. (“Can You Hear Me Now?“)

I look forward to seeing which of the big insurance companies, the folks who are handling “several claims a week”, is first to market with good analysis.

The High Price of the Silence of Cyberwar

A little ways back, I was arguing [discussing cyberwar] with thegrugq, who said “[Cyberwar] by it’s very nature is defined by acts of espionage, where all sides are motivated to keep incidents secret.”

I don’t agree that all sides are obviously motivated to keep incidents secret, and I think that it’s worth asking, is there a strategic advantage to a policy of disclosure?

Before we get there, there’s a somewhat obvious objection that we should discuss, which is that a defender instantly revealing all the attacks they’ve detected is a great advantage for the attacker. So when I talk about disclosing attacks, I want to include some subtlety, where the defender takes two steps to counter that advantage. The first is to randomly select some portion of attacks (say, 20-50%) to keep secret, and the second is to randomly delay disclosure until sometime after cleanup.
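
To make those two steps concrete, here’s a minimal sketch of such a disclosure policy. The 20-50% hold-back range comes from the paragraph above; the 30-180 day delay window is an illustrative assumption of mine, not a recommendation:

```python
import random
from datetime import timedelta

def disclosure_plan(incidents, seed=None):
    """Randomly hold back a portion of detected attacks (20-50%), and randomly
    delay disclosure of the rest until well after cleanup."""
    rng = random.Random(seed)
    holdback_rate = rng.uniform(0.20, 0.50)      # fraction kept secret
    plan = []
    for incident in incidents:
        if rng.random() < holdback_rate:
            plan.append((incident, "keep secret", None))
        else:
            # Delay is measured from cleanup, not from detection.
            delay = timedelta(days=rng.randint(30, 180))
            plan.append((incident, "disclose", delay))
    return plan

if __name__ == "__main__":
    for entry in disclosure_plan(["intrusion-a", "intrusion-b", "intrusion-c"], seed=1):
        print(entry)
```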

With those two tactical considerations in place, a defender can gain several advantages by revealing attacks.

The first advantage is deterrence. If a defender regularly reveals attacks which have taken place, and reveals the attack tools, the domains, the IP addresses and the software installed, then the attacker’s cost of attacking that defender (compared to other defenders) is higher, because those tools will be compromised. That has a deterrent effect.

The second advantage is credibility. In today’s debate about cyberwar, all information disclosed seems to come with an agenda. Everyone evaluating the information is forced to look not only at the information, but the motivation for revealing that information. Worse, they can question if the information not revealed is shaped differently from what is revealed. A defender who reveals information regularly and in accordance with a policy will gain credibility, and with it, the ability to better influence the debate.

There’s a third advantage, which is that of improving the information available to all defenders, but that only accrues to the revealer to the extent that others don’t free ride. Since I’m looking at the advantages that accrue to the revealing defender, we can’t count it. However, to the extent that a government cares about the public good, this should weigh in its decision-making process.

The United States, like many liberal democracies, has a long history of disclosing a good deal of information about our weaponry and strategies. The debates over nuclear weapons were public policy debates in which we knew how many weapons each side had, how big they were, etc. What’s more, the key thinking about deterrence and mutually assured destruction which informed US policy was all public. That approach helped us survive a 50-year Cold War, with weapons of unimaginable destructive potential.

Simultaneously, secrecy around what’s happening pushes the public policy discussions towards looking like ‘he said, she said,’ rather than discussions with facts involved.

Advocates of keeping secret the attacks in which they’ve been victimized, their doctrines, or their strategic thinking need to move beyond the assumption that everything about cyberwar is secret, and start justifying the secrecy of anything beyond current operations.

[As thegrugq doesn’t have a blog, I’ve posted his response: http://newschoolsecurity.com/2013/01/on-disclosure-of-intrusion-events-in-a-cyberwar/]

The Fog of Reporting on Cyberwar

There’s a fascinating set of claims in Foreign Affairs, “The Fog of Cyberwar”:

Our research shows that although warnings about cyberwarfare have become more severe, the actual magnitude and pace of attacks do not match popular perception. Only 20 of 124 active rivals — defined as the most conflict-prone pairs of states in the system — engaged in cyberconflict between 2001 and 2011. And there were only 95 total cyberattacks among these 20 rivals. The number of observed attacks pales in comparison to other ongoing threats: a state is 600 times more likely to be the target of a terrorist attack than a cyberattack. We used a severity score ranging from five, which is minimal damage, to one, where death occurs as a direct result from cyberwarfare. Of all 95 cyberattacks in our analysis, the highest score — that of Stuxnet and Flame — was only a three.

There’s also a pretty chart:

[Chart: cyberattacks among rival states, 2001-2011]

All of which distracts from what seems to me to be a fundamental methodological question: what counts as an incident, and how did the authors count those incidents? Did they use some database? Media queries? The article seems to imply that such things are trivial, and unworthy of distracting the reader. Perhaps that’s normal for Foreign Affairs, but I don’t agree.

The question of what’s being measured is important for assessing whether the argument is convincing. For example, it’s widely believed that the hacking of Lockheed Martin was done by China to steal military secrets. Is that a state-on-state attack which is included in their data? If Lockheed Martin counts as an incident, how about the hacking of RSA as a precursor?

There’s a second set of questions, which relates to the known unknowns, the things we know we don’t know about. As every security practitioner knows, we sweep a lot of incidents under the rug. That’s changing somewhat as state laws have forced organizations to report breaches that impact personal information. Those laws are influencing norms in the US and elsewhere, but I see no reason to believe that all incidents are being reported. If they’re not being reported, then they can’t be in the chart.

That brings us to a third question. If we treat the chart as a minimum bar, how far is it from the actual state of affairs? Again, we have no data.

I did search for underlying data, but Brandon Valeriano’s publications page doesn’t contain anything that looks relevant, and I was unable to find such a page for Ryan Maness.

Usable Security: Timing of Information?

As I’ve read Kahneman’s “Thinking, Fast and Slow,” I’ve been thinking a lot about “what you see is all there is” and the difference between someone’s state of mind when they’re trying to decide on an action, and once they’ve selected and are executing a plan.

I think that as you’re trying to figure out how to do something, you might have a goal and a model in mind. For example, “where is that picture I just downloaded?” As you proceed along the path, you take actions which involve making a commitment to a course of action, ultimately choosing to open one file over another. Once you make that choice, you’re invested, and perhaps the endowment effect kicks in, making you less likely to be willing to change your decision because of (say) some stupid dialog box.

Another way to say that is that information available as you’re making a decision might be far more influential than information that arrives later. That’s a hypothesis, and I’ve been having trouble finding a study that actually tests the idea.

For example, if we use a scary button like this:

[Image: a scary-looking warning button with spikes]

would that work better than this:

[Image: a plain dialog warning that the JPG file is an application]

If someone knows of a user test that might shed light on whether this sort of thing matters, I’d be very grateful for a pointer.

The "Human Action" argument is not even wrong

Several commenters on my post yesterday have put forth some form of the argument that hackers are humans, humans are unpredictable, and therefore, information security cannot have a Nate Silver.

This is a distraction, as a moment’s reflection will show.

Muggings, rapes and murders all depend on the actions of unpredictable humans, and we can, in aggregate, study them. We can see if they are rising or falling. We can debate if one or another methodology is a superior way of measuring them. (For example, should we rely on police reports or survey people and see who’s been victimized?)

Now, internet crimes are different from non-internet crimes in a couple of important ways. It is far harder to properly attribute the crimes to particular actors because the crimes are mediated by computers and networks. Another difference is that people generally don’t report internet crime to the police, and the police often suggest sweeping the crime under the rug. (This may relate to the challenges and expense of investigating internet crimes.) It’s possible that there are other differences.

But no one bringing up the internet exceptionalism argument has explained why we can’t change the lack of reporting, why we couldn’t study repeated events in the aggregate, or why information security can’t have a Nate Silver.

Nor has anyone explained how internet crime differs from non-internet crime in ways which make it unmeasurable. My question was really intended as a provocation, to get people to think about measurability in our field. But a reasonable objection is that I am hand-waving with respect to what we would have our Nate Silver do, and so I want to be a bit more specific about that.

The best exemplar of this was Martin McKeay asking “Give me an example of what you think we should be able to predict.” I think we should be able to predict the number of vulnerabilities discovered, the number of malware infections per million machines, or the odds that a web server in the top N sites will be serving up attack code. I think we should be able to discuss the odds that a given SSN has been leaked, and how that impacts its (ab)use as an authenticator. I also think (see my article, “The Evolution of Information Security“) that we should be able to say that organizations that invest in defense X have fewer incidents than those who invest in Y.

[Edited to add: The reason that I didn’t want to give an example of what the Nate Silver of infosec would measure is to avoid debates in the weeds about one or the other of those things. I think what is usefully measured will surprise many people, including me. By asking why in general, I want to encourage people to think about the over-arching problems, and I hope that we’ll hear more solutions to the general problem than we did yesterday.]

Where is Information Security's Nate Silver?

So by now everyone knows that Nate Silver predicted 50 out of 50 states in the 2012 election. Michael Cosentino has a great picture:

[Image: Nate Silver’s 2012 state-by-state prediction results]

Actually, he was one of many quants who predicted what was going to happen via meta-analysis of the data that was available. So here’s my question. Who’s making testable predictions of information security events, and doing better than a coin toss?
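
One way to make “better than a coin toss” concrete is a proper scoring rule such as the Brier score. Here’s a minimal sketch; the events and probabilities are invented purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and what happened.
    Lower is better; always saying 50/50 scores 0.25 regardless of outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical security predictions, e.g. "a top-100 site serves attack code
# this quarter", "vendor X ships more than N critical patches", etc.
forecasts = [0.8, 0.3, 0.9, 0.1]   # the forecaster's stated probabilities
outcomes  = [1,   0,   1,   0]     # what actually happened (1 = it occurred)

print("Forecaster:", brier_score(forecasts, outcomes))              # 0.0375
print("Coin toss: ", brier_score([0.5] * len(outcomes), outcomes))  # 0.25
```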

Effective training: Wombat's USBGuru

Many times when computers are compromised, the compromise is stealthy. Take a moment to compare that to being attacked by a lion. There, the failure to notice the lion is right there, in your face. Assuming you survive, you’re going to relive that experience, and think about what you can learn from it. But in security, you don’t have that experience to re-live. That means that your ability to form good models of the world is inhibited. Another way of saying that is that our natural learning processes are inhibited.

Wombat Security makes a set of products that are designed to help with those natural learning processes. I like these folks for a variety of reasons, including their use of games, and their data-driven approach to the world. I’d like to be clear that I have no commercial connection to Wombat, I just like what they’re doing.

Their latest product, USBGuru, is a service that allows you to quickly create learning loops for the “USB stick in the parking lot” problem. It includes a way to create a USB stick with a small program on it. That program checks the username, and reports it to Wombat. This allows you to deliver training when the stick is inserted, or when the end user is tricked into running code. It also allows you to track when people fall for the attack, and (over time) measure if the training is having an effect.
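
To make the mechanics concrete, here is a rough sketch of what a beaconing payload like that could look like. This is not Wombat’s code, and the reporting endpoint is entirely hypothetical; the point is just that the program on the stick only needs to capture who ran it and phone home so training can be delivered:

```python
import getpass
import json
import urllib.request

# Hypothetical reporting endpoint -- Wombat's actual service API isn't described here.
REPORT_URL = "https://training.example.com/usb-exercise/report"

def report_insertion():
    """Capture the logged-in username and report it, so the training service
    knows who plugged in the stick and can deliver the teachable-moment lesson."""
    payload = json.dumps({"username": getpass.getuser(),
                          "exercise": "usb-parking-lot"}).encode()
    request = urllib.request.Request(REPORT_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

if __name__ == "__main__":
    report_insertion()
```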

So there’s a “teachable moment”, training, and measurement. I think that’s a really cool combination, and want to encourage folks to both check out what Wombat’s USBGuru does, and compare it to other training programs they may have in place.