Author: Chandler

More Bad News for SSL

I haven’t read the paper yet, but Schneier has a post up pointing to “Side-Channel Leaks in Web Applications: a Reality Today, a Challenge Tomorrow,” by Shuo Chen, Rui Wang, XiaoFeng Wang, and Kehuan Zhang, which describes a new side-channel attack that allows an eavesdropper to infer information about the contents of an SSL connection in certain contexts, some of them fairly common.  For example (from Schneier’s link to Ed Felten’s commentary on the paper):

The new paper shows that this inference-from-size problem gets much, much worse when pages are using the now-standard AJAX programming methods, in which a web “page” is really a computer program that makes frequent requests to the server for information. With more requests to the server, there are many more opportunities for an eavesdropper to make inferences about what you’re doing — to the point that common applications leak a great deal of private information.

Consider a search engine that autocompletes search queries: when you start to type a query, the search engine gives you a list of suggested queries that start with whatever characters you have typed so far. When you type the first letter of your search query, the search engine page will send that character to the server, and the server will send back a list of suggested completions. Unfortunately, the size of that suggested completion list will depend on which character you typed, so an eavesdropper can use the size of the encrypted response to deduce which letter you typed. When you type the second letter of your query, another request will go to the server, and another encrypted reply will come back, which will again have a distinctive size, allowing the eavesdropper (who already knows the first character you typed) to deduce the second character; and so on. In the end the eavesdropper will know exactly which search query you typed. This attack worked against the Google, Yahoo, and Microsoft Bing search engines.

SSL has been touted as a Web security panacea for years, but the harsh reality is that its weaknesses are growing rapidly, made worse by the changing ways that HTTP is used.  When the expected SSL-protected transaction was a page request followed by the return of a full page of content, it was extremely difficult to infer the contents of the connection.  Now that requests and responses are relatively atomic, even down to the individual character, this is no longer the case.
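
To make the autocomplete example concrete, here is a toy simulation of the inference step.  The suggestion lists and payload sizes are invented, and a real attacker would build the size table by replaying keystrokes against the live service and measuring the (possibly padded) ciphertext lengths on the wire; the point is only to show how distinctive response sizes turn into recovered keystrokes.

```python
# Toy model: the attacker first probes the service to learn how big the
# autocomplete response is for each possible first keystroke, then inverts
# that table when observing the victim's encrypted traffic.

# Hypothetical suggestion lists the server would return per first keystroke.
SUGGESTIONS = {
    "a": ["amazon", "aol mail", "apple"],
    "b": ["bank of america", "best buy"],
    "c": ["craigslist", "cnn", "costco", "chase"],
}

def response_size(suggestions):
    """Stand-in for the observable size of the encrypted reply."""
    return len(",".join(suggestions))

# Attacker phase: probe the service and record payload size per keystroke.
size_to_key = {response_size(s): k for k, s in SUGGESTIONS.items()}

# Victim phase: the eavesdropper sees only the size of the encrypted response.
observed_size = response_size(SUGGESTIONS["b"])

# Inference: if the sizes are distinctive, the keystroke is recovered exactly.
print("Victim probably typed:", size_to_key.get(observed_size, "ambiguous"))
```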

And as old assumptions fail, so does security built on top of those assumptions.

Smoke, Fire and SSL

Where there’s smoke, there’s fire, goes the adage.

And in the case of an allegedly-theoretical exploit outlined in a new paper by Chris Soghoian and Sid Stamm (the compelled certificate creation attack), the presence of a product whose only use is to exploit it probably indicates that there’s more going on than one would like there, too.  As Wired’s Threat Level notes:

Normally when a user visits a secure website, such as Bank of America, Gmail, PayPal or eBay, the browser examines the website’s certificate to verify its authenticity.

At a recent wiretapping convention, however, security researcher Chris Soghoian discovered that a small company was marketing internet spying boxes to the feds. The boxes were designed to intercept those communications — without breaking the encryption — by using forged security certificates, instead of the real ones that websites use to verify secure connections. To use the appliance, the government would need to acquire a forged certificate from any one of more than 100 trusted Certificate Authorities.

The attack is a classic man-in-the-middle attack, where Alice thinks she is talking directly to Bob, but instead Mallory found a way to get in the middle and pass the messages back and forth without Alice or Bob knowing she was there.

The existence of a marketed product indicates the vulnerability is likely being exploited by more than just information-hungry governments, according to leading encryption expert Matt Blaze, a computer science professor at University of Pennsylvania.

As the paper explains,
While not known to most users, the CAs are one of the weakest links in the SSL public key infrastructure, a problem amplified by the fact that the major web browsers trust hundreds of different firms to issue certificates. Each of these firms can be compelled by their national government to issue a certificate for any particular website that all web browsers will trust without warning. Thus, users around the world are put in a position where their browser entrusts their private data, indirectly, to a large number of governments (both foreign and domestic) whom these individuals would never ordinarily trust.
The assumption people are taught about SSL is that if the certificate is valid (meaning it has not expired and matches the hostname of the site they’re visiting) and trusted (meaning either they or a Certificate Authority vouches for its authenticity), then the connection is secure from eavesdropping.
The problem is that the decision of whom to trust is being made by people who may or may not share the same security interests as the browser user.  For example, if the United States’ own National Security Agency (NSA) were to compel a CA such as GoDaddy or Verisign, or any of 264 CAs, to issue such a certificate via a National Security Letter, the CA would not only be compelled to comply, but would also be enjoined from ever talking about or acknowledging that it had been required to do so.
The NSA would insist (and might even believe) that they were doing it in the name of “keeping us safe,” but the people they’re targeting would probably feel differently.  Cybercriminals would likewise argue that they’re keeping their income streams safe.  Either way, outsourcing trust has come back to hurt the user.
Thus, the current system of Trustworthy Certificate Authorities has now scaled to the point of failure.  The entities we’ve engaged to determine who is or isn’t trustworthy have increasingly been shown to have limited ability to deliver in that role.  The only way to be sure that a connection is trustworthy is to keep the trust relationship aligned with the people you actually know and do trust, i.e. run your own private CA and manage your own certificates.  That’s obviously beyond the capabilities of all but a very tiny fraction of the SSL-using world, and of limited usefulness beyond ensuring relatively secure transport among a closed set of parties.
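
For that closed-set case, a related way to narrow trust to certificates you already know is pinning: the client compares the certificate it is handed against a fingerprint recorded out-of-band, rather than deferring entirely to the CA list.  A minimal sketch of the idea, with a placeholder hostname and fingerprint (production code would typically pin the public key and plan for rotation):

```python
import hashlib
import ssl

# Placeholders: a host you control and the SHA-256 fingerprint of its
# certificate, recorded out-of-band ahead of time.
PINNED_HOST = "internal.example.com"
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def fetch_fingerprint(host, port=443):
    """Fetch the server's certificate and return its SHA-256 fingerprint (hex)."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

def connection_is_pinned(host):
    """True only if the presented certificate matches the pinned fingerprint.

    A compelled or otherwise mis-issued certificate from *any* browser-trusted
    CA would still fail this check, because it is not the certificate we pinned.
    """
    return fetch_fingerprint(host) == PINNED_SHA256

if __name__ == "__main__":
    print("pinned certificate presented:", connection_is_pinned(PINNED_HOST))
```
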
In the long run, something else is needed.  I don’t know what, but hopefully Soghoian and Stamm’s paper will continue to help drive the discussion.
(Updated to clarify original links)

Well that didn't take long…

The Guardian has reported the first official incident of misuse of full-body scanner information:

The police have issued a warning for harassment against an airport worker after he allegedly took a photo of a female colleague as she went through a full-body scanner at Heathrow airport.

The incident, which occurred at terminal 5 on 10 March, is believed to be the first time an airport worker has been formally disciplined for misusing the scanners.

Here was the chance to set the standard for how abuse will be handled, and all he got was a warning.  Adjust privacy expectations accordingly.  And it doesn’t sound like the co-worker is taking it as well as Shah Rukh did.

Asking the right questions

Schneier points me to lightbluetouchpaper, who note a paper analyzing the potential strength of name-based account security questions, even ignoring attacks in which the adversary researches the victim, and the findings are not encouraging:

Analysing our data for security, though, shows that essentially all human-generated names provide poor resistance to guessing. For an attacker looking to make three guesses per personal knowledge question (for example, because this triggers an account lock-down), none of the name distributions we looked at gave more than 8 bits of effective security except for full names. That is, about at least 1 in 256 guesses would be successful, and 1 in 84 accounts compromised. For an attacker who can make more than 3 guesses and wants to break into 50% of available accounts, no distributions gave more than about 12 bits of effective security. The actual values vary in some interesting ways-South Korean names are much easier to guess than American ones, female first names are harder than male ones, pet names are slightly harder than human names, and names are getting harder to guess over time.

Two important take-aways here.

  1. This is the ceiling on the potential strength of a name-based authentication system, even ignoring other, more vulnerable branches of the attack tree.  No matter how you do it, it’s just not going to be secure; the arithmetic sketch after this list makes that ceiling concrete.
  2. It’s good to see people questioning the status quo and asking the right questions in security research.
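
To make the first take-away concrete, here is the arithmetic behind the quoted figures, under the simplified assumption that an attacker with k guesses against a distribution offering b effective bits succeeds with probability of roughly k times 2^-b:

```python
def compromise_fraction(effective_bits, guesses):
    """Approximate fraction of accounts compromised with a fixed guess budget."""
    return min(1.0, guesses * 2 ** -effective_bits)

# ~8 effective bits (the best non-full-name distributions) against a 3-guess lockout:
p = compromise_fraction(8, 3)
print(f"3 guesses at 8 bits: {p:.4f} (about 1 in {1 / p:.0f} accounts)")  # ~1 in 85, in line with the quoted ~1 in 84

# ~12 effective bits against an attacker with no guess limit: reaching about half
# the accounts takes on the order of 0.5 * 2**12 = 2048 guesses per question.
print("guesses needed for ~50% coverage at 12 bits:", int(0.5 * 2 ** 12))
```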

Next, we need better awareness on the part of designers and developers that name-based authentication is Doing It Wrong.

Human Error and Incremental Risk

As something of a follow-up to my last post on Aviation Safety, I heard this story about Toyota’s now very public quality concerns on NPR while driving my not-Prius to work last week.

Driving a Toyota may seem like a pretty risky idea these days. For weeks now, we’ve been hearing scary stories about sudden acceleration, failing brakes and car recalls. But as NPR’s Jon Hamilton reports, assessing the risk of driving a Toyota may have more to do with emotion than statistics.

Emotion trumping statistics in a news article?  Say it isn’t so!

Mr. LEONARD EVANS (Physicist, author, Traffic Safety): The whole history of U.S. traffic safety has been one focusing on the vehicle, one of the least important factors that affects traffic safety.

HAMILTON: Studies show that the vehicle itself is almost never the sole cause of the accident. Drivers, on the other hand, are wholly to blame most of the time. A look at data on Toyotas from the National Highway Traffic Safety Administration confirms this pattern.

Evans says his review of the data show that in the decade ending in 2008, about 22,000 people were killed in vehicles made by Toyota or Lexus.

Mr. EVANS: All these people were killed because of factors that had absolutely nothing to do with any vehicle defect.

HAMILTON: Evans says during that same period, it’s possible, though not yet certain, that accelerator problems in Toyotas played a role in another 19 deaths, or about two each year. Evans says people should take comfort in the fact that even if an accelerator does stick, drivers should usually be able to prevent a crash.

(bold mine)

From 1998 to 2008, about 2,200 people per year (out of about 35,000 total vehicle deaths per year) died in Toyotas because of some sort of non-engineering failure.  During that same period, just under two people were killed per year due to the possible engineering failure.  So all this ado is about, at most, a 0.09% increase in the Toyota-specific death rate and a 0.005% increase in the overall traffic death rate.
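
A quick back-of-the-envelope check of those percentages, using the figures from the NPR piece (22,000 deaths in Toyotas and at most 19 possibly accelerator-related deaths over the decade, against a rough 35,000 total U.S. traffic deaths per year):

```python
YEARS = 10
toyota_deaths = 22_000                  # deaths in Toyota/Lexus vehicles over the decade, per Evans
accelerator_deaths = 19                 # deaths possibly tied to sticking accelerators in the same decade
total_traffic_deaths_per_year = 35_000  # rough annual U.S. traffic deaths

toyota_per_year = toyota_deaths / YEARS            # ~2,200
accelerator_per_year = accelerator_deaths / YEARS  # ~1.9

print(f"Increase in Toyota-specific death rate: {accelerator_per_year / toyota_per_year:.2%}")               # ~0.09%
print(f"Increase in overall traffic death rate: {accelerator_per_year / total_traffic_deaths_per_year:.3%}")  # ~0.005%
```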

So why is the response so excessive to the actual scope of the problem?  Because the risk is being imposed on the driver by the manufacturer.

Mr. ROPEIK [risk communication consultant]: Imposed risk always feels much worse than the same risk if you chose to do it yourself. Like if you get into one of these Toyotas and they work fine, but you drive 90 miles an hour after taking three drinks. That won’t feel as scary, even though it’s much riskier, because you’re choosing to do it yourself.

And, lest we forget, even in the case where the accelerator did stick there was still a certain degree of human error:

Mr. EVANS: The weakest brakes are stronger than the strongest engine. And the normal instinctive reaction when you’re in trouble ought to be to apply the brakes.

My frustration is that when I compare the reality of the data with most of the reporting on the subject, I think of Hudson’s NSFW “Game Over” rant. (Corrected per the comments.  Thanks, 3 of 5!)

After all, given that you’re more likely to die in your home (41%) than in your car (35%), you’re still statistically safer taking to the road than sitting home cowering in fear of your Prius.

Human Error

In his ongoing role of “person who finds things that I will find interesting,” Adam recently sent me a link to a paper titled “THE HUMAN FACTORS ANALYSIS AND CLASSIFICATION SYSTEM–HFACS,” which discusses the role of people in aviation accidents.  From the abstract:

Human error has been implicated in 70 to 80% of all civil and military aviation accidents. Yet, most accident reporting systems are not designed around any theoretical framework of human error. As a result, most accident databases are not conducive to a traditional human error analysis, making the identification of intervention strategies onerous. What is required is a general human error framework around which new investigative methods can be designed and existing accident databases restructured. Indeed, a comprehensive human factors analysis and classification system (HFACS) has recently been developed to meet those needs.

Consider that pilots, whether private, commercial, or military, are one of the more stringently trained and regulated groups of people on the planet.  This is due, at least in part, to the history of aviation.  As the report notes,

In the early years of aviation, it could reasonably be said that, more often than not, the aircraft killed the pilot. That is, the aircraft were intrinsically unforgiving and, relative to their modern counterparts, mechanically unsafe. However, the modern era of aviation has witnessed an ironic reversal of sorts. It now appears to some that the aircrew themselves are more deadly than the aircraft they fly (Mason, 1993; cited in Murray, 1997). In fact, estimates in the literature indicate that between 70 and 80 percent of aviation accidents can be attributed, at least in part, to human error (Shappell & Wiegmann, 1996).

Once upon a time, operating an airplane was so dangerous that only highly-skilled experts could do it, and even then the equipment would get out of their control and crash.  Later (yet still almost twenty years ago), the equipment improved to the point that equipment failure no longer overshadowed operator error, but planes still got out of control and crashed.

Other than the fact that pilots are almost universally still highly-skilled and/or trained operators, this doesn’t sound all that different from the evolution of computing.

Flight has obviously never had its adoption rate explode the way PCs did in the Age of the Web, but there is still a strong parallel between aircraft accidents and Information Security failures.  This assertion becomes even more true once the paper gets into James Reason’s “Swiss Cheese” model of understanding root causes of aircraft accidents.

Reason identifies four factors that interact with each other to increase accident rates, which I’ll paraphrase as:

  1. Unsafe Acts: the cause of the active failure (i.e. the crash), such as a poor decision, a failure to watch the instruments, or a failure to recognize that an unsafe situation was forming or occurring
  2. Preconditions for Unsafe Acts: situations that increase the risk of an accident, such as miscommunication among aircrew members or with others outside the aircraft, such as air traffic control
  3. Unsafe Supervision: failures of management or leadership, for example pairing inexperienced pilots together in less-than-optimal conditions
  4. Organizational Influences: usually business-level decisions, such as reducing training hours to reduce costs

How familiar does this sound?  If you’ve ever read an IT Audit report, this should seem painfully familiar, even if only by analogy.  The paper provides a strong taxonomy within each area, and I could easily drill down at least one more level into each one.  Read the paper to learn more and become a better professional problem solver, security-related or otherwise.

For example, consider a real-world case I dealt with recently.  It’s an easy example that ties the four levels together more neatly than many, so consider it an “Example-Size Problem” and extend it as you see appropriate.

The incident was the loss of sensitive business information, which I personally believe hurt the company in a negotiation:

  1. Unsafe Act:  The VP left his unencrypted laptop unattended while at a meeting — this was the Active Failure/Unsafe Act that led to the Mishap
  2. Preconditions:  The VP assumed that others were watching his laptop, but did not explicitly confirm this fact
  3. Unsafe Supervision:  Despite knowing that Executives are high-risk users with regards to sensitive information on their laptops, the IT Executive Support Team had recommended against deploying Full-Disk Encryption on executives’ laptops because they feared being held accountable if an executive lost information due to an encryption system failure
  4. Organizational Influences:  While a Laptop Encryption Policy existed and specified that the VP’s laptop should have been encrypted for multiple reasons, the policy was widely ignored, there was no cultural pressure to ensure that mobile information was protected, and thus compliance was unacceptably low.  No pressure to comply was generated by Executive management because the cost of doing so was considered prohibitive.

In this case, the damage (opportunity cost) of lost revenue due to that single lost laptop was many multiples of the complete cost of deploying a Full-Disk Encryption system.  Unfortunately, in the absence of a comprehensive analysis of the series of failures leading up to the unsafe act, the real root cause of an incident may be ignored or mis-assigned, leading to either an incomplete or unsustainable remediation course.
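
One way to keep that chain of failures from being lost in the write-up is to record each finding against its HFACS level explicitly.  A minimal sketch of the idea, using the lost-laptop example above (the level names follow the paper; the code structure is mine, not the paper’s):

```python
from dataclasses import dataclass
from enum import Enum

class HfacsLevel(Enum):
    UNSAFE_ACT = 1
    PRECONDITION = 2
    UNSAFE_SUPERVISION = 3
    ORGANIZATIONAL_INFLUENCE = 4

@dataclass
class Finding:
    level: HfacsLevel
    description: str

# The lost-laptop incident from the example above, one finding per level.
findings = [
    Finding(HfacsLevel.UNSAFE_ACT, "VP left unencrypted laptop unattended at a meeting"),
    Finding(HfacsLevel.PRECONDITION, "Assumed colleagues were watching the laptop; never confirmed it"),
    Finding(HfacsLevel.UNSAFE_SUPERVISION, "IT support recommended against full-disk encryption for executives"),
    Finding(HfacsLevel.ORGANIZATIONAL_INFLUENCE, "Encryption policy existed but was unenforced; no pressure to comply"),
]

# Root-cause review should read from the organizational level down, not stop at the unsafe act.
for finding in sorted(findings, key=lambda f: f.level.value, reverse=True):
    print(f"{finding.level.name}: {finding.description}")
```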

When incidents occur, it’s rare to see a true and honest assessment of not just what went wrong, but why.  Too often, in fact, the culture seems to put it down to “nobody could have predicted it.”  Reject these assessments.  To improve an organization, we must refuse to accept these explanations.  Instead, find the root cause, all the way up to the Organizational Influences, and then Fix It.

V-22 Osprey Metrics

Metrics, as Angry Bear notes, seem to be yet another way in which the V-22 Osprey program has hidden its failure to deliver on its promises:

Generally, mission capability runs 20% higher than availability, but availability is hidden on new stuff, while shouted about on older stuff, because there would be severe embarrassment if you considered that 40% of the brand new V-22 were not available (okay 60% available sounds much better, buy a car which is broke 40% of the time, how good does the warranty service need to be?).
The Navy and GAO are not sure which metrics to use. One of the reasons that US quality fell in the 70’s was avoiding measuring the hard things [that] gets you in trouble; a weakness of the DoD acquisition process. But the spending is more important than meaningful results.
Missing mission capable suggests that basic reliability and maintenance performance are not part of V-22 repertoire. Quality may not have been affordable during the long development cycle, and the savings are now costing in added support and lost use of the V-22

And as one commenter notes, the problem is even more fundamental than poor quality–the Osprey “cannot do a lot of what it is replacing:  HH 53 and HH 46.”  I would pretty much guarantee that no one is measuring the number of missions that are not performed by the Osprey but which could have been by the helicopters it replaced.

Metrics are powerful tools, but they can be as much a force for evil as a force for good.  Choosing the easy-to-gather metrics or the metrics that make the thing being measured look better may play well in Slide-Deck-Land, but it doesn’t change the fact that there is still a reality lurking underneath there which isn’t going away just because someone refuses to measure it.

What people choose to measure can tell you a lot about both their competence and their motivations.  Ignore it at your peril.

How not to do security, Drone Video Edition

This is probably considered to be “old news” by many, but I’m high-latency in my news at the moment.

Much was made of the fact that the US Military’s enemies are now eavesdropping on the video feeds from US Drones on the battlefield using cheaply available commercial technology.  But it’s OK, because according to the Military, there was a Good Reason why it wasn’t encrypted:

The reason the U.S. military didn’t encrypt video streams from drone aircraft flying over war zones is that soldiers without security clearances needed access to the video, and if it were encrypted, anyone using it would require security clearance, a military security expert says.

I can only hope that this is not really what passes for logic among the security decision-makers in the U.S. Military and their contractors.  There is additional information in the article which tells us that they at least performed a risk assessment, but the assessment seems to have been flawed.

It’s always easy to second-guess decisions in hindsight, but if the rationale given is even minimally truthful, then what they have essentially said is, “The video feed was not encrypted because the policies that would then have applied would have been too onerous.”

That’s not to say that such a rationale is never sound; after all, the processes necessary to comply are part of the cost of a countermeasure.  But in this case, the policy was clearly flawed.  Who wants to bet that the same un-cleared soldiers never have access to encrypted radio links, or never use military Web sites protected by SSL?

Access to (shared or symmetric) encryption keys probably does (and probably should) require a clearance, but claiming that the requirement would extend to anyone merely using the encrypted link, as a rationalization for not encrypting at all, strikes me as a bit absurd.

Similarly, this justification:

…the video information loses its value so rapidly that the military may have decided it wasn’t worth the effort to encrypt it. “Even if it were a feed off a drone with attack capabilities, and even if the bad guys saw that the drone was flying over where they were at that moment, they wouldn’t have the chance to respond before the missile was fired,”

also fails to pass muster.

A key element of insurgency and counter-insurgency is the hide-and-seek aspect of it.  The initial value of drones was their ability to monitor large areas in real time and loiter on-scene for much longer (and more cheaply) than conventional aircraft.   As a result, drones are a huge force multiplier for the US and its allies in counter-insurgency operations.  If the insurgents are able to determine where the US forces are looking for them, that is extremely valuable intelligence to the insurgents, since they can then identify which logistical routes or encampments are potentially compromised and re-route forces accordingly.

Using drones as a delivery platform for munitions, on the other hand, is relatively rare and was not, in fact, even in-scope for the drones when initially deployed.

As a general rule, justifications for risk acceptance based on exceptional cases should be taken as evidence that the  decision was bad.  This is not an exception to that rule.

Airplane Terrorism, Data-Driven Edition

I’m just off a flight from London back to the United States and I’m hesitant to attempt to think while jet-lagged.  I’ll have some more thoughts and first-hand observations once my head clears, however.

In the meantime, Nate Silver has broken down the risk of terror attacks on airplanes so I don’t have to.  Summarizing his points, the odds of a terror attack can be variously expressed as:

  • one terrorist incident per 16,553,385 departures
  • one terrorist incident per 11,569,297,667 miles flown. This distance is equivalent to 1,459,664 trips around the diameter of the Earth, 24,218 round trips to the Moon, or two round trips to Neptune.
  • one incident per 3,105 years airborne
  • the odds of being on a given departure that is the subject of a terrorist incident have been 1 in 10,408,947 over the past decade. By contrast, the odds of being struck by lightning in a given year are about 1 in 500,000
  • One point that Nate mentions up front, but doesn’t elaborate on, is that these are the odds of being on a plane that’s attacked.   A third of those attacks failed and no one but the terrorist was injured (Richard Reid and the latest Christmas Day attack).

That’s right, you are twenty times more likely to be struck by lightning than to be on a plane that’s the target of an attack, and almost thirty times more likely to be struck by lightning than to be killed or injured in a terrorist attack.

Pick your preferred typical comparison, but you’ll find that they’re all several orders of magnitude more likely than airline terrorism.
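
A rough check of those two ratios, taking Nate’s per-departure odds, the 1-in-500,000 annual lightning figure, and the observation that roughly a third of the incidents harmed no one but the attacker (the comparison mixes per-departure and per-year odds, as the original does, so treat it as order-of-magnitude only):

```python
odds_attacked_departure = 10_408_947  # 1 in N departures involved in a terrorist incident (past decade)
odds_lightning_per_year = 500_000     # 1 in N chance of being struck by lightning in a given year
harmful_fraction = 2 / 3              # roughly two-thirds of those incidents injured or killed someone

# How much more likely is a lightning strike than being on an attacked flight?
print("lightning vs. attacked flight:",
      round(odds_attacked_departure / odds_lightning_per_year, 1))                     # ~20x
# ...and than being on a flight where the attack actually harmed anyone?
print("lightning vs. flight where an attack succeeded:",
      round(odds_attacked_departure / harmful_fraction / odds_lightning_per_year, 1))  # ~30x
```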
