2013 PET Award for Outstanding Research in Privacy Enhancing Technologies

You are invited to submit nominations to the 2013 PET Award.

The PET Award is presented annually to researchers who have made an outstanding contribution to the theory, design, implementation, or deployment of privacy enhancing technology. It is awarded at the annual Privacy Enhancing Technologies Symposium (PETS).

The PET Award carries a prize of 3000 USD thanks to the generous support of Microsoft. The crystal prize itself is offered by the Office of the Information and Privacy Commissioner of Ontario, Canada.

Any paper by any author written in the area of privacy enhancing technologies is eligible for nomination. However, the paper must have appeared in a refereed journal, conference, or workshop with proceedings published in the period from April 16, 2011 until March 31, 2013.

The complete Award rules including eligibility requirements can be found at http://petsymposium.org/award/rules.php.

Anyone can nominate a paper by sending an email message containing the following to award-chairs13@petsymposium.org:

  • Paper title
  • Author(s)
  • Author(s) contact information
  • Publication venue and full reference
  • Link to an available online version of the paper
  • A nomination statement of no more than 500 words

How to Ask Good Questions at RSA

So this week is RSA, and I wanted to offer up some advice on how to engage. I’ve already posted my “BlackHat Best Practices/Survival Kit.”

First, if you want to ask great questions, pay attention. There are things more annoying than a question that was already answered while the questioner was tweeting, but you still don’t want to be that person.

Second, if you want to ask a good question, ask a question that you think others will want to hear answered. If your question is narrow, go up to the speaker afterwards.

Now, there are some generic best practice questions that I love to ask, and want to encourage you to ask.

  • You claimed “X”, but didn’t explain why. Could you briefly cover your methodology and data for that claim?
  • You said “X” is a best practice. Can you cover what practices you would cut to ensure there are resources available to do “X”?
  • You said “if you get breached, you’ll go out of business.” Last year, 2,600 companies announced data breaches. How many of them are out of business?
  • You said that “X” dramatically increased your organization’s security. Since we live in an era of ‘assume breach’, can I assume that your organization is now committed to publishing details of any breaches that happen despite X?
I’m sure there are other good questions. Please share your favorites, and I’ll try for a new post tomorrow.

Is there "Room for Debate?" in Breach Disclosure?

The New York Times has a “Room for Debate” on “Should Companies Tell Us When They Get Hacked?”

It currently has 4 entries, 3 of which are dramatically in favor of more disclosure. I’m personally fond of Lee Tien’s “We Need Better Notification Laws.”

My personal preference is of course (ahem) fascinating to you, since you’re reading this blog. More seriously, though, it’s not what I expect anyone else to find interesting.

What’s interesting to me is that the only person they could find to say no is Alexander Tabb, whose bio states that he “is a partner at TABB Group, a capital markets research and consulting firm.” I don’t want to insult Mr. Tabb, and so found a fuller bio here, which includes “Mr. Tabb is an expert in the field of international affairs, with specialization in the developing world, crisis management, international security, supply chain security and travel safety and security. He joined Tabb Group in October 2004. [From 2001 to 2004] Mr. Tabb served as an Associate Managing Director of Security Services Group of Kroll Inc., the international risk consulting company.”

I find it fascinating that someone of his background is the naysayer. Perhaps the Times was unable to find anyone practicing in information security to claim that companies should not tell us when they’ve been hacked?

HIPAA's New Breach Rules

Law firm Proskauer has published a client alert that “HHS Issues HIPAA/HITECH Omnibus Final Rule Ushering in Significant Changes to Existing Regulations.” Most interesting to me was the breach notice section:

Section 13402 of the HITECH Act requires covered entities to provide notification to affected individuals and to the Secretary of HHS following the discovery of a breach of unsecured protected health information. HITECH requires the Secretary to post on an HHS Web site a list of covered entities that experience breaches of unsecured protected health information involving more than 500 individuals. The Omnibus Rule substantially alters the definition of breach. Under the August 24, 2009 interim final breach notification rule, breach was defined as the “acquisition, access, use, or disclosure of protected health information in a manner not permitted under [the Privacy Rule] which compromises the security or privacy of the protected health information.” The phrase “compromises the security or privacy of [PHI]” was defined as “pos[ing] a significant risk of financial, reputational, or other harm to the individual.”

According to HHS, “some persons may have interpreted the risk of harm standard in the interim final rule as setting a much higher threshold for breach notification than we intended to set. As a result we have clarified our position that breach notification is necessary in all situations except those in which the covered entity or business associate, as applicable, demonstrates that there is a low probability that the protected health information has been compromised. . . .”

The client alert goes on to lay out the four risk factors that must be considered.

I’m glad to see this. The prior approach has been a full employment act for lawyers, and a way for organizations to weasel out of their ethical and legal obligations. We are likely to see more regulatory updates of this form, despite intensive lobbying.

If organizations want a different risk threshold, it’s up to them to propose one that’s credible to regulators and the public.

New School Blog Attacked with 0day

We were hacked again.

The vuln used was 0day, and has now been patched, thanks to David Mortman and Matt Johansen, and the theme has also been updated, thanks to Rodrigo Galindez. Since we believe in practicing the transparency we preach, I wanted to discuss what happened and some options we considered.

Let me dispense with the markety-speak.

Alun Jones found an XSS attack, and let us know about it discreetly. It’s tempting to throw around words like 0day because it makes us seem less lame. Actually, it’s tempting because it makes me seem less lame.
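
We’re not publishing the vulnerable code, but for readers who haven’t run into XSS, here’s a minimal sketch of the bug class. It’s in Python for readability (this blog runs WordPress/PHP; the pattern is the same everywhere), and the function names are ours, not anything in our theme:

```python
import html

def render_comment_unsafe(comment: str) -> str:
    # Vulnerable pattern: untrusted input flows into markup unescaped,
    # so a comment like "<script>alert(1)</script>" runs in the reader's browser.
    return "<p>" + comment + "</p>"

def render_comment_safe(comment: str) -> str:
    # The fix: escape HTML metacharacters (&, <, >, quotes) before output.
    return "<p>" + html.escape(comment, quote=True) + "</p>"

if __name__ == "__main__":
    payload = "<script>alert('xss')</script>"
    print(render_comment_unsafe(payload))  # a browser would execute this
    print(render_comment_safe(payload))    # displays as inert text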

As I’ve said before, we run this blog on the cheap as a way to share ideas. We don’t have any income here, and that means that we use free resources like WordPress and Modernist. We could take money out of our beer budget or time away from our families to run security scans, but haven’t.

This is much like many organizations: they have limited infosec budgets. There’s always more you could be doing, and in hindsight probably should have been doing, but identifying it in advance is tough because we don’t know how compromises tend to happen.

I gave serious consideration to announcing the vuln before we fixed it, to enable people to make risk management decisions. I decided against that on two grounds. The first and more important was that we’d be exposing the other folks who use the theme to risk that they might not be set up to respond to. The second was that in our case, the impact seems relatively constrained. We work hard to ensure you don’t need to run code to read our blog, and I’d be shocked to discover that anyone making security choices with things like NoScript or Trusted Zones has this blog in such a whitelist.

If you’ve made the decision to let this blog run code, I recommend you fix that, because we are not investing in securing our site in line with that expectation. If you’re a security pro using Windows, I urge you to use EMET, and in any event to limit where your browser will run code to a carefully selected whitelist.
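
For site operators in a similar position, one cheap mitigation is a Content-Security-Policy header telling browsers the site serves no script at all. Here’s a minimal sketch, assuming a hypothetical Flask app (we run WordPress, so this isn’t our actual setup):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_csp_header(response):
    # Declare that this site serves no script; CSP-aware browsers
    # will then refuse to execute anything injected via XSS.
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src 'none'"
    )
    return response
```

The design point: a blog that doesn’t need readers to run code can say so explicitly, so an injected payload fails even when the escaping bug slips through.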

Anyway, back to the vuln. We’re a little disappointed not to be targeted by this Java 0day. We’d feel much better if this were “serious” 0day. But you know what? This blog could be pwned and used to distribute that Java stuff. And XSS is serious, even if it is common.

One option we gave serious consideration to was “offensive security.” We have chosen not to hack back, but if we did, we do not believe we owe a duty of confidentiality to other “victims” of this hacking spree. (We don’t know how many victims Alun has, but we bet it’s a lot more than fit on a postcard.) We would believe that there’s a reasonable public interest served by naming those victims, so that their shareholders can assess whether the breaches are material and should have been disclosed.

Guns, Homicides and Data

I came across a fascinating post at Jon Udell’s blog, “Homicide rates in context,” which starts out with this graph of 2007 data:

[Figure: a map showing gun ownership rates and homicide rates, which look very different.]

Jon’s post says more than I care to on this subject right now, and points out questions worth asking.

As I said in my post on “Thoughts on the Tragedies of December 14th,” “those who say that easy availability of guns drives murder rates must do better than simply cherry picking data.”

I’m not sure I believe that the “more guns, less crime” claim made by A.W.R. Hawkins is as causative as it sounds, but the map presents a real challenge to simplistic responses to tragic gun violence.

Privacy, Facebook and Fatigue

Facebook’s new Graph search is a fascinating product, and I want to use it. (In fact, I wanted to use it way back when I wrote about “Single Serving Friend” in 2005.)

Facebook’s Graph Search will incent Facebook users to “dress” themselves in better meta-data, so as to be properly represented in all those new structured results. People will start to update their profiles with more dates, photo tags, relationship statuses, and, and, and…you get the picture. No one wants to be left out of a consideration set, after all. (“Facebook is no longer flat,” John Battelle)

But privacy rears its predictable head, not just in the advocacy world:

Independent studies suggest that Facebook users are becoming more careful about how much they reveal online, especially since educators and employers typically scour Facebook profiles.

A Northwestern University survey of 500 young adults in the summer of 2012 found that the majority avoided posting status updates because they were concerned about who would see them. The study also found that many had deleted or blocked contacts from seeing their profiles and nearly two-thirds had untagged themselves from a photo, post or check-in. (“Search Option From Facebook Is a Privacy Test,” NYTimes)

Perhaps a small set of people will, as Battelle suggests, slow down their use of ironic, silly, or outraged likes, but the fundamental problem is that such uses are situated in a context, and when those contexts overlap, their meanings are harder to tease out with algorithms. People engage with systems like Yelp or LinkedIn in a much more constrained way, and in that constraint, make a much simpler set of meanings. But even in those simple meanings, ‘the street finds its own uses for things.’ For example, I get the idea that this 5-star review may be about something more than the design on a shirt.

There’s another study on “Facebook Fatigue”:

Bored or annoyed by Facebook? You’re not alone. A majority of people surveyed by the Pew Internet and American Life Project said they had taken sabbaticals from the social network at some point, to escape the drama, or the tedium. (“Study: Facebook fatigue — it’s real,” Jennifer Van Grove, CNet)

When our nuanced and evolved social systems are overlaid with technology, it’s intensely challenging to get the balance of technology and social right. I think the Pew research shows that Facebook has its work cut out for it.

HHS & Breach Disclosure

There’s good analysis at “HHS breach investigations badly backlogged, leaving us in the dark.”

To say that I am frequently frustrated by HHS’s “breach tool” would be an understatement. Their reporting form and coding often make it impossible to know – simply by looking at their entries – what type of breach occurred. Consider this description from one of their entries:

“Theft, Unauthorized Access/Disclosure”,”Laptop, Computer, Network Server, Email”

So what happened there? What was stolen? Everything? And what types of patient information were involved?

Or how about this description:

“Unauthorized Access/Disclosure,Paper”

What happened there? Did a mailing expose SSN in the mailing labels or did an employee obtain and share patients’ information with others for a tax refund fraud scheme? Your guess is as good as mine. And HHS’s breach tool does not include any data type fields that might let us know whether patients’ SSN, Medicare numbers, diagnoses, or other information were involved.

What can I say but “I agree”?

Disclosures should talk about the incident and the data. Organizations are paying the PR cost; let’s start learning.

The incident should be specified using either the Broad Street taxonomy (covered in the MS Security Intel Report here) or VERIS. It would be helpful to include details like the social engineering mail used (so we can study tactics) and detection rates for the malware, from something like VirusTotal.
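
For the detection-rate piece, here’s a hedged sketch against VirusTotal’s public v2 API (endpoint and field names are from their documentation; the API key is a placeholder you’d replace with your own):

```python
import requests

VT_REPORT_URL = "https://www.virustotal.com/vtapi/v2/file/report"
API_KEY = "YOUR_API_KEY"  # placeholder; register at virustotal.com for a real key

def detection_rate(file_hash: str) -> float:
    # Fetch the existing scan report for a sample (by MD5, SHA-1, or SHA-256).
    resp = requests.get(VT_REPORT_URL,
                        params={"apikey": API_KEY, "resource": file_hash})
    resp.raise_for_status()
    report = resp.json()
    if report.get("response_code") != 1:
        raise LookupError("VirusTotal has no report for this hash")
    # "positives" = engines that flagged the sample; "total" = engines run.
    return report["positives"] / report["total"]

# Example: detection_rate("44d88612fea8a8f36de82e1278abb02f")  # EICAR test file MD5
```

A disclosure that included a sample hash would let anyone reproduce this number, which is exactly the kind of learning breach notices currently prevent.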

For the data, it would be useful to explain (as Dissent says) what was taken. This isn’t simply a matter of general analysis, but can be used for consumer protection. For example, if you use knowledge-based backup authentication, then knowing that every taxpayer in South Carolina has had their address exposed tells you something about the efficacy of a question about the address you lived at in 2000. (I don’t know if that data was exposed in the SC tax breach; I’m just picking a recent example.)

Anyway, the post is worth reading, and the question of how we learn from breaches is worth discussing in depth.