The Carpenter Case

On Wednesday, the Supreme Court will consider whether the government must obtain a warrant before accessing the rich trove of data that cellphone providers collect about cellphone users’ movements. Among scholars and campaigners, there is broad agreement that the case could yield the most consequential privacy ruling in a generation. (“Supreme court cellphone case puts free speech – not just privacy – at risk.”)

Bruce Schneier has an article in the Washington Post, “How the Supreme Court could keep police from using your cellphone to spy on you,” as does Stephen Sachs:

The Supreme Court will hear arguments this Wednesday in Carpenter v. United States, a criminal case testing the scope of the Fourth Amendment’s right to privacy in the digital age. The government seeks to uphold Timothy Carpenter’s conviction and will rely, as did the lower court, on the court’s 1979 decision in Smith v. Maryland, a case I know well.

I argued and won Smith v. Maryland when I was Maryland’s attorney general. I believe it was correctly decided. But I also believe it has long since outlived its suitability as precedent. (“The Supreme Court’s privacy precedent is outdated.”)

I am pleased to have been able to help with an amicus brief in the case, and hope that the Supreme Court uses this opportunity to protect all of our privacy. Good luck to the litigants!

20 Year Software: Engineering and Updates

Twenty years ago, Windows 95 was the most common operating system. Yahoo and AltaVista were our gateways to the internet. Steve Jobs had just returned to Apple. Google didn’t exist yet. America Online had just launched its Instant Messenger. IPv6 was coming soon. That was the state of software in 1997, twenty years ago. We need to figure out what engineering software for a twenty-year lifespan looks like, and part of that will be actually doing such engineering, because theory will only take us to the limits of our imaginations.

Today, companies are selling devices that will last twenty years in the home, such as refrigerators and speakers, and making them with network connectivity. That network connectivity is both full internet connectivity, that is, Internet Protocol stacks, and also local network connectivity, such as Bluetooth and Zigbee.

We have very little idea how to make software that can survive as long as AOL IM did. (It’s going away in December, if you missed the story.)

Recently, there was a news cycle around Sonos updating their privacy policy. I don’t want to pick on Sonos, because I think what they’re experiencing is just a bit of the future, unevenly distributed. Responses such as this were common: “Hardware maker: Give up your privacy and let us record what you say in your home, or we’ll destroy your property”:

“The customer can choose to acknowledge the policy, or can accept that over time their product may cease to function,” the Sonos spokesperson said.

Or, as the Consumerist, part of Consumer Reports, puts it in “Sonos Holds Software Updates Hostage If You Don’t Sign New Privacy Agreement”:

Sonos hasn’t specified what functionality might no longer work in the future if customers don’t accept the new privacy policy.

There are some real challenges here, both technical and economic. Twenty years ago, we didn’t understand double-free or format string vulnerabilities. Twenty years of software updates aren’t going to be cheap. (I wrote about the economics in “Maintaining & Updating Software.”)
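Those bug classes have not gone away, either; they migrate into new languages. As an aside, here is a minimal Python sketch (all names here are made up for illustration) of the modern analog of a format string vulnerability: handing an attacker-controlled template to str.format, which lets the attacker walk object attributes:

```python
# Illustrative sketch: attacker-controlled format strings are still
# dangerous today, much as C's printf(user_input) was in 1997.
# All names here are hypothetical.
SECRET_KEY = "hunter2"

class ServerConfig:
    def __init__(self):
        self.secret = SECRET_KEY

cfg = ServerConfig()

# Safe: the program owns the template.
safe = "hello, {0}".format("world")

# Unsafe: the attacker owns the template and can reach into objects.
attacker_template = "{0.secret}"
leaked = attacker_template.format(cfg)
print(leaked)  # the "secret" escapes via the format machinery
```

The lesson for twenty-year software is that the list of bug classes we must defend against keeps growing after the code ships.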

My read is that Sonos tried to write a privacy policy that balanced their anticipated changes, including Alexa support, with a bit of future-proofing. I think that the reason they haven’t specified what might not work is because they really don’t know, because nobody knows.

The image at the top is the sole notification that I’ve gotten that Office 2011 is no longer getting security updates. (Sadly, it’s shown only once per computer, or perhaps once per user of the computer.) Microsoft, like all tech companies, will cut functionality that it can’t support, like Visual Basic for Mac (https://www.macworld.com/article/1154785/business/welcomebackvisualbasic.html), and will also end-of-life its products. It does so on a published timeline, but it seems wrong to apply that model to a refrigerator, end-of-lifing your food supply.

There’s probably a clash coming between what’s allowable and what’s economically feasible. If your contract says you can update your device at any time, it still may be beyond “the corners of the contract” to shut it off entirely. Beyond being economically challenging, it may not even be technically feasible to update the system. Perhaps the chip is too small, or its power budget too meager, to always connect over TLS4.2, needed to address the SILLYLOGO attack.

What we need might include:

  • A Dead Software Foundation, dedicated to maintaining the software which underlies IoT devices for twenty years. This is not only the Linux kernel, but things like tinybox and openssl. Such a foundation could be funded by organizations shipping IoT devices, or even by governments concerned about the externalities, what Sean Smith called “the Cyber Love Canal” in The Internet of Risky Things. The Love Canal analogy is apt; in theory, the government cleans up after the firms that polluted are gone. (The practice is far more complex.)
  • Model architectures that show how to engineer devices, such as an internet speaker, so that it can effectively be taken offline when the time comes. (There’s work in the mobile app space on making apps work offline, which is related, but carries the expectation that the app will eventually reconnect.)
  • Conceptualization of the legal limits of what you can sign away in the fine print. (This may be everything; between severability and arbitration clauses, the courts have let contract law tilt very far towards the contract authors, but Congress did step in to write the Consumer Review Fairness Act.) The FTC has commented on issues of device longevity, but not (afaik) on limits of contracts.
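The model-architecture idea can be made concrete. Here is a minimal, purely illustrative Python sketch (every feature name is hypothetical, not any vendor’s actual design) of one approach: each capability declares its cloud dependency up front, so that when the service goes away, the device sheds cloud features instead of bricking:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    needs_cloud: bool

# Hypothetical feature set for an internet speaker.
FEATURES = [
    Feature("play local files", needs_cloud=False),
    Feature("bluetooth audio", needs_cloud=False),
    Feature("streaming service", needs_cloud=True),
    Feature("voice assistant", needs_cloud=True),
]

def available_features(cloud_alive: bool) -> list[str]:
    """Every feature declares its dependency up front, so end-of-life
    means losing cloud features, not losing the device."""
    return [f.name for f in FEATURES if cloud_alive or not f.needs_cloud]

print(available_features(cloud_alive=False))
```

The design choice is that “offline” is a first-class state the device was engineered for, rather than an error condition.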

What else do we need to build software that survives for twenty years?

Amicus brief in “Carpenter” Supreme Court Case

“In an amicus brief filed in the U.S. Supreme Court, leading technology experts represented by the Knight First Amendment Institute at Columbia University argue that the Fourth Amendment should be understood to prohibit the government from accessing location data tracked by cell phone providers — “cell site location information” — without a warrant.”

For more, please see “In Supreme Court Brief, Technologists Warn Against Warrantless Access to Cell Phone Location Data.” [Update: Susan Landau has a great blog post “Phones Move – and So Should the Law” in which she frames the issues at hand.]

I’m pleased to be one of the experts involved.

Warrants for Cleaning Malware in Kelihos

This is a thought-provoking story:

And perhaps most unusual, the FBI recently obtained a single warrant in Alaska to hack the computers of thousands of victims in a bid to free them from the global botnet, Kelihos.

On April 5, Deborah M. Smith, chief magistrate judge of the US District Court in Alaska, greenlighted this first use of a controversial court order. Critics have since likened it to a license for mass hacking. (“FBI allays some critics with first use of new mass-hacking warrant,” Aliya Sternstein, Ars Technica)

One of the issues in handling malware at scale is that the law prohibits unauthorized access to computers. And that’s not a bad principle, but it certainly makes it challenging to go and clean up infections at scale.

So the FBI getting a warrant to authorize that may be an interesting approach, with many cautions about the very real history of politicized spying, of COINTELPRO, of keeping the use of 0days secret. But I don’t want to focus on those here. What I want to focus on is what a judge actually authorized in this case. Is this a warrant for mass hacking? It doesn’t appear to be, but it does raise issues of what’s authorized and how those things were presented to the judge, and points to future questions about what authorization might include.


So what’s authorized?

The application for a warrant is fairly long, at 34 pages, much of it explaining why the particular defendant is worthy of a search, and the “time and manner of execution of search” starts on page 30.

What the FBI apparently gets to do is to operate a set of supernodes for the Kelihos botnet, and “The FBI’s communications, however, will not contain any commands, nor will they contain IP addresses of any of the infected computers. Instead, the FBI replies will contain the IP and routing information for the FBI’s ‘sinkhole’ server.”

What questions does that raise?

A first technical point is that for the FBI’s replies to reach those infected computers, there must be packets sent over the internet. For those packets to reach the infected computers, they need addressing, in the form of an IP address. Now you can argue that those IP addresses of infected computers are in the headers, not the content, of the packets. The nuance of content versus headers is important in some laws. In fact, paragraph 66 of the warrant explicitly states that the FBI will gather that data, and then provide it to ISPs, who they hope will notify end users. (I’ve written about that experience in “The Worst User Experience In Computer Security?”)
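To make the headers-versus-content distinction concrete, here is a minimal Python sketch that parses a raw IPv4 header (the addresses below are invented for illustration). The infected machine’s address sits in fixed header bytes, so any sinkhole necessarily learns it even when the payload carries no commands at all:

```python
import struct

def parse_ipv4_header(packet: bytes) -> dict:
    """Parse the fixed 20-byte IPv4 header of a raw packet.

    Source and destination addresses live in header bytes 12-19:
    they are addressing metadata, not packet content.
    """
    version_ihl = packet[0]
    ihl = (version_ihl & 0x0F) * 4          # header length in bytes
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    payload = packet[ihl:]                  # everything after the header is content
    return {"src": src, "dst": dst, "header_len": ihl, "payload": payload}

# Build a minimal fake packet: 20-byte header, then the "content".
header = struct.pack(
    "!BBHHHBBH4s4s",
    0x45, 0, 20 + 5,            # version 4 / IHL 5, TOS, total length
    0, 0,                       # identification, flags/fragment offset
    64, 17, 0,                  # TTL, protocol (UDP), checksum (unchecked here)
    bytes([10, 0, 0, 2]),       # source: the infected machine (made up)
    bytes([192, 0, 2, 1]),      # destination: the sinkhole (made up)
)
packet = header + b"hello"
info = parse_ipv4_header(packet)
print(info["src"], "->", info["dst"], "| content:", info["payload"])
```

Whether those header bytes count as “content” for legal purposes is exactly the nuance at issue; technically, they cannot be avoided.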

Another technical point is that the warrant says “The FBI with the assistance of private parties…” It’s not clear to me what constraints might apply to those parties. Can they record netflow or packet captures? (That might be helpful in demonstrating exactly what they did later, and also create a privacy conundrum which the FBI takes apparent pains to avoid.) What happens if an unrelated party captures that data? For example, let’s say one of the parties chooses to operate their sinkhole in an AWS node. I suspect AWS tracks netflows. A warrant to obtain that data might seem pretty innocent to a judge.


The idea that the FBI will not send any commands is, on the surface, a fine restriction, but it eliminates many possibilities for cleaning up. What could we imagine in the future?

For example, a useful command might be to remove Kelihos from startup items. How about removing C:\Program Files\Kelihos.exe? Removing files from my computer without permission is pretty clearly a form of unauthorized access. It’s a form that’s probably in the interests of the vast majority of the infected. We might want to allow a well-intentioned party to do so.

But what if the commands fail? When Sony built a tool to remove the rootkit their DRM installed, the cleanup tool opened a big security hole. It’s hard to write good code. It’s very hard to write code that’s free of side effects or security issues.

What if there’s disagreement over what fits within the definition of well-intentioned? What if someone wants to remove duplicate files to help me save disk space? What if they want to remove blasphemous or otherwise illegal files?

What's Classified, Doc? (The Clinton Emails and the FBI)

So I have a very specific question about the “classified emails”, and it seems not to be answered by “Statement by FBI Director James B. Comey on the Investigation of Secretary Hillary Clinton’s Use of a Personal E-Mail System.” A few quotes:

From the group of 30,000 e-mails returned to the State Department, 110 e-mails in 52 e-mail chains have been determined by the owning agency to contain classified information at the time they were sent or received. Eight of those chains contained information that was Top Secret at the time they were sent; 36 chains contained Secret information at the time; and eight contained Confidential information, which is the lowest level of classification. Separate from those, about 2,000 additional e-mails were “up-classified” to make them Confidential; the information in those had not been classified at the time the e-mails were sent.


For example, seven e-mail chains concern matters that were classified at the Top Secret/Special Access Program level when they were sent and received. These chains involved Secretary Clinton both sending e-mails about those matters and receiving e-mails from others about the same matters. There is evidence to support a conclusion that any reasonable person in Secretary Clinton’s position, or in the position of those government employees with whom she was corresponding about these matters, should have known that an unclassified system was no place for that conversation.


Separately, it is important to say something about the marking of classified information. Only a very small number of the e-mails containing classified information bore markings indicating the presence of classified information. But even if information is not marked “classified” in an e-mail, participants who know or should know that the subject matter is classified are still obligated to protect it.

I will state that there is information which is both classified and available to the public. For example, the Snowden documents are still classified, and I have friends with clearances who need to leave conversations when they come up. They are, simultaneously, publicly available. There is a legalistic position that such information is simply classified, full stop. Such rejection of reality is uninteresting to me.

I can read Comey’s statements two ways. One is that Clinton was discussing Snowden documents, which she likely needed to do as Secretary of State. The other is that she was discussing information which was not both public and classified. My assessment of her behavior is dependent on knowing this.

Are facts available to distinguish between these cases?

Regulations and Their Emergent Effects

There’s a fascinating story in the New York Times, “Profits on Carbon Credits Drive Output of a Harmful Gas”:

[W]here the United Nations envisioned environmental reform, some manufacturers of gases used in air-conditioning and refrigeration saw a lucrative business opportunity.

They quickly figured out that they could earn one carbon credit by eliminating one ton of carbon dioxide, but could earn more than 11,000 credits by simply destroying a ton of an obscure waste gas normally released in the manufacturing of a widely used coolant gas. That is because that byproduct has a huge global warming effect. The credits could be sold on international markets, earning tens of millions of dollars a year.

That incentive has driven plants in the developing world not only to increase production of the coolant gas but also to keep it high — a huge problem because the coolant itself contributes to global warming and depletes the ozone layer.
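The asymmetry is easy to see in the arithmetic. A sketch in Python, where the HFC-23 global warming potential (about 11,700 times that of CO2, per older IPCC assessments) and the credit price are assumed, illustrative figures rather than numbers from the article:

```python
# Illustrative arithmetic only; the GWP figure and credit price are assumptions.
GWP = {"CO2": 1, "HFC-23": 11_700}  # tonnes of CO2-equivalent per tonne destroyed

def credits_for_destroying(gas: str, tonnes: float) -> float:
    """One carbon credit is earned per tonne of CO2-equivalent avoided."""
    return GWP[gas] * tonnes

price_per_credit = 10.0  # assumed USD per credit, purely for illustration

co2_revenue = credits_for_destroying("CO2", 1) * price_per_credit
hfc_revenue = credits_for_destroying("HFC-23", 1) * price_per_credit
print(f"Destroying 1 tonne of CO2:    ${co2_revenue:,.0f}")
print(f"Destroying 1 tonne of HFC-23: ${hfc_revenue:,.0f}")
# The waste gas is worth thousands of times more to destroy than CO2 itself.
```

With numbers like these, producing the coolant in order to destroy its byproduct becomes the profit center, which is exactly the emergent effect the article documents.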

Writing good regulation to achieve exactly the effects that you want is a hard problem. It’s not hard in the “throw some smart people at it” sense, but hard in the sense that you’re generally going to have to make hard tradeoffs around behavior like this. Simple regulations will fail to capture nuance, but as the regulation becomes more complex, you end up with more nooks and crannies full of strange outcomes.

We as people and as a society need to think about how much of this we want. If we want to regulate with a fine-toothed comb, then we’re going to see strange things like this. If we want to regulate more broadly, we’ll likely end up with some egregious failures and frauds like Enron or the mortgage crisis. But those failures are entirely predictable: companies occasionally fake their books, and bankers will consistently sell as much risk as they can to the biggest sucker. Consider, for example, the Bush administration’s TARP program, or Seattle taking on $200 million in risk from a hedge fund manager who wants to build a new sports stadium. At least that risk isn’t hidden in some bizarre emergent effect of the regulation.

That aside, long, complex regulations are always going to produce emergent and chaotic effects. That matters for us in security because as we look at the new laws that are proposed, we should look to see not only their intended effects, but judge if their complexity itself is a risk.

I’m sure there are other emergent effects which I’m missing.

"Quartering large bodies of armed troops among us…"

So following up on our tradition of posting the Declaration of Independence from Great Britain on the 4th, I wanted to use one of those facts submitted to a candid world to comment on goings on in…Great Britain. There, the government has decided to place anti-aircraft missiles on the roof of a residential building near the Olympic park, and the residents objected.

However, the courts have ruled that such a decision is not subject to judicial review. (“London tower block residents lose bid to challenge Olympic missiles”) I think it’s a bit of a shame it didn’t happen here in the US, where it would be a rare opportunity for a bit of Third Amendment law:

No soldier shall, in time of peace be quartered in any house, without the consent of the owner, nor in time of war, but in a manner to be prescribed by law.

It’s not clear that a missile battery is a soldier, nor that “on” a house is equivalent to “in” a house, and I suspect those are two of the few remaining words in the Bill of Rights that haven’t been hyper-analyzed.

Kind of Copyrighted

This Week in Law is a fascinating podcast on technology law issues, although I’m way behind on listening. Recently, I was listening to Episode #124, and they had a discussion of Kind of Bloop, “An 8-Bit Tribute to Miles Davis’ Kind of Blue.” There was a lawsuit against artist Andy Baio, which he discusses in “Kind of Screwed.” There’s been a lot of discussion of the fair use elements of the case (for example, see “Kind of Bamboozled: Why ‘Kind of Bloop’ is Not a Fair Use”). But what I’d really like to talk about is (what I understand to be) a clear element of copyright law that is fundamental to this case, and that is compulsory mechanical licensing.

In the TWIL podcast, there’s a great deal of discussion of whether Baio should have approached the photographer for a license. He did approach the copyright holders for Kind of Blue, who were “kind” enough to give him a license. They gave him a license for the music, but he didn’t need to approach them. Copyright law gives anyone the right to record a cover, and as a result, there is a flourishing and vibrant world of cover music, including great podcasts like Coverville, and artists like Nouvelle Vague, who do amazing bossa-nova-style covers of punk. (Don’t miss their cover of “Too Drunk to Fuck.”) And you can listen to that because they don’t have to approach the copyright holder for permission. Maybe they would get it, maybe not. But their ability to borrow from other artists and build on their work is a matter of settled law.

I’m surprised this difference didn’t come up in the discussion, because it seems to me to be kind of important.

It’s kind of important because it’s a great example of how apparently minor variations in a law can dramatically change what we see in the world. It’s also a great example of how constraining rules like mechanical licensing can encourage creativity by moving a discussion from “allow/deny” to “under what circumstances can a copyright holder use the courts to forbid a copy.” If we had mechanical licensing for all copyrighted materials, Napster might still be around and successful.

Outrage of the Day: DHS Takes Blog Offline for a Year

Imagine if the US government, with no notice or warning, raided a small but popular magazine’s offices over a Thanksgiving weekend, seized the company’s printing presses, and told the world that the magazine was a criminal enterprise with a giant banner on their building. Then imagine that it never arrested anyone, never let a trial happen, and filed everything about the case under seal, not even letting the magazine’s lawyers talk to the judge presiding over the case. And it continued to deny any due process at all for over a year, before finally just handing everything back to the magazine and pretending nothing happened. I expect most people would be outraged. I expect that nearly all of you would say that’s a classic case of prior restraint, a massive First Amendment violation, and exactly the kind of thing that does not, or should not, happen in the United States.

But, in a story that’s been in the making for over a year, and which we’re exposing to the public for the first time now, this is exactly the scenario that has played out over the past year — with the only difference being that, rather than “a printing press” and a “magazine,” the story involved “a domain” and a “blog.”

Read the whole thing at “Breaking News: Feds Falsely Censor Popular Blog For Over A Year, Deny All Due Process, Hide All Details…”