Category: Usability

Toyota Stalks Woman, Claims She Consented

[Photo: a clown and cops]

In a lawsuit filed Sept. 28 in Los Angeles Superior Court, Amber Duick claims she had difficulty eating, sleeping and going to work during March and April of last year after she received e-mails for five days from a fictitious man called Sebastian Bowler, from England, who said he was on the run from the law, knew her and where she lived, and was coming to her home to hide from the police.

There was even a fictitious MySpace page reportedly created for Bowler.

Although Bowler did not have Duick’s current address, he sent her links to his MySpace page as well as links to video clips of him causing trouble all over the country on his way to her former house in Los Angeles, according to the lawsuit.

“Amber mate! Coming 2 Los Angeles. Gonna lay low at your place for a bit till it all blows over,” the man wrote in one e-mail….

It turns out the prank was actually part of a marketing effort executed by the Los Angeles division of global marketing agency Saatchi & Saatchi, which created the campaign to promote the Toyota Matrix, a new model launched in 2008. …Tepper, Duick’s attorney, said he discussed the campaign with Toyota’s attorneys earlier this year, and they said the “opting in” Harp referred to was done when Duick’s friend e-mailed her a “personality test” that contained a link to an “indecipherable” written statement that Toyota used as a form of consent from Duick….(“Woman Sues Toyota Over ‘Terrifying’ Prank,” ABC News.)

Dear Toyota attorneys: a contract involves, first and foremost, a meeting of the minds. We’ve had years of farcical and indecipherable privacy policies. Anyone who’s ever tried to read them knows that you can’t figure them out. Everyone knows that no one even tries. The final thing, which any first-year law student knows: neither of those leads to terms which shock the conscience.

I’d like to ask readers to blog and tweet about this until Saatchi, Saatchi and Toyota explain what went wrong, and agree to all of Duick’s demands.

Shown, Toyota’s attorneys in conference with representatives of Saatchi and Saatchi. Photo by Jrbrubaker.

Rebuilding the internet?

Once upon a time, I was uunet!harvard!bwnmr4!adam. Oh, harvard was probably enough; it was a pretty well-known host in the uucp network which carried our email before smtp. I was also harvard!bwnmr4!postmaster, which meant that at the end of an era, I moved the lab from copied hosts files to dns, when I became adam@bwnmr4.harvard…wow, there’s still a cname for that host. But I digress.

Really, I wanted to talk about a report, passed on by Steven Johnson and Gunnar Peterson, that Vint Cerf said that if he were re-designing the internet, he’d add more authentication.

And really, while I respect Vint a tremendous amount, I’m forced to wonder: Whatchyou talkin’ about, Vint?

I hate going off based on a report on Twitter, but I don’t know what the heck a guy that smart could have meant. I mean, he knows that back in the day, people like me could and did give internet accounts to (1) anyone our boss said to and (2) anyone else who wanted them some of this internet stuff and wouldn’t get us in too much trouble. (Hi S! Hi C!) So when he says “more authentication” does that mean inserting “uunet!harvard!bwnmr4!adam” in an IP header? Ensuring your fingerd was patched after Mr. Morris played his little stunt?

But more to the point, authentication is a cost. Setting up and managing authentication information isn’t easy, and even if it were, it certainly isn’t free. Even more expensive than managing the authentication information would be figuring out how to do it. The packet interconnect paper (“A Protocol for Packet Network Intercommunication,” Vint Cerf and Robert Kahn) was published in 1974, and says “These associations need not involve the transmission of data prior to their formation and indeed two associates need not be able to determine that they are associates until they attempt to communicate.” That was before DES (1975), before Diffie-Hellman (1976), Needham-Schroeder (1978) or RSA. I can’t see how to maintain that principle with the technology available at the time.

When the internet was a new technology, low cost of entry was a competitive advantage. Doing authentication well is tremendously expensive. I might go so far as to argue that we don’t know how fantastically expensive it is, because we so rarely do it well.

Not getting hung up on easy problems like prioritization or hard ones like authentication, but simply moving packets, was what made the internet work. Allowing new associations to be formed ad hoc made for cheap interconnections.

So I remain confused by what he could have meant.

[Update: Vint was kind enough to respond in the comments that he meant the internet of today.]

Perfecter than Perfect

So I’m having a conversation with a friend about caller ID blocking. And it occurs to me that my old phone with AT&T, before Cingular bought them, had this nifty feature, “show my caller-ID to people in my phone book.”

Unfortunately, my current phone doesn’t have that, because Steve Jobs has declared that “Apple’s goal is to provide our customers with the best possible user experience. We have been able to do this by designing the hardware and software in our products to work together seamlessly.” Setting aside Michael Arrington’s excellent deconstruction, there’s a little feature that’s easy to implement if you have access to the call-setup function: dial with a prepended *67. But we don’t have the ability to do that. Because that would be better than the best possible experience, and obviously, that’s not possible.
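For what it’s worth, the logic of that feature is tiny. Here’s a minimal sketch, assuming a dialer that exposes some place_call() hook (a hypothetical API; *67 is the North American per-call code to suppress caller ID):

```python
CALLER_ID_BLOCK_PREFIX = "*67"  # North American per-call caller-ID block

def dial(number: str, phone_book: set[str], place_call) -> None:
    """Show my caller ID to people in my phone book; hide it from everyone else."""
    if number in phone_book:
        place_call(number)  # a contact: send caller ID as usual
    else:
        place_call(CALLER_ID_BLOCK_PREFIX + number)  # a stranger: suppress it
```

That’s the whole feature.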

Twitter Bankruptcy and Twitterfail

If you’re not familiar with the term email bankruptcy, it’s admitting publicly that you can’t handle your email, and people should just send it to you again.

A few weeks ago, I had to declare twitter bankruptcy. It just became too, too much. I’ve been meaning to blog about it since, but things have just been too, too much. Shortly after I did, The Guardian published their hilarious April Fools article about shifting to an all-twitter format. I found it especially funny because they made several digs at Stephen Fry, the very person who drove me to twitter bankruptcy.

In Mr. Fry’s case, he’s literate, funny, worth listening to, and prolific. These traits in a twitter user are horrible, as his content dominates the page over all the other tweets. The problem was twofold: I couldn’t keep up with Mr. Fry alone, and yet once I removed him, a graph of the interestingness quotient of my twitter page resembled an economic report.

I discussed this with some other friends, one of whom is my favorite twitterer, because he has some magic scraper that puts his tweets into an RSS feed on his blog and I can read them at my leisure.

I opined that what I really need from twitter is streams separated into separate pages with metadata about how many unread tweets there are from each person I follow, and a way to look at them in a block. That way, I can look at Mr. Fry’s tweets, note that there’s a Mersenne prime number of them unread, and catch up.
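Something like this sketch of the data structure, that is; the feed plumbing and the read-marker behavior are my inventions, not anything twitter offers:

```python
from collections import defaultdict

class PerAuthorStreams:
    """One unread stream per followed account, instead of one big firehose."""

    def __init__(self):
        self._unread = defaultdict(list)  # author -> unread tweets, oldest first

    def receive(self, author: str, tweet: str) -> None:
        self._unread[author].append(tweet)

    def unread_counts(self) -> dict[str, int]:
        """The metadata for the page: how many unread tweets per person."""
        return {author: len(tweets) for author, tweets in self._unread.items()}

    def catch_up(self, author: str) -> list[str]:
        """Return everything unread from one author, marking it all read."""
        return self._unread.pop(author, [])
```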

In short, I want twitter to be either an RSS feed or an email box. Either is fine.

One of my friends said that perhaps what Mr. Fry should do is put his tweets together into paragraphs, the paragraphs into essays, and then collect the essays in a book.

She also pointed out that twitter is perhaps the first Internet medium which does not level social hierarchies, but creates and reinforces them. The numbers of people following whom, who is attentively watching whose tweets and so on recreates a high-school-like social structure.

This brings us to #twitterfail, the current brou-ha-ha about a change in twitter rules in which direct messages only go to people who are following people who are following those who are following — someone.
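As best anyone could untangle it, the new rule came down to this; a sketch, where follows maps each user to the set of accounts they follow:

```python
def reply_visible(viewer: str, sender: str, recipient: str,
                  follows: dict[str, set[str]]) -> bool:
    # Under the change: you see an @reply only if you follow both
    # the person replying and the person being replied to.
    return sender in follows[viewer] and recipient in follows[viewer]
```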

The #twitterfail channel is a bunch of people retweeting that they think this is a bad idea. There is apparently no channel for retweeting if you think it’s a good idea.

Valleywag thinks it is a good idea in their article, “Finally, Twitter Learns When to Shut Up,” pointing out a Nielsen report that 60% of new twitter users drop out after signing up. This might be a way to cut down the noise level for people who are newbies, according to Valleywag.

Others see it as a way to further reinforce the status hierarchies. The brash and ever-entertaining Prokofy Neva says:

What [various twitterati, none of whom is Stephen Fry] all have in common is an overwhelming desire to have lots of “friends” who follow them, but they want them to be loyal, positive, and not talk back, except to warble about how they’ve read their books or gush about how wonderful they are.

What they definitely, definitely DO NOT like is when people they aren’t following talk back to them using @. They hate it. It gets them into a frenzy.

I think they’re both right. I think that the sheer noise level of twitter combined with a wretched UI makes it unusable for people who have a long multitasking quantum. My twitter page goes back a mere seven hours, and Beaker has only said one thing (I hope he’s not sick). If I go to a long meeting or get on an airplane, I’ve lost context.

There are two behavioral feedback loops I see. Sometimes one twitters because one is twittering, which drives more twittering. The other is that one is not twittering because one is not twittering which drives not wanting to look at twitter.

Cutting down on the noise level would help people get into twittering, but not as much as Valleywag thinks. Twitter’s systems and subsystems are power-law driven (which is the same thing as saying they’re human status hierarchies). If you’re a newbie, noise isn’t really the problem, the problem is figuring out who you want to follow and wondering why you should bother tweeting into an empty room.

Prokofy Neva is right, too. The social circles that twitter creates are lopsided, and power-law in scale (which is why the whale is up so much). An even playing field for replies means that people who have lots of followers but follow few others not only don’t see messages from people they don’t know, but can have a nice civil public conversation with the few people they follow without having to know about the riff-raff. Right now, the downside of having lots of followers is that you can be on the receiving end of that power law. Over the long haul, that will lead to self-monitoring on the tweets, having tweets handled by assistants (which already goes on), or just giving up on it all.

I suspect that twitter will reverse this change (if they haven’t already) at least in part because there’s no channel of retweeting for people who like the change. Perhaps most of all, I think they realize that reinforcing the hierarchies to that degree would indeed make the twitter fad fade even faster than it would otherwise.

That fade seems inevitable anyway, since it’s now been reported, to nobody’s surprise, that spammers are gleaning email addresses from tweets in real time, as well as using twitter trends to drive uptake. That tweeting opens one up to spam will tend to put the brakes on it.

My Wolfram Alpha Demo

I got the opportunity a couple days ago to get a demo of Wolfram Alpha from Stephen Wolfram himself. It’s an impressive thing, and I can sympathize a bit with them on the overblown publicity. Wolfram said that they didn’t expect the press reaction, which I both empathize with and cast a raised eyebrow at.

There’s no difference, as you know, between an arbitrarily advanced technology and a rigged demo. And of course anyone who’s spent a lot of time trying to create something grand is going to give you the good demo. It’s hard to know the difference between a rigged demo and a good one.

The major problem right now with Alpha is the overblown publicity. The last time I remember such gaga effusiveness it was over the Segway before we knew it was a scooter.

Alpha has had to suffer through not only its creator’s overblown assessments, but reviews from neophiles whose minds are so open that their occipital lobes face forward.

My short assessment is that it is the anti-Wikipedia and makes a huge splat on the fine line between clever and stupid, extending equally far in both directions. What they’ve done is create something very much like the computerized idiot savant. As much as that might sound like criticism, it isn’t. Alpha is very, very, very cool. Jaw-droppingly cool. And it is also incredibly cringe-worthily dumb. Let me give some examples.

Stephen gave us a lot of things that it can compute and the way it can infer answers. You can type “gdp france / germany” and it will give you plots of that. A query like “who was the president of brazil in 1930” will get you the right answer and a smear of the surrounding Presidents of Brazil as well.

It also has lovely deductions it makes. It geolocates your IP address, and so if you ask it something involving “cups” it will infer from your location whether that should be American cups or English cups and give you a quick little link to change the preference on that. Very, very clever.

It will also use your location to make other nice deductions. Stephen asked it a question about the population of Springfield, and since he is in Massachusetts, it inferred he meant Springfield, Massachusetts; there’s a little pop-up with a long list of other Springfields as well. It’s very, very clever.

That list, however, got me the first glimpse of the stupid. I scanned the list of Springfields and realized something. Nowhere in that list appeared the Springfield of The Simpsons. Yeah, it’s fictional, and yeah that’s in many ways a relief, but dammit, it’s supposed to be a computational engine that can compute any fact that can be computed. While that Springfield is fictional, its population is a fact.

The group of us getting the demo got tired of Stephen’s enthusiastic typing in this query and that query. Many of them are very cool but boring. Comparing stock prices, market caps, changes in portfolio whatevers is something that a zillion financial web sites can do. We wanted more. We wanted our queries.

My query, which I didn’t ask because I thought it would be disruptive, is this: Which weighs more, a pound of gold or a pound of feathers? When I get to drive, that will be the first thing I ask.

The answer, in case you don’t know this famous question, is a pound of feathers: gold is weighed in troy pounds, which are lighter than the avoirdupois pounds everything else is weighed in. Amusingly, Google gets it on the first link. Wolfram emphasizes that Alpha computes and is smart, as opposed to Google just dumbly searching and collating.
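The computation Alpha would need is two definitional constants and a comparison:

```python
GRAMS_PER_TROY_OUNCE = 31.1034768         # exact, by definition
TROY_POUND_G = 12 * GRAMS_PER_TROY_OUNCE  # gold: 373.2417216 g
AVOIRDUPOIS_POUND_G = 453.59237           # feathers: exact, by definition

print(f"a pound of gold:     {TROY_POUND_G:.1f} g")
print(f"a pound of feathers: {AVOIRDUPOIS_POUND_G:.1f} g")
assert AVOIRDUPOIS_POUND_G > TROY_POUND_G  # the feathers win
```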

I also didn’t really need to ask, because one of the other people asked Alpha to plot swine flu in the US, and it came up with nil. It knows nothing about swine flu. Stephen helpfully suggested, “I can show you colon cancer instead,” and did.

And there it is, the line between clever and stupid, and being on both sides of it. Alpha can’t tell you about swine flu because the data it works on is “curated,” meaning they have experts vet it. I approve. I’m a Wikipedia-sneerer, and I like an anti-mob system. However, having experts curate the data means that there’s nothing about the Springfield that pops to most people’s minds (because it’s pop culture) nor anything about swine flu. We asked Stephen about sources, and specifically about Wikipedia. He said that they use Wikipedia for some sorts of folk knowledge, like knowing that The Big Apple is a synonym for New York City but not for many things other than that.

Alpha is not a Google-killer. It is not ever going to compute everything that can be computed. It’s a humorless idiot savant that has an impressive database (presently some ten terabytes, according to the Wolfram folks), and its Mathematica-on-steroids engine gives a lot of wows.

On the other hand, as one of the people in my demo pointed out, there’s not anything beyond a spew of facts. Another of our queries was “17/hr” and Alpha told us what that is in terms of weekly, monthly, yearly salary. It did not tell us the sort of jobs that pay 17 per hour, which would be useful not only to people who need a job, but to socioeconomic researchers. It could tell us that, and very well might rather soon. But it doesn’t.
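The part it does handle is a couple of multiplications; a sketch, assuming a 40-hour week and a 52-week year:

```python
rate = 17                # dollars per hour
weekly = rate * 40       # 680
yearly = weekly * 52     # 35,360
monthly = yearly / 12    # about 2,947
print(f"${weekly:,} / week, ${monthly:,.0f} / month, ${yearly:,} / year")
```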

Alpha is an impressive tool that I can hardly wait to use (supposedly it goes online perhaps this week). It’s something that will be a useful tool for many people and fills a much-needed niche. We need an anti-Wikipedia that has only curated facts. We need a computational engine that uses deductions and heuristics.

But we also need web resources that know about a fictional Springfield, and resources that can show you maps of the swine flu.

We also need tech reviewers who have critical faculties. Alpha is not a Google-killer. It’s also not likely as useful as Google. The gushing, open-brained reviews do us and Alpha a disservice by uncritically watching the rigged demo and refusing to ask about its limits. Alpha may straddle the line between clever and stupid, but the present reviewers all stand proudly on stupid.

Mr Laurie – Don’t do that

Ben Laurie has a nice little post up “More Banking Stupidity: Phished by Visa:”
[Photo: a scolding]

Not content with destroying the world’s economies, the banking industry is also bent on ruining us individually, it seems. Take a look at Verified By Visa. Allegedly this protects cardholders – by training them to expect a process in which there’s absolutely no way to know whether you are being phished or not. Even more astonishing is that this is seen as a benefit!

Ben’s analysis seems pretty good, except for one thing: he doesn’t say anything about what to do. Right now, we can see that organizations are flailing around, trying to address the problem. And while pointing out problems can be helpful, a bare “you’re wrong” is a pet peeve of mine. (Well, Michael Howard’s really, but I’ve adopted it.)

So Mr Laurie, don’t do that. Don’t just say what not to do. Say what to do.

The security engineering community needs to come together and speak out on what the right design is. I’m going to ask Ben, Gunnar Peterson, Rich Mogull and Mike Dahn to take up the question: what should we do? Can the four of you come to agreement on what to recommend?

(My recommendation, incidentally, stands from August 2005, in the essay “Preserving the Internet Channel Against Phishers.” Short version: bookmarks, although I need to add, empower people to use the bookmarks by giving them a list of pending actions from the login landing page.)
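To make that concrete, here’s a minimal sketch of the landing-page half of the idea, using Flask purely for illustration; pending_actions_for() is a hypothetical call into the bank’s backend. The bank’s email says only “you have items waiting; log in via your bookmark,” and the landing page does the rest:

```python
from flask import Flask, session, render_template_string

app = Flask(__name__)
app.secret_key = "change-me"  # demo only; sessions need a signing key

def pending_actions_for(user):
    # Hypothetical data-layer call; the real list comes from the backend.
    return ["Confirm your new mailing address", "1 unread secure message"]

@app.route("/home")
def landing_page():
    user = session.get("user")  # assume authentication already happened
    actions = pending_actions_for(user)
    return render_template_string(
        "<h1>Welcome back</h1><p>Waiting for you:</p>"
        "<ul>{% for a in actions %}<li>{{ a }}</li>{% endfor %}</ul>",
        actions=actions)
```

With every pending action visible after a bookmark login, there’s never a reason to click a link in email, and the phisher’s bait goes stale.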

Photo: “The Matt Malone experience.”

[Update: edited title. Thanks, @mortman. Update 2: Fixed Mike Dahn’s URL; Firefox still not happy, I don’t think I can fix the post URL.]

Joseph Ratzinger and Information Security

Joseph Ratzinger (a/k/a Benedict XVI) recently made some comments that got some press. In particular, as Reuters reports: “Pope in Africa reaffirms ‘no condoms’ against AIDS.” Quoting the story, “The Church teaches that fidelity within heterosexual marriage, chastity and abstinence are the best ways to stop AIDS.”

Many of you are likely outraged, saying, “sure, if only people would do that, then we wouldn’t need condoms. But people don’t behave that way.”

I’d like to explain what this has to do with information security. Some of you may be saying “sure, but we’re not that bad.”

In information security, we often keep saying the same thing over and over again, because we know it’s right. We tell people to never write down their passwords, to always validate their input, and to run IDS systems. Deep in our hearts, we know they don’t, and yet we keep saying those things. We tell them they “have to” fix all the security problems all the time.

It’s my hope that we in information security will be less religious than the Pope, but there’s plenty of evidence that, like him, we offer advice that makes people shake their heads in disgust.

Wherever you work, whatever you do, it’s worth asking yourself: am I being dogmatic in what I’m asking of people?

Me, I’m being dogmatic about asking you all to keep it civil in the comments.

Understanding Users

Paul Graham has a great article, “Startups in 13 Sentences”:

Having gotten it down to 13 sentences, I asked myself which I’d choose if I could only keep one.

Understand your users. That’s the key. The essential task in a startup is to create wealth; the dimension of wealth you have most control over is how much you improve users’ lives; and the hardest part of that is knowing what to make for them. Once you know what to make, it’s mere effort to make it, and most decent hackers are capable of that.

Then in “Geeks and Anti-Geeks,” Adam Barr writes:

You notice this if you listen to the chatter before a meeting. Half the time people are talking about World of Warcraft; those are the geeks. The other half they are talking about pinot noir; those are the anti-geeks. In either case, the group then proceeds to discuss a pattern-based approach to refactoring your C# class design in order to increase cohesion and leverage mock objects to achieve high code coverage while minimizing your unit test execution time.

The reason this matters is because Microsoft has recently been pushing engineers to realize that they are not the customer, the customers are not geeks, and therefore engineers can’t design properly for our customers. What I think happens, however, is that the anti-geeks hear this and think, “They’re not talking about me; I know that those beer-swilling geeks don’t understand the customer, but I’m a cultured sort, not a geek–I’m just like our customers!” And so they go out and design software for themselves…and of course they mess it up…because our customers may not spend their spare time playing Dungeons & Dragons, but neither do they spend it tramping across the Burgess Shale.

So I don’t disagree with Mr. Barr, but I do want to expand a little. The fundamental job of the program manager is to understand the market, come up with a solution that will delight the customer, sell that vision to the team, and create and drive the product through shipping to those customers. The market only matters for understanding whether a product is worth building, and for helping to shape our understanding of the customer by understanding their economic context.

I don’t think I’m anything like most of my customers. Those customers are, first and foremost, 35,000 or so software engineers inside of Microsoft; second, security experts helping them or reviewing their work; and third, software engineers at other vendors who build on our platform. I’m most like the second set, but they’re a distant second, and (as several of them will tell you) I have a tendency to reject out of hand their first attempt at getting a feature, because our previous tools were so expert-centric.

More importantly, I don’t need to be like our customers to delight them. I am nothing like a professional chef, but I am frequently delighted by them. What I need to do is actively listen to those customers, and fairly and effectively advocate for their attitudes and words to my team.

As I was working on this, Joel Spolsky posted “How to be a program manager,” which covers some similar ideas.

The New Openness?

This photograph was taken at 11:19 AM on January 20th. It’s very cool that we can get 1-meter-resolution photographs from space. What really struck me about this photo was… well, take a look as you scroll down…

[Satellite photo: Obama’s inauguration, seen from space]

What really struck me about this is the open space. What’s up with that? Reports were that people were being turned away. Why all the visible ground? Were those areas still filling in? Did security procedures keep away that many?

You can click through for a much larger version at the Boston Globe. [update: even larger version at GeoEye, purveyors of fine space imagery.]

Cryptol Language for Cryptography

Galois has announced Cryptol:

Cryptol is a domain specific language for the design, implementation and verification of cryptographic algorithms, developed over the past decade by Galois for the United States National Security Agency. It has been used successfully in a number of projects, and is also in use at Rockwell Collins, Inc.

Cryptol allows a cryptographer to:

  • Create a reference specification and associated formal model.
  • Quickly refine the specification, in Cryptol, to one or more implementations, trading off space, time, and other performance metrics.
  • Compile the implementation for multiple targets, including: C/C++, Haskell, and VHDL/Verilog.
  • Equivalence check an implementation against the reference specification, including implementations not produced by Cryptol.

The trial version & docs are here.

First, I think this is really cool. I like domain specific languages, and crypto is hard. I really like equivalence checking between models and code. I had some questions, which I’m not yet able to answer, in part because the trial version doesn’t include the code generation bits, and in part because I’m trying to vacation a little.
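To show the flavor of equivalence checking, here’s the weak, testing-based version of the idea in Python: a readable reference model checked against an “optimized” implementation on random inputs. Cryptol does this formally, with solvers, rather than by sampling; this is just a sketch of the concept.

```python
import random

def rotl32_ref(x: int, n: int) -> int:
    """Reference model: rotate a 32-bit word left by n."""
    n %= 32
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def rotl32_fast(x: int, n: int) -> int:
    """The 'optimized' implementation we want to trust."""
    n &= 31
    return ((x << n) & 0xFFFFFFFF) | (x >> ((32 - n) & 31))

for _ in range(100_000):
    x, n = random.getrandbits(32), random.randrange(64)
    assert rotl32_ref(x, n) == rotl32_fast(x, n), (x, n)
print("no mismatches in 100,000 trials (evidence, not proof)")
```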

My main question came from the manual, which states: “Cryptol has a very flexible notion of the size of data.” (page 11, section 2.5) I’d paste a longer quote, but the PDF doesn’t seem to encode spaces well. Which is ironic, because what I was interested in is: does the generated code defend against stack overflows well? In light of the ability to “[trade] off space, time [etc],” I worry that there’s a set of options which translates, transparently, into something bad in C.

I worry about this because as important as crypto is, cryptographers have a lot to consider as they design algorithms and systems. As Michael Howard pointed out, the Tokeneer system shipped with a library that may be from 2001, with 23 possible vulns. It was secure for a set of requirements, and if the requirements for Cryptol don’t contain “resist bad input,” then a lot of systems will be in trouble.
