Category: Usability

Cryptol Language for Cryptography

Galois has announced Cryptol:

Cryptol is a domain specific language for the design, implementation and verification of cryptographic algorithms, developed over the past decade by Galois for the United States National Security Agency. It has been used successfully in a number of projects, and is also in use at Rockwell Collins, Inc.


Cryptol allows a cryptographer to:

  • Create a reference specification and associated formal model.
  • Quickly refine the specification, in Cryptol, to one or more implementations, trading off space, time, and other performance metrics.
  • Compile the implementation for multiple targets, including: C/C++, Haskell, and VHDL/Verilog.
  • Equivalence check an implementation against the reference specification, including implementations not produced by Cryptol.

The trial version & docs are here.

First, I think this is really cool. I like domain-specific languages, and crypto is hard. I really like equivalence checking between models and code. I had some questions which I’m not yet able to answer, in part because the trial version doesn’t include the code generation bits, and in part because I’m trying to vacation a little.

My main question came from the manual, which states: “Cryptol has a very flexible notion of the size of data.” (page 11, section 2.5) I’d paste a longer quote, but the PDF doesn’t seem to encode spaces well. Which is ironic, because what I was interested in is “does the generated code defend against stack overflows well?” In light of the ability to “[trade] off space, time [etc],” I worry that there is a set of options which translates, transparently, into something bad in C.
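
To make my worry concrete, here’s a hypothetical sketch in C. It is not Cryptol output, and the function names and the fixed width are invented; it just shows how a spec with a flexible notion of size could compile down to a routine with a fixed-size stack buffer that trusts its caller.

    /*
     * Hypothetical illustration only -- not Cryptol output.
     * encrypt_block() trusts its caller about input size; if a different
     * refinement of the spec produces larger inputs, the memcpy walks
     * off the end of the stack buffer.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define BLOCK_BYTES 16  /* width chosen in one refinement of the spec */

    /* Imagined generated code: assumes in_len <= BLOCK_BYTES. */
    static void encrypt_block(uint8_t out[BLOCK_BYTES],
                              const uint8_t *in, size_t in_len)
    {
        uint8_t state[BLOCK_BYTES];
        memcpy(state, in, in_len);        /* overflows if in_len > BLOCK_BYTES */
        for (size_t i = 0; i < BLOCK_BYTES; i++)
            state[i] ^= 0x5a;             /* stand-in for the real cipher rounds */
        memcpy(out, state, BLOCK_BYTES);
    }

    /* The defensive wrapper I'd hope a code generator emits by default. */
    static int encrypt_block_checked(uint8_t out[BLOCK_BYTES],
                                     const uint8_t *in, size_t in_len)
    {
        if (in == NULL || in_len != BLOCK_BYTES)
            return -1;                    /* reject bad input instead of trusting it */
        encrypt_block(out, in, in_len);
        return 0;
    }

    int main(void)
    {
        uint8_t in[BLOCK_BYTES] = {0}, out[BLOCK_BYTES];
        if (encrypt_block_checked(out, in, sizeof in) == 0)
            printf("first output byte: 0x%02x\n", out[0]);
        return 0;
    }

If the generated code looks more like the first function than the second, the space/time tradeoffs are being made for you, silently.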

I worry about this because as important as crypto is, cryptographers have a lot to consider as they design algorithms and systems. As Michael Howard pointed out, the Tokeneer system shipped with a library that may be from 2001, with 23 possible vulns. It was secure for a set of requirements, and if the requirements for Cryptol don’t contain “resist bad input,” then a lot of systems will be in trouble.

As easy as dialing a phone

People often make the claim that something is “as intuitive as dialing the phone.”

As I was listening to “Dave Birch interviewing Ben Laurie,” I was reminded of this 1927 silent film:

[Image: how to dial the telephone]

Ben commented on people having difficulty with the CardSpace user interface, and on it not being as intuitive as using your email address as a login identifier.

Anyway, fascinating interview. Worth a listen, even if it takes twice as long as learning what a dial tone is.

Working Through Screens

Jacob Burghardt has a very interesting new ebook, “Working Through Screens.”
[Image: MagicHappens]

If one was to summarize the status quo, it might sound something like this: when it comes to interactive applications for knowledge work, products that are considered essential are not always satisfactory. In fact, they may be deeply flawed in ways that we commonly do not recognize given our current expectations of these tools. With our collective sights set low, we overlook many faults.

Unless knowledge workers are highly motivated early adopters that are willing and able to make use of most anything, their experiences as users of interactive applications can vary drastically. These differences in experience can largely depend on the overall alignment of an individual’s intentions and understandings with the specifics of a tool’s design.

Poorly envisioned knowledge work applications can … present workers with confusing data structures and representations of information that do not correlate to the artifacts that they are used to thinking about in their own work practices.

I’m only a little ways into the book, but a great deal of what he says resonates with me. Much of the problem I saw with previous-generation threat modeling tools was that they were created by and for those ‘highly motivated early adopters,’ and then delivered to people who were not used to thinking about their software from the perspective of assets and entry points. (Thus the third excerpt.) In creating the v3 SDL Threat Modeling Tool, I struggled with a lot of these issues.

If you encounter problems like this, there’s no reason to not invest some time in “Working Through Screens.”

Via Information Aesthetics.

Virgin America

I flew Virgin America for the first time recently, for a day trip to San Francisco. I enjoyed it. I can’t remember the last time I actually enjoyed getting on a plane.

The first really standout bit was when the Seattle ground folks put on music and ran a name-that-song contest. They handed out free drink tickets to each winner, and a second free drink for singing along over the PA. I was initially a little skeptical — I really wanted some peace and quiet — but it’s better than airport CNN. They seemed to be having a genuinely good time, and they had me smiling by the time I got on the plane.

On the way home, I splurged for a $50 upgrade, figuring that I needed a drink or three, and some food wouldn’t hurt either. The seat was comfy, and the flight attendant was friendly, conversational and appeared to be enjoying himself.

If I lived in San Francisco (their US hub) I’d be a convert. As is, I’ll likely fly them when I can.

If I were one of those pedantic bloggers who tried to tie everything back to the blog title, I’d talk about the value of the unexpected. But really, give them a chance if you’re headed on a route they fly.

SDL Announcements

I’m in Barcelona, where my employer has made three announcements about our Security Development Lifecycle, which you can read about here: “SDL Announcements at TechEd EMEA.”

I’m really excited about all three announcements: they represent an important step forward in helping organizations develop more secure code.

But I’m most excited about the public availability of the SDL Threat Modeling Tool. I’ve been working on this for the last 18 months. A lot of the thinking in “Experiences Threat Modeling at Microsoft” has been made concrete in this new tool, which helps any software engineer threat model.

[Image: SDL Threat Modeling Tool v3]

I’m personally tremendously grateful to Meng Li, Douglas MacIver, Patrick McCuller, Ivan Medvedev and Larry Osterman. Each of them has contributed tremendously to making the tool what it is today. I’m also grateful to the many Microsoft employees who have taken the time to give me feedback, and I look forward to more feedback as more people use the tool.

Blaming the Victim, Yet Again

[Image: malware dialog box]

John Timmer of Ars Technica writes about how we ignore dialog boxes in, “Fake popup study sadly confirms most users are idiots.”

The article reports that researchers at the Psychology Department of North Carolina State University created a number of fake dialog boxes with varying levels of visual clues to help the user recognize that they were sham, not real. One of the fake dialogs is shown above.

The conclusion of many people is summed up in the title of the Ars Technica article: that people are idiots.

My opinion is that this is blaming the victim. Users are presented with such a variety of elements that it’s hard to know what’s real and what’s not. Worse, there are so many worthless dialogs that pop up during normal operation that we’re all trained to play whack-a-mole with them.

I confess to being as bad as anyone. My company has SSL set up to the mail server, but it’s a locally-generated certificate. So every time I fire up the mail client, there’s a breathless dialog telling me that the certificate isn’t a real certificate. Do you know what this has taught me? To be able to whack the okay button before the dialog finishes painting.

The idiots are the developers who give people worthless dialog boxes, who make it next to impossible to import local certificates, who train people to just make the damned dialog go away.
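
To be concrete about the mail-server case: the client doesn’t need to show me that dialog every time. It could remember the certificate the first time and only warn when it changes. Here’s a rough trust-on-first-use sketch, assuming OpenSSL; the file names and pin-store format are invented, and a real client would take the certificate from the TLS handshake rather than reading it from disk.

    /*
     * Rough "trust on first use" sketch, assuming OpenSSL.  A real mail
     * client would get the certificate from the TLS handshake; reading it
     * from a file just keeps the example self-contained.  File names and
     * the pin-store format are made up for illustration.
     */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>
    #include <openssl/evp.h>

    int main(void)
    {
        FILE *f = fopen("mailserver.pem", "r");
        if (!f) { perror("mailserver.pem"); return 1; }
        X509 *cert = PEM_read_X509(f, NULL, NULL, NULL);
        fclose(f);
        if (!cert) { fprintf(stderr, "could not parse certificate\n"); return 1; }

        /* Fingerprint the certificate we were just shown. */
        unsigned char fp[EVP_MAX_MD_SIZE];
        unsigned int fplen = 0;
        if (!X509_digest(cert, EVP_sha256(), fp, &fplen)) {
            X509_free(cert);
            return 1;
        }

        /* Load the fingerprint we pinned last time, if any. */
        unsigned char pinned[EVP_MAX_MD_SIZE];
        size_t pinnedlen = 0;
        FILE *pin = fopen("mailserver.pin", "rb");
        if (pin) { pinnedlen = fread(pinned, 1, sizeof pinned, pin); fclose(pin); }

        if (pinnedlen == 0) {
            /* First use: this is the one time to ask, then remember the answer. */
            printf("New certificate; pinning its fingerprint.\n");
            pin = fopen("mailserver.pin", "wb");
            if (pin) { fwrite(fp, 1, fplen, pin); fclose(pin); }
        } else if (pinnedlen == fplen && memcmp(pinned, fp, fplen) == 0) {
            printf("Certificate matches the pin; no dialog needed.\n");
        } else {
            printf("Certificate changed since last time -- warn loudly now.\n");
        }

        X509_free(cert);
        return 0;
    }

Pinning has its own failure modes, but it would turn a daily nuisance into a rare, meaningful warning.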

Computing isn’t safe, primarily because the software expects the user to be a continuously alert expert. If the users are idiots, it is only because they stand for this.

The Discipline of "think like an attacker"

John Kelsey had some great things to say in a comment on “Think Like An Attacker.” I’ve excerpted some key bits to respond to them here.

Perhaps the most important is to get the designer to stop looking for reasons attacks are impossible, and start looking for reasons they’re possible. That’s a pattern I’ve seen over and over again–smart people who really know their system also usually like their system, and want it to be secure. And so they spend a lot of time thinking about why their system is secure. “Nobody could steal our PIN because we encrypt it with triple-DES.”

So this is a great goal. I have two questions: first, is it reasonable? How many people can really step outside their design and regard it with a new perspective? How many people can then analyze the security of a system they’ve designed? (Is there a formal name for this problem? I call it ‘creator-blindness.’) I’m not sure exhorting people to think like an attacker helps. This problem isn’t unique to security, which brings me to my second question: is it effective? I was once taught to read my writing aloud as a way of finding mistakes. I teach people to diagram their system and then use a system we call “STRIDE per element” to help people look at it. By giving people a structure for analysis, we help them step outside of that creator frame.
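
To show what I mean by giving people a structure, here’s a tiny sketch of the “STRIDE per element” idea: each element type in the diagram comes with a short checklist of threat categories to consider. The table follows the commonly published chart; it’s an illustration, not the actual data model of any tool.

    /*
     * A minimal sketch of "STRIDE per element": key a checklist of threat
     * categories to the element types in a data flow diagram.  The table
     * follows the commonly published chart; it is an illustration, not the
     * SDL Threat Modeling Tool's actual data model.
     */
    #include <stdio.h>

    enum { N_ELEMENTS = 4, N_THREATS = 6 };

    static const char *element_names[N_ELEMENTS] = {
        "External entity", "Process", "Data flow", "Data store"
    };

    static const char *threat_names[N_THREATS] = {
        "Spoofing", "Tampering", "Repudiation",
        "Information disclosure", "Denial of service", "Elevation of privilege"
    };

    /* Which STRIDE categories are usually considered per element type.   */
    /* (Repudiation is often added for data stores that hold audit logs.) */
    static const int applies[N_ELEMENTS][N_THREATS] = {
        /* External entity */ { 1, 0, 1, 0, 0, 0 },
        /* Process         */ { 1, 1, 1, 1, 1, 1 },
        /* Data flow       */ { 0, 1, 0, 1, 1, 0 },
        /* Data store      */ { 0, 1, 0, 1, 1, 0 },
    };

    int main(void)
    {
        for (int e = 0; e < N_ELEMENTS; e++) {
            printf("%s:\n", element_names[e]);
            for (int t = 0; t < N_THREATS; t++)
                if (applies[e][t])
                    printf("  consider %s\n", threat_names[t]);
        }
        return 0;
    }

The code is trivial on purpose: the value is the checklist, which gives someone who has never “thought like an attacker” a place to start.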

A second goal of that “think like an attacker” exhortation is to get people to realize that, in order to know whether their system is secure, they need to learn something about what tools and resources an attacker is likely to have.

So, for a moment, let’s assume that this is a reasonable goal, and one we can expect every developer who hears the phrase to go pursue. Where do they go? How much time should they devote to it? Again, I’m not talking about the use of the phrase within the security engineering community, but in software engineering more generally. Secondly (again), there’s the question of “is this the most effective way to push people?”

Third, there’s a mindset of being an attacker. I don’t know how to teach that. It’s not just about intelligence–I’ve worked with stunningly brilliant people who don’t seem to have that mindset, and with people who are much less brilliant in that brute-force impressive brain sense, but who just seem to have the right kind of mind to break stuff.

Well, that I can’t argue with. All I’ll say is that we’ve been exhorting people to think like attackers for years, and it hasn’t helped.

I believe that security analysis is a skill which can be taught. The best have both talent and have worked to develop that talent. I hope and expect that we can figure out how to do so. Figuring that out will involve figuring out what pedagogic approaches have failed, so we can set them aside, and make room for experimentation, chaos, and — we hope — actual improvements. I believe that, when asked of non-security experts, the ‘think like an attacker’ exhortation is on that list of things we should set aside.

Finally, a side note on the title. If you’re indisciplined, feel free to skip to about 3:10.

Think Like An Attacker?

One of the problems with being quoted in the press is that even your mom writes to you with questions like “And what’s wrong with ‘think like an attacker’? I think it’s good advice!”

Thanks for the confidence, mom!

Here’s what’s wrong with think like an attacker: most people have no clue how to do it. They don’t know what matters to an attacker. They don’t know how an attacker spends their day. They don’t know how an attacker approaches a problem. Telling people to think like an attacker isn’t prescriptive or clear. Some smart folks like Yoshi Kohno are trying to teach it. (I haven’t seen a report on how it’s gone.)

Even if Yoshi is succeeding, it’s hard to teach a way of thinking. It takes a quarter or more at a university. I’m not claiming that ‘think like an attacker’ isn’t teachable, but I will claim that most people don’t know how. What’s worse, the way we say it sometimes implies that you should be embarrassed if you can’t think like an attacker.

Lately, I’ve been challenging people to think like a professional chef. Most people have no idea how a chef spends their days, or how they approach a problem. They have no idea how to plan a menu, or how to cook a hundred or more dinners in an hour.

We need to give advice that can be followed. We need to teach people how to think about security. Repeating the “think like an attacker” mantra may be useful to a small class of well-oriented experts. For everyone else, it’s like saying “just ride the bike!” rather than teaching them step-by-step. We can and should do better at understanding people’s capabilities, giving them advice to match, and training and education to improve.

Understanding people’s capabilities, giving them advice to match and helping them improve might not be a bad description of all the announcements we made yesterday.

In particular, the new threat modeling process is built on something we expect an engineer will know: their software design. It’s a better starting point than “think like a civil engineer.”

[Update: See also my follow-up post, “The Discipline of ‘think like an attacker’.”]

Hans Monderman and Risk

Zimran links to an excellent long article on Hans Monderman and then says:

When thinking about human behavior, it makes sense to understand what people perceive, which may be different from how things are, and will almost certainly be very different from how a removed third party thinks them to be. Traffic accidents are predominantly caused by people being inattentive. Increase the feeling of risk, and you increase the attention. I know when I am in traffic on my bike, I’m hyper-vigilant, and this has made me a better car driver.

Some interesting quotes from the article:

Without bumps or flashing warning signs, drivers slowed, so much so that Monderman’s radar gun couldn’t even register their speeds. Rather than clarity and segregation, he had created confusion and ambiguity. Unsure of what space belonged to them, drivers became more accommodating. Rather than give drivers a simple behavioral mandate—say, a speed limit sign or a speed bump—he had, through the new road design, subtly suggested the proper course of action. And he did something else. He used context to change behavior. He had made the main road look like a narrow lane in a village, not simply a traffic-way through some anonymous town.

On Kensington High Street, a busy thoroughfare for pedestrians, bikes, and cars, local planners decided to spruce up the street and make it more attractive to shoppers by removing the metal railings that had been erected between the street and the sidewalk, as well as “street clutter,” everything from signs to hatched marks on the roadway. None of these measures complied with Department for Transport standards. And yet, since the makeover there have been fewer accidents than before. Though more pedestrians now cross outside crosswalks, car speeds (the fundamental cause of traffic danger) have been reduced, precisely because the area now feels like it must be navigated carefully.

We talk about Monderman’s thinking about risk in the New School, and I wanted to talk a little about the implications for computer security. The idea of giving a user experience a sense of place is a great one, if we could constrain it to the good guys. Unfortunately, bad guys can design their websites to look like a narrow lane in a village, a welcoming mall, or whatever else they want. The designer of a space can make you feel safe or feel like you must navigate carefully.

What do you think phishers are going to do?

Lessons for security from "Social Networks"

There are a couple of blog posts that I’ve read lately that link together for me, and I’m still working through the reasons why. I’d love your feedback or thoughts.

A blogger by the name of Lhooqtius ov Borg has a long screed on why he doesn’t like the “Social Futilities.” Tyler Cowen has a short post on “fake following.”

I think the futility of these systems involves a poor understanding of how people interact. The systems I like and use (LinkedIn, Dopplr) are very purpose specific. I really like how Dopplr doesn’t even bother with a friend concept–feel free to tell me where you’re going, I don’t have to reciprocate. It’s useful because it doesn’t try to replace a real, complex relationship (“friendship”) with a narrowly defined shadow of the world. (In this vein, Austin Hill links a great video in his Facebook in Reality post.)

In information technology, we often replace these rich, nuanced concepts with much more narrow, focused replacements which serve some business purpose. Credit granting has gone from an assessment of the person to an assessment of data about the person to an assessment of the person’s data shadow. There are some benefits to this: race is less of a factor than it was. There are also downsides, as data shadows, blurry things, get confused after fraud. (Speaking of credit scoring, BusinessWeek’s “Your lifestyle may hurt credit score” is not to be missed.)

We’ve replaced the idea of ‘identity’ with ‘account.’ (I’ll once again plug Goffman’s Presentation of Self for one understanding of how people fluidly and easily manage their personas, and why federated identity will never take off.) Cryptographers model people as Alice and Bob, universal Turing machines. But as Adi Shamir says, “If there’s one thing Alice and Bob are not, it’s universal Turing machines.” Many people have stopped Understanding Privacy and talk only about identity theft, or, if we’re lucky, about fair information practices.

So the key lesson is that the world is a complex, confusing, emergent and chaotic system. Simplifications all come at a cost. Without an understanding of those costs, we risk creating more security systems as frustrating as those “social networks.”

[Update: It turns out Bruce Schneier has a closely related essay in today’s LA Times, “The TSA’s useless photo ID rules” in which he talks about the dangers of simplifying identity into intent. Had I seen it earlier, I’d have integrated it in.]
