Mac Command Line: Turning Apps into Commands

I moved to Mac OS X because it offers both a Unix command line and graphical interfaces, and I almost exclusively use the command line as I switch between tasks. If you use a terminal and aren’t familiar with the open command, I urge you to take a look.

I tend to open documents with open ~/Do[tab]… I wanted a way to open more things like this: to treat every app as if it were a command. I set this up a little while back, and recently had to use a Mac without these little aliases, and it was annoying! (We know that mousing is objectively faster and cognitively slower than keyboard use.)

So I thought I’d share. This works great in a .tcshrc. I spent a minute translating it into bash, but the escaping escaped me. Also, I suppose there might be a more elegant approach to the MS Office apps, but it was easier to write five specific aliases than to figure it out.

Anyway, here’s the code:

foreach f (/Applications/*.app /Applications/Utilities/*.app)
    # basename -a prints a result for every word, which copes with tcsh
    # splitting $f on the spaces in app names; the sed below re-joins them.
    set t=`basename -a $f`
    # Does not work if your app has a shell metachar in the name. Lookin' at you, superduper!
    set w=`echo $t | sed -e 's/ //g' -e 's/\.app$//' | tr '[A-Z]' '[a-z]'`
    alias $w open -a \""$f"\"
end

alias excel open -a "/Applications/Microsoft Office 2011/Microsoft Excel.app"
alias word open -a "/Applications/Microsoft Office 2011/Microsoft Word.app"
alias powerpoint open -a "/Applications/Microsoft Office 2011/Microsoft PowerPoint.app"
alias ppt powerpoint
alias xls excel
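
For anyone who wants this in bash, here’s a rough sketch of a translation; it hasn’t been hardened, and the Office path is an assumption based on the 2011 layout above. Folding that folder into the loop avoids the hand-written aliases, though you lose short names like excel, so those may still earn their keep:

# A sketch of a bash equivalent, for a .bashrc.
for f in /Applications/*.app /Applications/Utilities/*.app \
         "/Applications/Microsoft Office 2011"/*.app; do
    [ -e "$f" ] || continue       # skip globs that matched nothing
    t=$(basename "$f")            # e.g. "Microsoft Excel.app"
    w=$(echo "$t" | sed -e 's/ //g' -e 's/\.app$//' | tr '[:upper:]' '[:lower:]')
    alias "$w"="open -a \"$f\""   # e.g. microsoftexcel
done

After sourcing, something like safari ~/Downloads/report.html should just work.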

(Previously: Adding emacs keybindings to Word.)

Election 2016

This election has been hard to take on all sorts of levels, and I’m not going to write about the crap. Everything to be said has been said, along with much that never should have been said, and much that should disqualify those who said it from running for President. I thought about endorsing Jill Stein, the way we endorsed McCain-Palin in 2008, but even the Onion is having trouble being funny.

One thing that makes the American election system less functional is the electoral college, under which, in effect, a small number of states decide the election.

There is an effort underway to change that to a national popular vote. A group is working towards it by getting states to agree amongst themselves to allocate their electoral college votes to the winner of the national popular vote, once enough states have made that commitment to control the result of the election. It’s a pretty neat approach to patching the Constitution, and you can learn more at National Popular Vote.

Also in the spirit of nice things to see today, WROC in Rochester is streaming from the resting place of Susan B. Anthony, whose tombstone has been covered with “I voted” stickers, and as I watch, people are reading the Seneca Falls Declaration.

Learning from Our Experience, Part Z

One of the themes of The New School of Information Security is how other fields learn from their experiences, and how information security’s culture of hiding our incidents prevents us from learning.

Zombie survival guide

Today I found yet another field where they are looking to learn from previous incidents and mistakes: zombies. From “The Zombie Survival Guide: Recorded Attacks:”

Organize before they rise!

Scripted by the world’s leading zombie authority, Max Brooks, Recorded Attacks reveals how other eras and cultures have dealt with–and survived–the ancient viral plague. By immersing ourselves in past horror we may yet prevail over the coming outbreak in our time.

Of course, we don’t need to imagine learning from our mistakes. Plenty of fields do it, and so they don’t shamble around like zombies.

The Breach Response Market Is Broken (and what could be done)

Much of what Andrew and I wrote about in the New School has come to pass. Disclosing breaches is no longer as scary, nor as shocking, as it was. But one thing we expected to happen was the emergence of a robust market of services for breach victims. That’s not happened, and I’ve been thinking about why that is, and what we might do about it.

I submitted a short (1 1/2 page) comment for the FTC’s PrivacyCon, and the FTC has published that here.

[Update Oct 19: I wrote a blog post for IANS, “After the Breach: Making Your Response Count”]

[Update Nov 21: the folks at Abine decided to run a survey, and asked 500 people what they’d like to see in a breach notice letter; see their blog post.]

Secure Development or Backdoors: Pick One

In “Threat Modeling Crypto Back Doors,” I wrote:

In the same vein, the requests and implementations for such back-doors may be confidential or classified. If that’s the case, the features may not go through normal tracking for implementation, testing, or review, again reducing the odds that they are secure. Of course, because such a system is designed to bypass other security controls, any weaknesses are likely to have outsized impact.

It sounds like exactly what I predicted has occurred. As Joseph Menn reports in “Yahoo secretly scanned customer emails for U.S. intelligence:”

When Stamos found out that Mayer had authorized the program, he resigned as chief information security officer and told his subordinates that he had been left out of a decision that hurt users’ security, the sources said. Due to a programming flaw, he told them, hackers could have accessed the stored emails.

(I should add that I did not see anything like this at Microsoft, but had thought about how it might have unfolded as I wrote what I wrote in the book excerpt above.)

Crypto back doors are a bad idea, and we cannot implement them without breaking the security of the internet.

Gavle Goat, now 56% more secure!

A burned wooden goat

“We’ll have more guards. We’re going to try to have a ‘goat guarantee’ the first weekend,” deputy council chief Helene Åkerlind, representing the local branch of the Liberal Party, told newspaper Gefle Dagblad.

“It is really important that it stays standing in its 50th year,” she added to Arbetarbladet.

Gävle Council has decided to allocate an extra 850,000 kronor ($98,908) to the goat’s grand birthday party, bringing the town’s Christmas celebrations budget up to 2.3 million kronor this year. (“Swedes rally to protect arson-prone yule goat”)

Obviously, what you need to free up that budget is more burning goats. Or perhaps it’s a credible plan for why spending it will reduce risk. I’m never quite sure.

Previously: 13 Meter Straw Goat Met His Match, Gavle Goat Gone, Burning News: Gavle Goat, Gävle Goat Gambit Goes Astray, The Gavle Goat is Getting Ready to Burn!.

Image: The goat’s mortal remains, immortalized in 2011 by Lasse Halvarsson.

You say noise, I say data

There is a frequent claim that stock markets are somehow irrational and unable to properly value the impact of cyber incidents in pricing. (That’s not usually precisely how people phrase it.) I like this chart of one of the largest credit card breaches in history:

Target Stock

It provides useful context as we consider this quote:

On the other hand, frequent disclosure of insignificant cyberincidents could overwhelm investors and harm a company’s stock price, said Eric Cernak, cyberpractice leader at the U.S. division of German insurer Munich Re. “If every time there’s unauthorized access, you’re filing that with the SEC, there’s going to be a lot of noise,” he said.
(Corporate Judgment Call: When to Disclose You’ve Been Hacked, Tatyana Shumsky, WSJ)

Now, perhaps Mr. Cernak’s words have been taken out of context. After all, it’s a single sentence in a long article, and the lead-in, which is a paraphrase, may confuse the issue.

I am surprised that an insurer would be opposed to having more data from which they can try to tease out causative factors.

Image from The Langner group. I do wish it showed the S&P 500.

Why Don't We Have an Incident Repository?

Steve Bellovin and I provided some “Input to the Commission on Enhancing National Cybersecurity.” It opens:

We are writing after 25 years of calls for a “NTSB for Security” have failed to result in action. As early as 1991, a National Research Council report called for “build[ing] a repository of incident data” and said “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.” [1] The calls for more data about incidents have continued, including by us [2, 3].

The lack of a repository of incident data impacts our ability to answer or assess many of your questions, and our key recommendation is that the failure to establish such a repository is, in and of itself, worthy of study. There are many factors in the realm of folklore as to why we do not have a repository, but no rigorous answer. Thus, our answer to your question 4 (“What can or should be done now or within the next 1-2 years to better address the challenges?”) is to study what factors have inhibited the creation of a repository of incident data, and our answer to question 5 (“what should be done over a decade?”) is to establish one. Commercial air travel is so incredibly safe today precisely because of decades of accident investigations, investigations that have helped plane manufacturers, airlines, and pilots learn from previous failures.

Diagrams in Threat Modeling

When I think about how to threat model well, one of the most important elements is how much people need to keep in their heads: the cognitive load, if you will.

In reading Charlie Stross’s blog post “Writer, Interrupted,” this paragraph really jumped out at me:

One thing that coding and writing fiction have in common is that both tasks require the participant to hold huge amounts of information in their head, in working memory. In the case of the programmer, they may be tracing a variable or function call through the context of a project distributed across many source files, and simultaneously maintaining awareness of whatever complex APIs the object of their attention is interacting with. In the case of the author, they may be holding a substantial chunk of the plot of a novel (or worse, an entire series) in their head, along with a model of the mental state of the character they’re focussing on, and a list of secondary protagonists, while attempting to ensure that the individual sentence they’re currently crafting is consistent with the rest of the body of work.

One of the reasons that I’m fond of diagrams is that they allow the threat modelers to migrate information out of their heads into a diagram, making room for thinking about threats.

Lately, I’ve been thinking a lot about threat modeling tools, including some pretty interesting tools for automated discovery of existing architecture from code. That’s pretty neat, and it dramatically cuts the cost of getting started. Reducing effort, or cost, is inherently good. Sometimes, the reduction in effort is an unalloyed good, that is, any tradeoffs are so dwarfed by benefits as to be unarguable. Sometimes, you lose things that might be worth keeping, whether it’s the pleasure of a hobby like knitting or the care a chef puts into preparing a fine meal.

I think a lot about where drawing diagrams on a whiteboard falls. It has a cost, and that cost can be high. “Assemble a team of architect, developer, test lead, business analyst, operations and networking” reads one bit of advice. That’s a lot of people for a cross-functional meeting.

That meeting can be a great way to find disconnects in what people conceive of building. And there’s a difference between drawing a diagram and being handed a diagram. I want to draw that out a little bit and ask for your help in understanding the tradeoffs and when they might and might not be appropriate. (Gary McGraw is fond of saying that getting these people in a room and letting them argue is the most important step in “architectural risk analysis.” I think it’s tremendously valuable, and having structures, tools and methods to help them avoid ratholes and path dependency is a big win.)

So what are the advantages and disadvantages of each?

Whiteboard

  • Collaboration. Walking to the whiteboard and picking up a marker is far less intrusive than taking someone’s computer, or starting to edit a document in a shared tool.
  • Ease of use. A whiteboard is still easier than just about any other drawing tool.
  • Discovery of different perspective/belief. This is a little subtle. If I’m handed a diagram, I’m less likely to object. An objection may read as a critique of someone else’s work; it may create conflict. As something is being drawn on a whiteboard, it seems easier to say “what about the debug interface?” (This ties back to Gary McGraw’s point.)
  • Storytelling. It is easier to tell a story standing next to a whiteboard than any tech I’ve used. A large whiteboard diagram is easy to point at. You’re not blocking the projector. You can easily edit as you’re talking.
  • Messy writing/what does that mean? We’ve all been there: someone writes something in shorthand as a conversation is happening, and either you can’t read it or you can’t understand what was meant. Structured systems encourage writing a few more words, making things more tedious for everyone around.

Software Tools

  • Automatic analysis. Tools like the Microsoft Threat Modeling tool can give you a baseline set of threats to which you add detail. Structure is a tremendous aid to getting things done, and in threat modeling, it helps in answering “what could go wrong?” (There’s a toy sketch of this idea just after the list.)
  • Authority/decidedness/fixedness. This is the other side of the discovery coin. Sometimes, there are architectural answers, and those answers are reasonably fixed. For example, hardware accesses are mediated by the kernel, and filesystem and network are abstracted there. (More recent kernels offer filesystems in userland, but that change was discussed in detail.) Similarly, I’ve seen large, complex systems with overall architecture diagrams, and a change to these diagrams had to be discussed and approved in advance. If this is the case, then a fixed diagram, printed poster size and affixed to walls, can also be used in threat modeling meetings as a context diagram. No need to re-draw it as a DFD.
  • Photographs of whiteboards are hard to archive and search without further processing.
  • Photographs of whiteboards may imply that “this isn’t very important.” If you have a really strong culture of “just barely good enough,” then this might not be the case, but if other documents are more structured or cared for, then photos of a whiteboard may carry a message.
  • Threat modeling only late. If you’re going to get architecture from code, then you may not think about it until the code is written. If you weren’t going to threat model anyway, then this is a win, but if there was a reasonable chance you were going to do the architectural analysis while there was a chance to change the architecture, software tools may take that away.
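
To make “baseline set of threats” a bit more concrete, here’s a toy sketch of the idea behind STRIDE-per-element enumeration. This is not how the Microsoft tool actually works, and the dfd.txt format is invented for illustration:

# Toy sketch of automatic threat enumeration, in bash. Assumes a
# hypothetical dfd.txt with lines like "process:web frontend" or
# "dataflow:browser to frontend".
while IFS=: read -r type name; do
    case "$type" in
        external)  echo "$name: spoofing, repudiation" ;;
        process)   echo "$name: spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege" ;;
        dataflow)  echo "$name: tampering, information disclosure, denial of service" ;;
        datastore) echo "$name: tampering, repudiation, information disclosure, denial of service" ;;
    esac
done < dfd.txt

Each output line is a prompt for a human to add detail to; the win is that every element gets asked about, every time.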

(Of course, there are apps that help you take images from a whiteboard and improve them, for example, Best iOS OCR Scanning Apps, which I’m ignoring for purposes of teasing things out a bit. Operationally, it’s probably worth digging into.)

I’d love your thoughts: are there other advantages or disadvantages of a whiteboard or software?
