20 Year Software: Engineering and Updates

Twenty years ago, Windows 95 was the most common operating system. Yahoo and Altavista were our gateways to the internet. Steve Jobs had just returned to Apple. Google didn’t exist yet. America Online had just launched their Instant Messenger. IPv6 was coming soon. That’s part of the state of software in 1997, twenty years ago. We need to figure out what engineering software looks like for a twenty year lifespan, and part of that will be really doing such engineering, because theory will only take us to the limits of our imaginations.

Today, companies are selling devices that will last twenty years in the home, such as refrigerators and speakers, and making them with network connectivity. That network connectivity is both full internet connectivity, that is, Internet Protocol stacks, and also local network connectivity, such as Bluetooth and Zigbee.

We have very little idea how to make software that can survive as long as AOL IM did. (It’s going away in December, if you missed the story.)

Recently, there was a news cycle around Sonos updating their privacy policy. I don’t want to pick on Sonos, because I think what they’re experiencing is just a bit of the future, unevenly distributed. Responses such as this were common: “Hardware maker: Give up your privacy and let us record what you say in your home, or we’ll destroy your property:”

“The customer can choose to acknowledge the policy, or can accept that over time their product may cease to function,” the Sonos spokesperson said, specifically.

Or, as the Consumerist, part of Consumer Reports, puts it in “Sonos Holds Software Updates Hostage If You Don’t Sign New Privacy Agreement”:

Sonos hasn’t specified what functionality might no longer work in the future if customers don’t accept the new privacy policy.

There are some real challenges here, both technical and economic. Twenty years ago, we didn’t understand double-free or format string vulnerabilities. Twenty years of software updates aren’t going to be cheap. (I wrote about the economics in “Maintaining & Updating Software.”)

My read is that Sonos tried to write a privacy policy that balanced their anticipated changes, including Alexa support, with a bit of future-proofing. I think that the reason they haven’t specified what might not work is because they really don’t know, because nobody knows.

The image at the top is the sole notification that I’ve gotten that Office 2011 is no longer getting security updates. (Sadly, it’s only shown once per computer, or perhaps once per user of the computer.) Microsoft, like all tech companies, will cut functionality that it can’t support, like [Visual Basic for Mac](https://www.macworld.com/article/1154785/business/welcomebackvisualbasic.html), and also “end-of-lifes” its products. They do so on a published timeline, but it seems wrong to apply that to a refrigerator, end-of-lifing your food supply.

There’s probably a clash coming between what’s allowable and what’s economically feasible. If your contract says you can update your device at any time, it still may be beyond “the corners of the contract” to shut it off entirely. Beyond being economically challenging, it may not even be technically feasible to update the system. Perhaps the chip is too small, or its power budget too meager, to always connect over TLS4.2, needed to address the SILLYLOGO attack.

What we need might include:

  • A Dead Software Foundation, dedicated to maintaining the software which underlies IoT devices for twenty years. This is not only the Linux kernel, but things like tinybox and openssl. Such a foundation could be funded by organizations shipping IoT devices, or even by governments, concerned about the externalities, what Sean Smith called “the Cyber Love Canal” in The Internet of Risky Things. The Love Canal analogy is apt; in theory, the government cleans up after the firms that polluted are gone. (The practice is far more complex.)
  • Model architectures that show how to engineer devices, such as an internet speaker, so that it can effectively be taken offline when the time comes. (There’s work in the mobile app space on making apps work offline, which is related, but carries the expectation that the app will eventually reconnect.)
  • Conceptualization of the legal limits of what you can sign away in the fine print. (This may be everything; between severability and arbitration clauses, the courts have let contract law tilt very far towards the contract authors, but Congress did step in to write the Consumer Review Fairness Act.) The FTC has commented on issues of device longevity, but not (afaik) on limits of contracts.

What else do we need to build software that survives for twenty years?

Threat Modeling “App Democracy”

“Direct Republican Democracy?” is a fascinating post at Prawfsblog, a collective of law professors. In it, Michael T. Morley describes a candidate for Boulder City Council with a plan to vote “the way voters tell him,” and discusses how that might not be really representative of what people want, and how it differs from (small-r) republican government. Worth a few moments of your time.

Building an Application Security Team

The Application Security Engineer role is in demand nowadays. There are many job openings, but actual candidates are scarce. Why is that? It’s not an easy role to fill, as the skills needed are both broad and specialized at the same time. Most of the postings are looking for one person, one unicorn who does all those wonderful things to ensure that the organization is making secure software.


Looking at where we are coming from

For the sake of simplicity, let’s say that the application security industry is traditionally split between two main types of offering: selling products and selling pen test services. On many occasions both come from the same vendor, but from different teams internally. Let’s break down how each is judged on its success, to get an idea of how the industry has evolved.

Continue reading “Building an Application Security Team”

Worthwhile books, Q3

Here is some of what I’ve read over the past quarter; I want to recommend each of the books below as worthy of your time.

Cyber

  • The Internet of Risky Things, Sean Smith. This was a surprisingly good short read. What I gained was an organized way of thinking and a nice reference for thinking through the issues of IOT. Also, the lovely phrase “cyber Love Canal.”
  • American Spies, Jennifer Stisa Granick. Again, surprisingly good, laying out with the logical force that really good lawyers bring, explaining both sides of an issue and then explaining the frame in which you should understand it.
  • Saving Bletchley Park, Sue Black. (Title links to publisher, who sells ebook & print, or you can go to Amazon, who only sells the hardback.) The really interesting story of the activism campaign to save Bletchley Park, which was falling apart 20 years ago. Dr. Black is explicit that she wrote the book to carry the feel of an internet campaign, with some stylistic bits that I found surprising. I was expecting a drier style. Don’t make my mistake, and do read the book. Also, visit Bletchley Park: it’s a great museum.

Nonfiction, not security

Fiction

  • N. K. Jemisin’s Broken Earth Series. Outstanding writing, interesting worldbuilding, and the first two books have both won Hugos. First book is “The Fifth Season.” Bump it up in your queue.
  • The Rise and Fall of D.O.D.O., Neal Stephenson and Nicole Galland. I’m not (yet) familiar with Galland’s work, much of which seems to be historical fiction. This is a fairly breezy and fun time travel read, much less dense than most of Stephenson’s recent books.

Previously: Q2.

Emergent Musical Chaos

The New York Times reports on how many of Alan Lomax’s recordings are now online, “The Unfinished Work of Alan Lomax’s Global Jukebox.” This is a very interesting and important archive of musical and cultural heritage: The Global Jukebox. I was going to say that Lomax and Harry Smith were parallel, and that the Anthology of American Folk Music is a similar project, but I was wrong; Smith drew heavily on Lomax’s work.

Simultaneously, the Internet Archive has been working to put 78 RPM records online, and has a feed.

It’s Not The Crime, It’s The Coverup or the Chaos

Well, Richard Smith has “resigned” from Equifax.

A CEO being fired is a rare outcome of a breach, and so I want to discuss what’s going on and put it into context, which includes the failures at DHS and the Deloitte breach. I usually aim to follow the advice to praise specifically and criticize in general, but I break that pattern here because we can learn so much from the specifics of these cases, and in so learning, do better.

Smith was not fired because of the breach. Breaches happen. Executives know this. Boards know this. The breach is outside of their control. Smith was fired because of the post-breach chaos. Systems that didn’t work. Tweeting links to a scam site for two weeks. PINs that were recoverable. Weeks of systems saying “you may have been a victim.” Headlines like “Why the Equifax Breach Stings So Bad” in the NYTimes. Smith was fired in part because of the post-breach chaos, which was something he was supposed to control.

But it wasn’t just the chaos. It was that Equifax displayed so much self-centeredness after the breach. They had the chutzpah to offer up their own product as a remedy. And that self-dealing comes from seeing itself as a victim. From failing to understand how the breach will be seen in the rest of the world. And that’s a very similar motive to the one that leads to coverups.

In The New School Andrew and I discussed how fear of firing was one reason that companies don’t disclose breaches. We also discussed how, once you agree that “security issues” are things which should remain secret or shared with a small group, you can spend all your energy on rules for information sharing, and have no energy left for actual information sharing.

And I think that’s the root cause of “U.S. Tells 21 States That Hackers Targeted Their Voting Systems” a full year after finding out:

The notification came roughly a year after officials with the United States Department of Homeland Security first said states were targeted by hacking efforts possibly connected to Russia.

A year.

A year.

A year after states were first targeted. A year in which “Obama personally warned Mark Zuckerberg to take the threats of fake news ‘seriously.’” (Of course, the two issues may not have been provably linkable at the time.) But. A year.

I do not know what the people responsible for getting that message to the states were doing during that time, but we have every reason to believe that it probably had to do with (and here, I am using not my sarcastic font, but my scornful one) “rules of engagement,” “traffic light protocols,” “sources and methods” and other things which are at odds with addressing the issue. (End scornful font.) I understand the need for these things. I understand protecting sources is a key role of an intelligence service which wants to recruit more sources. And I also believe that there’s a time to risk those things. Or we might end up with a President who has more harsh words for Australia than the Philippines. More time for Russia than Germany.

In part, we have such a President because we value secrecy over disclosure. We accept these delays and view them as reasonable. Of course, the election didn’t turn entirely on these issues, but on our electoral college system, which I discussed at some length, including ways to fix it.

All of which brings me to the Deloitte breach, “Deloitte hit by cyber-attack revealing clients’ secret emails.” Deloitte, along with the others who make up the big four audit firms, gets access to its clients’ deepest secrets, and so you might expect that the response to the breach would involve similar levels of outrage. And I suspect a lot of partners are making a lot of hat-in-hand visits to boardrooms, and contritely trying to answer questions like “what the flock were you people doing?” and “why the flock weren’t we told?” I expect that there are going to be some very small bonuses this year. But, unlike our relationship with Equifax, boards do not feel powerless in relation to their auditors. They can pick and swap. Boards do not feel that the system is opaque and unfair. (They sometimes feel that the rules are unfair, but that’s a different failing.) The extended reporting time will likely be attributed to the deep analysis that Deloitte did so it could bring facts to its customers, and that might even be reasonable. After all, a breach is tolerable; chaos afterwards may not be.

The two biggest predictors of public outrage are chaos and coverups. No, that’s not quite right. The biggest causes are chaos and coverups. (Those intersect poorly with data brokerages, but are not limited to them.) And both are avoidable.

So what should you do to avoid them? There’s important work in preparing for a breach, and in preventing one.

  • First, run tabletop response exercises to understand what you’d do in various breach scenarios. Then re-run those scenarios with the principals (CEO, General Counsel) so they can practice, too.
  • To reduce the odds of a breach, realize that you need continuous and integrated security as part of your operational cycles. Move from focusing on pen tests, red teams and bug bounties to a focus on threat modeling, so you can find problems systematically and early.

I’d love to hear what other steps you think organizations often miss out on.

Parroting Bad Security Advice

A PARROT has become the latest voice to fool Amazon’s Alexa voice assistant after ordering gift boxes using an Amazon Echo. Buddy the African Grey Parrot mimicked his owner’s voice so convincingly that her Amazon Echo accepted the order for six gift boxes. (“Parrot mimics owner to make purchases using Amazon Echo.”)

As Alexa has a facility to require a PIN code before placing an order, it was really down to the family that their bird was able to make the request.

Of course, Buddy would have been unable to learn the PIN.

Via Michael Froomkin.

“The Readability Of Scientific Texts Is Decreasing Over Time”

There’s an interesting new paper at bioRXiv, “The Readability Of Scientific Texts Is Decreasing Over Time.”

Lower readability is also a problem for specialists (22, 23, 24). This was explicitly shown by Hartley (22) who demonstrated that rewriting scientific abstracts, to improve their readability, increased academics’ ability to comprehend them. While science is complex, and some jargon is unavoidable (25), this does not justify the continuing trend that we have shown.

Ironically, the paper is released as a PDF, which is hard to read on a mobile phone. There’s a tool, pandoc, which can easily create HTML versions from their LaTeX source. I encourage everyone who cares about their work being read to create HTML and ebook versions.
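As a sketch of how little effort the conversion takes (assuming pandoc is installed, and using a hypothetical source file named paper.tex):

```shell
# Produce a standalone HTML file, with any math rendered via MathJax
pandoc paper.tex --standalone --mathjax -o paper.html

# The same source can also become an EPUB for e-readers
pandoc paper.tex -o paper.epub
```

Pandoc infers the output format from the file extension, so producing a phone-friendly version is essentially a one-liner once the LaTeX source compiles.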

Threat Modeling and Architecture

“Threat Modeling and Architecture” is the latest in a series at Infosec Insider.

After I wrote my last article on Rolling out a Threat Modeling Program, Shawn Chowdhury asked (on LinkedIn) for more information on involving threat modeling in the architecture process. It’s a great question, except it involves the words “threat,” “modeling,” and “architecture.” And each of those words, by itself, is enough to get some people twisted around an axle.

Continue reading “Threat Modeling and Architecture”