The voice shouts out: “Detector error, please see manual.” Just once, then again a few hours later. And when I did see the manual, I discovered that it means “Alarm has reached its End of Life.”
No, really. That’s how my fire alarm told me that it’s at its end of life: by telling me to read the manual. Why doesn’t it say “device has reached end of life”? That would be direct and to the point. But no. When you press the button, it says “please see manual.” Now, this was a 2009 device, so maybe, just maybe, there was a COGS issue in how much storage was needed.
But sheesh. Warning messages should be actionable, explanatory, and tested. At least it was loud and annoying.
Access to an account is access to an account. A lot of systems talk about “backup” authentication, but then make that backup authentication available at all times. This has led to all sorts of problems, because the idea that the street you grew up on is a secret didn’t make sense even before Yahoo! “invalidated” it. Not to mention that even when answers to these questions are freeform, they tend to have only a few bits of entropy. Colors? First names? All have distributions. Then there are the ones who insist they know your answers:
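To make the “few bits of entropy” point concrete, here’s a small sketch. The probabilities below are made up for illustration, not measured data; the point is only that any skewed answer distribution carries far less entropy than its answer space suggests:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical distribution: suppose four favorite colors cover
# most people's answers to "what's your favorite color?"
color_probs = [0.35, 0.25, 0.20, 0.20]
print(shannon_entropy(color_probs))  # ~1.96 bits

# Compare with a random 8-character lowercase string:
print(8 * math.log2(26))  # ~37.6 bits
```

Even a perfectly uniform four-way answer gives only two bits; real answers are worse, because people’s answers cluster.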
One of the people who’s focused on really improving account recovery is Brad Hill, and at F8, Facebook announced some new tech which I think is a very useful new point in the design space.
As developers, we talk a lot about building experiences that people love. But there’s one experience that never fails to elicit a groan from people everywhere: recovering an account after forgetting your password.
Delegated Account Recovery helps people and businesses recover their accounts using the services that they trust. It is an open protocol that gives companies the ability to provide better and more secure options to their customers for regaining access to their accounts. Facebook — and other providers in the future — can help people verify who they are when they forget their password, lose their two-factor codes, or don’t want to answer security questions based on personal information. (“Delegated Account Recovery Now Available in Beta.”)
It’s worth checking out.
And not that I’m trying to make trouble for anyone, but at what point does relying on use of a “secret” question like “street you grew up on” become the sort of unfair trade practice that garners regulatory attention? My guess is that the availability of credible alternatives brings that day closer.
There’s an interesting report out from the Cyentia Institute, which is run by Wade Baker and Jay Jacobs. (Wade and Jay were amongst the principals behind the Verizon DBIR.) It’s “The Cyber Balance Sheet.” It’s interesting research, and worth your time if you spend time with executives.
Despite the title, end users are rarely the weak link in security. We often make impossible demands of them. For example, we want them to magically know things which we do not tell them.
Today’s example: in many browsers, this site will display as “Apple.com.” Go ahead. Explore that for a minute, and see if you can find evidence that it’s not. What I see when I visit is:
When I visit, the browser tells me it’s a secure site. When I click on the word “Secure,” I see this:
But it’s really www.xn--80ak6aa92e.com, which is a Punycode URL. Punycode is a way to encode other languages so they display properly. That’s good. What’s not good is that there’s no way to know that those are not the letters you think they are. Xudong Zheng explains the problem in more depth, and writes about how to address it in the short term:
A simple way to limit the damage from bugs such as this is to always use a password manager. In general, users must be very careful and pay attention to the URL when entering personal information. I hope Firefox will consider implementing a fix to this problem since this can cause serious confusion even for those who are extremely mindful of phishing.
I appreciate Xudong taking the time to suggest a fix. But I don’t think the right fix is to expect everyone to use a password manager.
When threat modeling, I talk about this as the interplay between threats and mitigations: threats should be mitigated, and there’s a threat that any given mitigation can be bypassed. When dealing with people, there’s a simple test product security engineering can use. If you cannot write down the steps that a person must take to be secure, you have a serious problem. If that list won’t fit on a whiteboard, you have a serious problem. I’m not suggesting that there’s an easy or obvious fix to this. But I am suggesting that as long as browser makers are telling their users that looking at the URL bar is a security measure, they have to make that security measure resist attacks.
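Python’s standard-library punycode codec makes the mismatch easy to see. A minimal sketch, using the label from the domain above:

```python
# The ACE label from www.xn--80ak6aa92e.com (Xudong Zheng's demo domain).
label = "xn--80ak6aa92e"

# Strip the "xn--" prefix, then decode with the stdlib punycode codec.
decoded = label[len("xn--"):].encode("ascii").decode("punycode")

print(decoded)             # Cyrillic characters that render like "apple"
print(decoded == "apple")  # False: not the Latin letters you think they are
print([hex(ord(c)) for c in decoded])  # all code points in the Cyrillic block
```

A browser that checked whether a decoded label mixes scripts (or consists entirely of confusable characters from a single non-Latin script) could warn the user instead of silently rendering the lookalike.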
When I started blogging a dozen years ago, the world was different. Over time, I ended up with at least two main blogs (Emergent Chaos and New School), and guest posting at Dark Reading, IANS, various Microsoft blogs, and other places.
I decided it’s time to bring all that under a single masthead, and hey, get TLS finally. I’ve imported the EmergentChaos and New School archives, but not the others. For those others, I’ll post a link here as I post there.
If you subscribe to either or both, I suggest subscribing here; I’ll post reminders to those other blogs to move as well. If you maintain a link to either of the old blogs, please update it to point here.
I’m sure I’ve broken things in the imports; please let me know what they are.
In the near future, I’ll set up redirects from the old blogs to here.
So I’m curious: on what basis is the President of the United States able to issue orders to attack the armed forces of Syria?
It is not on the basis of the 2001 “Authorization for Use of Military Force,” cited in many instances, because there has been no claim that Syria was involved in the 9/11 attacks. (Bush and then Obama both stretched this basis incredibly, and worryingly, far. But both took care to trace back to an authorization.)
It is not on the basis of an emergency use of force because the United States was directly threatened.
Which leaves us with, as the NY Times reports:
Mr. Trump authorized the strike with no congressional approval for the use of force, an assertion of presidential authority that contrasts sharply with the protracted deliberations over the use of force by his predecessor, Barack Obama. (“Dozens of U.S. Missiles Hit Air Base in Syria.”)
Or, as Donald Trump once said:
Seriously, what is the legal basis of this order?
Have we really arrived at a point where the President of the United States can simply order the military to strike anywhere, anytime, at his personal discretion?
This video is really amazingly inspiring:
Not only does it show more satellites than I’ve ever seen in a single frame of video, but the rocket that took them up was launched by the Indian Space Research Organisation, which not only launched the largest satellite constellation ever, but had room for a few more birds besides. It’s an impressive achievement, and it (visually) crystallizes a shift in how we approach space. Also, congratulations to the team at Planet on gaining the ability to image all of Earth’s landmass every day.
Launching a microsatellite into low Earth orbit is now accessible to hobbyists. Many readers of this blog could do it. That’s astounding. Stop and think about that for a moment. Our failure to have exciting follow-on missions after Apollo can obscure the fascinating things which are happening in space, as it gets cheap and almost boring to get to low Earth orbit. The Economist has a good summary. That’s not to say that there aren’t things happening further out. This is the year that contestants in the Google Lunar XPrize competition must launch. Two tourists have paid a deposit to fly around the moon.
But what’s happening close to the planet is where the economic changes will be most visible soon. That’s not to say it’s the only thing to watch, but the same engines will enable more complex and daring missions.
For more on what’s happening in India around space exploration and commercialization, this is a fascinating interview with Susmita Mohanty.
Video link: ISRO PSLV-C37 onboard camera view of 104 satellites deployment
After the February 2017 S3 incident, Amazon posted this:
We are making several changes as a result of this operational event. While removal of capacity is a key operational practice, in this instance, the tool used allowed too much capacity to be removed too quickly. We have modified this tool to remove capacity more slowly and added safeguards to prevent capacity from being removed when it will take any subsystem below its minimum required capacity level. This will prevent an incorrect input from triggering a similar event in the future. We are also auditing our other operational tools to ensure we have similar safety checks. We will also make changes to improve the recovery time of key S3 subsystems. (“Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region“)
How often do you see public lessons like this in security?
“We have modified our email clients to not display URLs which have friendly text that differs meaningfully from the underlying anchor. Additionally, we re-write URLs, and route them through our gateway unless they meet certain criteria…”
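That quote is hypothetical, but the check it imagines can be sketched. `display_text_mismatch` below is an illustrative helper, not any real mail gateway’s API; it flags a link only when the visible text itself looks like a host and points somewhere else:

```python
from urllib.parse import urlparse

def display_text_mismatch(anchor_text, href):
    """Flag links whose visible text looks like a URL but whose
    underlying anchor points at a different host."""
    text = anchor_text.strip().lower()
    actual_host = (urlparse(href).hostname or "").lower()
    # Only flag when the display text itself looks like a host or URL.
    if "." not in text:
        return False
    shown = text if "://" in text else "http://" + text
    shown_host = (urlparse(shown).hostname or "").lower()
    return shown_host != actual_host

print(display_text_mismatch("paypal.com", "http://evil.example/login"))  # True
print(display_text_mismatch("Click here", "http://example.com"))         # False
print(display_text_mismatch("example.com", "https://example.com/path"))  # False
```

A real implementation would need to handle subdomains, redirector allowlists, and internationalized hostnames, but the core comparison is this simple.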
Relatedly, Etsy’s Debriefing Facilitation guide. Also, many people are describing this as “human error,” which reminds me of Don Norman’s “Proper Understanding of ‘The Human Factor’:”
…if a valve failed 75% of the time, would you get angry with the valve and simply continue to replace it? No, you might reconsider the design specs. You would try to figure out why the valve failed and solve the root cause of the problem. Maybe it is underspecified, maybe there shouldn’t be a valve there, maybe some change needs to be made in the systems that feed into the valve. Whatever the cause, you would find it and fix it. The same philosophy must apply to people.
(Thanks to Steve Bellovin for reminding me of the Norman essay recently.)
At RSA’17, I spoke on “Security Leadership Lessons from the Dark Side.”
Leading a security program is hard. Fortunately, we can learn a great deal from Sith lords, including Darth Vader and how he managed security strategy for the Empire. Managing a distributed portfolio is hard when rebel scum and Jedi knights interfere with your every move. But that doesn’t mean that you have to throw the CEO into a reactor core. “Better ways you will learn, mmmm?”
In the talk, I discussed how “security people are from Mars and business people are from Wheaton,” and how to overcome the communication challenges associated with that.
RSA has posted audio with slides, and you can take a listen at the link above. If you prefer the written word, I have a small ebook on Cyber Portfolio Management, a new paradigm for driving effective security programs. But I designed the talk to be the most entertaining intro to the subject.
Later this week, I’ll be sharing the first draft of that book with people who subscribe to my “Adam’s New Thing” mailing list. Adam’s New Thing is my announcement list for people who hate such things. I guarantee that you’ll get fewer than 13 messages a year.
Lastly, I want to acknowledge that at BSides San Francisco 2012, Kellman Meghu made the point that “they’re having a pretty good risk management discussion,” and that inspired the way I kicked off this talk.
In September, Steve Bellovin and I asked “Why Don’t We Have an Incident Repository?”
I’m continuing to do research on the topic, and I’m interested in putting together a list of such things. I’d like to ask you for two favors.
First, if you remember such things, can you tell me about them? I recall “Computers at Risk,” the National Cyber Leap Year report, and the Bellovin & Neumann editorial in IEEE S&P. Oh, and “The New School of Information Security.” But I’m sure there have been others.
In particular, what I’m looking for are calls like this one in Computers at Risk (National Academies Press, 1991):
3a. Build a repository of incident data. The committee recommends that a repository of incident information be established for use in research, to increase public awareness of successful penetrations and existing vulnerabilities, and to assist security practitioners, who often have difficulty persuading managers to invest in security. This database should categorize, report, and track pertinent instances of system security-related threats, risks, and failures. […] One possible model for data collection is the incident reporting system administered by the National Transportation Safety Board… (chapter 3)
Second, I am trying to do searches such as “cites ‘Computers at Risk’ and contains ‘NTSB’.” I have tried without luck to do this on Google Scholar, Microsoft Academic and Semantic Scholar. Only Google seems to be reliably identifying that report. Is there a good way to perform such a search?