Analyzing The Army's Accidental Test

According to Wired, “Army Practices Poor Data Hygiene on Its New Smartphones, Tablets.” And I think that’s awesome. No, really, not the ironic sort of awesome, but the awesome sort of awesome, because what the Army is doing is a large-scale natural experiment in “does it matter?”

Over the next n months, the Pentagon’s IG can compare incidents in the Army to those in the Navy and the Air Force, and see who’s doing better and who’s doing worse. In theory, the branches of the military should all be otherwise roughly equivalent in security practice and culture (compared to, say, Twitter’s corporate culture, or that of Goldman Sachs).

With that data, they can assess if the compliance standards for smartphones make a difference, and what difference they make.
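To make concrete what such a comparison might look like, here is a minimal sketch of a two-proportion z-test, the standard way to ask whether one group’s incident rate differs from another’s. The numbers are entirely made up for illustration; this is not the IG’s method or real Army data.

```python
import math

def two_proportion_z(incidents_a, n_a, incidents_b, n_b):
    """Two-proportion z-test: does group A's incident rate differ
    from group B's? Returns the z statistic (larger magnitude means
    stronger evidence of a real difference)."""
    p_a = incidents_a / n_a
    p_b = incidents_b / n_b
    # Pooled rate under the null hypothesis that both rates are equal.
    p_pool = (incidents_a + incidents_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical, illustrative numbers only: 40 incidents across
# 10,000 Army devices vs. 25 across 8,000 Navy devices.
z = two_proportion_z(40, 10_000, 25, 8_000)
print(round(z, 2))  # well below ~1.96, so no significant difference
```

With these invented figures, the z statistic falls short of the conventional 1.96 threshold, which is exactly the kind of result that should make us question whether a given control is earning its keep.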

So I’d like to call on the Army to not remediate any of the findings for 30 or 60 days. I’d like to call on the Pentagon IG to analyze incidents in a comparative way, and let us know what he finds.

Update: I wanted to read the report, which, as it turns out, has been taken offline. (See Consolidated Listing of Reports, which says “Report Number DODIG-2013-060, Improvements Needed With Tracking and Configuring Army Commercial Mobile Devices, issued March 26, 2013, has been temporarily removed from this website pending further review of management comments.”)

However, based on the Wired article, this is not a report about breaches or bad outcomes; it’s a story about controls and control objectives.

Spending time or money on those controls may or may not make sense. Without information about the outcomes experienced without those controls, the efficacy of the controls is a matter of opinion and conjecture.

Further, spending time or money on those controls is at odds with other things. For example, the Army might choose to spend 30 minutes training every soldier to password lock their device, or they could spend that 30 minutes on additional first aid training, or Pashtun language, or some other skill that they might, for whatever reason, want soldiers to have.

It’s well past time to stop focusing on controls for the sake of controls, and start testing our ideas. No organization can afford to implement every idea. The Army, the Pentagon IG and other agencies may have a perfect opportunity to test these controls. To not do so would be tragic.

[/update]

How Harvey Mudd Brings Women into CS

Back in October, I posted on “Maria Klawe on increasing Women in Technology.” Now the New York Times has a story, “Giving Women The Access Code:”

“Most of the female students were unwilling to go on in computer science because of the stereotypes they had grown up with,” said Zachary Dodds, a computer scientist at Mudd. “We realized we were helping perpetuate that by teaching such a standard course.”

To reduce the intimidation factor, the course was divided into two sections — “gold,” for those with no prior experience, and “black,” for everyone else. Java, a notoriously opaque programming language, was replaced by a more accessible language called Python. And the focus of the course changed to computational approaches to solving problems across science.

“We realized that we needed to show students computer science is not all about programming,” said Ran Libeskind-Hadas, chairman of the department. “It has intellectual depth and connections to other disciplines.”

Well, sometimes computer science has depth and connections to reality. Other times we get wrapped around some little technical nit, and lose sight of the larger picture. Or sometimes, we just talk about crypto and key lengths.

If we want more diversity in computer security, we have to look around, see what’s working and take lessons from it. Otherwise, we’re going to stay on the hamster wheels. There’s excellent evidence that more diversity helps you solve certain classes of problems better. (See, for example, Scott Page’s “The Difference.”)

Maria Klawe on increasing Women in Technology

I talk a lot about the importance of data in enabling us to bring the scientific method to bear on information security. There’s a reason for that: more data will let us know the falsehoods, and knowing the falsehoods will set us free. But discovering what claims don’t stand up to scrutiny is a matter of understanding systems. And to understand systems, we need diverse perspectives. And that’s really hard. At my book reading at Ada’s, I decided to include the section of the book that talks about diversity. Jacob Appelbaum asked me what we can do about the problem, and I was forced to admit that my best answer is to raise awareness that there’s a real issue here, and hope that someone with a different perspective can offer up better answers. (It’s a nicely recursive solution to the issue.)

And fortunately, Maria Klawe (President of Harvey Mudd College, ACM Fellow, Microsoft board member) has some answers, which are subtle, simple, and likely incredibly difficult to implement:

Via “Harvey Mudd President Klawe on Women in Technology.”

[Update: See also “How Harvey Mudd brings women into Computer Science.”]