Big Brother Watch report on breaches

Over at the Office of Inadequate Security, Dissent says everything you need to know about a new report from the UK’s Big Brother Watch:

Extrapolating from what we have seen in this country, what the ICO learns about is clearly only the tip of the iceberg there. I view the numbers in the BBW report as a significant underestimate of the number of breaches that actually occurred because not only are we not hearing from 9% of entities, but many authorities that did report probably did not detect or learn of all of the breaches they actually experienced. BBW notes, “For example, it does seem surprising that in 263 local authorities, not even a single mobile phone or memory stick was lost.” “Surprising” is a very diplomatic word. (“What They Didn’t Know: Big Brother Watch report on breaches highlights why we need mandatory disclosure”)

We Robot: The Conference

This looks like it has the potential to be a very interesting event:

A human and robotic hand reaching towards each other, reminiscent of Da Vinci

The University of Miami School of Law seeks submissions for “We Robot” – an inaugural conference on legal and policy issues relating to robotics to be held in Coral Gables, Florida on April 21 & 22, 2012. We invite contributions by academics, practitioners, and industry in the form of scholarly papers or presentations of relevant projects.

We seek reports from the front lines of robot design and development, and invite contributions for works-in-progress sessions. In so doing, we hope to encourage conversations between the people designing, building, and deploying robots, and the people who design or influence the legal and social structures in which robots will operate.

Robotics seems increasingly likely to become a transformative technology. This conference will build on existing scholarship exploring the role of robotics to examine how the increasing sophistication of robots and their widespread deployment everywhere from the home, to hospitals, to public spaces, and even to the battlefield disrupts existing legal regimes or requires rethinking of various policy issues.

They’re still looking for papers at: http://www.we-robot.com. I encourage you to submit a paper on who will get successfully sued when the newly armed police drones turn out to be no more secure than Predators, with their viruses and unencrypted connections. (Of course, maybe the malware was just spyware.) Bonus points for entertainingly predicting quotes from the manufacturers about how no one could have seen that coming. Alternatively, what will happen when the riot-detection algorithms decide that policemen who’ve covered their barcodes are the rioters, and open fire on them?

The possibilities for emergent chaos are nearly endless.

Telephones and privacy

Three stories, related by the telephone, and their impact on privacy:

  • CNN reports that your cell phone is being tracked in malls:

    Starting on Black Friday and running through New Year’s Day, two U.S. malls — Promenade Temecula in southern California and Short Pump Town Center in Richmond, Va. — will track guests’ movements by monitoring the signals from their cell phones.


    Still, the company is preemptively notifying customers by hanging small signs around the shopping centers. Consumers can opt out by turning off their phones.


    The tracking system, called FootPath Technology, works through a series of antennas positioned throughout the shopping center that capture the unique identification number assigned to each phone (similar to a computer’s IP address), and tracks its movement throughout the stores.

    The company in question is Path Intelligence, and they claim that since they’re only capturing IMSI numbers, it’s anonymous. However, the IMSI is the name by which the phone company calls you. It’s a label that identifies a unique SIM card (and the phone it sits in), and it’s pretty darned closely tied to a person. The IMSI identifies a person more accurately and effectively than an IP address, and the EU regulates IP addresses as personally identifiable information. Just because the IMSI is not easily human-readable does not make it anonymous, and does not make it not-a-name. (There’s a sketch after this list of why hashing it doesn’t help either.)

    It’s really not clear to me how Path Intelligence’s technology is legal anywhere that has privacy or wiretap laws.

  • Kashmir Hill at Forbes reports on “How Israeli Spies Were Betrayed By Their Cell Phones”:

    Using the latest commercial software, Nasrallah’s spy-hunters unit began methodically searching for traitors in Hezbollah’s midst. To find them, U.S. officials said, Hezbollah examined cellphone data looking for anomalies. The analysis identified cellphones that, for instance, were used rarely or always from specific locations and only for a short period of time. Then it came down to old-fashioned, shoe-leather detective work: Who in that area had information that might be worth selling to the enemy?

    This reminds me of the bin Laden story: he was found in part because he had no phone or internet service. What used to be good tradecraft now stands out. (There’s a toy sketch of that kind of anomaly hunt after this list.) Of course, maybe some innocent folks were just opting out of Path Intelligence. Hmmm. I wonder who makes that “latest commercial software” Nasrallah’s team is using?

  • “Who’s on the Line? Increasingly, Caller ID Is Duped”, Matt Richtel, The New York Times

    Caller ID has been celebrated as a defense against unwelcome phone pitches. But it is backfiring.

    Telemarketers increasingly are disguising their real identities and phone numbers to provoke people to pick up the phone. “Humane Soc.” may not be the Humane Society. And think the I.R.S. is on the line? Think again.

    Caller ID, in other words, is becoming fake ID.

    “You don’t know who is on the other end of the line, no matter what your caller ID might say,” said Sandy Chalmers, a division manager at the Department of Agriculture, Trade and Consumer Protection in Wisconsin.

    Starting this summer, she said, the state has been warning consumers: “Do not trust your caller ID. And if you pick up the phone and someone asks for your personal information, hang up.”

    I’m shocked that a badly designed invasion of privacy doesn’t offer the security people think it does.

    When I say badly designed, I’m referring to in-band signaling that arrives late in the call, not to mention that the Bells already had ANI. But they didn’t want the privacy concerns around caller ID to spill over onto ANI, so they designed an alternative.
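
On the Path Intelligence item above: here’s a minimal sketch, mine and not theirs, of why an IMSI behaves like a name even if it gets hashed. The carrier prefix, the example number, and the unmask helper are all invented for illustration. The point is that an IMSI has a fixed, carrier-assigned structure, the same phone yields the same value on every visit, and the space behind a known carrier prefix is small enough to enumerate offline.

    import hashlib

    def parse_imsi(imsi):
        """Split an IMSI into its standard fields (a 3-digit MNC is assumed here)."""
        return {
            "mcc": imsi[0:3],   # mobile country code
            "mnc": imsi[3:6],   # mobile network code (2 or 3 digits in practice)
            "msin": imsi[6:],   # the subscriber's number within that carrier
        }

    def pseudonymize(imsi):
        """The usual 'we only keep a hash' claim."""
        return hashlib.sha256(imsi.encode()).hexdigest()

    def unmask(target_hash, mcc, mnc):
        """The MSIN space behind a known carrier prefix is small enough to
        enumerate offline, so the hash is a stable pseudonym, not anonymity."""
        for msin in range(10**9):   # slow in pure Python, cheap for a motivated attacker
            candidate = "%s%s%09d" % (mcc, mnc, msin)
            if pseudonymize(candidate) == target_hash:
                return candidate
        return None

    imsi = "310150123456789"        # hypothetical subscriber
    print(parse_imsi(imsi))
    print(pseudonymize(imsi))       # same phone -> same value on every mall visit

Even without reversing the hash, a stable per-phone token is exactly what you need to track repeat visitors, which is the privacy problem in the first place.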
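
On the Forbes item: a minimal sketch of the kind of call-record anomaly hunt described there, with invented thresholds and toy data. The real analysis runs over carrier records and is far richer, and the shoe-leather step at the end can’t be coded at all.

    from collections import defaultdict

    def flag_quiet_phones(call_records, max_calls=5, max_cells=2):
        """Flag phones used rarely and only from a few locations, the
        pattern the article describes. Thresholds are invented."""
        calls = defaultdict(int)
        cells = defaultdict(set)
        for phone, tower, _timestamp in call_records:
            calls[phone] += 1
            cells[phone].add(tower)
        return [phone for phone in calls
                if calls[phone] <= max_calls and len(cells[phone]) <= max_cells]

    # Toy data: one chatty phone ("A") and one quiet, single-site phone ("B").
    records = [
        ("A", "tower-1", "2011-11-01T09:00"), ("A", "tower-2", "2011-11-01T12:00"),
        ("A", "tower-3", "2011-11-02T10:00"), ("A", "tower-1", "2011-11-03T18:00"),
        ("A", "tower-4", "2011-11-04T11:00"), ("A", "tower-2", "2011-11-05T15:00"),
        ("B", "tower-7", "2011-11-02T02:00"), ("B", "tower-7", "2011-11-09T02:10"),
    ]
    print(flag_quiet_phones(records))   # -> ['B']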

"It's Time to Learn Like Experts" by Jay Jacobs

I want to call attention to a new, important and short article by Jay Jacobs.

This article is a call to action to break the reliance on unvalidated expert opinions by raising awareness of our decision environment and the development of context-specific feedback loops.

Everyone in the New School is a fan of feedback loops of one form or another. Hypothesis testing, learning, and calling out superstition are all forms of feedback loops.

One thing that Jay brings in that I hadn’t seen is the idea of kind and wicked learning environments. A kind environment is one in which you can quickly get good feedback on things experts agree will help you improve. (Did you fall off the bike?) A wicked environment is, amongst other things, one where feedback comes later, if at all. Jay has a table. It’s on page 2.

You should find Jay’s article here: “A Call to Arms: It’s Time to Learn Like Experts”, or his short blog post here.

Relentless navel gazing, part MCXII

Two changes here at Emergent Chaos this weekend: first, a new, variable width theme which is a little tighter, so there’s more on a screen. Second, I’ve moved the twitter summary to weekly, as comments were running about 50-50 on the post asking for opinion. I think that may be a better balance.

And a bonus third: someone else’s navel for you to gaze at:

cute belly button

The One Where David Lacey's Article On Risk Makes Us All Stupider

In possibly the worst article on risk assessment I’ve seen in a while, David Lacey of Computerworld gives us the “Six Myths of Risk Assessment.” This article is so patently bad, so heinously wrong, that it stuck in my craw enough to write this blog post. So let’s discuss why Mr. Lacey has no clue what he’s writing about, shall we?

First Mr. Lacey writes:

1. Risk assessment is objective and repeatable

It is neither. Assessments are made by human beings on incomplete information with varying degrees of knowledge, bias and opinion. Groupthink will distort attempts to even this out through group sessions. Attitudes, priorities and awareness of risks also change over time, as do the threats themselves. So be suspicious of any assessments that appear to be consistent, as this might mask a lack of effort, challenge or review.

Sounds reasonable, no? Except it’s not altogether true. Yes, if you’re doing an idiotic RCSA of Inherent – Control = Residual, it probably is, but those assessments aren’t the totality of the current state of the art.

“Objective” is such a loaded word, and if you use it with me, I’m going to wonder if you know what you’re talking about. Objectivity versus subjectivity is a spectrum, not a binary, so for him to say that risk assessment isn’t “objective” is an “of course!” Just like there is no “secure,” there is no “objective.”

But Lacey’s misunderstanding of the term aside, let’s address the real question: “Can we deal with the subjectivity in assessment?”  The answer is a resounding “yes” if your model formalizes the factors that create risk and logically represents how they combine to create something with units of measure.  And not only will the right model and methods handle the subjectivity to a degree that is acceptable, you can know that you’ve arrived at something usable when assessment results become “blindly repeatable.”  And yes, Virginia, there are risk analysis methods that create consistently repeatable results for information security.
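
To make that concrete, here is a minimal sketch, not any particular commercial method, of what “factors that combine into something with units of measure” looks like: calibrated ranges for event frequency and per-event loss, pushed through a Monte Carlo simulation into an annualized loss exposure in dollars per year. Every number below is invented.

    import random
    import statistics

    def simulate_annual_loss(freq_range, loss_range, trials=10_000, seed=1):
        """Combine events-per-year and dollars-per-event estimates into an
        annualized loss distribution. A fixed seed makes the run repeatable."""
        rng = random.Random(seed)
        fmin, fmax = freq_range
        lo, hi = loss_range
        annual = []
        for _ in range(trials):
            events = rng.randint(fmin, fmax)    # simplistic frequency draw
            annual.append(sum(rng.triangular(lo, hi) for _ in range(events)))
        return {"mean": statistics.mean(annual),
                "p90": sorted(annual)[int(0.9 * trials)]}

    # Hypothetical scenario: lost or stolen laptops holding customer data.
    print(simulate_annual_loss(freq_range=(1, 6), loss_range=(5_000, 250_000)))

Two analysts handed the same calibrated inputs get the same outputs; the residual subjectivity lives in the inputs, where calibration techniques can keep it within acceptable bounds.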

2. Security controls should be determined by a risk assessment

Not quite. A consideration of risks helps, but all decisions should be based on the richest set of information available, not just on the output of a risk assessment, which is essentially a highly crude reduction of a complex situation to a handful of sentences and a few numbers plucked out of the air. Risk assessment is a decision support aid, not a decision making tool. It helps you to justify your recommendations.

So the key here is “richest set of information available” – if your risk analysis leaves out key or “rich” information, it’s pretty much crap.  Your model doesn’t fit, your hypothesis is false, start over.  If you think that this is a trivial matter for him to not understand, I’ll offer that this is kind of the foundation of modern science.  And mind you, this guy was supposedly a big deal with BS7799.  Really.

4. Risk assessment prevents you spending too much money on security

Not in practice. Aside from one or two areas in the military field where ridiculous amounts of money were spent on unnecessary high end solutions (and they always followed a risk assessment), I’ve never encountered an information system that had too much security. In fact the only area I’ve seen excessive spending on security is on the risk assessment itself. Good security professionals have a natural instinct on where to spend the money. Non-professionals lack the knowledge to conduct an effective risk assessment.

This “myth” basically made me physically ill.  This statement “I’ve never encountered an information system that had too much security” made me laugh so hard I keeled over and hurt my knee in the process by slamming it on the little wheel thing on my chair.

Obviously Mr. Lacey never worked for one of my previous employers that forced 7 or so (known) endpoint security applications on every Windows laptop.  Of course you can have too much !@#%ing security!  It happens all the !@#%ing time.  We overspend where frequency and impact ( <- hey, risk!) don’t justify the spend.  If I had a nickel for every time I saw this in practice, I’d be a 1%er.

But more to the point, this phrase (never too much security) makes several assumptions about security that are patently false.  But let me focus on this one:  This statement implies that threats are randomly motivated.  You see, if a threat has targeted motivation (like IP or $) then they don’t care about systems that offer no value in data or in privilege escalation.  Thus, you can spend too much on protecting assets that offer no or limited value to a threat agent.
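
To make the overspending point concrete, here is a toy comparison, with made-up figures, between a control’s annual cost and the exposure it actually removes. Against an asset a targeted attacker has no reason to care about, the exposure term is tiny and the control is pure cost.

    def control_worth_it(annual_loss_before, annual_loss_after, annual_control_cost):
        """A control pays for itself only if the exposure it removes exceeds its cost."""
        return (annual_loss_before - annual_loss_after) > annual_control_cost

    # Hypothetical: a seventh endpoint agent on laptops that hold nothing sensitive.
    print(control_worth_it(annual_loss_before=2_000,     # little of value to a targeted attacker
                           annual_loss_after=1_500,
                           annual_control_cost=40_000))  # licences, agents, support tickets
    # -> False: too much security, by the only measure that matters here.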

5. Risk assessment encourages enterprises to implement security

No, it generally operates the other way around. Risk assessment means not having to do security. You just decide that the risk is low and acceptable. This enables organisations to ignore security risks and still pass a compliance audit. Smart companies (like investment banks) can exploit this phenomenon to operate outside prudent limits.

I honestly have no idea what he’s saying here.  Seriously, this makes no sense.  Let me explain.  Risk assessment outcomes are neutral states of knowledge.  They may feed a state of wisdom decision around budget, compliance, and acceptance (addressing or transferring, too) but this is a logically separate task.
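
One way to see the separation: the assessment hands back a neutral estimate, and a distinct decision step combines it with appetite, budget, and compliance constraints. A small sketch, with hypothetical names and thresholds:

    from dataclasses import dataclass

    @dataclass
    class RiskEstimate:
        """Output of assessment: a state of knowledge, nothing more."""
        scenario: str
        expected_annual_loss: float
        p90_annual_loss: float

    def decide(estimate, risk_appetite, budget):
        """A separate step: turn that knowledge into a treatment decision."""
        if estimate.p90_annual_loss <= risk_appetite:
            return "accept"
        if budget > 0:
            return "mitigate"          # or transfer; also a decision, not an assessment
        return "escalate"

    estimate = RiskEstimate("lost backup tapes", 80_000, 400_000)   # hypothetical numbers
    print(decide(estimate, risk_appetite=100_000, budget=250_000))  # -> 'mitigate'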

Deciding what to do about the risk is a totally separate process, and the fact that he cannot recognize it as a separate modeling construct should be highly alarming to the reader. It screams “THIS MAN IS AUTHORIZED BY A MAJOR MEDIA OUTLET TO SPEAK AS AN SME ON RISK AND HE IS VERY, VERY CONFUSED!!!!”

Then there is that whole thing at the end where he calls companies that exploit this process “smart.” Deviously clever, I’ll give you, but not smart.

6. We should aspire to build a “risk culture” across our enterprises

Whatever that means it sounds sinister to me. Any culture built on fear is an unhealthy one. Risks are part of the territory of everyday business. Managers should be encouraged to take risks within safe limits set by their management.

So by the time I got to this “myth” my mind was buzzing with anger. But then Mr. Lacey tops us off with this beauty. This statement is so contradictory to his past “myth” assertions, so bizarrely out of line with his last statement in any sort of deductive sense, that one has to wonder if David Lacey isn’t actually an information security surrealist or post-modernist who rejects reason, logic, and formality outright for the sake of random, disconnected and downright silly approaches to risk and security management. Because that’s the only way this statement could possibly make sense. And I’m not talking “pro” or “con” for risk culture here; I’m just talking about how his mind could possibly balance the idea that an “enterprise risk culture” sounds sinister against “Managers should be encouraged to take risks within safe limits set by their management” and even “I’ve never encountered an information system that had too much security.”

(Mind blown – throws up hands in the air, screams AAAAAAAAAAAAAAAAAHHhHHHHHHHHHHHHH at the top of his lungs and runs down the hall at work as if on fire)

See?  Surrealism is the only possible explanation.

Of course, if he was an information security surrealist, this might explain BS7799.

What's Wrong and What To Do About It?

Pike Floyd
Let me start with an extended quote from “Why I Feel Bad for the Pepper-Spraying Policeman, Lt. John Pike”:

They are described in one July 2011 paper by sociologist Patrick Gillham called, “Securitizing America.” During the 1960s, police used what was called “escalated force” to stop protesters.

“Police sought to maintain law and order often trampling on protesters’ First Amendment rights, and frequently resorted to mass and unprovoked arrests and the overwhelming and indiscriminate use of force,” Gillham writes and TV footage from the time attests. This was the water cannon stage of police response to protest.

But by the 1970s, that version of crowd control had given rise to all sorts of problems and various departments went in “search for an alternative approach.” What they landed on was a paradigm called “negotiated management.” Police forces, by and large, cooperated with protesters who were willing to give major concessions on when and where they’d march or demonstrate. “Police used as little force as necessary to protect people and property and used arrests only symbolically at the request of activists or as a last resort and only against those breaking the law,” Gillham writes.

That relatively cozy relationship between police and protesters was an uneasy compromise that was often tested by small groups of “transgressive” protesters who refused to cooperate with authorities. They often used decentralized leadership structures that were difficult to infiltrate, co-opt, or even talk with. Still, they seemed like small potatoes.

Then came the massive and much-disputed 1999 WTO protests. Negotiated management was seen to have totally failed and it cost the police chief his job and helped knock the mayor from office. “It can be reasonably argued that these protests, and the experiences of the Seattle Police Department in trying to manage them, have had a more profound effect on modern policing than any other single event prior to 9/11,” former Chicago police officer and Western Illinois professor Todd Lough argued.

Former Seattle police chief Norm Stamper gives his perspective in “Paramilitary Policing From Seattle to Occupy Wall Street”:

“We have to clear the intersection,” said the field commander. “We have to clear the intersection,” the operations commander agreed, from his bunker in the Public Safety Building. Standing alone on the edge of the crowd, I, the chief of police, said to myself, “We have to clear the intersection.”

Why?

Because of all the what-ifs. What if a fire breaks out in the Sheraton across the street? What if a woman goes into labor on the seventeenth floor of the hotel? What if a heart patient goes into cardiac arrest in the high-rise on the corner? What if there’s a stabbing, a shooting, a serious-injury traffic accident? How would an aid car, fire engine or police cruiser get through that sea of people? The cop in me supported the decision to clear the intersection. But the chief in me should have vetoed it. And he certainly should have forbidden the indiscriminate use of tear gas to accomplish it, no matter how many warnings we barked through the bullhorn.

My support for a militaristic solution caused all hell to break loose. Rocks, bottles and newspaper racks went flying. Windows were smashed, stores were looted, fires lighted; and more gas filled the streets, with some cops clearly overreacting, escalating and prolonging the conflict. The “Battle in Seattle,” as the WTO protests and their aftermath came to be known, was a huge setback—for the protesters, my cops, the community.

Product reviews on Amazon for the Defense Technology 56895 MK-9 Stream pepper spray are funny, as is the Pepper Spraying Cop Tumblr feed.

But we have a real problem here. It’s not the pepper spray that makes me want to cry, it’s how mutually reinforcing a set of interlocking systems has become. It’s the police thinking they can arrest peaceful people for protesting, or for taking video of them. It’s a court system that’s turned “deference” into a spineless art, even when it’s Supreme Court justices getting shoved aside in their role as legal observers. It’s a political system where we can’t even agree to ban the TSA, or work out a non-arbitrary deal on cutting spending. It’s a set of corporatist best practices that allow the system to keep on churning along despite widespread revulsion.

So what do we do about it? Civil comments welcome. Venting welcome. Just keep it civil with respect to other commenters.

Image: Pike Floyd, by Kosso K
