Seattle: Pete Holmes for City Attorney

I don’t usually say a lot about local issues, but as readers know, I’m concerned about how arbitrary ID checking is seeping into our society.

It turns out my friend Eric Rachner is also concerned about this, and was excited when a Washington “Judge said showing ID to cops not required.” So when Eric was challenged by the police, he refused, as the law allows. He was charged with obstruction of justice by City Attorney Tom Carr. Well, it turns out Eric didn’t roll over, and after much stress, the charges were dropped. The city shouldn’t be putting people through such things after state judges have ruled. It’s a waste of city resources, and it subjects nice folks like Eric or you or me to the leviathan power of the state. Such power must be exercised responsibly, and Tom Carr has shown he can’t do that.

On that basis alone, Tom Carr should be voted out of office.

It’s just a sweetener that Pete Holmes, his challenger, seems to have his head screwed on straight, with priorities that include government accountability and transparency, smart sentencing, and not building a new $250MM jail that we don’t need and can’t afford.

As if you needed any more, our sole remaining newspaper has endorsed Holmes.

So please, vote Pete Holmes for city attorney.

[Update: Thank you! Tom Carr has conceded the race. I don’t think I can claim lots of credit, but I’m glad he’s on the outs.]

Just say 'no' to FUD

“Fear, uncertainty, and doubt” (FUD) is a distortion tactic to manipulate decision-makers. You may think it’s good because it can be successful in getting the outcomes you desire. But it’s unethical. FUD is also anti-data and anti-analysis. Don’t do it. It’s the opposite of what we need.

NewSchool is about making rational security decisions and investments based on best available data, experiments, and even formal reasoning. It’s the opposite of “fear, uncertainty, and doubt” (FUD). FUD is the intentional amplification and exaggeration of fears and uncertainties for the sole purpose of manipulating the decision-maker into approving your proposal or budget — the “safe choice”.

Dr. Anton Chuvakin, in his guest blog post at FUDsec.com, argues in favor of FUD as a tactic:

…many people view using FUD for driving security spending and security technology deployments as the very opposite of sensible risk management. However, FUD is risk management at its best: FUD approach is simply risk management where risks are unknown and unproven but seem large at first glance, information is scarce, decisions uncertain and stakes are high. In other words, just like with any other risk management approach today!

In light of this, we have to accept that there are benefits of FUD – as well as risks. … First, in the world we live in, FUD works! …Second, keep in mind that many of the Big Hairy Ass Risks (BHARs) are both genuinely scary and, in fact, likely…Finally, …fear might not be a very positive emotion to experience, but acting out of fear has led to things that are an overall positive…The key issue with FUD is its “blunt weapon” nature. It is a sledgehammer, not a sword! If you use FUD to “power through” issues, you might end up purchasing or deploying things that you need and things that you don’t.

As “greed-based” ROI scams fail to move security ahead, the role of fear has nowhere to go but up. In other words, all of us get to pick out favorite 3 letter abbreviation – and I’d take honest FUD over insidious ROI any day…

…Even if objective metrics will ever replace FUD as the key driver for security, we have a bit of time to prepare now. After all, in that remote future age interstellar travel, human cloning, teleportation and artificial intelligence will make the life of a security practitioner that much more complicated…  [emphasis in original]

Anton’s position on FUD reminds me of the quote by Gordon Gekko from the 1987 movie “Wall Street”:  “…greed, for lack of a better word, is good. Greed is right, greed works. Greed clarifies, cuts through, and captures the essence of the evolutionary spirit.”   Substitute “FUD” for “greed”, and this is basically Anton’s argument.

This Machiavellian justification of FUD sounds appealing until you consider this: FUD is unethical, plain and simple.

A Halloween analogy: It’s like putting an arachnophobic person in a dark room and then whispering: “This is such a dark room. There’s no telling how many spiders there are in here.” Then, just before locking them in the room, you say: “For all the money in your wallet, I can sell you some bug spray.”


The term “FUD” originated in the 1970s to describe some of IBM’s selling tactics against competitors (who had better price/performance, etc.). IBM salespeople used the FUD technique to destabilize the decision-maker’s thinking process. The FUD issues raised could not really be answered by the decision-maker or the competitor, and so nagged at the back of the mind. They had the effect of causing the decision-maker to retreat to the safe decision, which was IBM: “Nobody ever got fired for buying IBM.”

FUD has the same ethical status as using incriminating photos to coerce a favorable decision (one of J. Edgar Hoover’s favorite tactics). Both work if all you care about is getting approval, but both corrupt the process and undermine rational decision-making overall.

There are substantial reasons for framing risks beyond a simple statement of facts and statistics, namely to deal with the psychology of risk. Security is about avoiding bad outcomes. We have fear and uncertainty about those outcomes, and we are prone to cognitive distortions about them. FUD amplifies those distortions. FUD is anti-data and anti-analysis.

Instead, ethical security professionals should take pains to present feared scenarios in an understandable way and, most importantly, relative to the likelihood of other possibilities. We should also be on a never-ending quest for data and analysis that will inform decisions and reduce emotionalism. Don’t make the situation worse by pumping out FUD. It’s unethical.
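To make that concrete, here’s a minimal sketch in Python of the kind of presentation I mean: rank feared scenarios by expected annual loss (likelihood times impact) rather than by how scary they sound. Every scenario and number below is invented purely for illustration.

```python
# A minimal sketch: rank feared scenarios by expected annual loss
# (likelihood x impact) instead of by how frightening they sound.
# Every scenario and number here is invented for illustration.

scenarios = {
    # name: (estimated annual likelihood, estimated impact in dollars)
    "lost laptop with customer data": (0.30, 150_000),
    "web app breach via known vuln":  (0.10, 500_000),
    "nation-state 0-day attack":      (0.01, 2_000_000),
}

def expected_annual_loss(likelihood, impact):
    """Crude single-point estimate; a real analysis would use distributions."""
    return likelihood * impact

# Sort scenarios from highest to lowest expected annual loss.
ranked = sorted(scenarios.items(),
                key=lambda kv: expected_annual_loss(*kv[1]),
                reverse=True)

for name, (p, impact) in ranked:
    print(f"{name}: p={p:.0%}, impact=${impact:,}, "
          f"expected loss=${expected_annual_loss(p, impact):,.0f}/yr")
```

Note how the scariest-sounding scenario can land at the bottom of the list once likelihood enters the picture, which is exactly the comparison FUD is designed to prevent.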


Ooops! and Ooops again!

Those of you who’ve heard me speak about the New School with slides have probably heard me refer to this as an astrolabe:

orrey.jpg

Brett Miller just emailed me and asked (as part of a very nice email) “isn’t that an orrery, not an astrolabe?”

It appears that I’m going to have to update my commentary. Thanks, Brett!

[And thanks Scott–I misspelt orrery, now corrected.]

Ross Anderson's Psychology & Security page

Ross Anderson has a new Psychology and Security Resource Page. His abstract:

A fascinating dialogue is developing between psychologists and security engineers. At the macro scale, societal overreactions to terrorism are founded on the misperception of risk and uncertainty, which has deep psychological roots. At the micro scale, more and more crimes involve deception; as security engineering gets better, it’s easier to mislead people than to hack computers or hack through walls. Many systems also fail because of usability problems: the designers have different mental models of threats and protection mechanisms from users. Wrong assumptions about users can lead systems to discriminate against women, the less educated and the elderly. And misperceptions cause security markets to fail: many users buy snake oil, while others distrust quite serviceable mechanisms. Security is both a feeling and a reality, and they’re different. The gap gets ever wider, and ever more important.

A tremendous resource.

Fordham report on Children's Privacy

Following the No Child Left Behind mandate to improve school quality, there has been a growing trend among state departments of education to establish statewide longitudinal databases of personally identifiable information for all K-12 children within a state in order to track progress and change over time. This trend is accompanied by a movement to create uniform data collection systems so that each state’s student data systems are interoperable with one another. This Study examines the privacy concerns implicated by these trends.

The Study reports on the results of a survey of all fifty states and finds that state educational databases across the country ignore key privacy protections for the nation’s K-12 children. The Study finds that large amounts of personally identifiable data and sensitive personal information about children are stored by the state departments of education in electronic warehouses or for the states by third party vendors. These data warehouses typically lack adequate privacy protections, such as clear access and use restrictions and data retention policies, are often not compliant with the Family Educational Rights and Privacy Act, and leave K-12 children unprotected from data misuse, improper data release, and data breaches. The Study provides recommendations for best practices and legislative reform to address these privacy problems.

For more, “Children’s Educational Records and Privacy.”

Bob Blakley Gets Future Shock Dead Wrong

Bob Blakley has a very thought-provoking piece, “Gartner Gets Privacy Dead Wrong.” I really, really like a lot of what he has to say about the technical frame versus the social frame. It’s a very useful perspective, and I went back and forth for a while with titles for my post. (The runner-up was “Fast, Cheap and out of Bob’s Control.”)

I think, however, that my frame for a response will be Alvin Toffler’s excellent analysis in Future Shock. In it, he describes what happens to our lives as people in the professional class move more and more often for work. How the traditional means of social cohesion — church, scouts, the PTA, bridge clubs, the local watering hole — all break down as we expect to be gone in just a few years. How we have friends we see annually at a conference or in airports. He explained that ongoing acceleration and the removal of support structures would lead to isolation, alienation and an ongoing and increasing state of future shock.

A great many Americans on the coasts live in many micro-societies. We have our professional groups and sub-groups. We have hobbies. We may have college buddies in the same area as we are. We pick a fat demagogue to listen to: Rush Limbaugh or Michael Moore, as suits our fancy. But our social spaces are massively fragmented. And so when Bob says:

But he’s right that we’d better behave. When we see someone else’s private information, we’d better avert our gaze. We’d better not gossip about it. We’d better be sociable. Because otherwise we won’t need the telescreen – we’ll already have each other. And we’ll get the society we deserve.

We no longer have a society, or the society. We have teabaggers screaming at Obamaphiles. We have neighbors suing neighbors. We call the cops rather than walking next door. We run background checks on our scoutmasters, all because we no longer have a society which links us tightly enough that we can avoid these things.

And amidst all of this which society will create and drive the social norms for privacy? Will it be the one that lets cops beat protesters at the G20? The one that convinced Bob to join Facebook? The one that leads me to tweet?

In a world where some people say “I’ve got nothing to hide” and others pay for post office boxes, I don’t know how we can settle on a single societal norm. And in a world in which cheesy-looking web sites get more personal data (no really, listen to Alessandro Acquisti, or read the summary of “Online Data Present a Privacy Minefield” on All Things Considered), I’m not sure the social frame will save us.

Is responsible disclosure dead?

Jeremiah Grossman has an article in SC Magazine, “Businesses must realize that full disclosure is dead.” On Twitter, I asked for evidence, and Jeremiah responded “Evidence of what exactly?”

I think the key assertion that I take issue with is bolded in the context below:

Unquestionably, zero-day vulnerabilities have an increasing real-world value to many different parties. We should expect more and more researchers to demand and receive payment from governments, software vendors, security vendors, enterprises or someone on the black market. It has already happened and will continue. The evolution is underway and it will become more prevalent in the next few years as it becomes routine for our systems to be compromised using unknown vulnerabilities. This environment will force us to evolve our thinking and mature our offensive and defensive security strategies – fueling the need for third-party patches, subscriptions to unreleased vulnerability information, and general underground industry intelligence. We’re already seeing these services being offered on the fringe (legally and illegally) and slowly moving towards mainstream acceptance as the business models are better understood. So it’s not a matter of if, but when.

We will need to evolve, yes, but I don’t see that the direction suggested is the one we’ll need to take. In particular, operationally using 0day is a tricky business. You risk discovery, and losing a valuable asset, by exposing it to a target. So maybe you use something a bit more commonplace. As I recall, the Verizon Breach Report says that roughly 75% of vulns exploited have been public for a year or more. Yes, there’s a rapidly growing volume of underground stuff, but rapid growth is easy when such things are a tiny fraction of attacks, vulnerabilities, or root causes of bad outcomes.

So I’m curious: where is the evidence that undisclosed vulns will come to dominate? Oh, and a second question. Jeremiah, your title seems to imply that this is the most important thing for businesses to realize. Is that really what you meant?

My employer spends a lot of energy on building things to make exploiting unknown vulns harder, but if I wanted to speak for them, I’d do so on my work blog.

[Ooops! Mis-spelt Jeremiah’s name. Sorry!]

The Conch Republic

Conch-Republic-battle.jpg
Apparently, in a sovereign-in-cheek move, the Florida Keys have withdrawn from the United States and declared themselves to be “The Conch Republic.” Their motto is “We seceded where others failed.” Perhaps you haven’t heard of them because they make all the good jokes, which makes writing about them hard.

I heard about them because of an incident that was mentioned in this podcast. The United States will allow Cuban refugees to enter if they reach dry land. The Border Patrol declared that 15 Cuban refugees who had reached the bridge were not in the United States, and thus could be returned to Cuba. Based on this disavowal, the Conch Republic seized the bridge and declared it their territory, in what is now known as “The great invasion of 1995.”

Next time I need a good vacation in the sun, I know where I’m going.

Shown: “Close up Bloody Battle.”

On the value of 'digital asset value' for security decisions

What good is it to know the economic value of a digital asset for the purposes of making information security decisions? If you can’t make better decisions with this information, then the metric doesn’t have any value. This post discusses alternative uses, especially threshold or sanity checks on security spending. For these purposes, it functions better as a “spotlight” than as a “razor”. Digital Asset Value has other uses, not least to get InfoSec people to understand Business people and their priorities, and vice versa.
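To make the “spotlight” idea concrete, here’s a hypothetical sketch in Python. The 2% and 20% bands are invented purely for illustration, not a standard; the point is the shape of the check, not the numbers.

```python
# A hypothetical "spotlight" sanity check: flag security budgets that
# look wildly out of proportion to the value of the asset they protect.
# The 2% / 20% bands are invented for illustration, not a standard.

def spend_sanity_check(asset_value, annual_security_spend,
                       low_fraction=0.02, high_fraction=0.20):
    """Return a coarse flag, not a decision."""
    if annual_security_spend < asset_value * low_fraction:
        return "spotlight: spend looks low relative to asset value"
    if annual_security_spend > asset_value * high_fraction:
        return "spotlight: spend looks high relative to asset value"
    return "no flag: spend is within a plausible band"

# Example: a web property valued at $5M protected by $50K/year of spend.
print(spend_sanity_check(5_000_000, 50_000))
```

A spotlight points you at something worth examining; a razor would claim to cut to the “right” budget number, which is exactly what this metric can’t do.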

I left out something important in my blog post “How to Value Digital Assets (Web Sites, etc.)” .  This came to light as I read the commentary on other blogs by Andrew Jacquith, Pete Lindstrom, Matthew Rosenquist, and Gunnar Peterson.  (I’m also anticipating Alex Hutton’s soon-to-come blog post.)

What good is it to know the economic value of a digital asset for the purposes of making information security decisions?  If you can’t make better decisions with this information, then the metric doesn’t have any value.  Skip it.

Like most things in information security management, this is less obvious and more complicated than it seems.  (more tutorial…)


Something For Scioscia, Girardi, & Charlie Manuel

It’s the probabilistic decision-making tool for baseball managers. On the iPhone. It’s like a business intelligence application in the palm of your hand 🙂

Basically, it takes the probabilistic models of either Win Expectancy or Run Expectancy (any given action has some probability of contributing a run or a win) and, given a situation, attempts to tell you whether it’s a good idea or a bad idea to execute that plan.

Here we see a situation where the manager is wondering if it’s a good idea to attempt a double steal. An obvious dependency is knowing the stolen base success rate for the runner on second (it also assumes that the catcher will only attempt to throw at the lead runner, a pretty safe assumption). If we’re baseball freaks, we might also note that there’s no contra-factor for the probability of a pickoff move, that I don’t see how the catcher’s rate of successful pickoffs is factored in, etc. – but we’re nitpicking….

Once the decision to execute is established (press the button! press the button!), we receive a screen that tells us whether it’s a good or bad idea, according to how much our win (or run) expectancy increases or decreases.

StealResult-1
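Underneath, that verdict is simple expected-value arithmetic. Here’s a minimal sketch in Python, simplified to a single runner stealing third rather than the double steal shown above. The run-expectancy values are rough, league-average-style numbers plugged in for illustration, not the app’s actual tables.

```python
# A minimal sketch of a Run Expectancy steal decision (steal of third,
# one out). RE values are rough illustrative numbers, not the app's tables.

RUN_EXPECTANCY = {
    # (runners on base, outs): expected runs in the rest of the inning
    (("2B",), 1): 0.70,   # runner on second, one out
    (("3B",), 1): 0.98,   # runner on third, one out
    ((), 2):      0.11,   # bases empty, two outs (runner thrown out)
}

def steal_third_delta(p_success):
    """Change in expected runs from attempting to steal third with one out."""
    before  = RUN_EXPECTANCY[(("2B",), 1)]
    success = RUN_EXPECTANCY[(("3B",), 1)]
    failure = RUN_EXPECTANCY[((), 2)]
    return p_success * success + (1 - p_success) * failure - before

for p in (0.60, 0.70, 0.80, 0.90):
    delta = steal_third_delta(p)
    verdict = "good idea" if delta > 0 else "bad idea"
    print(f"success rate {p:.0%}: change in RE = {delta:+.3f} runs -> {verdict}")
```

With these illustrative numbers the break-even success rate is about 68%; below that, the attempt destroys expected runs. The app’s “good idea / bad idea” screen is just this inequality with better data.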

Now obviously, this is more of an “armchair quarterback” (to mix sports metaphors) sort of toy, but it got me thinking that it would be pretty fun for us to have something like this for risk or threat based modeling. Rather than a baseball diamond, we might conceive of a set of connected IT objects/assets (a business process, maybe), each with their own “expectancy” to do some combination of Prevent/Detect/Respond to various threat sources. Do I want to add more Prevent? Bad idea! Your risk is reduced at an insignificant level compared to the investment required to achieve that new level of prevention. Do I want to add more training? Good idea! Training analysts in “detection” increases the risk-reducing probability for this group of assets in an economically efficient manner. Obviously, this is all probabilistic, but all decision making is, right? I mean, this is why I’m NewSchool: I hope that someday we’ll reach this level of sophistication as an industry.
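And since I can’t resist: here’s a crude sketch of that “good idea / bad idea” screen for security investments, with every number invented for illustration. The verdict is just expected annual loss reduction per dollar spent.

```python
# A crude sketch of a "good idea / bad idea" screen for security
# investments: expected annual loss reduction vs. cost.
# Every number here is invented for illustration.

BASELINE_ANNUAL_LOSS = 400_000  # expected annual loss for this asset group, $

candidates = [
    # (investment, cost in $/yr, fraction of baseline loss it removes)
    ("more prevention (new appliance)",   120_000, 0.05),
    ("detection training for analysts",    30_000, 0.15),
    ("faster incident response retainer",  50_000, 0.10),
]

for name, cost, reduction in candidates:
    saved = BASELINE_ANNUAL_LOSS * reduction  # expected loss avoided, $/yr
    ratio = saved / cost
    verdict = "good idea" if ratio > 1.0 else "bad idea"
    print(f"{name}: saves ${saved:,.0f}/yr for ${cost:,}/yr "
          f"({ratio:.1f}x) -> {verdict}")
```

It’s exactly the toy-level version of the baseball app: probabilistic inputs, a threshold, and a verdict. Getting inputs we’d actually trust is, of course, the hard part, and the New School’s whole project.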