Microsoft Backs Laws Forbidding Windows Use By Foreigners

According to Groklaw, Microsoft is backing laws that forbid the use of Windows outside of the US. Groklaw doesn’t say that directly. Actually, they pose charmingly with the back of the hand to the forehead, bending backwards dramatically and asking, “Why Is Microsoft Seeking New State Laws That Allow it to Sue Competitors For Piracy by Overseas Suppliers?” Why, why, why, o why, they ask.

The headline of this article is the obvious reason. Microsoft might not know they’re doing it for that reason. Usually, people with the need to do something, dammit, because they fear they might be headed for irrelevancy, think of something and follow the old Aristotelian syllogism:

Something must be done.
This is something.
Therefore, it must be done.

It’s pure logic, you know. This is exactly how Britney Spears ended up with Laurie Anderson’s haircut and the US got into policing China’s borders. It’s logical, and as an old colleague used to say with a sigh, “There’s no arguing with logic like that.”

Come on, let’s look at what happens. I run a business, and there’s a law that says that if my overseas partners aren’t paying for their Microsoft software, then Microsoft can sue me. What do I do?

Exactly right. I put a clause in the contract that says that they agree not to use any Microsoft software. Duh. That way, if they haven’t paid their Microsoft licenses, I can say, “O, you bad, naughty business partner. You are in breach of our contract! I demand that you immediately stop using Microsoft stuff, or I shall move you from being paid net 30 to net 45 at contract renegotiation time!” End of problem.

And hey, some of my partners will actually use something other than Windows. At least for a few days, until they realize how badly OpenOffice sucks.

I'd like some of that advertising action

Several weeks back, I was listening to the Technometria podcast on “Personal Data Ecosystems,” and they talked a lot about putting the consumer in the center of various markets. I wrote this post then, and held off posting it in light of the tragic events in Japan.

One element of this is the “VRM” or “vendor relationship management” space, where we let people proxy for ads to us.

As I was listening, I realized I’m in the market for another nice camera. And rather than doing more research, I would like to sell the right to advertise to me. There’s a huge ($59B?) advertising market, and I am ready to buy: if Fuji had shipped their #$^&%^ X100, I would have bought one already. But even before the earthquake, they were behind on production. So I could go do more research, or the advertisers could advertise to me. But before they do, I want a piece of that $59B action.

I don’t want to start a blog. (Sorry, Nick!). I don’t want to sell personal information about me. I want another nice camera. How do I go about accepting ads into this market?

I’m willing, by the way, to share additional information about my criteria, but I figure those criteria have value to advertisers. Please send in your bids for the answers to specific questions. Please specify whether your bids are for exclusive, private, or public answers. (Public answers prevent others from gathering exclusive market intelligence, and are thus a great strategic investment.)

So, dear readers, how do I get a piece of the action? How do I cash in on this micro-market?

If I get a highly actionable answer, I’ll share 25% of the proceeds of the advertising with whoever points me the right way.

Sedgwick, Maine versus the Feds

“Maine Town Declares Food Sovereignty, Nullifies Conflicting Laws.” So reads the headline at the Tenth Amendment Center blog:

The Maine town of Sedgwick took an interesting step that brings a new dynamic to the movement to maintain sovereignty: Town-level nullification. Last Friday, the town passed a proposed ordinance that would empower the local level to grow and sell food amongst themselves without interference from unconstitutional State or Federal regulations. Beyond that, the passed ordinance would make it unlawful for agents of either the State or Federal government to execute laws that interfere with the ordinance.

Under the new ordinance, producers and processors are protected from licensure or inspection in sales that are sold for home consumption between them and a patron, at farmer’s market, or at a roadside stand. The ordinance specifically notes the right of the people to food freedom, as well as citing the U.S. Declaration of Independence and Maine Constitution in defending the rights of the people.

Andy Ellis pointed out on Twitter that Wickard v. Filburn disagrees, but it’s fascinating to watch the frustration with the political system. Think of it as a Tea Party for foodies, with hand-harvested Darjeeling and raw milk.

Back to You, Rob!

Rob is apparently confused about what risk management means. I tried to leave this as a comment, but apparently there are limitations in commenting. So here goes:


Rob,

Nowhere did I imply you were a bad pen tester.  I just said that you should have a salient view of failure in complex systems (which I’m sure you do).

“I’ve never thought of incident analysis, a.k.a. causal analysis, a.k.a. failure analysis, as part of risk management.”

First, risk management, done properly, is an implementation of the scientific method. If treated differently, it’s stupid numerology. What I mean by this is: you start with a hypothesis (model), it is tested, then refined. Pretty basic stuff, that. If you do NOT refine the model, then you’re just making up numbers for the sake of making them up. So incident analysis is a step that must be done before model refinement.
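
To make that hypothesis-test-refine loop concrete, here’s a minimal sketch in Python. It’s my own toy construction with invented numbers, not anything from Rob or the SIRA material: the prior is the hypothesis, the observed months are the test, and the conjugate update is the refinement.

```python
# A minimal sketch of risk management as scientific method:
# hypothesize a model, test it against observations, refine it.
# Beta-binomial model of monthly incident frequency; every number
# here is invented for illustration.
from scipy import stats

# Hypothesis: prior belief about the chance of an incident in any
# given month, encoded as a Beta(alpha, beta) distribution.
alpha, beta = 2.0, 18.0                  # prior mean = 0.10

# Test: observe a year of data.
months_observed = 12
months_with_incident = 3

# Refine: the conjugate update folds the evidence into the model.
alpha += months_with_incident
beta += months_observed - months_with_incident

posterior = stats.beta(alpha, beta)
print(f"refined incident probability: {posterior.mean():.3f}")
print(f"remaining uncertainty (sd):   {posterior.std():.3f}")
```

Skip that refinement step and you’re back to making up numbers; run it, and the model both moves and narrows.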

Second, can you explain more about how “risk analysis” isn’t part of risk management?  To me, the management of risk (be that engineering, financial or “natural systems” – three different concepts, each with very different approaches/models) is the establishment of a state of wisdom.  Wisdom is predicated on establishing a state of knowledge (yes, I’m being very Bayesian here, it’s a bias) that requires analysis of the state of nature.

“Most people assume that risk management is about preventing bad things from happening. That’s not true. A “risk” could mean good or bad”

As far as a “risk” meaning “good,” that’s limited primarily to financial risk modeling, where you can have positive as well as negative returns. Engineering risk is different, as that which resists is by definition incapable of resisting more than it’s designed for. “Natural Systems” risk, a different animal, is also focused primarily on identifying determinants which cause failure. The medical community, ecological community and others who operate in this realm don’t necessarily tie in positive outcomes. (Side note relevant to why we do the DBIR at Verizon: “Natural Systems” risk is done differently than most approaches we’re familiar with in IT – engineering, financial – because it’s dealing with complex systems.) Of course, you could consider enterprise networks to have properties that indicate strong emergence, but that’s another argument. So a financial-risk “positive” really isn’t applicable in this situation, because there isn’t really the potential for a positive return from a meltdown.

“That ‘maximizing opportunities’ never comes up in cybersecurity risk management, which is why cybersecurity is so out of step with the rest of the company.”

No, we’re out of step with business because we have no clue how to simply relate our expense to revenue.

“Nobody does incident analysis after the website was delayed because of cybersecurity concerns.”

I’ll disagree, primarily because I do it every day. If you’re interested in learning more, http://herdingcats.typepad.com/ is one of the more salient blogs that discuss project risk.

“Another way of defining ‘risk management’ is ‘uncertainty management.’”

Ugh, only if you’re a Knightian/Frequentist from the 1920s. We actually cover this in the SIRA podcast, I think: “uncertainty” is no longer considered the nature of risk by most probabilists. Uncertainty is a factor relevant to your prior and posterior distributions (usually expressed in the kurtosis of the distribution itself).
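
To illustrate (again, my own sketch with invented parameters, not something from the podcast): uncertainty shows up as a property of the distribution itself, and you can watch the spread and the shape measures, kurtosis included, change as evidence refines a prior into a posterior.

```python
# Sketch: "uncertainty" as a property of the distribution itself.
# Compare the moments of a vague prior against a posterior that has
# been refined by evidence. All parameters are invented.
from scipy import stats

prior = stats.beta(2.0, 18.0)              # initial belief
posterior = stats.beta(2.0 + 3, 18.0 + 9)  # after 12 months, 3 incidents

for name, dist in (("prior", prior), ("posterior", posterior)):
    mean, var, skew, kurt = (float(m) for m in dist.stats(moments="mvsk"))
    print(f"{name:9s} mean={mean:.3f} var={var:.5f} excess kurtosis={kurt:.2f}")
```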

“For example, as Alex points out, nobody trusts that TEPCO (the company operating the Fukushima power plant) is telling the truth. Alex says that means we can’t do risk management.”

This is not at all what I said.  I said that this means it’s too early to do post-incident analysis due to:

  1. the fact that it’s still future-predictive rather than past-predictive
  2. even if we tried past-predictive analysis we’d have a high degree of uncertainty in factors (which would necessitate us moving to future-predictive).

You framed the discussion as “hindsight analysis”.  I explained that it’s too early.  I think we can do more predictive analysis around future states, sure, but not hindsight analysis.

Actually It *IS* Too Early For Fukushima Hindsight

OR – RISK ANALYSIS POST-INCIDENT, HOW TO DO IT RIGHT

Rob Graham called me out on something I retweeted here (seriously, who calls someone out on a retweet?  Who does that?):

http://erratasec.blogspot.com/2011/03/fukushima-too-soon-for-hindsight.html

And that’s cool, I’m a big boy, I can take it.  And Twitter doesn’t really give you a means to explain why you think that it’s too early to do a hindsight review of Fukushima, so I’ll use this forum.

Here’s the TL;DNR version: It’s too early to do hindsight or causal analysis on Fukushima – there is still a non-zero chance that something really bad could happen, we’re not at a point where the uncertainty in our information has stabilized, and any analysis done now would still be predictive about a future state.

But if you’re interested in the extended remix, there are several great reasons NOT to use Fukushima for a risk management case study just yet:

  1. Um, the incident isn’t over. It’s closer to contained, sure, but it’s not inconceivable that there’s more that could go seriously wrong. Risk is both frequency and impact; an incident involves both primary and secondary threat agents. Expanding our thinking to include these concepts, it’s not difficult to understand that we’re a long way from being “done” with Fukushima.
  2. Similarly, given the forthrightness of TEPCO, I’d bet we don’t know the whole story of what has happened, much less the current state. The information that has been revealed has so much uncertainty in it that it’s nearly incapable of being used in causal analysis (more on that below).
  3. The complexity of the situation requires real thought, not quick judgment.

Now Rob doesn’t claim to be an expert in risk analysis (and neither do I; I just know how horribly I’ve failed in the past), so we can’t blame him. But let’s discuss two basic reasons why Rob completely poops the bed on this one, why the entire blog post is wrong. First, post-incident, our analytics aren’t nearly as predictive as pre-incident or during-incident analytics; they can still be predictive (addressing the remaining uncertainty in whatever prior distributions we’re using), but they are generally much more accurate.

Second, what Rob doesn’t seem to understand is that post-incident risk management is kind of like causal analysis, but (hopefully) with science involved.  It’s a different animal.

Post-incident risk analysis involves a basic model fit review, identifying why we weren’t precise in those likelihood (1) estimations Rob talks about. It’s in this manner that Jaynes describes probability theory as the logic of science: the hypothesis you make (your “prediction”) must be examined and the model adjusted post-experiment. It’s basic scientific method. I don’t blame Rob for getting these concepts mixed up. I see it as a function of what Dan Geer calls “premature standardization”: our industry truly believes that the sum of risk management is only what is told to them by ISACA, the ISO, OCTAVE, and NIST about IT risk (as if InfoSec were the peak of probability theory and risk management knowledge). This is another reason to question the value of the CRISC, if in the (yet unreleased) curriculum there’s no focus on model selection, model fit determination, or model adjustment.
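
For a picture of what model fit determination can look like, here’s a hedged sketch (my own toy example, not anything out of a CRISC or ISACA curriculum): score last year’s predicted incident probabilities against what actually happened, and flag the worst miss for examination before adjusting the model.

```python
# Sketch of a post-incident model-fit review: score predicted
# incident probabilities against outcomes. The Brier score
# penalizes miscalibration; 0 is perfect, and always guessing
# 50/50 earns 0.25. All data below are invented.
predicted = [0.05, 0.10, 0.40, 0.02, 0.30]  # per-scenario estimates
occurred  = [0,    0,    1,    1,    0]     # 1 = scenario happened

brier = sum((p - o) ** 2 for p, o in zip(predicted, occurred)) / len(predicted)
print(f"Brier score: {brier:.3f}")

# Model adjustment starts with the estimate that contributed the
# most error; revisit the factors behind it before refining.
worst = max(zip(predicted, occurred), key=lambda po: (po[0] - po[1]) ** 2)
print(f"largest miss: predicted {worst[0]:.2f}, outcome {worst[1]}")
```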

So if the idea of doing hindsight is invalid because what we’re dealing with now is a different animal than what we would be doing post-incident, what do we do?

First, if you really want, make another predictive model.  The incident isn’t over, we don’t have all the facts, but if you really wanted to you could create another (time-framed) predictive model adjusted for the new information we do have.

Second, we wait. As we wait, we collect more information and do more review of the current model. But we wait for the incident to be resolved, and by resolved I mean the point where it’s apparent that we have about as much information as we’ll be able to gather. THEN you do hindsight.

At the point of hindsight you review the two basic aspects of your risk model for accuracy: frequency and impact. Does the impact of what happened match expectations? To what degree are you off? Did you account for radioactive spinach? Did you account for panicky North Americans and Europeans buying up all the iodine pills? You get the picture. If, as in this case, there may be long-term ramifications, do we make a new impact model? (I think so, or at least create a wholly new hypothesis about long-term impact.)
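
In code, that impact review might look like the following sketch (my construction; the lognormal model and every dollar figure are invented): check whether the realized loss landed inside the interval the pre-incident model predicted, and treat a miss as the cue to build that new impact hypothesis.

```python
# Sketch of the impact half of a hindsight review: did the realized
# loss fall inside the interval the model predicted? The lognormal
# impact model and all dollar figures are invented.
from scipy import stats

impact_model = stats.lognorm(s=1.0, scale=2e6)  # pre-incident estimate ($)
low, high = impact_model.ppf([0.05, 0.95])      # 90% prediction interval

realized_loss = 1.2e7                           # what actually happened
print(f"90% interval: ${low:,.0f} .. ${high:,.0f}")
print("inside interval: the impact model held up" if low <= realized_loss <= high
      else "outside interval: revisit the impact model's assumptions")
```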

Once you’re comfortable with impact review and any further estimation, you tackle the real bear: your “frequency” determination. Now, depending on whether you’re prone to Bayesian probabilities or Frequentist counts, you’ll approach the subject differently, but with key similarities. The good news is that despite the differing approaches, the basic gist is: identify the factors or determinants you missed or were horribly wrong about. This is difficult because, more times than not in InfoSec, that 3rd bullet up there, the complexity thing, has ramifications. Namely:

There usually isn’t one cause, but a series of causes that create the state of failure. (The link is Dr. Richard Cook’s wonderful .pdf on failure.)

In penetration testing (and Errata of all people should know this), it’s not just the red/blue team identifying one weakness, exploiting it, and then #winning. Usually (and especially in enterprise networks) there are sets of issues that cause a cascade of failures that lead to the ultimate incident. It’s not just SQLi; it’s SQLi, malware, obtaining creds, finding/accessing confidential data, exfiltration, and anti-forensics (if you’re so inclined). And that’s not even discussing tactics like “low and slow”. Think about it: in that very simple incident description, we can identify a host of controls, processes and policies (and I’m not even bringing error into the conversation) that can and do fail – each causing the emergent properties that lead to the next failure.
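
Here’s a toy model of that cascade, with invented per-control failure probabilities and a deliberately naive independence assumption:

```python
# Toy failure cascade: the incident requires every control in the
# chain to fail. Probabilities are invented, and the independence
# assumption is itself a modeling choice a review should challenge.
from math import prod

chain = {
    "input validation (SQLi)": 0.30,
    "malware detection":       0.40,
    "credential protection":   0.25,
    "data access monitoring":  0.50,
    "exfiltration detection":  0.60,
}

p_incident = prod(chain.values())
print(f"P(full chain fails) = {p_incident:.4f}")  # ~0.009 here
```

Each control looks mediocre in isolation, yet the joint failure is rare; conversely, correlated failures from shared root causes can make the real probability far higher than the independent-failure estimate, which is exactly why the dependency trail matters.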

This dependency trail is being fleshed out for Fukushima right now, but we don’t know the whole story. We certainly didn’t count on the diesel generators failing to resist a tsunami; but then there was the incompatibility of the first backups to arrive, the fact that nobody realized a big earthquake/tsunami would create a transportation nightmare and that it would take a week to get new power to the station, and there were probably dozens of other minor causes that created the whole of the incident. Without being at an end-state determination, where we have a relatively final amount of uncertainty around the number and nature of all causes, it would be absurdly premature to start any sort of hindsight exercise.

It’s too early to do hindsight or causal analysis on Fukushima – there is still a non-zero chance that something really bad could happen, we’re not at a point where the uncertainty in our information has stabilized, and any analysis done now would still be predictive about a future state.

Finally, this: “The best risk management articles wouldn’t be about what went wrong, but what went right.” is just silly.  The best risk management articles are lessons learned so that we are wiser, not some self-congratulatory optimism.

1. (BTW, gentle reader, the word “likelihood” means something very different to statisticians. Just another point where we have really premature standardization.)


What does Coviello's RSA breach letter mean?

After spending a while crowing about the ChoicePoint breach, I decided that laughing about breaches doesn’t help us as much as analyzing them. In the wake of RSA’s recent breach, we should give them time to figure out what happened, and look forward to them fulfilling their commitment to share their experiences.

Right now we don’t know a lot, and this pair of sentences is getting a lot of attention:

Some of that information is specifically related to RSA’s SecurID two-factor authentication products. While at this time we are confident that the information extracted does not enable a successful direct attack on any of our RSA SecurID customers, this information could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack.

With the exception of RSA and its employees, I may be one of the best positioned people to talk about their protocols, because a long time ago, I reverse engineered their system. And when I did, I discovered that “The protocol used by Security Dynamics has substantial flaws which appear to be exploitable and reduce the security of a system using Security Dynamics software to that of a username and password.” It’s important to note that that’s from a 1996 paper, and the flaws I discovered have been corrected.

I’ve been trying to keep up with the actual facts revealed, and I’ve read a lot of analysis on what happened. In particular, Steve Bellovin’s technical analysis is quite good, and I’d like to add a little nuance and color. Bellovin writes: “Is the risk that the attackers have now learned H? If that’s a problem, H was a bad choice to start with.” In conversations after I wrote my 1996 paper, it was clear to me that John Brainard and his engineering colleagues knew that. (Their marketing department felt differently.) RSA has lots of cryptographers who still know it.

The nuance I’d like to point out is that many prominent cryptographers had reviewed their system before I noticed the key management error. So it’s possible that that lesson leads to the statement that the information could be used. That is, the crypto or implementation, however aware its designers were of Kerckhoffs’ Principle, could still contain flaws.

If someone had compromised the database of secrets that enable synchronization, then that would “enable a successful direct attack on” one or more customers. So speculation that that’s the compromise cannot be correct without the CEO of a publicly traded company lying in statements submitted to the SEC. That seems unlikely.

But there’s another layer of nuance, which we can see if we read the advice RSA has given their customers. When I read that list, it becomes apparent that the APT used a variety of social engineering attacks. So it’s possible that amongst what was stolen was a database of contacts who run SecurID deployments. That knowledge “could potentially be used to reduce the effectiveness of a current two-factor authentication implementation as part of a broader attack.”

My opinion is that social engineers using the contacts database in some way is more likely than a cryptanalytic attack, and a cryptanalytic attack is more likely than a compromise of a secrets database. But we don’t know. Speculating like mad isn’t helping. Maybe I shouldn’t even post this, but the leaps of logic out there provoke some skeptical thinking.

[update: some great comments coming in, don’t skip them.]

[Update 2: Between Nicko’s comment on the new letter, and Paul Kocher’s analysis in his Threatpost podcast I’m not sure that this analysis is still valid.]

Questions about a Libyan no-fly zone

With the crisis in Japan, attention to the plight of those trying to remove Colonel Kaddafi from power in Libya has waned, but there are still calls, including ones from the Arab League, to impose a no-fly zone. Such a zone would “even the fight” between the rebels and Kaddafi’s forces.

There are strong calls to move quickly, such as “Fiddling While Libya Burns” in the New York Times. But I think there are some important questions that I haven’t heard answered. A no-fly zone is a military intervention in Libya. It involves an act of war against the current government, and however bad that government is, we need to consider the question not of a “no-fly zone” but an “act of war” and its implications.

Some questions I’d love to hear answered include:

  • What if it doesn’t work? Are we willing to put soldiers on the ground to support the rebels?
  • What if it does? Who’s in charge?
  • What if it half works? We imposed a no-fly zone in Iraq in 1991, and then invaded 11 years later because we hadn’t thought through the question of how to remove the no-fly zone. If the rebels end up with a Kurdistan, how do we finish? Another invasion? Or do we walk away and let the Libyan air force bomb them in 2 years?
  • What does success look like? What’s our goal? Do we support offensive operations? If the rebels end up with some aircraft, do we let them fly?

There are other questions, about sovereignty, but I think there’s a good tradeoff to be made between preventing democide and respecting sovereignty. But I haven’t seen a proposal which seems to have considered what happens after a no-fly zone is imposed. Is there one?

Copyrighted Science

In “Shaking Down Science,” Matt Blaze takes issue with academic copyright policies. This is something I’ve been meaning to write about since Elsevier, a “reputable scientific publisher,” was caught publishing a full line of fake journals.

Matt concludes:

So from now on, I’m adopting my own copyright policies. In a perfect world, I’d simply refuse to publish in IEEE or ACM venues, but that stance is complicated by my obligations to my student co-authors, who need a wide range of publishing options if they are to succeed in their budding careers. So instead, I will no longer serve as a program chair, program committee member, editorial board member, referee or reviewer for any conference or journal that does not make its papers freely available on the web or at least allow authors to do so themselves.

Please join me. If enough scholars refuse their services as volunteer organizers and reviewers, the quality and prestige of these closed publications will diminish and with it their coercive copyright power over the authors of new and innovative research. Or, better yet, they will adapt and once again promote, rather than inhibit, progress.

I already consider copyright as a factor when selecting a venue for my (sparse) academic work. However, there are always other factors involved in that choice, and I don’t expect them to go away. Like Matt, my world is not perfect, and in particular, I’m on the steering committee of the Privacy Enhancing Technologies Symposium, and we publish with Springer-Verlag. I regularly raise the copyright question with the board, which has decided to stay with Springer for now [and Springer does allow authors to post final papers].

There’s obviously a need for a business model for the folks who archive and make available the work, but when many webmail providers give away nearly infinite storage and support it with ads, $30 per 200K PDF is way too high for work that was most likely done on a government grant to improve public knowledge.

I’m not sure what the right balance will be for me, but I’d like to raise one issue which I don’t usually see raised. That is, what to do about citing to these journals? I sometimes do security research on my own, or with friends outside the academic establishment. As a non-academic, I don’t have easy access to ACM or IEEE papers. Sometimes, I’ll pick up copies at work, but that’s perhaps not an appropriate use of corporate resources. Other times, I’ll ask the authors or friends for copies. We need to understand what’s been done to avoid re-inventing the wheel.

If our goal is to ensure that scientific work paid for by the public is not handed over to someone who puts it behind a paywall, perhaps the next step is to apply pressure by only reviewing open access journals and conferences? When I first thought about that, I recoiled from the idea. But the process of looking for previous and related work is a process which must be bounded. There are simply too many published papers out there for anyone to really be aware of all of them, and so everyone limits what they search. In fact, there are already computer security journals, including Phrack and Uninformed, which publish high quality work but are rarely cited by academics.

So I’m interested. Does being behind a paywall suffice as a reason to not cite work? If you answer, “no, it’s not sufficient,” how much time or money do you think you or I should reasonably spend investigating possibly related work?

SIRA Meeting! THURSDAY

THURSDAY, THURSDAY, THURSDAY!!!!!!!

Hi everyone! SIRA’s March monthly webinar is this Thursday, March 10th from 12-1 PM EST. We are excited to have Mr. Nicholas Percoco, Head of SpiderLabs at Trustwave, talk to us about the 2011 Trustwave Global Security Report. Block off your calendars now!

Hello,

Alexander Hutton invites you to attend this online meeting.

Topic: SIRA-Tastic! March Madness!
Date: Thursday, March 10, 2011
Time: 12:00 pm, Eastern Standard Time (New York, GMT-05:00)


Fear, Information Security, and a TED Talk

In watching this TEDMed talk by Thomas Goetz, I was struck by what a great lesson it holds for information security. You should watch at least the first 7 minutes or so. (The next 9 minutes are interesting, but less instructive for information security.)

The key lesson that I’d like you to take from this is that fear doesn’t get people to act. A belief in the efficacy of your action gets people to act. (Don’t miss at 5:45, when he says “oh, they’re trying to scare people.” He’s not talking about your marketing department.)

In information security, people, and especially management, don’t act because they don’t believe that more firewalls, SSL and IDS will protect their cloud services. They don’t believe that because we don’t talk about how well those things actually work. Do companies that have a firewall experience fewer breaches than those with a filtering router? Does Brand X firewall work better than Brand Y? Who knows? And absent knowing, why invest? There’s no evidence of efficacy. Without evidence, there’s no belief in efficacy. Without a belief in efficacy, there’s no investment.
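
To be concrete about the kind of evidence I mean, here’s a sketch with wholly invented counts (the point is the comparison, not the numbers): even a simple two-by-two test of breach rates would be a start.

```python
# Sketch of the efficacy evidence we lack: do firewalled companies
# experience fewer breaches than those with only a filtering router?
# All counts are invented; only the method is the point.
from scipy.stats import fisher_exact

#                breached  not breached
firewall      = [      12,          188]
filtering_rtr = [      25,          175]

odds_ratio, p_value = fisher_exact([firewall, filtering_rtr])
print(f"breach rate with firewall:         {12 / 200:.1%}")
print(f"breach rate with filtering router: {25 / 200:.1%}")
print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.3f}")
```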

We’re going to need to move away from fear and to evidence of efficacy. Doing so is going to require us all to talk about investments and outcomes. When we do, we’re going to start getting better rapidly.