New Cyber Security Bill: Crowdsource Analysis?

A lot of people I trust are suggesting that the “Collins-Lieberman” bill has a substantial chance of passing. I have some really interesting (and time-consuming) work tasks right now, and so I’m even more curious than usual what you all think, especially how this bill measures up against the New School pillars (more on those below).

According to the press release, the “Collins-Lieberman” bill would:

  • Require the Department of Homeland Security (DHS) to assess the risks and vulnerabilities of critical infrastructure systems—whose disruption from a cyber attack would cause mass death, evacuation, or major damage to the economy, national security, or daily life—to determine which should be required to meet a set of risk-based security standards. Owners/operators who think their systems were wrongly designated would have the right to appeal.
  • Require DHS to work with the owners/operators of designated critical infrastructure to develop risk-based performance requirements, looking first to current standards or industry practices. If a sector is sufficiently secured, no new performance requirements would be developed or required to be met.
  • Let the owners of a covered system determine how best to meet the performance requirements and then verify that they are meeting them. A third-party assessor could also be used to verify compliance, or an owner could choose to self-certify compliance.
  • Allow current industry regulators to continue to oversee their industry sectors.
  • Create information-sharing mechanisms between and among the private sector and the federal government to share threats, incidents, best practices, and fixes, while maintaining civil liberties and privacy.
  • Require DHS to consolidate its cybersecurity programs into a unified office called the National Center for Cybersecurity and Communications.
  • Require the government to improve the security of federal civilian cyber networks through reform of the Federal Information Security Management Act.

Some of that, like risk-based security standards, sounds potentially tremendously positive. There are also some clear risks: for example, that DHS will produce a best-practices table of risk-management activity with no focus on outcomes, and then classify it.

Other bits, like information sharing, sound worrisome, because the authors clearly know that there’s a risk of privacy and liberty impacts. It’s not clear what data is to be shared. If it’s (for example) “Verisign has been pwned using a 3-year-old Flash exploit,” there’s minimal impact to liberty. (Of course, since they haven’t said anything, we don’t know how Verisign was owned.) If it’s “We suspect Kevin Mitnick,” then that’s both less useful and more privacy-impactful.
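To make that distinction concrete, here’s a minimal sketch of the two kinds of records, in Python. This is purely illustrative: the field names and the `privacy_review` check are my inventions, not anything from the bill.

```python
# Two hypothetical shared-threat records (all field names invented).
# The first shares technical facts; the second names a person.

low_impact = {
    "affected_org": "Verisign",              # victim already public
    "vector": "3-year-old Flash exploit",    # technical indicator, no PII
    "mitigation": "patch Flash; audit for persistence",
}

high_impact = {
    "suspect": "Kevin Mitnick",              # identifies an individual
    "basis": "analyst hunch",                # and on thin evidence
}

def privacy_review(record: dict) -> bool:
    """Toy check: flag any record that names a person before sharing it."""
    return "suspect" not in record

for rec in (low_impact, high_impact):
    status = "shareable" if privacy_review(rec) else "needs review"
    print(status, "->", rec)
```

The point isn’t the code; it’s that the bill could have drawn this line in a paragraph, and didn’t.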

Stepping back, where should I look for analysis? Have you looked at the bill? What does it do for the New School pillars? As a reminder, those are:

  • Learning from other professions, such as economics and psychology, to unlock the problems that stymie the information security field. The way forward cannot be found solely in mathematics or technology.
  • Sharing objective data and analysis widely. A fetish for secrecy has held us back.
  • The embrace of the scientific method for solving important problems. Analyzing real world outcomes is the best way for information security to become a mature discipline.

In other words, how New School is this bill?

Representative Bono-Mack on the Sony Hack

There’s a very interesting discussion on C-SPAN about the consumer’s right to know about breaches and how the individual is best positioned to decide how to react: “Representative Bono Mack Gives Details on Proposed Data Theft Bill.”

I’m glad to see how the debate is maturing, and how no one bothered with some of the silly arguments we’ve heard in the past.

'Experts' misfire in trying to shoot down Charney's 'Internet Security Tax' idea

Industry ‘experts’ misfired when they criticized Microsoft’s Scott Charney’s “Internet Security Tax” idea. Q: How many of these ‘experts’ know anything about information economics and public policy responses to negative externalities? A: Zero. Thus, they aren’t really qualified to comment. This is just one small case in the on-going public policy discussions regarding the economics of information security, but given the reaction of the ‘experts’, this was a step backward.

The information security industry intelligentsia are often poorly qualified to evaluate economic and public policy solutions to systemic InfoSec problems.  They just don’t have the training or depth of knowledge.  That doesn’t stop them from being quoted in industry media as if they are the be-all-end-all ‘experts’.  I just wish the media would seek out people who knew what the hell they were talking about in this arena.   Here’s a case in point.

In a keynote speech at RSA 2010 (full text), Microsoft’s Scott Charney proposed proactive solutions to systemic problems like botnets. Drawing analogies with public health and environmental protection, he said it might make sense for ISPs to quarantine infected consumer PCs. Then he said:

And then there’s a question of who would pay for that. Well, maybe markets will make it work, but if not, there are other models: use taxes for those who use the Internet. We pay a fee to put phone service in rural areas, we pay a tax on our airline ticket for security. You could say it’s a public safety issue and do it with general taxation. [emphasis added]

In other words, some collective action might be beneficial and either markets might pay for it, or taxes might be necessary.  Two days later, a Microsoft spokesperson clarified:

“Scott Charney did not suggest a new Internet tax to fund cybersecurity programs. As part of his keynote at RSA he recommended that the industry and government look at developing the equivalent of the World Health Organization to combat malware on the Internet,” the spokesperson said. “Within this context he mentioned the need to explore how to develop a sustainable funding model for this initiative, not suggesting that any particular funding model is best.”

To be even clearer: he definitely didn’t say that Microsoft should get the proceeds or play any part in how the money is spent.

In the following days, industry analysts, executives, and bloggers weighed in, and their judgment was mostly negative. A prime example is the Computerworld article with a headline that called it “a horrible idea”, quoting John Pescatore of Gartner. Here are more ‘expert’ reactions quoted in the same article:

  • Pescatore: “‘Why not a tax on all retail goods for a standard antishoplifting service all merchants would have to use?’ A business, he said, can now select what it thinks is the best anti-malware solution, but that choice would presumably vanish if funding for battling the bad guys went national.”
  • Pescatore: “A general tax would reduce the services to the lowest common denominator”
  • Wolfgang Kandek, CTO of Qualys:  “I have a hard time seeing [a tax] work. The Internet is an international body; you can’t regulate it, and you cannot levy a tax. ISPs might have to up their fees to pay for something like this, I can see that, but a tax that brings government into play — I can’t see that.”
  • Randy Abrams, Director of Technical Education at ESET Security: “A tax may be a bad idea, but people will pay for it one way or another.”
  • Andrew Storms, Director of Security Operations at nCircle Network Security: “I don’t have a problem with charging a fee and giving it to good works for the whole.  The problem is that one, you have to find a big, smart and trustworthy organization to handle this. And most people will agree that’s not the government, and that’s not Microsoft.”
  • Storms: “More likely is that an ISP will take the plunge, charge its users a little extra to keep their machines clean, and prove that it’s possible.  Then I could see a consortium of ISPs getting together to do that.”

Here are some of the negative reactions from bloggers:

“Let’s also not forget that Microsoft has gone out of its way to create a monoculture where one OS dominates, through legal and illegal methods. So the idea that we should now all pay to solve a problem that Microsoft not only wanted to create, but made billions of dollars in the process is frankly … ridiculous.”

“Microsoft’s “Trustworthy Computing” shtick has gone so far over the oxymoronic top that it’s just no longer possible to give the company the benefit of the doubt. … Really, Scott? … Did you really think we’d all look at each other with nods of agreement, impressed by the brilliance of your epiphany? Didn’t you realize that revelation might just backfire on you?

“It’s unfathomable that a company with Microsoft’s resources can be so clueless and out of touch. … If Microsoft expects to be taken seriously as an enabler of “trustworthy computing,” it needs to do a lot more than this to demonstrate trustworthiness. Taxing users who find the software they bought is non-secure is like taxing Toyota owners for finding they have faulty gas pedals.”

This is where I step in and call “BOGUS!”

Q: How many of these ‘experts’ know anything about information economics and public policy responses to negative externalities? A: Zero.

Even more basic Q: How many of them bothered to find out what Charney was really proposing, rather than just reacting to the headline version (“Net tax to clean computers”) or to the fact that someone from Microsoft said it? A: Of the articles and blog posts I saw, only two bothered to dig into the speech and seek to understand or clarify Charney’s comments: BetaNews and yinhuan.net. Conversely, the comments by Pescatore and Kandek led me to believe that they didn’t really understand the proposed idea. Others used this opportunity to throw rocks at Microsoft rather than deal with the substance of the ideas.

Regarding the idea itself, I think the comment by Randy Abrams is on the mark: “… people will pay for it one way or another.”    Right now, we pay for it through the cost of security breaches and through the cost of inefficient security spending.

The idea of taxes as a way to counteract or pay for the mitigation of negative externalities has been thoroughly researched in economics, especially environmental economics, where it goes by the name “Pigouvian taxation.”
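For the economics-minded, here’s the standard result in one line. This is a textbook sketch in my own notation, not anything from Charney or his critics: if an activity at level q has private marginal cost MC_p(q) but imposes marginal external damage MD(q) on everyone else, then

```latex
% Pigouvian tax, textbook form: society's marginal cost adds the external
% damage to the private cost, and the per-unit tax t* is set equal to the
% marginal damage at the efficient activity level q*.
\[
MC_s(q) = MC_p(q) + MD(q), \qquad t^{*} = MD(q^{*})
\]
```

With the tax t* in place, private actors face the full social cost of their choices, which is exactly the “internalize the externality” logic behind Charney’s funding remark.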

Myself, I’m more in favor of market-based funding methods (e.g. insurance): see “Incentive-based Cyber Trust.” But mandated insurance and other mandates can themselves be seen as a form of “tax,” so the main question is what form of incentives and funding is most effective and most efficient.

This is just one small case in the on-going public policy discussions regarding economics of information security, but given the reaction of the ‘experts’, this was a step backward.

Green Dam

Update 26 June 2009: The status of Green Dam’s optionality is still up in the air.  See, for example, this news story on PC makers’ efforts to comply, which points out that

Under the order, which was given to manufacturers in May and publicly released in early June, producers are required to pre-install Green Dam or supply it on disc with every PC sold in China from July 1.

Last week, it appeared the government backed away from requiring compulsory installation by users, but manufacturers are still being required to provide the software.

I suspect that there will be at least one more update to this post before all is said and done.

Update 17 June 2009: Green Dam is now to be optional, but installed-by-default.

There’s a great deal of discussion in China right now about the new government-mandated “Green Dam” Internet filtering software that must be installed on all PCs in the People’s Republic of China:

Every PC in China could be at risk of being taken over by malicious hackers because of flaws in compulsory government software.

The potential faults were brought to light by Chinese computer experts who said the flaw could lead to a “large-scale disaster”.

The Chinese government has mandated that all computers in the country must have the screening software installed.

It is intended to filter out offensive material from the net.

I was in a taxi in Beijing a couple days ago and the driver was listening to a call-in/talk radio show whose topic was the software and its flaws/weaknesses. My post, however, had to wait until I returned states-side, because this ‘blog was blocked on all three of the connections I tried while I was in China.

The consensus about this software among the locals that I spoke to is that it will be widely ignored, except in places like primary schools and some government offices.

There is so much to say about this, however, that I almost don’t know where to begin. First, there is the issue of externalities. The benefits of this software accrue to the government censors. The costs, however, will be borne by those whose machines are rendered less stable, less secure, and less useful (due to the censoring). This is the opposite of the theoretical goal of regulation, which is to transfer externalities back onto their creators, not the other way around.

The results here may be even more toxic than observers currently realize, however. By demanding compliance even when it does direct harm to those who must comply, the government undermines the loyalty of the citizenry and its own credibility. It may only be one straw on the camel of Chinese citizens’ discontent, but eventually, there will be a straw that breaks the camel’s back. This software has re-energized the domestic debate over the role of government censorship and whether its goal is to keep the populace safe or merely in line.

Similarly, there is a lesson here for security and risk managers: policies must also be perceived as benefiting those they govern. Corporations whose policies are too obviously unfair, or which demonstrate contempt for employees, produce similar disloyalty. The effect may not be immediately obvious in the current job market, since people generally won’t quit in protest if they can’t find another job, but that only makes it worse: a grumbling workforce is an unproductive workforce.

Yes, we must achieve our goals, in my case protecting information, and the combination of reduced budgets and nervous employees makes it that much harder to achieve results.  But in times like these, we also need to tread more lightly than ever since the resisters of policy–those employees who are more likely to be a risk–are more likely to stay with us and undermine it from within.

So, as ever, when we are dealing with security, the mantra remains, “People, process and technology–in that order.” Any attempt to attack the problem otherwise frequently produces unintended–and often unwanted–results.

Footnote:

I don’t know what my employer’s corporate stance is going to be, but we have a significant white-collar presence in China, so we will probably be unable to ignore the problem.

When asked, I will argue that we already perform this filtering on our corporate proxy servers, but that does not change the fact that the government has created a huge externality for its population and for companies operating there as part of a futile attempt to prevent Chinese citizens from viewing porn or dissident political commentary–not necessarily in that order, IMHO.

Statistics Police?!

From Gelman’s blog:

U.K. Sheriff Cites Officials for Serious Statistical Violations

I don’t know if we need an “office” of information assurance in the government sector, but it would be nice to have some penalty on the books for folks who abuse basic common sense statistical principles. Of course, the *real* answer lies in education and disclosure, but history would suggest that the Enlightenment Bus is usually late and tends to make many unscheduled detours.

Oh, and just so you can share in my mental pain, here’s the song that will probably be stuck in my head for the rest of the day now:

The Eyes of Texas Are on Baseboard Management Controllers? WHAT??!!!

OR: TEXAS HB1830S IS SWINE FLU LEGISLATION, IT’S BEEN INFECTED BY PORK!


UPDATE: It looks like the “vendor language” around Section Six has been struck!

Given Bejtlich’s recent promises, I thought we’d take a quick but pragmatic look at why risk assessments, even dumb, back-of-the-envelope assessments, might just be a beneficial thing.

As you probably know, the guys here at NewSchool and the guys at sister site EmergentChaos are very interested in the government regulation of cyberspace.  Oh, we also happen to be pretty good with the information risk stuff, too.  So I’m sure you wouldn’t be surprised that we spent some time looking over what one of the biggest, most influential states in the Union, Texas (Austin is also one of my most favorite places thanks to my friend Joe Visconti), is doing about legislating information security.  Currently they have a bill in consideration, HB1830S.  Highlights here:

http://www.legis.state.tx.us/tlodocs/81R/analysis/html/HB01830S.htm

HB1830S has some pretty good stuff in it.  The kind of legislation that tends to make sense, even if you are a “government hands off” kind of guy like I am.

Section 2 is about background checks and having policies and so forth. This is wonderful; it addresses about the only control we have against internal threat agents with significant privileges.

Section 3 seems to exempt information security information (like specific vulnerabilities) from the public record. I’m all for some level of disclosure here (something like the letter grades the federal government releases is fine), but really, the citizens of the state don’t need particulars.

Section 4 talks about what InfoSec information should be confidential and about vendor relationships. After working on some state RFPs (not Texas) and watching how a specific requirement for a “Penetration Test” was awarded to someone who, in their RFP response, specifically said that they were only going to perform a “Vulnerability Assessment”, I appreciate these sorts of clauses.

Section 5 covers internal state reporting concerns for vuln data, great.

SECTION SIX WHAT THE !@#%^!@@#$* IS THIS???!!!

“Government Code, to require that the biennial operating plan describe the state agency’s current and proposed projects for the biennium, including how the projects will address certain matters, including using, to the fullest extent, technology owned or adapted by other state agencies, including closed loop event management technology that secures, logs, and provides audit management of baseboard management controllers and consoles of cyber assets.”

Let’s parse that and read it again:

“Government Code, to require that the biennial operating plan describe the state agency’s current and proposed projects for the biennium, including how the projects will address certain matters,…”

Looking good, it’s always nice to have a plan.

“…including using, to the fullest extent, technology owned or adapted by other state agencies,…”

Great! I’m all for sharing information among security professionals, that’s pretty much one of the fundamental pillars of the New School.

“…including closed loop event management technology that secures, logs, and provides audit management of baseboard management controllers and consoles of cyber assets.”

Wait, what?

Ok, I’ve heard of closed-loop processing in Business Intelligence (a system is said to perform closed-loop processing if it feeds information back into itself). I’ve heard the phrase “closed-loop” in SOA. But I’m sorry, the use of “closed loop event management technology that secures, logs, and provides audit management of baseboard management controllers” sounds like somebody lifted it from a vendor brochure.
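For what it’s worth, here’s my best guess at what that phrase is selling, sketched in Python. Every name here (`fetch_bmc_events`, `respond`, `audit_log`) is invented; I’m reverse-engineering marketing copy, not describing a real product:

```python
# A guess at "closed loop event management ... of baseboard management
# controllers": poll the BMC's event log, act on what you find, and keep
# an audit record of having acted. All functions are hypothetical stubs.

import time

def fetch_bmc_events(host):
    """Placeholder: in real life you'd query the BMC's event log, e.g. over IPMI."""
    return [{"host": host, "event": "chassis intrusion", "severity": "high"}]

def respond(event):
    """Placeholder response: alert, quarantine, open a ticket, etc."""
    print(f"responding to {event['event']} on {event['host']}")

def audit_log(entry):
    """The 'audit management' part: record that we acted, and when."""
    print(f"AUDIT: {entry}")

def closed_loop(hosts):
    # "Closed loop" in exactly the sense defined above: the system's output
    # (responses, audit entries) feeds back in as input for the next pass.
    for host in hosts:
        for event in fetch_bmc_events(host):
            respond(event)
            audit_log({"event": event, "action": "responded", "ts": time.time()})

closed_loop(["server-01"])
```

If that’s roughly right, it’s a perfectly ordinary monitoring loop, which makes legislating it by name all the stranger.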

Also, I know that this blog generally attracts some of the best and most forward-thinking InfoSec readers/professionals – even if you disagree with us. But if you need to go look up what a baseboard management controller (BMC) is and does, to remind yourself, go right ahead. I had to. (In short: it’s the small out-of-band processor on a server motherboard that lets administrators monitor and power-cycle the machine independently of the host OS.)

Now read the rest of HB1830S highlights there and put Section Six in context.

Is it just me, or does this seem like someone in Texas is trying to legislate the use of a specific vendor’s rather esoteric and specific security control? I mean, even if BMC security is really important in, say, SCADA systems – is there a reason that the dozens (?) of other agencies would have to waste their money on this?

And why legislate this specific technology? Shouldn’t the agency security management be able to do their own risk assessments and prioritize based on the significant threats that, you know, they’re ACTUALLY SEEING? And I’m not asking for Forests of Bayesian Belief Networks to establish risk and vulnerability information via Monte Carlo simulations here, I’m asking for a basic risk-based sanity check to make decisions, decisions based in reality, not fear. I mean, I ran a quick poll of security pros on Twitter about the BMC, and so far nobody has claimed to have ever seen one piece of exploit code, much less heard of an actual *incident*. Now I’m sure that the State of Texas does a great job with Information Security and all, but I’m willing to bet good money that the BMCs of their systems are the least of their security problems.
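Here’s the kind of napkin math I have in mind, sketched in Python. Every number below is made up purely for illustration; an agency would plug in whatever its own incident data actually says:

```python
# Back-of-the-envelope annualized loss expectancy (ALE) comparison.
# ALE = (expected incidents per year) * (expected loss per incident).
# All figures are invented for illustration only.

threats = {
    # threat: (incidents/year you actually observe, avg loss per incident $)
    "phishing of agency staff": (12.0,   50_000),
    "lost/stolen laptops":      ( 4.0,   75_000),
    "BMC exploitation":         ( 0.01, 250_000),  # nobody I polled has seen one
}

for name, (rate, loss) in sorted(
        threats.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    ale = rate * loss
    print(f"{name:28s} ALE ~ ${ale:>12,.0f}")
```

Even with a generous guess at BMC incident rates, it lands at the bottom of the list, which is the whole point: spend where the losses actually are.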

Bottom line: legislating disclosure, policy, and even ensuring critical processes are in place is a useful endeavor, and the rest of HB1830S does a good job. But legislating a specific technology is bad for a couple of reasons:

1.)  It removes management’s ability to expend resources on the actual problems they have. You are legislating without any context of risk, not even poorly derived risk statements.

2.)  If it takes an act of the legislature to force adoption, it will take a similar or more difficult act of politics to remove that technology when it’s outlived its usefulness (and one wonders if BMC-securing technology would EVER be useful except in fringe cases).

Things Are Tough, Don’t Waste Taxpayer Money, Please!

HB1830S could be a good piece of legislation. Strike the BMC aspect of Section Six and it becomes more than reasonable. Heck, add “to the fullest extent POSSIBLE” or “to the extent that’s REASONABLE” and ask state CISOs to provide Threat Event Metrics for the BMC if you want. But please, Texas, whatever this vendor is paying you in lobbying perks – it’s not worth the waste and hassle and the risk of derision from the parts of the Information Security community that actually happen to be concerned with public safety.