SecurityFocus points to a nice short article over at Silicon.com suggesting that
Gartner advises that for companies building their own software, developers should be pushed to put security at the head of their list. It’s not just in-house tech makers that need a word in their ears – the analysts suggest end users should give vendors grief about tightening up their security procedures too.
John Pescatore, the analyst in question, nails it. If you want more security from your vendor, you’ve got to make it a buying criterion. If you want more security from your developers, you’ve got to make time for it in the schedule, and you’ve got to give them tools and training to know what to do. Better security isn’t hard; it just costs some money. Do you prefer to spend that up front, or on operations later?
The New York Times reports on a lack of doctors in Canada, along with a rise in Canadians using emergency rooms to replace family doctors. (Use BugMeNot if you don’t want to register.)
The basic problem is economic. Doctors are much better paid in the US than in Canada, and doctors can easily move. It’s also harder for a doctor to be entrepreneurial in Canada, not only because of the extra paperwork, but also because some things that they may want to do are actually banned. For example, a doctor can’t open a private surgery with the plan to sell overnight stays, even if people want to pay for it. The slur against that is it would ‘create a two-tier system.’ Similarly, the supplemental health insurance I had while working in Montreal would pay for a private hospital room, but there were either none or very few, reserved for senior politicians and the otherwise well-connected. Apparently a private room counts as two-tier.
Of course, there is a two-tier system now. A well-off friend once flew to the US for treatment he needed. It seems that Canada could do a better job of allowing supplemental private care while still providing the base level of health care that it does. And another friend, just to balance the anecdotes, has gotten good long-term care for an unusual and life-threatening condition. He’d be long bankrupt in the US.
Peter Swire has a new working draft, A Model For When Disclosure Helps Security. It’s a great paper which lays out two main camps, which he calls open source and military, and explains why the underlying assumptions cause clashes over disclosure. That alone would be a useful paper, but he then extends it into a semi-mathematical model of the factors that contribute to the usefulness of hiding information. (Semi-mathematical because there are no numbers attached, but rather “high/low” rankings.)
As part of a larger project on security configuration issues, I’m doing a lot of learning about taxonomies and typologies right now. (A taxonomy is a hierarchical typology.)
I am often jealous of the world of biology, where there are underlying realities that can be used for categorization purposes. (A taxonomy needs a decision tree. Any trained person using this tree should classify the same items the same way.)
A new type of shark has recently been discovered, in the Sea Star Aquarium, in Coburg, Germany. This is (at least) the second zoo that the shark has been in.
“We are not embarrassed,” said [Schonbrunn Zoo] spokesman Dr Ekkehard Wolf. “We get thousands of exotic animals every year. It is not possible to categorize them all.” (From The Telegraph.)
See a picture (and read the article) at Unterwasser.de, or read Google’s translation.
Even the lucky biologists run into difficulty classifying their species. I feel better trying to classify minimum time between password changes.
This post by Todd Zywicki clearly illustrates the difference between law professors and economics professors.
Over at TaoSecurity, Richard writes:
Remember that one of the best ways to prevent intrusions is to help put criminals behind bars by collecting evidence and supporting the prosecution of offenders. The only way to ensure a specific Internet-based threat never bothers your organization is to separate him from his keyboard!
Firstly, I’m very glad that the second, qualifying sentence is there. It provides some context. However, I’m not sure that I care that a specific threat stops; what I care about is that the class of threats goes away.
If the odds that a specific criminal hacker goes to jail are low, then the penalties need to be exceptionally severe and well publicised to create a deterrent effect. (This is roughly a criminal attack loss expectancy, which someone smart has done work on.)
We can see that the odds that an attacker goes to jail are relatively small, because there is clearly a large attacker population and very few criminal sentencings. I’m curious: how many attacker convictions would we need each year to change the economics of this and deter 15-year-olds from bringing down CNN?
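The economics above can be sketched as a back-of-the-envelope expected-value calculation. The numbers here are purely illustrative assumptions of mine, not data: the point is just that deterrence depends on the product of conviction odds and penalty, not on the penalty alone.

```python
# Toy deterrence model: an attacker is deterred when the expected
# penalty outweighs the expected gain. All figures are illustrative.

def expected_penalty(p_conviction: float, sentence_years: float) -> float:
    """Expected cost to an attacker, in sentence-years."""
    return p_conviction * sentence_years

# If only 1 in 1,000 attackers is ever convicted, even a harsh
# 10-year sentence yields a tiny expected cost to the attacker...
low_odds = expected_penalty(0.001, 10)

# ...while convicting 1 in 20 with a modest 1-year sentence
# produces a larger expected cost, and so more deterrence.
better_odds = expected_penalty(0.05, 1)

print(low_odds, better_odds)
```

Under these (made-up) numbers, raising the odds of conviction does far more than raising the sentence, which is why the question of how many convictions per year matters.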
Alec Muffet comments on sysadmin resistance to applying patches.
As Steve Beattie and a bunch of the rest of us wrote, the issue is that there’s a tradeoff to be made to find the optimal uptime for a system. It’s a tradeoff between a security risk and an operational risk.
Organizationally, different teams are often measured on different parts of the risk, making a holistic view harder. Vendors need to work to make sure each patch is a smaller change. Roll-ups are nice, but roll-ups naturally combine all of the risks of all of the small changes. (SP2 is risky because of the number of changes that it makes to the OS, and riskier because some of them are new, not rolled-up changes.) Now, I’m not suggesting that the right thing to do is to release each change as a separate patch, but vendors need to address the fear people have of messing up their systems. One way to do that would be to focus on a good, high-assurance roll-out/roll-back mechanism as part of the operating system.
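The tradeoff can be made concrete with a toy model. This is a minimal sketch of the shape of the problem, not the actual model from the Beattie et al. paper: the curve shapes and constants here are my own illustrative assumptions. Waiting to patch increases the window in which a known hole can be exploited, while patching immediately increases the chance of being bitten by a bad patch that hasn’t matured yet.

```python
# Toy patch-timing model: find the delay that minimizes the sum of
# security risk (rises with delay) and operational risk (falls as the
# patch matures). Curve shapes and constants are illustrative only.
import math

def security_risk(days_waited: float) -> float:
    # Exposure grows the longer a known hole stays open.
    return 1 - math.exp(-days_waited / 10)

def operational_risk(days_waited: float) -> float:
    # Chance the patch itself breaks something, falling as it matures.
    return math.exp(-days_waited / 7)

def total_risk(days_waited: float) -> float:
    return security_risk(days_waited) + operational_risk(days_waited)

# Scan for the delay (in whole days) that minimizes combined risk.
best = min(range(0, 61), key=total_risk)
print(best, round(total_risk(best), 3))
```

With these assumed curves the minimum falls somewhere between “patch now” and “wait a month,” which is the qualitative result: neither extreme is optimal.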
According to David Garrity, a technology analyst in New York with Caris & Co.:
It was supposed to democratize the process and let people buy in at just a few shares, but it was a miserable failure because the organizers didn’t realize the securities regulations that require people who bid to have a certain net worth. (From Wired News.)
So, assuming that Garrity has his facts right, this is probably the Qualified Investor rule, which requires that an investor in a non-public stock have a net worth of more than a million bucks, or income above $250,000. It’s not always enforced, but when it is, in the IPO process, it’s one of the few rules that literally help the rich get richer. The rules got a fair bit of public notice when Linux companies started going public and offering friends-and-family shares to coders who contributed to Linux. The coders, by and large, were not rich, and several banks promised to ignore lies they told on their QI attestations.
Now, is this a $210 million error? Quite possibly. One of the problems discussed has been lower-than-expected participation. Given Google’s exceptionally low fees (expressed as a percentage of the deal size), it’s possible that they’re getting bad service from their banks. That also fits with the unregistered stock not being discovered. I can more easily see a banker not stressing a point like this than I can see them spending tens of millions to send a message.
Other commentary from Gordon Smith argues that it was a move to manage securities litigation.
[Update: SamaBlog accurately points out that the law is there to protect people from high-risk investments. I should have said that, and made clear that I’m discussing the unintended consequences of the law here.]
So Google popped 18% today. That shouldn’t have happened. The goal of their much-discussed auction was to ensure that they made the money. The typical bubble IPO involved a “pop” of as much as 100-300% on opening day. This put huge sums in the hands of bankers and the bankers’ friends, sometimes illegally. Ideally, Google’s trading should have been at about 85 today, because anyone who wanted to pay $100 for the stock could have paid $85 yesterday. So what happened?
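The arithmetic of what a pop costs the issuer is simple. The $85 offer price and the 18% pop are from the post; the share count below is my assumption for illustration, not a figure from the post.

```python
# Quick arithmetic on the cost of a first-day "pop": every dollar of
# pop is a dollar buyers were willing to pay that the issuer didn't
# raise. Share count is assumed for illustration.

offer_price = 85.00
first_day_close = offer_price * 1.18      # an 18% pop, per the post
shares_sold = 19_600_000                  # assumed share count

money_left_on_table = shares_sold * (first_day_close - offer_price)
print(f"${money_left_on_table:,.0f}")
```

Under that assumed share count, an 18% pop means the issuer left roughly $300 million on the table, which is why even a “small” pop undercuts the stated goal of the auction.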