Next time, look to Gourmet Depot and see if they have replacement parts.
It was easy to find their recommendation for our specific coffee machine; the recommended pot fits great and was cheap.
Check out Gourmet Depot next time you’re in this bind.
In the past, we have had some decidedly critical words for Ponemon Institute reports, such as "A critique of Ponemon Institute methodology for 'churn'" or "Another critique of Ponemon's method for estimating 'cost of data breach'". And to be honest, I'd become sufficiently frustrated that I'd focused my time on other things.
So I’d like to now draw attention to a post by Patrick Florer, “Some Thoughts about PERT and other distributions“, in which he says:
What follows are the results of an attempt to answer this question using a small data set extracted from a Ponemon Institute report called “Compliance Cost Associated with the Storage of Unstructured Information”, sponsored by Novell and published in May, 2011. I selected this report because, starting on page 14, all of the raw data are presented in tabular format. As an aside, this is the first report I have come across that publishes the raw data – please take note, Verizon, if you are reading this!
So I simply wanted to offer kudos to the Ponemon Institute for doing this.
I haven’t yet had a chance to dig into the report, but felt that given our past critiques I should take note of a very positive step.
There’s been a lot of noise of late because Oracle just released their latest round of patches, a total of 78 of them. There’s no doubt that that is a lot of patches, but in and of itself the number of patches is a terrible metric for how secure a product is. This is even more the case for companies that bundle all of the patches for all of their product lines at once. Most of the chatter I’ve seen implies that all 78 are for the main Oracle database, but if you read their announcement, you’ll see the breakdown is as follows:
Oracle Database Server – 2 patches
Oracle Fusion Middleware – 11 patches
Oracle E-Business Suite – 3 patches
Oracle Supply Chain Products Suite – 1 patch
Oracle PeopleSoft – 6 patches
Oracle JD Edwards – 8 patches
Oracle Sun Products – 17 patches
Oracle Virtualization – 3 patches
Oracle MySQL – 27 patches
Fully 60% of the above patches are for OSS products. So which is more secure: open source or closed source? Or let’s compare Oracle DB with MySQL: 2 versus 27 patches?
What do these numbers tell you? Absolutely nothing. Even with something like CVSS you still can’t tell which product is more secure. The whole thing is a load of malarkey. The product that is and will remain most secure is the one that you can manage and maintain the easiest for your organization.
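The 60% figure above is easy to check with quick arithmetic. A small sketch follows; note that which product lines count as "OSS" is an assumption on my part (MySQL, the Sun products, and Virtualization), since the post doesn't spell out the grouping:

```python
# Patch counts from Oracle's announcement, as listed above.
patches = {
    "Database Server": 2,
    "Fusion Middleware": 11,
    "E-Business Suite": 3,
    "Supply Chain Products Suite": 1,
    "PeopleSoft": 6,
    "JD Edwards": 8,
    "Sun Products": 17,
    "Virtualization": 3,
    "MySQL": 27,
}

total = sum(patches.values())  # 78 patches across all product lines

# Assumed OSS bucket: MySQL, the Sun products, and Virtualization.
oss = patches["MySQL"] + patches["Sun Products"] + patches["Virtualization"]

print(f"{total} total, {oss} from OSS products ({oss / total:.0%})")
# -> 78 total, 47 from OSS products (60%)
```

Whatever the exact grouping, the point stands: the raw count tells you nothing about relative security.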
(From The Oatmeal.)
It’s widely understood that Seattle needs a better way to measure snowfall. However, what’s lacking is a solid proposal for how to measure snowfall around here. And so I have a proposal.
We should create a new unit of measurement: The Nickels. Named after Greg Nickels, who lost the mayorship of Seattle because he couldn’t manage the snow.
Now, there are a couple of ways we could define the Nickels. It could be:
I’m not sure any of these are really right, so please suggest other ways we could define a Nickels in the comments.
I am saddened to pass on the news that Ulf Müller, a colleague at Zero-Knowledge Systems, has died in tragic and violent circumstances.
I remember Ulf as quiet, gentle, kind and am tremendously saddened by his loss.
The most recent news story is “Computer-Experte in Transporter erschlagen” (“Computer expert beaten to death in a van”).
Nils Kammenhuber of the Technical University of Munich is acting as a representative for the family.
I am seeking feedback from others who may have experience developing and presenting security metrics to various stakeholders at their organization. I have a number of questions I’ve thought of, and put them into a simple survey form. I am looking for any examples of the good, bad and ugly involved in developing meaningful metrics. What has worked well and what has failed miserably? How have you packaged and presented the results in a meaningful way to your executives?
If you can spare a few minutes, please consider taking this survey. Even if you answer one question, it is helpful!
You may also simply share an example, graphics or slides via email. I will be using your feedback to facilitate peer discussions and in a presentation aimed at educating security professionals on how they can improve their security metrics program.
Thanks in advance,
From an operations and security perspective, continuous deployment is either the best idea since sliced bread or the worst idea since organic spray pancakes in a can. It’s all a matter of execution. Continuous deployment is the logical extension of the Agile development methodology. Adam recently linked to a study showing that a 25% increase in features led to a 200% increase in code complexity, so by making change sets smaller we dramatically decrease the complexity of each release. This translates to a much lower chance of failure. Smaller change sets also mean that rolling back from a failure state is much easier. Finally, smaller change sets make it easier to identify what broke unit and integration tests, and they are far easier to code review, which increases the chances of catching serious issues prior to deployment. All of this points to building systems that are more stable, more reliable, have less downtime, and are easier to secure. This assumes, of course, that you are doing continuous deployment well.
In order for continuous deployment (and DevOps in general) to be successful, there need to be consistent processes and automation. There are lots of other factors as well, such as qualified developers, proper monitoring, and the right deployment tools, but those are for another discussion.
Consistent processes are essential if you are to guarantee that the deployment happens the same way every time. To put it bluntly, when it comes to operations and security, variation is evil. Look to Gene Kim’s research (Visible Ops, Visible Ops Security) or more traditional manufacturing methodologies like Six Sigma for a deep dive into why variation is so very, very bad. The short version, though, is that in manufacturing, variation means products you can’t sell. In IT, variation means downtime, performance issues, and security issues. At the most basic level, if you are making changes while also changing how you make the changes, you create a situation that is much harder to troubleshoot. This translates to longer incident response times and longer times to recovery, which nobody wants, especially in an online business.
The easiest way to keep a deployment process consistent is to remove the human element as much as possible. In other words, automate as much of it as possible. This has the added advantage of freeing the humans to review errors and identify potential issues faster. It doesn’t matter which automation mechanism you use as long as it’s stable and supports your operating environment well. Ideally, it will either be the same system already being used by the operations and applications teams (e.g. Chef, Puppet, CFEngine) or one that can be integrated with those systems (e.g. Hudson/Jenkins).
With good check-in/build release messages, you even get automated logging for your change management systems and updates to your configuration management database (CMDB).
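As a sketch of what removing the human element can look like, here is a minimal, hypothetical deploy gate. The function names are illustrative stand-ins, not any particular tool’s API; in practice each step would invoke your test runner or configuration-management system:

```python
# A minimal sketch of an automated deploy gate. Each function is a
# hypothetical stand-in for a real command (test suite, Chef/Puppet run).

def run_tests(revision):
    # Stand-in for invoking the unit and integration test suites.
    print(f"testing {revision}")
    return True

def push_to_production(revision):
    # Stand-in for a Chef/Puppet/CFEngine-driven rollout.
    print(f"deploying {revision}")
    return True

def roll_back(revision):
    # Small change sets make this step cheap and well understood.
    print(f"rolling back {revision}")

def deploy(revision):
    """Run the same steps, in the same order, every time -- no ad hoc variation."""
    if not run_tests(revision):
        return False          # never ship a revision that failed its tests
    if not push_to_production(revision):
        roll_back(revision)   # failure triggers an automatic, scripted rollback
        return False
    return True
```

The point of the sketch is the shape, not the stubs: every deploy takes the identical path, failures stop the pipeline, and rollback is itself an automated step rather than a human scrambling at 3 a.m.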