Initial Thoughts on the 2009 Verizon DBIR
Last night, the fine folks at Verizon posted the 2009 version of the DBIR. I haven’t had time to do a full deep dive yet, but I thought I’d share my initial notes in the meantime. Stuff in italics is from the DBIR; regular text is me:
81 percent of organizations subject to PCI DSS had not been found compliant prior to the breach.
81%? Wow, PCI really is hard. My initial question was: is this an implication that none were actually compliant at the time of the breach? This gets answered later in the report. The short version is “no, not really.”
The value associated with selling stolen credit card data have dropped from between $10 and $16 per record in mid-2007 to less than $0.50 per record today.
Is this based on internal data or on some public study?
Results from 600 incidents over five years make a strong case against the long-abiding and deeply held belief that insiders are behind most breaches.
The distribution of breach sources in 2008 is presented in Figure 4. (74% external, 20% internal, 32% partner) The results are quite similar to that of the 2004 to 2007 data set and continue to challenge some of the prevailing wisdom in the security community with regard to the origins of data breaches
Real data! This is awesome. Finally something other than the old FBI/CSI survey, may I never see it again on a vendor slide deck. Also, I love Figure 5.
Figure 7 is fascinating in that we do see the truth that, on average, internal breaches are far more devastating, by nearly a factor of 3. However, they are 4 times LESS likely to happen. This really highlights the fact that although the really high-profile cases like Hannaford and TJX were public, they are in fact edge cases.
Of all insider cases in 2008, investigators determined about two-thirds were the result of deliberate action and the rest were unintentional.
1/3 of the time: don’t attribute to malice that which can be attributed to human stupidity.
While it’s tempting to infer that administrators acted more deliberately and maliciously than end-users and other employees, the evidence does not support this conclusion. The ratio was roughly equal between them
I haven’t found it yet, but there was apparently a recent NSA/Carnegie Mellon study on insiders that found similar results. Anyone know what this is?
With respect to breaches caused by recently terminated employees, the following two scenarios were observed:
— Employee was terminated and his/her account was not disabled in a timely manner.
— Employee was notified of termination but was allowed to “finish the day” unmonitored and with normal access/privileges.
This encompasses all areas of access (decommissioning accounts, disabling privileges, escorting terminated employees, etc.).
This is why those auditors are right to be checking IAM process!
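The two termination scenarios above are exactly the kind of gap a crude reconciliation between HR records and live directory accounts would catch. A minimal sketch of that check (all account names and dates are hypothetical):

```python
from datetime import date, timedelta

# Hypothetical data: HR termination dates and accounts still enabled in the directory.
terminations = {"jdoe": date(2009, 4, 1), "asmith": date(2009, 4, 10)}
active_accounts = {"jdoe", "bjones", "asmith"}

def stale_accounts(terminations, active_accounts, grace_days=1, today=None):
    """Return accounts still enabled past their termination date plus a grace period."""
    today = today or date.today()
    return sorted(
        user for user, term_date in terminations.items()
        if user in active_accounts and today > term_date + timedelta(days=grace_days)
    )

# Both terminated users are still enabled well past their last day.
print(stale_accounts(terminations, active_accounts, today=date(2009, 4, 15)))
```

Run daily, something this simple would flag both of the report’s scenarios before an ex-employee’s access lingers for weeks.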
In the large majority of cases, it was the lax security practices of the third party that allowed the attack. It should not come as a surprise that organizations frequently lack measures to provide visibility and accountability for partner-facing systems
Yet more justification to have security provisions in the contracts with cash penalties for failure.
We investigated an entire series of cases in which multiple organizations within the same industry all suffered breaches within a very short timeframe. It didn’t take long to figure out that each used the same third-party vendor to remotely manage their systems. Unfortunately, that vendor neglected to change the default username and password and used the same credentials across multiple clients
It’s the little things that bite you the hardest.
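Both failures in that vendor story, default credentials and one credential reused across clients, are cheap to audit for. A toy sketch, assuming a hypothetical inventory of remote-management credentials:

```python
# A tiny illustrative list of well-known default credential pairs.
DEFAULTS = {("admin", "admin"), ("root", "password"), ("admin", "changeme")}

# Hypothetical inventory of remote-management credentials per client system.
systems = {
    "client-a-pos": ("admin", "admin"),
    "client-b-pos": ("admin", "admin"),
    "client-c-pos": ("opsuser", "Xk2!9qLm"),
}

def audit(systems, defaults=DEFAULTS):
    """Return (systems on default creds, groups of systems sharing one credential)."""
    on_defaults = sorted(name for name, cred in systems.items() if cred in defaults)
    by_cred = {}
    for name, cred in systems.items():
        by_cred.setdefault(cred, []).append(name)
    shared = sorted(tuple(sorted(names)) for names in by_cred.values() if len(names) > 1)
    return on_defaults, shared

print(audit(systems))
```

The same two clients show up in both findings, which is precisely the pattern that let the investigators connect the breaches in the report.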
Figure 15: Shared/default credentials were just as likely as SQL injection, both of which were an order of magnitude more common than buffer overflows and XSS attacks.
More proof that attacks are shifting to where the data is known to be.
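On the SQL injection side, the standard fix is parameterized queries instead of string concatenation. An illustrative sketch using Python’s sqlite3 module (the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, pan TEXT)")
conn.execute("INSERT INTO cards VALUES ('alice', '4111111111111111')")

def lookup_unsafe(holder):
    # Vulnerable: attacker-controlled input is concatenated into the statement.
    return conn.execute(
        "SELECT pan FROM cards WHERE holder = '%s'" % holder).fetchall()

def lookup_safe(holder):
    # Parameterized: the driver binds the value, so quotes can't break out.
    return conn.execute(
        "SELECT pan FROM cards WHERE holder = ?", (holder,)).fetchall()

attack = "x' OR '1'='1"
print(lookup_unsafe(attack))  # dumps every row
print(lookup_safe(attack))    # matches nothing
```

The unsafe version turns the classic `' OR '1'='1` payload into a query that returns every card; the parameterized version treats the whole string as a literal holder name.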
Only six confirmed breaches resulted from an attack exploiting a patchable vulnerability.
Were any from 0-days? Also, why bother with things that can be patched when you can use things like SQL-injection and default passwords? Path of least resistance and all.
“Figure 22: Over half the attacks could have been performed by script kiddies or people with even less skill.”
At first glance, this is one of the best justifications for something like PCI. By even raising the bar a little we can reduce the number of incidents. Interestingly, it would not necessarily reduce the overall number of records lost, due to the high number of records (95%) falling into the “high skill” category. I still think it’s worthwhile, though. If we can, with minimal effort, reduce the number of incidents, that means more time to focus on the high-impact issues in the long run.
The final recommendations from the report:
- Changing default credentials is key
- Avoid shared credentials
- User account review
- Application testing and code review
- Smarter patch management strategies
- Human resources termination procedures
- Enable application logs and monitor them
- Define “suspicious” and “anomalous” (then look for whatever “it” is)
Looks like the basics of a good information security program. However, I do have a couple of nitpicks.
Rec #5. There are other reasons to patch quickly and comprehensively than stopping data-loss-oriented breaches. So saying that patches can be a lower priority is somewhat overly focused on one aspect of security. It’d be very interesting to see the numbers for other types of incidents and what the impact of patching is on them.
Recs #7 and #8. I’m a big fan of logging; however, unless you have the right tools, these are effectively impossible to do. And at this point I think effective log monitoring and traffic analysis is within the capability of only a select few enterprises. Side thought: are orgs that use MSSPs for log monitoring having fewer incidents, or at least lower-impact incidents, versus those that are doing it themselves?
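That said, “define suspicious and look for it” (rec #8) can start very simply, e.g., flagging counts far above a robust baseline. A toy sketch, assuming hypothetical per-account failed-login counts pulled from one day of logs:

```python
from statistics import median

# Hypothetical failed-login counts per account for one day of logs.
failed_logins = {"alice": 2, "bob": 1, "carol": 3, "svc_backup": 40}

def anomalous(counts, multiplier=10):
    """Flag accounts whose count exceeds `multiplier` times the median.
    The median gives a baseline the outlier itself can't easily skew."""
    baseline = median(counts.values())
    return sorted(name for name, count in counts.items() if count > multiplier * baseline)

print(anomalous(failed_logins))
```

It’s deliberately crude, but even this level of “anomalous” detection is more than many shops were doing, and it’s the kind of thing an MSSP could run across all its clients.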
Rec #4. Given the prevalence of SQL injection, it would have been really nice to see a breakdown of whether the SQL injection was in a COTS package or an internally developed one.