Measuring ROI for DMARC

I’m pleased to be able to share work that Shostack & Associates and the Cyentia Institute have been doing for the Global Cyber Alliance. In doing this, we created some new threat models for email, along with new statistical analysis of the savings that DMARC deployment delivers.

It shows that the 1,046 domains that have successfully activated strong protection with GCA’s DMARC tools will save an estimated $19 million to $66 million from limiting BEC in 2018 alone, and those organizations will continue to reap that reward in every year in which they maintain their DMARC deployment.

Their press release from this morning is here, and the report download is here.
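
To make “strong protection” concrete: a DMARC policy is published as a DNS TXT record at _dmarc.<domain>, and a policy of quarantine or reject is what actually enforces protection (p=none only monitors). Here’s a minimal sketch in Python of classifying a record’s strength; the example record and helper names are mine, illustrative rather than anything from the report or GCA’s tools.

    # Minimal sketch: classify the strength of a DMARC policy record.
    # The record below is illustrative; in practice you would fetch the
    # TXT record published at _dmarc.<your-domain> from DNS.

    def parse_dmarc(record: str) -> dict:
        """Split a DMARC TXT record into tag/value pairs."""
        tags = {}
        for part in record.split(";"):
            if "=" in part:
                key, _, value = part.strip().partition("=")
                tags[key.strip().lower()] = value.strip()
        return tags

    def policy_strength(record: str) -> str:
        policy = parse_dmarc(record).get("p", "none").lower()
        if policy in ("quarantine", "reject"):
            return "strong"        # spoofed mail is quarantined or rejected
        return "monitor-only"      # p=none collects reports, blocks nothing

    example = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
    print(policy_strength(example))   # -> strong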

GAO Report on Equifax

I have regularly asked why we don’t know more about the Equifax breach, including in comments in “That Was Close! Reward Reporting of Cybersecurity ‘Near Misses’.” These questions are not intended to attack Equifax. Rather, we can use their breach as a mirror in which to reflect: to ask questions about how defenses work, and to learn things we can bring to our own systems.

Ian Melvin was kind enough to point out a GAO report, “Actions Taken by Equifax and Federal Agencies in Response to the 2017 Breach.” As you’d expect of a GAO report, it is level-headed and provides a set of facts.


However, I still have lots of questions. Some very interesting details start on page 11:

Equifax officials added that, after gaining the ability to issue system-level commands on the online dispute portal that was originally compromised, the attackers issued queries to other databases to search for sensitive data. This search led to a data repository containing PII, as well as unencrypted usernames and passwords that could provide the attackers access to several other Equifax databases. According to Equifax’s interim Chief Security Officer, the attackers were able to leverage these credentials to expand their access beyond the 3 databases associated with the online dispute portal, to include an additional 48 unrelated databases.

The use of encryption allowed the attackers to blend in their malicious actions with regular activity on the Equifax network and, thus, secretly maintain a presence on that network as they launched further attacks without being detected by Equifax’s scanning software. (Editor’s note: I’ve inverted the order of the paragraphs from the source.)

So my questions include:

  • How did the attackers get root?
  • Why wasn’t the root shell noticed? Would our organization notice an extra root shell in production?
  • How did they get access to the other 48 databases?
  • Why didn’t the pattern of connections raise a flag? “As before, Equifax
    officials stated that the attackers were able to disguise their presence by
    blending in with regular activity on the network.” I find this statement surprising, and it raises questions: Does the dispute resolution database normally connect to these other databases and run the queries that were run? How was that normal activity characterized and analyzed? Encryption provides content confidentiality, not metadata confidentiality. Would we detect these extra connections? (A sketch of one way to check follows this list.)
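
On that last question: encryption hides payloads, not the existence of connections, so flow metadata alone is enough to ask whether a front-end system is talking to databases it has never talked to before. Here’s a minimal sketch; the baseline and the observed flows are made up for illustration.

    # Minimal sketch: flag database connections that fall outside a learned
    # baseline of (client, database) pairs. All records here are invented.

    baseline = {
        ("dispute-portal", "dispute-db-1"),
        ("dispute-portal", "dispute-db-2"),
        ("dispute-portal", "disputeute-db-3") if False else ("dispute-portal", "dispute-db-3"),
    }

    observed = [
        ("dispute-portal", "dispute-db-1"),
        ("dispute-portal", "hr-db"),          # not in the baseline
        ("dispute-portal", "credit-db-17"),   # not in the baseline
    ]

    for client, db in (pair for pair in observed if pair not in baseline):
        print(f"ALERT: {client} connected to {db}, outside its baseline")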

Specifically, while Equifax had installed a device to inspect network traffic for evidence of malicious activity, a misconfiguration allowed encrypted traffic to pass through the network without being inspected. According to Equifax officials, the misconfiguration was due to an expired digital certificate. The certificate had expired about 10 months before the breach occurred, meaning that encrypted traffic was not being inspected throughout that period.
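
That expired certificate is the kind of quiet failure that is cheap to check for continuously. Here’s a minimal sketch; it inspects an ordinary public TLS endpoint (the host name is a placeholder) as a stand-in for whichever certificates your inspection gear depends on.

    # Minimal sketch: warn when a TLS certificate is approaching expiry.
    # The host is a placeholder; point this at the endpoints you care about.
    # Note: a certificate that has already expired will fail the handshake,
    # which is itself a loud signal worth alerting on.
    import socket
    import ssl
    import time

    def days_until_expiry(host: str, port: int = 443) -> float:
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = ssl.cert_time_to_seconds(cert["notAfter"])
        return (expires - time.time()) / 86400

    remaining = days_until_expiry("www.example.com")
    if remaining < 30:
        print(f"WARNING: certificate has {remaining:.0f} days left; renew it")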

Would your organization notice if one of dozens or hundreds of IDSs went quiet for a week, or if one ruleset stopped firing?
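
Here’s a sketch of the kind of check that would catch that: compare each sensor’s (or ruleset’s) recent alert volume to its own history, and raise an alarm when a normally chatty source goes silent. The sensor names and counts are invented.

    # Minimal sketch: flag sensors or rulesets whose alert volume collapses
    # relative to their own history. Names and counts are invented.

    history = {                # average alerts per day, trailing 90 days
        "ids-dmz-01": 240.0,
        "ids-core-02": 310.0,
        "ssl-inspect-03": 180.0,
    }

    today = {                  # alerts seen in the last 24 hours
        "ids-dmz-01": 225,
        "ids-core-02": 298,
        "ssl-inspect-03": 0,   # the interesting case: a source gone quiet
    }

    for sensor, avg in history.items():
        seen = today.get(sensor, 0)
        if avg >= 10 and seen < 0.1 * avg:
            print(f"ALERT: {sensor} produced {seen} alerts vs ~{avg:.0f}/day normally")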

More published incident reports will help us get smarter, and provide better answers to the questions that CEOs and boards are asking: could this happen to us? With this report, we can answer that question better, but still not well.

Half the US population will live in 8 states

That’s the subject of a thought-provoking Washington Post article, “In about 20 years, half the population will live in eight states,” which adds that 70 percent of Americans will live in just 15 states. “Meaning 30 percent will choose 70 senators. And the 30% will be older, whiter, more rural, more male than the 70 percent.” (The arithmetic: those 15 states elect 30 of the 100 senators, leaving the other 35 states, home to 30 percent of the population, to elect the remaining 70.) Of course, as the census shows the population shifting, the makeup of the House will also change dramatically.

Maybe you think that’s good, maybe you think that’s bad. It certainly leads to interesting political times. Maybe even a bit of chaos, emerging.

Averting the Drift into Failure

This is a fascinating video from the DevOps Enterprise Summit:

“the airline that reports more incidents has a lower passenger mortality rate. Now what’s fascinating about this … we see this data replicated across various domains, construction, retail, and we see that there is this inverse correlation between the number of incidents reported, the honesty, the willingness to take on that conversation about what might go wrong, and things actually going wrong.”

The speaker’s website is sidneydekker.com; there’s some really interesting material there.
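
To make the shape of that claim concrete, here’s a toy calculation of the inverse relationship Dekker describes. The per-airline numbers are invented purely for illustration, and statistics.correlation requires Python 3.10 or later.

    # Toy illustration of an inverse correlation between reporting and harm.
    # The numbers are invented; they are not Dekker's data.
    from statistics import correlation

    incident_reports = [120, 95, 60, 40, 25, 10]     # reports filed per airline
    fatality_rate = [0.2, 0.3, 0.5, 0.6, 0.9, 1.1]   # deaths per million flights

    r = correlation(incident_reports, fatality_rate)
    print(f"Pearson r = {r:.2f}")   # strongly negative: more reporting, less harm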

Calls for an NTSB?

In September, Steve Bellovin and I asked “Why Don’t We Have an Incident Repository?”

I’m continuing to do research on the topic, and I’m interested in putting together a list of such things. I’d like to ask you for two favors.

First, if you remember such things, can you tell me about them? I recall “Computers at Risk,” the National Cyber Leap Year report, and the Bellovin & Neumann editorial in IEEE S&P. Oh, and “The New School of Information Security.” But I’m sure there have been others.

In particular, what I’m looking for are calls like this one in Computers at Risk (National Academies Press, 1991):

3a. Build a repository of incident data. The committee recommends that a repository of incident information be established for use in research, to increase public awareness of successful penetrations and existing vulnerabilities, and to assist security practitioners, who often have difficulty persuading managers to invest in security. This database should categorize, report, and track pertinent instances of system security-related threats, risks, and failures. […] One possible model for data collection is the incident reporting system administered by the National Transportation Safety Board… (chapter 3)

Second, I am trying to do searches such as ‘cites “Computers at Risk” and contains “NTSB”.’ I have tried without luck to do this on Google Scholar, Microsoft Academic, and Semantic Scholar. Only Google seems to reliably identify that report. Is there a good way to perform such a search?
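
One workaround I’ve been experimenting with is to pull a paper’s citing papers from the Semantic Scholar Graph API and search them locally. The sketch below assumes that API’s paper/{id}/citations endpoint; the paper ID is a placeholder you’d replace once you locate “Computers at Risk” in their corpus, so treat this as a starting point rather than an answer.

    # Sketch: list papers that cite a given work and mention "NTSB".
    # Assumes the Semantic Scholar Graph API; PAPER_ID is a placeholder.
    import requests

    PAPER_ID = "PLACEHOLDER_PAPER_ID"   # replace with the ID for "Computers at Risk"
    url = f"https://api.semanticscholar.org/graph/v1/paper/{PAPER_ID}/citations"
    params = {"fields": "title,abstract", "limit": 100}

    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()

    for item in resp.json().get("data", []):
        paper = item.get("citingPaper", {})
        text = f"{paper.get('title', '')} {paper.get('abstract') or ''}"
        if "NTSB" in text:
            print(paper.get("title"))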

You say noise, I say data

There is a frequent claim that stock markets are somehow irrational, unable to properly price the impact of cyber incidents. (That’s not usually precisely how people phrase it.) I like this chart of one of the largest credit card breaches in history:

Target Stock

It provides useful context as we consider this quote:

On the other hand, frequent disclosure of insignificant cyberincidents could overwhelm investors and harm a company’s stock price, said Eric Cernak, cyberpractice leader at the U.S. division of German insurer Munich Re. “If every time there’s unauthorized access, you’re filing that with the SEC, there’s going to be a lot of noise,” he said.
(Corporate Judgment Call: When to Disclose You’ve Been Hacked, Tatyana Shumsky, WSJ)

Now, perhaps Mr. Cernak’s words have been taken out of context. After all, it’s a single sentence in a long article, and the lead-in, which is a paraphrase, may confuse the issue.

I am surprised that an insurer would be opposed to having more data from which they can try to tease out causative factors.

Image from The Langner Group. I do wish it showed the S&P 500.
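
Since I wish it showed the S&P 500: the usual way to make that comparison is to look at the stock’s return net of the market’s over a window around disclosure. A minimal sketch, assuming you’ve exported daily closing prices for both series to CSV; the file names, column names, and window dates are mine, chosen to bracket Target’s December 2013 disclosure.

    # Sketch: compare a stock's move to the market's over a disclosure window.
    # target.csv and sp500.csv are hypothetical exports with Date,Close columns.
    import pandas as pd

    target = pd.read_csv("target.csv", parse_dates=["Date"], index_col="Date")["Close"]
    sp500 = pd.read_csv("sp500.csv", parse_dates=["Date"], index_col="Date")["Close"]

    window = slice("2013-12-11", "2014-01-10")   # brackets the December 2013 disclosure
    t_ret = target.loc[window].pct_change().add(1).prod() - 1
    m_ret = sp500.loc[window].pct_change().add(1).prod() - 1

    print(f"Target: {t_ret:.1%}  S&P 500: {m_ret:.1%}  excess: {t_ret - m_ret:.1%}")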

Security Lessons From Star Wars: Breach Response

To celebrate Star Wars Day, I want to talk about the central information security failure that drives Episode IV: the theft of the plans.

First, we’re talking about really persistent threats. Not like this persistence, but the “many Bothans died to bring us this information” sort of persistence. Until members of Comment Crew start going missing, we need a term more like ‘pesky’ to help us keep perspective.

Kellman Meghu has pointed out that once the breach was detected, the Empire got off to a good start on responding to it. They were discussing risk before they descended into bickering over ancient religions.

But there’s another failure: knowledge of the breach apparently never leaves that room, and there’s no organized activity to consider questions such as:

  • Can we have a red team analyze the plans for problems? This would be easy to do with a small group.
  • Should we re-analyze our threat model for this Death Star?
  • Is anyone relying on obscurity for protection? This would require letting the engineering organization know about the issue, and asking people to step forward if the theft of the plans impacts security. (Of course, we all know that the Empire is often intolerant, and there might be a need for an anonymous drop box.)

If the problem hadn’t been so tightly held, the Empire might not have gotten here:

Tarkin and Bast

General Bast: We’ve analyzed their attack, sir, and there is a danger. Should I have your ship standing by?

Grand Moff Tarkin: Evacuate? In our moment of triumph? I think you overestimate their chances.

There are a number of things that might have been done had the Empire known about the weakly shielded exhaust port. For example, they might have welded some steel beams across that trench. They might have put some steel plating up near the exhaust port. They might have landed a TIE fighter in the trench. They could have deployed some stormtroopers with those tripod-mounted guns that never quite seem to hit the Millennium Falcon. Maybe it’s easier in a trench. I’m not sure.

What I am sure of is that there are all sorts of possible responses, and all of them depend on information leaving the hands of those six busy executives. Holding the information too closely magnified the effect of those Bothan spies.

So this May the Fourth, ask yourself: is there information that you could share more widely to help defend your empire?

Exploit Kit Statistics

On a fairly regular basis, I come across pages like this one from SANS, which contain fascinating information taken from exploit kit control panels:

Exploit Kit Control panel

There are all sorts of interesting numbers in that picture. For example, the success rate for owning XP machines (19.61%) is three times that of Windows 7. (As an aside, the XP number is perhaps lower than “common wisdom” in the security community would have it.) There are also numbers for the success rates of individual exploits, ranging from Java OBE at 35% down to MDAC at 1.85%.

That’s not the only captured control panel; there are more from, for example, M86, SpiderLabs, and Webroot.

I’m fascinated by these numbers, and have two questions:

  • Is anyone capturing these statistics and tracking them over time? (A sketch of the sort of aggregation I mean follows this list.)
  • Is there an aggregation of all these captures? If not, what are the best search terms to find them?
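
To be concrete about the first question, here’s the sort of minimal aggregation I have in mind. The capture records and field names are invented, since I don’t know of a standard schema for these panel screenshots.

    # Sketch: aggregate per-OS exploit success rates across captured panels.
    # The capture records are invented; the field names are mine, not a standard.
    from collections import defaultdict

    captures = [
        {"source": "SANS 2013-04", "os": "Windows XP", "loads": 196, "visitors": 1000},
        {"source": "SANS 2013-04", "os": "Windows 7",  "loads": 65,  "visitors": 1000},
        {"source": "M86 2012-11",  "os": "Windows XP", "loads": 210, "visitors": 1100},
    ]

    totals = defaultdict(lambda: {"loads": 0, "visitors": 0})
    for row in captures:
        totals[row["os"]]["loads"] += row["loads"]
        totals[row["os"]]["visitors"] += row["visitors"]

    for os_name, t in sorted(totals.items()):
        rate = t["loads"] / t["visitors"]
        print(f"{os_name}: {rate:.1%} success across {t['visitors']} visitors")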

Analyzing The Army's Accidental Test

According to Wired, “Army Practices Poor Data Hygiene on Its New Smartphones, Tablets.” And I think that’s awesome. No, really, not the ironic sort of awesome, but the awesome sort of awesome, because what the Army is doing is a large scale natural experiment in “does it matter?”

Over the next n months, the Pentagon’s IG can compare incidents in the Army to those in the Navy and the Air Force, and see who’s doing better and who’s doing worse. In theory, the branches of the military should all be otherwise roughly equivalent in security practice and culture (compared to, say, Twitter’s corporate culture, or that of Goldman Sachs).

With that data, they can assess if the compliance standards for smartphones make a difference, and what difference they make.

So I’d like to call on the Army to not remediate any of the findings for 30 or 60 days. I’d like to call on the Pentagon IG to analyze incidents in a comparative way, and let us know what he finds.
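
The comparative analysis itself is not hard. Here’s a sketch of its shape: a chi-squared test on incident counts per branch, with entirely invented numbers, just to show the kind of question the IG could answer.

    # Sketch: are incident rates distinguishable across branches?
    # Counts are entirely invented; only the shape of the test matters.
    from scipy.stats import chi2_contingency

    #            incidents, device-months without an incident
    army      = [34, 9966]
    navy      = [21, 7979]
    air_force = [18, 8982]

    chi2, p, dof, expected = chi2_contingency([army, navy, air_force])
    print(f"chi-squared = {chi2:.2f}, p = {p:.3f}")
    # A small p-value suggests the branches' incident rates genuinely differ.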

Update: I wanted to read the report, which, as it turns out, has been taken offline. (See Consolidated Listing of Reports, which says “Report Number DODIG-2013-060, Improvements Needed With Tracking and Configuring Army Commercial Mobile Devices, issued March 26, 2013, has been temporarily removed from this website pending further review of management comments.”)

However, based on the Wired article, this is not a report about breaches or bad outcomes, it’s a story about controls and control objectives.

Spending time or money on those controls may or may not make sense. Without information about the outcomes experienced without those controls, the efficacy of the controls is a matter of opinion and conjecture.

Further, spending time or money on those controls is at odds with other things. For example, the Army might choose to spend 30 minutes training every soldier to password lock their device, or they could spend that 30 minutes on additional first aid training, or Pashtun language, or some other skill that they might, for whatever reason, want soldiers to have.

It’s well past time to stop focusing on controls for the sake of controls, and start testing our ideas. No organization can afford to implement every idea. The Army, the Pentagon IG and other agencies may have a perfect opportunity to test these controls. To not do so would be tragic.

[/update]

Breach Analysis: Data Source Biases

Bob Rudis has a fascinating and important post, “Once More Into The [PRC Aggregated] Breaches.” In it, he delves into the various data sources that the Privacy Rights Clearinghouse is tracking.

In doing so, he makes a strong case that data source matters, or as Obi-Wan said, “Luke, you’re going to find that many of the truths we cling to depend greatly on our own point of view”:

Breach count by metatype and year

I don’t want to detract from the work Bob’s done. He shows pretty clearly that human and accidental factors now exceed technical ones as a source of incidents that reveal PII. With that important result in mind, I do want to add two points.
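
For anyone who wants to reproduce that sort of tally from the Privacy Rights Clearinghouse export, here’s a minimal sketch; the file name and column names are my guesses at the shape of such an export, not the actual PRC schema.

    # Sketch: count breach reports per breach-type category per year.
    # File and column names are guesses at a PRC-style export, not the real schema.
    import csv
    from collections import Counter

    counts = Counter()
    with open("prc_breaches.csv", newline="") as f:
        for row in csv.DictReader(f):
            counts[(row["Year"], row["Breach Type"])] += 1

    for (year, breach_type), n in sorted(counts.items()):
        print(f"{year}  {breach_type:25s} {n}")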

First, I reported a similar result in work released in Microsoft SIR v11, “Zeroing in on Malware Propagation Methods.” Of course, I was analyzing malware, rather than PII incidents. We need to get away from the idea that security is a purely technical problem.

Second, it’s time to extend our reporting regimes so that there’s a single source for data. The work done by non-profits like the Open Security Foundation and the Privacy Rights Clearinghouse has been awesome. But these folks are spending a massive amount of energy to collect data that ought to be available from a single source.

As we talk about mandatory breach disclosure and reporting, new laws should create and fund a single place where those reports must go. I’m not asking for additional data here (although additional data would be great). I’m asking that the reports we have now all go to one additional place, where an authoritative record will be published.

Of course, anyone who studies statistics knows that there are often different collections, and competition between resources. You can get your aircraft accident data from the NTSB or the FAA. You can get your crime statistics from the FBI’s Uniform Crime Reports or the National Crime Victimization Survey, and each has advantages and disadvantages. But each is produced because we consider the data an important part of overcoming the problem.

Many nations consider cyber-security to be an important problem, and it’s an area where new laws are being proposed all the time. These new laws really must make the data easier for more people to access.