On Disclosure of Intrusion Events in a Cyberwar
[This guest article is by thegrugq. I’ve taken the liberty of HTML-ifying it from his original, http://pastie.org/5673568.]
The Nation State’s guide to STFU
In a cyberwar (such as the ongoing events on the Internet), all actors are motivated to remain silent about incidents that they detect. However, on some occasions, strategic and political considerations will be more powerful motivators. These rare disclosure events don’t negate the primary motivations for remaining silent; they simply demonstrate that sometimes there are better reasons for speaking out.
TL;DR: actors in a cyberwar are motivated not to disclose incidents, but sometimes strategic and/or political realities take precedence.
I discussed this briefly with Adam Shostack over Twitter, but the constraints of the medium limited the depth of the discussion. Recently Adam posted a blog post that more deeply explored his position. He believes that actors in a cyberwar are not (always) motivated to remain silent. He also proposes a methodology for selecting incidents to disclose, and then lays out several benefits that he believes such disclosure provides. I still think he is wrong. Rather, he got the right answer for the wrong reason.
Rather than addressing his arguments in detail (because I don’t find fault with his logic, it is his premise that is incorrect), I will lay out the reasoning behind my position. This will provide a more comprehensive understanding of an important aspect of cyberwar, one frequently ignored in the discussions — Counter Intelligence (COINTEL). I’ll briefly outline some core COINTEL concepts, then apply them to the current cyberwar, and then finally agree with Adam’s conclusion anyway.
Firstly, yes, my understanding of the motivations of the actors in the cyberwar is partially informed by discussions I’ve had with active participants. However, more importantly, I’ve spent the last year studying counter intelligence and looking at how to apply it to cyberwar. Part of that research was presented in my [OPSEC for Hackers] talk. The following arguments are therefore from the point of view of someone who views the ongoing cyberwar as primarily a series of espionage operations and activities.
NOTE: I must emphasize that what I outline here is pure speculation. I have no security clearance with any country, so I have no secret knowledge. My opinions are informed only by open source materials (re: I read books and stuff).
COINTEL for dummies
In the broadest sense, COINTEL is the practice of defending against and attacking the intelligence capabilities of the adversary (I will use “adversary” and “opposition” interchangeably). Although there have been several attempts to categorize COINTEL strategies, I’m partial to the following: basic denial, adaptive denial, and manipulation.
- Basic Denial: This includes techniques and methodologies that restrict the amount of intel the adversary is able to collect. Many OPSEC techniques are basic denial techniques.
- Adaptive Denial: These techniques and methodologies are developed to address specific vulnerabilities that exist in your organisation. For example, if you learn (via your own intelligence capabilities) that the adversary is monitoring your phone calls, switching to couriers for comms would be adaptive denial. Because it requires some capacity for determining the adversary’s capabilities and then enacting a response, this is a more advanced COINTEL practice.
- Manipulation: This is when you specifically target the intelligence agencies of the adversary and attempt to control their understanding of your capabilities, methodologies, membership, techniques, tools, and so on. This is an expensive, risky and tricky practice to pull off effectively. It requires significant resources to plan and orchestrate successfully.
In the realm of cyberwar, basic denial techniques that prevent the opposition from learning your capabilities are crucial. For example, if the opposition learns about a bug that you have, they may patch it and neutralize your capability. The same applies to the tools and techniques that are components in your toolchain.
Terrorism and Counterintelligence: How Terrorist Groups Elude Detection, Blake W. Mobley, 2012.
Motivators to STFU
There are several reasons that I believe an actor in the global game of cyberwar is motivated to practice basic denial about intrusion incidents and STFU. The strongest reasons, I believe, are:
- creating uncertainty in the adversary regarding his success rate
- preventing the adversary from engaging in adaptive denial
- creating scenarios where manipulation is possible, and
- enabling back hacks against the adversary
Fear, Uncertainty and Doubt
Not disclosing known intrusions denies the adversary knowledge of his success rate (as measured by covert persistence). Without feedback on which boxes and networks he controls vs. those he only believes he controls, his confidence is diminished. He is also significantly more likely to utilize a compromised resource that is under active surveillance or has been otherwise neutralized. The adversary’s military leaders will also be less confident that they can utilize a specific capability, and may even be completely dissuaded from trying.
Additionally, if the opponent learns that their operation was a failure (e.g. their intrusion was discovered and cleaned up), they are likely to attempt it again. Subsequent operations by the adversary might not be successfully detected and thwarted.
Stop Adaptive Denial
The adversary is an intelligent, dynamic opponent who will alter his tools, techniques and methodologies to remain effective. By denying the adversary information about which of his operations have been discovered, and how, you are reducing his ability to detect and address vulnerabilities within his tradecraft. Keeping the knowledge of these vulnerabilities to yourself (and possibly your allies) provides you with an advantage against the adversary. Maintaining this advantage is, obviously, in your best interest. Therefore, practicing basic denial and not disclosing which of the subset of successful intrusions you have detected, and particularly how they were detected, is an important COINTEL practice.
The motivation here can be summed up as: “keep the adversary’s knowledge about our knowledge of his activities, capabilities and techniques in the ‘known unknowns’ quadrant”.
Enable Manipulation Opportunities
Once the adversary has successfully conducted a computer network attack (CNA), they (a) want to avoid having to do it again, and (b) seek to profit from it. Typically this is accomplished by installing malicious software that will provide surreptitious access to the adversary. The adversary can then search the computer for operationally relevant data. 
This situation presents a few interesting opportunities for a COINTEL manipulation operation. The obvious one is to provide fake data that appears legitimate but is useless, dangerous or even a lure. A publicly known example of this is in The Cuckoo’s Egg, where fake documents were used to provide attribution (the KGB did it!) as well as prove malicious intent (the hackers weren’t just playing around on the system).
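The fake-data lure described above can be sketched as a simple canary token. This is a minimal illustration only, not anything from the original post: the function names and the decoy template are hypothetical. The idea is to seed a decoy document with a unique random marker, so that any later sighting of the marker proves the decoy was taken and used.

```python
import secrets

def make_decoy(template):
    """Fill a decoy-document template with a unique random token.

    Returns (decoy_text, token). Any later appearance of the token
    (in captured traffic, in a leak, in the adversary's hands) is
    evidence that this particular decoy was exfiltrated.
    """
    token = secrets.token_hex(8)  # 16 hex chars, unique per decoy
    return template.format(token=token), token

def token_sighted(observed_text, token):
    """True if the canary token appears in some observed material."""
    return token in observed_text
```

Because each decoy gets its own token, sightings can be traced back to the specific system (or specific adversary) the decoy was planted for.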
Typically, when an intelligence agency uncovers the agent of an opponent, they do not shut them down (e.g. arrest them). There is far more benefit to be gained by allowing the agent to continue to operate… under very heavy surveillance. If the opposition’s agent is a penetration (a “mole”), they will be “packed in cotton wool” and left in place. Monitoring who a known agent interacts with, how they operate their tradecraft and what sort of information they are looking for provides tremendous opportunities for insight into the opposition’s intelligence agency and operational objectives. Finally, this known agent can be used to feed false information to the adversary.
If the known agent of the adversary is successfully recruited to become your agent, this, in traditional intelligence lingo, would be a “double agent”. This opens a channel into the opposition’s intelligence agencies. A deliberate operation to create such a channel would use a “dangle”, essentially a lure to attract the adversary’s attention.
The similarities with honeypots should be obvious.
Publicly announcing and disclosing an intrusion that could still yield valuable intelligence is an extremely poor use of a scarce resource. Granted, the sheer scale of the cyberwar and the massive number of incidents reduce the value of any single event. Additionally, the limited COINTEL resources of the actors would seem to limit the utility of manipulation; however, it remains an intriguing possibility.
NOTE: we are ignoring directly destructive or disruptive attacks against the computer, such as Stuxnet, to focus specifically on the espionage angle.
Back Hack the Adversary
First, a brief history lesson. In the 1990s, hackers used to put systems online with the latest rumored vulnerabilities. They would monitor to see when the systems were hacked, and from where. The hacker would then hack each bounce box back up the chain (hence “back hack”) until he was in a position to collect the adversary’s toolchain. This was one way that 0days and private tools were stolen. The technique predates honeypots.
As has been noted in numerous research reports, the quality of the adversary’s toolchain varies considerably, and generally tends towards shoddy. Exploitable bugs in the C&C software used in intrusions are common, and are typically easy to find and exploit. Laurent Oudot published a large number of such bugs at SyScan Singapore 2010 (unfortunately the conference archive isn’t online, so here’s the announcement on seclists: http://seclists.org/fulldisclosure/2010/Jun/432).
One possible COINTEL operation would be to replace the opponent’s software with a malicious version that attacks the C&C infrastructure. This would enable any number of follow-up operations to exploit the intelligence opportunities. A recent public example of this was the “Georgia Hacker”, well summarized in this [article].
COINTEL SHMOINTEL, it’s an election year!
This outlines my position on why actors in the global game of cyberwar are motivated to remain silent about incidents. These motivators are all COINTEL-based.
COINTEL is a powerful guiding force in information warfare. But it is not, of course, the only consideration. This is where I have to agree with Adam’s conclusion. The value of a COINTEL operation, whether basic denial or manipulation, has to be judged against the value that can be gained from disclosing the incident. This judgment is for the politicians and other policy makers. It is a strategic decision that must be made to reflect policy and advance your own position (at least, that’s the theory).
There are instances where disclosing an intrusion and the details of that intrusion makes more sense than maintaining silence. There are also, I believe, instances where this is emphatically not the case. Unfortunately these decisions will be made by people who know little to nothing about computers or hacking.