Shostack + Friends Blog Archive

Phishing and Clearances

Apparently, the CISO of US Homeland Security, a Paul Beckman, said that:

“Someone who fails every single phishing campaign in the world should not be holding a TS SCI [top secret, sensitive compartmented information—the highest level of security clearance] with the federal government” (Paul Beckman, quoted in Ars Technica)

Now, I’m sure being in the government and trying to defend against phishing attacks is a hard problem, and I don’t want to ignore that real frustration. At the same time, GAO found that the government is having trouble hiring cybersecurity experts, and that was before the SF-86 leak.

Removing people’s clearances is one response. It’s not clear from the text whether these are phishing (strictly defined, an attempt to get usernames and passwords) or malware attached to the emails.

In each case, there are other fixes. The first would be multi-factor authentication for government logins. This was the subject of a push, and if agencies aren’t at 100%, maybe getting there is better than punitive action. Another fix could be an email client that makes phishing emails easier to spot. For example, the client could display the RFC-822 sender address (e.g., “<account.management@gmail.com>”) rather than the friendly display name for any address the user hasn’t previously sent email to. They could provide password management software with built-in anti-phishing (checking the domain before submitting the password). They could, I presume, do other things which minimize the demands on the human being.
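To make that last suggestion concrete, here is a minimal sketch of the kind of check a password manager might perform: refuse to fill a saved password unless the page requesting it is served over HTTPS from the domain the credential was saved for. The credential store and the domain logic below are illustrative assumptions, not any real product’s implementation.

```python
# A minimal sketch, assuming a hypothetical credential store, of the
# "check the domain before submitting the password" idea above. Names
# like saved_credentials are made up for illustration.
from urllib.parse import urlparse

# Hypothetical store: registrable domain -> (username, password)
saved_credentials = {
    "example.gov": ("jdoe", "correct horse battery staple"),
}

def registrable_domain(host: str) -> str:
    """Crude approximation: keep the last two labels of the hostname.
    A real password manager would consult the Public Suffix List."""
    labels = host.lower().rstrip(".").split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host.lower()

def fill_password(page_url: str):
    """Return saved credentials only if the page is HTTPS and its domain
    matches the domain the credential was saved for; otherwise refuse."""
    parsed = urlparse(page_url)
    if parsed.scheme != "https" or not parsed.hostname:
        return None  # never hand credentials to an insecure or malformed page
    return saved_credentials.get(registrable_domain(parsed.hostname))

# The legitimate site gets a fill; a look-alike phishing domain gets nothing,
# no matter how convincing the page looks to a tired human.
print(fill_password("https://portal.example.gov/login"))       # fills
print(fill_password("https://example-gov.login-portal.biz/"))  # refuses (None)
```

The point is the division of labor: the software performs the comparison that humans are demonstrably bad at making under time pressure.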

When Rob Reeder, Ellen Cram Kowalczyk and I created the “NEAT” guidance for usable security, we didn’t make “Necessary” first just because the acronym is neater that way; we put it first because the poor person is usually overwhelmed, and they deserve to have software make the decisions that software can make. Angela Sasse called this the ‘compliance budget,’ and it’s not a departmental budget; it’s a human one. My understanding is that those who work for the government already have enough things drawing on that budget. Making people anxious that they’ll lose their clearance and have to take a higher-paying private sector job should not be one of them.

7 comments on "Phishing and Clearances"

  • Fernando Montenegro says:

    Your points about having the software support the user make good sense. To the extent that it is possible, software should indeed help identify potential phishing and malware situations.

    At the same time, though, how does one work to reduce the moral hazard of just clicking on things when there are no consequences? Perhaps removing clearances might be a bit too harsh, but at what point does one address negligent behaviour?

  • Adam says:

    Hi Fernando,

    For there to be moral hazard, I think the person needs to be acting irresponsibly. I don’t think that’s the case. As Cormac Herley argued in “So Long and No Thanks for the Externalities,” people are rationally ignoring the advice in part because it’s very expensive.

    Interfaces today are designed to encourage clicking on things, and *reward* people for doing so. What behavior do you suggest should be engaged in, why do you think it can be done and will work?

    http://research.microsoft.com/en-us/um/people/cormac/papers/2009/SoLongAndNoThanks.pdf

  • Steve Battista says:

    I don’t totally agree. What was said was that if people fall for all phishing attacks, they are a risk. Clearances are used to determine who is a good risk for sensitive information. If a person is unable or unwilling to protect their information, wouldn’t they be considered a risk? I understand that there are very good phishing attempts that you can’t expect users to protect themselves against. You should, however, expect them to be able to ward off the simple and obvious attacks. If you do things like click on all links without reading a mouse-over, then that is unacceptable behavior. If delivery drivers don’t follow proper procedures and ram cars randomly, I’m sure they don’t stay employed. If your accounts payable people send millions of dollars based on a fraudulent email, then maybe they are not worth the risk of being in that job. Obviously, there are and need to be current and better protections against attacks (e.g. better phishing filters, dynamic IP and DNS blocking), and there needs to be training and exercises to ensure that proper behavior is taught and reinforced (e.g. phishing tests and awards for finding phishing). At what point do you decide that a person is not worth the risk for sensitive information?

  • AdamKB says:

    I think the view of the CISO (and one that I can understand, as a former federal employee with a clearance) is that people who HABITUALLY fall for these “phishing awareness tests” are demonstrating a worrying lack of responsibility. If someone came up to them in a bar and asked to borrow their entry card to a secure building, those people would obviously say no, but when the request is for credentials, via email, they suddenly feel as though it’s okay to hand over the keys?

    While I 100% agree with everything you mention here with regard to proper automated hardening procedures, I still feel as though implementing some type of consequence for negligence is necessary in a government setting, especially one which deals with classified information. For decades, federal employees with high-level clearances have received training in how to recognize and resist “old school” attempts to obtain information, many of which exploit social cues and rewards – much like the interfaces you mention which “encourage clicking on things.” I feel as though (and am glad to see that) Beckman believes government employees with a high-level clearance should have the mental fortitude and awareness to recognize and resist basic phishing attempts. They shouldn’t be expected to blindly follow normal behavioral norms, as holding a high-level clearance implies a certain level of maturity and responsibility above and beyond that of the poor, stressed, overworked worker described here.

  • Fernando Montenegro says:

    Adam, thank you for the reference to the Cormac paper. Loved it! I think the paper – particularly section 7 – should be mandatory reading to anyone in InfoSec.
    That being said, I don’t think that paper and what it describes apply so much to the discussion here: the threat environment we’re discussing is radically different, since these are not users concerned with liability on credit card purchases or the hassle of credit monitoring, but government officials with high-level clearances handling sensitive information. As other comments mention, we wouldn’t allow them to keep said clearances if they committed other types of indiscretions, so why should phishing be so different?
    The proposal appears to be to revoke clearances after several failures, not a single incident.
    As for moral hazard, one can argue that not being careful on your computer use while handling government credentials and holding a TS clearance IS acting irresponsibly…
    Respectfully,
    Fernando

  • Adam says:

    AdamKB, Fernando,

    The reason we might well treat these indiscretions differently is that we train people to engage in them. Both their employer and the wider world regularly confront them with password challenges, and stop them from proceeding unless they enter a password. Asking about the paint on the B-2 is not a situation they normally confront. So there are apparently normal, and probably common, situations in which a DHS computer asks for a username and password before proceeding. If that’s the case, then they’re being taught to engage in that behavior.

    Where can I find the guidance offered to DHS employees, and the usability testing showing that it’s possible to follow that guidance in both benign and malicious scenarios, at a reasonable cost in time given the number of prompts an employee typically faces in a day, and with a high rate of success?

    With such usability testing, I’m happy to concede that their behavior is inappropriately indiscreet.
