Shostack + Friends Blog Archive


SHB Session 4: Methodology

David Livingstone Smith chaired.

Angela Sasse: “If you only remember one thing: write down everything the user needs to do, and everything the user needs to know, to make the system work.” The results of failure are large and hard to measure (errors, frustration, annoyance, impact on processes and performance, coloring of user perceptions of security). Mentions ongoing work, “The Compliance Budget: Managing Security Behaviour in Organisations.” (I really like this work.) Figure out what your budget is in real and perceived effort, and spend it wisely. Very interesting pictures of the organizational effort required for compliance. Collecting data about how much security costs: actual and perceived workload; actual and perceived impact on business; actual and perceived risk-mitigation effect.

Bashar Nuseibeh, Open University. Background in mission-critical systems. Safety is about accidents; security is about intent. Privacy rights management on mobile apps. Mobile version of Facebook. Wanted to understand how people use Facebook on the move. Hard to do traditional interviews and questionnaires; need observation, but can’t do that with privacy. Uses “experience sampling”: short questionnaires, plus “memory phrases” to try to trigger recall during a later interview. Trying to understand sociocultural interactions, and the importance of boundaries to how those studied think about their privacy: “This information is for me only.” “This information is only for a subset of the group.” Some of Bashar’s work: “A Multi-Pronged Empirical Approach to Mobile Privacy Investigation”; “Security Requirements Engineering: A Framework for Representation and Analysis.”

James Pita is interested in security games where people are guarding physical assets. LAX has 8 terminals which need to be patrolled by K9 units. How to decide where to put the dogs to maximize security while minimizing determinism? Use a Stackelberg game: assign values of the assets to you and to the adversary. The goal is to make adversaries unable to make a “better” decision. Look at human uncertainty and the limits of decision-making processes; adversaries can’t observe perfectly. Deployed, as described in “Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport.”
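The Stackelberg setup can be sketched in a few lines: the defender commits to a randomized patrol, the attacker observes the mixed strategy and best-responds, and the defender picks the distribution that maximizes her expected utility given that response. All the payoff numbers and the brute-force grid-search solver below are hypothetical illustrations of the idea, not the deployed ARMOR system (which solves much larger games with proper optimization):

```python
# Toy one-resource Stackelberg security game over three targets.
# All payoff values are hypothetical, for illustration only.
targets = ["T1", "T2", "T3"]
def_loss = {"T1": 10, "T2": 6, "T3": 3}       # defender's loss if a target is hit while uncovered
atk_reward = {"T1": 8, "T2": 5, "T3": 4}      # attacker's reward if the target is uncovered
atk_penalty = {"T1": -4, "T2": -4, "T3": -4}  # attacker's payoff if caught

def attacker_best_response(cover):
    # The attacker observes the mixed strategy (via surveillance) and
    # attacks the target with the highest expected payoff.
    def eu(t):
        return cover[t] * atk_penalty[t] + (1 - cover[t]) * atk_reward[t]
    return max(targets, key=eu)

def defender_eu(cover):
    # Defender gets 0 if the attack is intercepted, -loss otherwise.
    t = attacker_best_response(cover)
    return -(1 - cover[t]) * def_loss[t]

# Brute-force search over coverage distributions for one patrol unit.
best_u, best_cover = None, None
steps = 100
for a in range(steps + 1):
    for b in range(steps + 1 - a):
        cover = {"T1": a / steps, "T2": b / steps, "T3": (steps - a - b) / steps}
        u = defender_eu(cover)
        if best_u is None or u > best_u:
            best_u, best_cover = u, cover

print("defender utility:", round(best_u, 2))
print("coverage:", {t: round(p, 2) for t, p in best_cover.items()})
```

The point of the exercise: the best randomized patrol strictly beats always guarding the most valuable terminal, because a deterministic schedule lets the adversary attack wherever the dogs aren’t.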

Markus Jakobsson gave a talk titled “Male, late with your credit card payment, and like to speed? You will be phished!” Auto insurance companies ask if you smoke, because smokers demonstrably disregard their own health. Jakobsson recruited 500 subjects on Mechanical Turk; 100 admitted to losing money to online fraud, and 100 others were chosen as a control. He asked subjects to rate the risks of various activities, and some were correlated with fraud victimization. There’s an obvious difference between what people say and what they do; this studied what people say. The correlation between “other online fraud” and not paying off a credit card balance was .19; between other online fraud and saving less than 5% for retirement, .17. Some other papers: “Social Phishing,” “Love and Authentication,” “Quantifying the Security of Preference-Based Authentication.”
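To make numbers like .19 and .17 concrete: a Pearson correlation can be computed directly from yes/no survey answers coded as 1/0. The ten responses below are made up purely to show the computation; the study’s actual data and sample sizes are of course different:

```python
from math import sqrt

# Made-up yes/no responses (1 = yes, 0 = no) from ten hypothetical subjects.
fraud_victim    = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
carries_balance = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]

def pearson(xs, ys):
    # Standard Pearson correlation coefficient: covariance over
    # the product of the standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(fraud_victim, carries_balance), 2))  # → 0.41
```

A correlation of .19 is weak by this scale (1.0 is perfect agreement, 0 is none), which is part of what makes the risk-profiling result interesting rather than obvious.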

Rachel Greenstadt talked about “Mixed Initiative Support for Security Decision Making.” Security decisions are hard for both humans and machines: context dependency, the requirement for specialized knowledge, sophisticated adversaries, etc. Machines: “I must be dancing with Jake, this guy knows Jake’s private key.” Computers could recognize other cues. How can agents mediate security decisions between humans and applications? Two security decisions: (1) should I log in, and (2) if I publish this anonymous essay, will my linguistic style betray me? In the (1) phishing case, Alice knows she wants to visit her bank. The device knows Alice is not visiting her bank, but doesn’t know that Alice thinks she is. They build appearance profiles, hashing HTML, CSS, and images with ssdeep fuzzy hashing, and match using a threshold. Very good numbers; talk too fast to grok. Also looked at the stylometry problem (who wrote this text?), which is dominated by AI techniques. Students wrote 500-word essays imitating Cormac McCarthy; their work was identified as his 60-80% of the time. Think about imitating someone else’s style to hide authorship. Papers: “Practical Attacks Against Authorship Recognition Techniques” (pre-print); “Reinterpreting the Disclosure Debate for Web Infections.”
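The appearance-profile idea is roughly: store a fuzzy fingerprint of a known site’s HTML/CSS/images, then flag a visited page whose content scores above a similarity threshold while being served from somewhere else. As a rough sketch, difflib’s similarity ratio stands in below for ssdeep’s compare score (ssdeep isn’t in the standard library), and the threshold and pages are hypothetical:

```python
from difflib import SequenceMatcher

# Stand-in for ssdeep fuzzy-hash comparison: a similarity score in [0, 1].
def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

THRESHOLD = 0.9  # hypothetical cut-off for "looks like the same site"

def matches_profile(page_html: str, profile_html: str) -> bool:
    # A page that looks like the bank's stored appearance profile but is
    # served from a different origin would be flagged as a likely phish.
    return similarity(page_html, profile_html) >= THRESHOLD

bank = ("<html><title>Example Bank</title>"
        "<form action='https://example-bank.com/login'>Log in</form></html>")
phish = bank.replace("example-bank.com", "examp1e-bank.net")  # near-identical clone
other = "<html><body>An unrelated page entirely</body></html>"

print(matches_profile(phish, bank))   # the clone scores above the threshold
print(matches_profile(other, bank))   # an unrelated page does not
```

Fuzzy matching is what makes this work against phishing kits, which copy a site almost verbatim but swap in their own domains and form targets; an exact hash would miss them.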

Mike Roe pitched a research question for someone else: what proportion of transsexuals are getting their drugs from black-market sources? Methodology: ask survey questions. Looking for an academic collaborator. Why public choice economists might be interested: cheap drugs, oops, slides gone.

Mike’s own research is about what attacks actually take place in the real world (of games). Initial assumption: virtual goods are worth real money, and programs are full of bugs, so someone will exploit the bugs to steal money. The assumption is OK, but those people aren’t causing a lot of problems that way. Bartle has a taxonomy of gamers: explorers, socializers, achievers, killers/griefers. Griefers try to annoy socializers or achievers.


Peter Neumann had a comment for Bashar. Lampson (1974) pointed out that if it’s not secure, it’s not reliable, and vice versa; QED, Bashar’s contrast isn’t sensible. Bashar responded on intent. (I’m skeptical.)

Dave Clark pushes on economics versus griefers. “To me griefers are a constant, but economic fraud grows over time.” Discussion of hacking for glory versus money. Schneier and I suggest it’s different people.

Sasha Romanosky mentions behavioral work that looks at willpower and choice. One theory is that willpower gets depleted over time; alternately, it’s like a muscle that strengthens. Has Angela Sasse looked at compliance budgets in these lights? She says it’s more complex: willpower is only one factor, and cues from the behavior of others are hugely important. People are more likely to put on weight if they’re with others putting on weight; insecure behavior spreads through organizations.

Andrew (?) asks about alternate methods and whether compliance budgets change. A Passfaces mechanism that people really liked, but that took a while to log in with, led to people logging in 70% less often. Rob Reeder asked if the perceived value of assets impacted compliance thresholds. Angela said yes.

Richard John asked about correlations and risk perceptions, and commented that Slovic’s work averaged across populations. He was shocked the correlations were so small. Slovic has also shown that different demographics show different risk perceptions. Markus said he didn’t want to ask about ethnicity.

[Update: Bruce Schneier’s blog post.]