SHB Session 8: How do we fix the world?

(Bruce Schneier has been running a successful prediction attack on my URLs, but the final session breaks his algorithm. More content to follow.)

So as it turns out, I was in the last session, and didn’t blog it. Bruce Schneier and Ross Anderson did. Matt Blaze has the audio. I’ll turn my comments into an update to this post.

Attempting to reconstruct what I said, or intended to say. (Yes, I suppose I could listen to the audio, but then again, this is more fun for me.)

So it's a struggle to say something new at the end of a workshop like this, and Ross Anderson said as much. So how do we fix security and human behavior? We first need some degree of understanding of what needs fixing. It's easy at the end of all the talks to think we know what's wrong, but I don't think we do. What leads to more failures: 0days or patches not installed? Authentication failures or configuration failures?

We don't know what goes wrong because people worry about a laundry list of consequences, from customers fleeing to stock price collapse, even though we know those don't really happen. So why don't we know what goes wrong? Shame. People are ashamed that their security is imperfect and don't want to talk about it.

This holds us back. When Angela Sasse talks about a compliance budget, we don’t know what to spend it on. When Diana Smetters discusses prioritizing how to train users, we don’t know what to prioritize. When Mark Stewart shows frequency/loss equations, we don’t know what to put into them (for information security).

So we need something like the National Transportation Safety Board, or a Truth and Reconciliation Commission which will hear testimony, ask questions, and provide analysis.

In order to improve security and human behavior, we need more and better data. And to get more and better data, we’ll need to overcome shame.

I'd also like to thank Addison Wesley for providing copies of The New School (the book) to the attendees of the workshop. As I've said, they've been a great publisher to work with.

SHB Session 7: Privacy

Tyler Moore chaired the privacy session.

Alessandro Acquisti, CMU. (Suggested reading: What Can Behavioral Economics Teach Us About Privacy?; Privacy in Electronic Commerce and the Economics of Immediate Gratification.) It's not that people act irrationally, it's that we need deeper models of their privacy choices. Illusion of control, over-confidence; in privacy, people seek ambiguity; people become more privacy-protecting after confidentiality notices. Two experiments. First, on herding effects: asked 6 intrusive questions about behavior (e.g., have you ever had sex with the current husband, wife or partner of a friend?). After answering each question, subjects were presented with an ostensible distribution of answers; in fact, the results were manipulated. Answers were "at least once," "never," or "refused to answer." Will answers trigger higher admission rates? Cumulative admission rates went way up as people were shown that others are admitting to sensitive behavior. (Is there a 'bragging' effect?) Second study: do privacy intrusions sensitize or de-sensitize people? Similar questions, going tame to intrusive, or intrusive to tame. "Frog" hypothesis rejected, coherent arbitrariness accepted.

Adam Joinson, Bath. (Suggested reading: Privacy, Trust and Self-Disclosure Online; Privacy concerns and privacy actions.) Experimental psychologist. Looking at expressive privacy, the need to communicate; it allows us to connect to other people (DeCew, 1997). Business case for expressive privacy. Social network design goal of bringing people from discovery to a superficial environment to true commitment. Looked at 426 Facebook users in the UK, US, Greece, France and Italy. Uses of the site varied between countries for photos, status updates, social investigation, social connections. Big differences in trust of Facebook itself. US and UK users made their privacy settings closed. Italians open their profiles and also join lots of groups. Trust in Facebook itself matters much more than trust in other users. More friends, less trust in peers. Lower trust leads to less use of the site. Second study: looked at tweets. Used SecretTweet versus Twitter. Clear linguistic markers: personal pronouns, sexual words, past tense. More exclamation marks in normal tweets. Checked whether the number of followers influences tweets. Does audience size matter? Two groups: fewer than 100, more than 200. No difference. Value creation depends on expressive privacy. If you don't provide privacy, you become banal.

Peter Neumann, SRI. (Suggested reading: Holistic systems; Risks; Identity and Trust in Context.) Holistic view. Three topics: health care, voting, and the Chinese Green Dam software. Has a medical privacy chart covering the 4 million people who can see medical information about you:

[Image: medicalprivacychart.jpg]

Continuing with Peter Neumann: claims Andrew is incorrect: we might be able to build secure voting machines, but we can't build secure voting systems. If we want privacy of votes, we need reliability. Only blind people were using DRE machines with a paper trail in California, and the paper trail does them no good.

Moves on to the Green Dam software. Everyone selling computers in China will have to put this software on them. The Chinese may be installing all sorts of trap doors. Any would-be solution to all three problems has to look at the pervasive nature of threats. Design requires understanding privacy threats and addressing them early. Ordinary people have no idea of the depth of the problem. Bottom line: I can't trust anybody to build a perfect system.

Eric Johnson, Dartmouth. (Suggested reading: Access Flexibility with Escalation and Audit; Security through Information Risk Management.) Working on a problem that many people consider solved: information access. Users request access, owners approve, systems are administered. But CIOs say access is failing in their organizations. People don't have the info they need; others have far too much. Did field studies in banks (investment and retail). In one bank, 22,000 employees and 11,000 roles. Another bank (in great shape) had more roles than employees. "Sit the managers down" is not a realistic approach. How to manage, how to understand & visualize, role drift. Origins of complexity: 1000s of applications, 100s of entitlements, 10Ks of users. In a perfect world, we could match permissions to requirements. Many orgs use the "get the job done" approach: copy & paste between employees. Result: over-entitlement; an estimated 50-90% of employees were over-entitled. Dynamic org structure makes it difficult to de-provision. Can track employees' tenure based on entitlement sets. Can we use incentives to control? In the medical space, standard practice is break-the-glass. (Image of "Hospital workers fired for looking at Spears' records.") Firing makes people risk averse.

Christine Jolls, Yale. (Suggested reading: Rationality and Consent in Privacy Law; Employee Privacy.) Law professor with economic training. Argues that (lawyer jokes aside) law can help privacy. People share information with their intimates. Recent research into human happiness: the most robust predictor of happiness is rich social interaction. An important aspect of privacy is control (hat tip danah). So how do we get control? Some technology, some law. Hardest problem: high-frequency, low-stakes inquiries. Wants to address a less important but under-noticed problem: people making decisions around privacy. Employers make demands for access to employees (let us look at your email, let us drug test you). Pattern in law: agree in advance, not binding; agree on the day, binding grant of access. Makes sense from a behavioral economics perspective. People think bad stuff won't happen to them. Second, people tend to be focused on the present; imminent things are more imposing. Some older cases: consumers would buy on credit, and the contract would allow home entry for repossession. Courts would disallow that, but allow repossession when someone showed up and got permission. Last comment: the politics of privacy law is unusual. Mentions a conservative justice arguing against searching of customs officials, and closes with a hope that customs agents will learn from the respect given to their privacy.

Andrew Adams, Reading. (Suggested reading: Regulating CCTV.) Starts with (and refutes) the claim that privacy is for old fogies. In fact, there's a different conception. Research focuses on attitudes towards self-revelation, by themselves or others. In the UK, Facebook, MySpace and Bebo have roughly a third of the market each. Asked people who they talk to. The principal reason to be online is to increase interaction with people they know in real life. Most UK students live in the "laundry belt": close enough for trips home for laundry, not so close that parents look over their shoulder (~2 hour drive). One use of new technology is to stay in touch with friends from high school. Some people reveal too much information. Interviewees don't say they revealed too much, but some did say that they had been "inappropriate" and revealed extra information about themselves that an employer might see. Wanted more guidance about what to reveal. One thing that came through clearly: it's not OK to reveal info about others that they wouldn't reveal about themselves. In Japan, the systems are Mixi, Mi-Chaneru, Purof. Some openness to making e-only friends. More perceived online anonymity. Concept of "victim responsibility": it's your fault for having bad friends.

Comments and questions:

David Livingstone Smith: I've heard a number of definitions of privacy; suggests that privacy is control of information belonging to the self. Christine says that there's a huge literature on the subject, and offers a thought experiment: a story with your name & information which you don't want to be known appears on another planet. Has your privacy been violated? (Information doesn't jump around without a cause.) Alessandro asks if David believes in a single definition of privacy; "no." Rachel Greenstadt asks about victim responsibility and degree of control, e.g., what about family? Andrew suggests less use of family as SNS friends, and there's a much greater level of pseudonymity in Japan, "almost everyone."

Caspar Bowden asks Eric how well hierarchical access control works: does the level of escalation matter? Caspar also asks Christine about coercion. Christine says there's a big country difference; in the US there's employment at will. Peter goes back to access control, and is building an attribute-based messaging system. I ask Eric how close we can get to perfect. He suggests that it may be too expensive to get anything close to perfect. Joe Bonneau mentions to Peter that there are working exploits already available for Green Dam, and asks Alessandro if he's seen the effect that privacy settings only ratchet up. After his early survey, many surveyees changed their privacy settings.

Andrew Patrick suggests that the frog thing is a myth. Alma asks Christine 'what happens if the story isn't true?' Christine suggests it's an interesting question. Andrew Adams says his students are most concerned about incorrect information that might be attributed to them. Jon Callas says that he's glad danah found people who lied about themselves, because it's a good technique: if it's well known there are lots of falsehoods, then you gain plausible deniability. Christine says we should try to do more to protect people in other ways: lying is suboptimal. Jon says it may be needed. Alessandro says there are limits. I mention Dan Solove's Understanding Privacy book (my review); Andrew Adams mentions The Future of Reputation.

Chris Soghoian says that if we want to teach kids to obey all rules, Google terms of service forbid use by under 18s, so we should teach our kids to not use Google.

SHB Session 6: Terror

Bill Burns, Decision Research. (Suggested reading: The Diffusion of Fear: Modeling Community Response to a Terrorist Strike.) Response to Crisis: Perceptions, Emotions and Behaviors. Examining a set of threat scenarios in downtown LA: earthquake, chlorine release, dirty bomb. Earthquake: likely 100-200 casualties. Dirty bomb: expected casualties 100 at most. Chlorine may be thousands to tens of thousands. GRP loss = direct losses (casualties, property, business interruption) + indirect effects. Discussing community fear responses to anthrax. Starting to think in terms of half-life: residual fear falls slowly in their model. Measured ridership loss after the London bombings (graph). Communities don't sit on their hands after an emergency. Half-life of fear after an airline attack: ~90 days. Investor fear after a "potential financial meltdown": half-life of 65 days. Looking at financial systems, emotional response. High trust in business leaders is between 2-6% (with gender gaps).
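
The half-life framing maps onto simple exponential decay. Here's a minimal sketch, assuming a unit starting level of fear and the roughly 90-day half-life mentioned for airline attacks; the function name and the sample numbers are mine, for illustration only, not Burns's model outputs.

```python
def residual_fear(initial_level: float, half_life_days: float, days_elapsed: float) -> float:
    """Exponential decay: fear(t) = fear(0) * 0.5 ** (t / half_life)."""
    return initial_level * 0.5 ** (days_elapsed / half_life_days)

# ~90-day half-life is the figure mentioned for airline attacks; the initial
# level of 1.0 and the sample days are illustrative only.
for day in (0, 30, 90, 180, 365):
    print(f"day {day:3d}: residual fear {residual_fear(1.0, 90, day):.3f}")
```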

Chris Cocking, London Met. (Suggested reading: Effects of social identity on responses to emergency mass evacuation.) Pushing the concept of collective resilience, from a social psychological perspective. How do people behave in groups? We have identities as individuals and also as group members (fan of a team, nationalist identity). Milgram's authority work and the Stanford prison experiment push a pessimistic view of human nature. Gustave Le Bon had a theory that a crowd can't be trusted: threat causes emotion to overwhelm reason, no concern for those around you, pushing and trampling behaviors. Also the concept of contagion. Contradicting that, the social attachment model (Mawson 2005). And there's evidence that strangers cooperate in emergencies. Disasters create a common identity, which can result in orderly, altruistic behavior. After the WTC 7 collapse, there was spontaneous planning & coordination. 99% of people below where the planes hit escaped. Shows an image from 9/11 and picks it apart. Fear visible. Woman in pink has heels on, carrying shopping. Guy in back holding a camera, taking a photo. Guy gesturing is perhaps saying "get out of the way" to the photographer. (Jean Camp asks for a picture of panic behavior. Google Images doesn't return the right thing.) Over-reaction comes from lack of information, or because people are addressed in an atomized way. Use of collective identity may be helpful. How can authorities use people's willingness to help each other? Summary: people are resilient.

Richard John, USC. (Suggested reading: Decision Analysis by Proxy for the Rational Terrorist.) Talk: "Fear and Loathing in Hollywood and Elsewhere." Social amplification of risk following accidents, natural disasters and terrorism: small losses due to the event, larger losses due to change in behavior. There are very few examples of under-response (the 1906 San Francisco earthquake is one). Over-response: Three Mile Island + Chernobyl killed the nuclear power industry. Dynamics of risk perception. Measure risk perception after an event: elevation, duration, return time to baseline, and change in baseline. Using vignettes to study psychological impacts. Repeated attacks may have very different responses. How do people habituate? Working on a study in Spain with people who have personally experienced an attack. Have found it hard in some cases: people are not that afraid of terrorism anymore. Finding a need to add audio and video to trigger responses. (Is that evidence that terrorism has been overhyped?) Group at USC is building a virtual world for the study.

Mark Stewart, University of Newcastle, Australia. (Suggested reading: A risk and cost-benefit assessment of United States aviation security measures; Risk and Cost-Benefit Assessment of Counter-Terrorism Protective Measures to Infrastructure.) Need to use numbers; words tend towards worst-case scenarios. Measure probability, consequence, risk reduction. (This would work in infosec if we had numbers; in terrorism, we can get them.) Shows process slides. Notes DHS never tries to figure out whether we should do anything at all, but rather asks how to spend money. DHS is obviously motivated to be risk averse. Numbers at best give you decision support, not decision making. Uses "net benefit = benefits - costs," as in many applications: nuclear plants, aircraft. Discusses aviation security measures (Stewart and Mueller). Shows TSA's 20 layers, asks how many we need. (A good baker's dozen.) The air marshal program costs the US $1B/year (2,500-4,000 marshals, free seats in business class). Shows some math about the effectiveness of risk reduction measures. Demonstrates that, based on his assumptions, air marshals are a poor investment, with a return of 19 cents on the dollar at an attack frequency of 1 per 10 years. (Talk too short to dig into assumptions.) Exhorts the audience to come up with models so we can discuss and debate the assumptions and quantifications. Final observations: terrorist risk seems lower than other risks. Many counter-terror decisions were made in response to 9/11. (Fighting the last war/attack?)
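
To make the "net benefit = benefits - costs" arithmetic concrete, here is a minimal sketch of a benefit-cost calculation for a measure like air marshals. Only the ~$1B/year cost and the one-attack-per-decade frequency come from the talk; the loss per attack and the risk-reduction share are placeholders I made up, so this is not Stewart and Mueller's model.

```python
def benefit_cost(attack_freq_per_year: float, loss_per_attack: float,
                 risk_reduction_share: float, annual_cost: float):
    """Return (net benefit, benefit-cost ratio) for a security measure."""
    expected_annual_loss = attack_freq_per_year * loss_per_attack
    benefit = expected_annual_loss * risk_reduction_share  # losses averted per year
    return benefit - annual_cost, benefit / annual_cost

# Hypothetical inputs: one attack per ten years, a $5B loss per attack, 5% of
# that risk credited to air marshals, and the ~$1B/year cost from the talk.
net, ratio = benefit_cost(0.1, 5e9, 0.05, 1e9)
print(f"net benefit: ${net/1e9:.2f}B/year; return per dollar spent: ${ratio:.2f}")
```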

John Adams, UCL. (Suggested reading: Deus e Brasileiro?; Can Science Beat Terrorism?; Bicycle bombs: a further inquiry.) Plugs John Mueller's "Overblown." Goes into the risk thermostat: propensity to take risk, perception, rewards, balancing, and accidents. That's for individuals, but you can imagine institutions doing the same thing. Mention of financial risk. Discusses the "top loop" (rewards & risk propensity) and the "bottom loop" of accident reduction. Top/bottom requires the picture:

[Image: risk thermostat.jpg]

Points out that the geography department in which he works has bomb-resistant glazing on its windows. Has funny signs and warnings. "All of this is now starting to backfire." An increasing reaction is to be more afraid of the government than the terrorists. Shows a chart of how risk is perceived along dimensions such as voluntary vs. imposed and personal vs. impersonal control:

[Image: 711_mark2.gif]

The London bombs killed the equivalent of six days of road casualties. Mentions a sticker from a band called "This Bike Is a Pipe Bomb." Comments that cycling without a helmet is very safe. (I really enjoyed John's book, "Risk.")

Dan Gardner has a book called Risk as well. Journalist; will talk about the media. Runs down complaints about the media: they cover vivid, dramatic causes of death. Media report things like "this doubles the risk of X" without stating the baseline. Reporters are human, and make human decisions. Tells a story about two studies in the New England Journal of Medicine that were covered in 19 news stories: 9 mentioned both, 10 mentioned only the bad news, and all of the stories that mentioned both gave more space to the bad news. Sell more papers? No, bad news is attractive to journalists for the same reason we rubberneck at car accidents. Storytelling is universal. What's a story? Novelty, conflict, emotion, drama. Why does terrorism get more stories? It's a better story. What's not needed for a good story? Numbers! (Bruce adds "Facts!") Brings up "the miracle on the Hudson." It's a little local story. People in the audience know the name of the pilot. What does it tell you about air travel? (I add "it's very safe." Dan responds "If you have Sully as your pilot.") In 2007-2008 no one died in an air crash in the United States. Working on a new book on why expert predictions routinely fail and why we believe them anyway.

Questions/Comments:

David Livingstone Smith suggests that response to infection is biological. Richard John responds that we assess the risk poorly. Terry Taylor says that the incidence of death from infection is still high. Response to infectious disease risk varies tremendously based on the risk: in Southeast Asia, the risks are more real and the responses better considered.

Bruce Schneier comments to Mark that there's an assumption of a single risk; in DHS's case, it's the risk of being fired. Have to look at a spectrum of risks. In the air marshal case, there's a deterrence effect, or an effect of saying we have them. Mark agrees that we need to capture multiple risks; dealing with the principal-agent risk of DHS requires policymaker awareness. John Adams comments that liability is key: Ob-Gyn is hugely successful at improving infant survival, but has high insurance rates.

John Mueller compares sports writers, who put context in, to other journalists. Joe Bonneau argues that sample size and other knowledge are missing. Dan Gardner responds that they do have numbers in stories.

Jean Camp asks a question about al Qaeda being treated as terrorists, not criminals, while attacks on women's health services are called crimes, not terrorism. Would labeling it a terrorist campaign change the impact? Bill Burns says it would have an impact. Had we framed 9/11 as a crime, chances are good we would have caught more terrorists and impacted the recruiting environment. Calling 9/11 the start of a war was one of the biggest mistakes. Chris Cocking is often asked "do people behave differently in a terror incident or a natural disaster?" Not a lot of evidence, but anxiety is higher in terror attacks.

James Pita asks if the amount of panic is relative to the proximity of the danger. Chris Cocking suggests that it's not primary. Bill Burns says 9/11 investigators found evidence of order and altruism in the tower evacuations. Adds that he's only encountered panic once: in a drowning person. Richard John adds that we don't have pictures from above, but we know that people jumped. Chris Cocking says that people who know they'll die become apathetic. Jon Callas points out that there are final phone calls.

Andrew Adams asks about new risks, such as false air marshals as part of a plot. Mark Stewart says to stay focused on the big picture. (I agree strongly: in threat modeling, we sometimes see experts trying to go depth-first, missing important big-picture stuff.) Richard John says that air marshals may reassure people and get them flying. (Of course, there's evidence that fewer people are flying, so overall it's a fail.)

Angela Sasse says that not everyone got out of the towers, in part because of a 'go back' announcement. Drills and training are important, and often overlooked. (There's a tie here to Gene Kim's work on change control: it's highly effective and boring.)

Terry Taylor asks about the value of uncertainty in defense. Mark Stewart says there are tremendous layers of uncertainty. Richard John says that uncertainty impacts terrorists substantially; terrorists are relatively risk averse. Discussion of risk aversion: they're willing to accept death, but want attacks to succeed. Terrorists want an expected 70-90% chance of success. Failure impacts recruiting.

[Update: Schneier’s blog is here. Matt Blaze’s audio.]

SHB Session 5: Foundations

Rachel Greenstadt chaired. I'm going to try to be a little less literal in my capture, and a little more interpretive. My comments are in italics.

Terence Taylor, ICLS. (Suggested reading: Darwinian Security; Natural Security (A Darwinian Approach to a Dangerous World).) Thinks about living with risks, rather than managing them. There are lessons from biology: not biomimicry, but discovering concepts. Produced the Natural Security book with an interdisciplinary group. Security is not just about survival, it's about adaptation to a range of risks. Consider the collapse of the Soviet Union as adaptive survival, rather than destruction: the core, the Russian Federation, is still in place. Risk is good. It's essential for survival when the world changes. Risk takers are good for our societies, because they are drivers of change. Shows a (line) graph of biological risks from natural disease to deliberate misuse. Creeping risks, such as antibiotic resistance, are hard to address. Critiques the safety/security split as inappropriate.

Andrew Odlyzko, University of Minnesota. (Suggested reading: Network Neutrality, Search Neutrality, and the Never-Ending Conflict Between Efficiency and Fairness in Markets; Economics, psychology, and sociology of security.) Responds to yesterday's debate with new slides. "Half a century of evidence: people cannot build secure systems. People cannot live with secure systems." Half a century (II): people don't need secure systems. More generally: it's amazing that society functions. People are innumerate; we get groceries on the shelf anyway. Main issues are adaptability and survivability. Technologists view cyberspace as a separate world to compensate for the defects of human space. Quotes John Perry Barlow's Declaration of Independence of Cyberspace; says it should have seemed naive then, and more so today. (Who was it that said that all progress is due to the unreasonable?) Comments on the interplay between human space and cyberspace, each compensating for the other. Claims cyberspace doesn't matter much. (More really is different.) Contrarian lessons for the future: build messy.

danah boyd, Microsoft Research: Taken Out of Context: American Teen Sociality in Networked Publics. Link is to the dissertation: really long, but worth scanning. Will cover two case studies. Study 1: lies. What do teenagers lie about online? Profiles that claim to be 95-year-olds from Christmas Island, graduating from high school in NJ in 2011. Lots of people from Algeria and Zimbabwe (first and last in the list). Lots of 61s and 71s (16, 17). Random selection of ages. Birthdates are accurate, years are not. What people lie about depends on safety. Kids are told to lie. "COPPA has encouraged an entire generation of liars." They put in inaccurate info to protect themselves.
Study 2: password sharing. 22% of teens in 2001 shared passwords (Pew). Found that almost all teens have shared a password with at least one person: a significant other, or a BFF. It's about trust. If I don't share, someone might think I have something to hide. Change passwords before a breakup. Sharing is at the core of a lot of bullying. Teen relationships last about a week and a half. To summarize: people invested in security bring a lot of thoughts about what should be. Teenagers: our "shoulds" don't matter. The Facebook "25 things" meme was different based on conception of audience. Adults: writing for ex-friends from high school. Teens: funny bits for current friends. Can't think about security without thinking about how young people are thinking about privacy. Teenagers don't think privacy is dead. Teens are taking a set of lessons from public people (celebrities): Angelina Jolie puts lots of information out to allow her to hide what's important to her. boyd sees teens using techniques from high-censorship regions: puns, context, subtexts. Teens have not lived in the privacy world we live in. Teens have no sense of home as a private zone; no control of who can enter their space. Privacy is about a sense of control. Control in a social media context is about how information flows: how far, and who will understand it? "Privacy is getting complicated, getting messy." (Getting complicated?!)

Mark Levine, Lancaster. (Suggested reading: The Kindness of Crowds; Intra-group Regulation of Violence: Bystanders and the (De)-escalation of Violence.) Groups and violence. Traditional psychology of social order: mob violence, mass hysteria. "It's all negative." "Other people will do it, not us." Wants to persuade us that groups can also be good. Studies data from CCTV. Notes issues with CCTV overall, but focuses on incidents identified by camera operators, who are trained. Advantages: see in real time. Disadvantages: no info about individuals, the event, relationships; no history or sound. Shows video; people intervene. Predictions from traditional psych: as groups grow, de-individuation, increasing anti-social acts, diffusion of responsibility. Actual observation: in larger groups, increased incidence of de-escalatory behavior. The third turn in a sequence (escalation/de-escalation) is the one that tends to be predictive. More people get involved, but when lots of people say stop, pro-social outcomes happen. "How do we fix the world? When it comes to violence, group processes are part of the solution, not part of the problem."

Jeff MacKie-Mason, Michigan. (Suggested reading: Humans are smart devices, but not programmable; Security when people matter; A Social Mechanism for Supporting Home Computer Security.) Primary assertion: security problems = incentive problems. (How does that relate to Andrew's point that we don't know how to make secure systems?) Humans are responsive and smart. Argues that Google put sponsored links in place in part to overcome SEO & Google-bombing. How can we design to use economics? Sciences of motivated behavior: microeconomics, strategic rational choice (game theory), social psychology, personality psychology. Says we can use signaling theory to keep bad guys out: passwords & captchas. Principles for design tradeoffs: relative costs to authorized & non-authorized users... Get good guys to help: private provision of public goods, botnets. Address with economic philanthropy theory and non-monetary contribution theory: social norms, social identity, positive self-esteem, optimal distinctiveness, affiliation. (Lots of psych.) Problem: discourage delinquency. Apply hidden action theory, contracting theory, social comparison theory: leaderboards, dissing, etc. Summary: humans are smart devices who respond to design motivations.

Questions while Joe sets up: Peter Neumann asks Jeff, "what about all the Chinese folks who've never patched because they're using a pirated system?" Jeff: not sure, but perhaps the ISP could carry the burden. Mike Roe asks Mark Levine: is the system he showed about social norms, or about figuring out how outnumbered you are? The critical thing is that the third turn not "de-escalate" the interveners. Levine has data on the third punch.

Joe Bonneau. (Suggested reading: The Privacy Jungle: On the Market for Data Protection in Social Networks.) Spends time hacking Facebook & other social networks. Caricatures of views: security researchers think social networking is pointless and childish; Facebook developers think privacy is boring, difficult and outdated. Why bother with the mess? Shows the growth of Facebook. (Claims that that means it will stick around.) People underestimate what Facebook is: it's a re-implementation of the "entire internet." Replaces HTML with FBML, Craigslist with Facebook Marketplace. Re-invented the internet as centralized, proprietary and walled, with the addition of social context. "Given sufficient funding, all web sites expand in functionality until users can add each other as friends." (JWZ: "I want to write software that will help people get laid." 1996?) Social networks repeat all the web's problems: phishing, spam, ID theft, malware, stalking. (So what value does all that "social" add?) Shows an example of a 419 scam on Facebook. Shows an example of Scramble asking for permission to view friend info. Conclusions. Negative: social context aids phishing and scams; a fun, noisy, unpredictable environment; people use social networks with their brains off. Positive: can analyze graphs to spot fraud, and social connections can help establish trust.

Question from John Mueller for Terry: nature responds to sustained change, not momentary change. Connects to anthrax. Terry: when you get something perceived as catastrophic, adaptation is odd. Human reactions are faster, less sustained. Already there's a reaction to the post office spending $6.5 billion. Over-investing in one place made the US more vulnerable to other, more common events.

Diana Smetters comments to Joe. Mark Sieden has suggested looking at social networks of bad guys.

David Livingstone Smith comments that adaptations continue if they propagate the genes. Adaptive traits can be damaging to individuals while increasing gene propagation. Asks Terry to comment on how well the analogy is working in security domains. Looking at organisms, they adapt in ways which are collective. (Cicadas spending different numbers of years underground.) Andrew Odlyzko comments that the addition of learning changes things. Commented on financial rating agencies negotiating over the sludge in CDOs; an analogy to AV companies advising virus writers.

Allan Friedman comments that an interesting response to a talk yesterday is that maybe we're not screwed. Asks danah for insight into how resistant "digital natives" will be to fraud. danah says that these populations don't understand the systems. "Digital natives stuff is bullshit." Reliance on Google is huge. They don't check links. They lose passwords more often than they are stolen. Teens are vulnerable to phishing, and leave systems. MySpace has huge problems; Facebook is getting bad. Joe adds that he spends an hour a day looking at privacy settings and doesn't understand them. danah says the one thing saving most teens is fear of parents and college admissions officers. Teens also jump systems fast. Joe says "they build their own security on top of one they don't know how to use." Jeff Friedberg asks about teen models of trust boundaries, and whether systems represent trust boundaries well. danah says the system/teen match is dreadful. There are lots of dependencies on socio-economic status rather than age. Relations to extended families are very different between low & high income. Mid-to-high income kids are more likely to trust peers. (Maybe related to societies with effective formal problem resolution?) Can't manage things like "on the outs with mom" on the network. Formalization of these things is not reflective of the real world.

[Update: Bruce Schneier’s post is here. Matt Blaze has audio.]

SHB Session 4: Methodology

David Livingstone Smith chaired.

Angela Sasse: "If you only remember one thing: write down everything the user needs to do and then write down everything the user needs to know to make the system work." Results of failure are large and hard to measure (errors, frustration, annoyance, impact on processes and performance, coloring user perception of security). Mentions ongoing work, "The Compliance Budget: Managing Security Behaviour in Organisations." (I really like this work.) Figure out what your budget is in real and perceived effort, and spend it wisely. Very interesting pictures of the organizational effort required for compliance. Collecting data about how much security costs: actual and perceived workload; actual and perceived impact on business; actual and perceived risk mitigation effect.

Bashar Nuseibeh, Open University. Background in mission-critical systems. Safety is about accidents; security is about intent. Privacy rights management on mobile apps: a mobile version of Facebook. Wanted to understand how people use Facebook on the move. Hard to do traditional interviews & questionnaires; need observation, but can't do that with privacy. Uses "experience sampling": short questionnaires, plus "memory phrases" to try to trigger things during a later interview. Trying to understand sociocultural interactions. Importance of boundaries to how those studied think about their privacy: "This information is for me only." This information is only for a subset of the group. Some of Bashar's work: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis.

James Pita is interested in security games where people are guarding physical assets. LAX has 8 terminals which need to be patrolled by K9 units. How to decide where to put the dogs to maximize security while minimizing determinism? Use a Stackelberg game: assign values of the assets to you and to the adversary, with the goal of leaving adversaries unable to find a "better" decision. Look at human uncertainty and limits to decision-making processes: adversaries can't observe perfectly. Paper: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport.
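
A rough sense of how such a game yields a randomized patrol: in the simplified zero-sum case, the defender picks coverage probabilities so that the attacker's best available target is as unattractive as possible, which reduces to a small linear program. The terminal values and number of K9 units below are made-up placeholders, and the deployed ARMOR system solves a richer Bayesian Stackelberg model than this sketch.

```python
import numpy as np
from scipy.optimize import linprog

values = np.array([10.0, 7.0, 5.0, 3.0])  # hypothetical value of each terminal
units = 2                                  # hypothetical number of K9 units
n = len(values)

# Variables: coverage c_1..c_n in [0, 1] and t = attacker's best payoff.
# Minimize t subject to values[i] * (1 - c_i) <= t and sum(c_i) <= units.
c_obj = np.zeros(n + 1)
c_obj[-1] = 1.0
A_ub = np.zeros((n + 1, n + 1))
b_ub = np.zeros(n + 1)
for i in range(n):
    A_ub[i, i] = -values[i]   # -v_i * c_i - t <= -v_i
    A_ub[i, -1] = -1.0
    b_ub[i] = -values[i]
A_ub[n, :n] = 1.0              # total coverage limited by number of units
b_ub[n] = units
bounds = [(0, 1)] * n + [(0, None)]

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("coverage probabilities:", np.round(res.x[:n], 2))
print("attacker's best expected payoff:", round(res.x[-1], 2))
```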

Markus Jakobsson gave a talk titled Male, late with your credit card payment, and like to speed? You will be phished! Auto insurance companies ask if you smoke, because smokers demonstrably disregard their own health. Jakobsson recruited 500 subjects on Mechanical Turk; 100 admitted to losing money to online fraud, and 100 others were chosen as a control. Asked subjects to rate the risks of various activities. Some were correlated. There's an obvious difference between what people say and do; this studied what people say. The correlation between "other online fraud" and not paying one's credit card balance was .19. Other online fraud and saving less than 5% for retirement were correlated at .17. Some other papers: Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication.

Rachel Greenstadt talked about "Mixed Initiative Support for Security Decision Making." Security decisions are hard for both humans and machines: context dependency, the requirement for specialized knowledge, sophisticated adversaries, etc. Machines: "I must be dancing with Jake, this guy knows Jake's private key." Computers could recognize other cues. How can agents mediate security decisions between humans and applications? Two security decisions: (1) should I log in, (2) if I publish this anonymous essay, will my linguistic style betray me? In the (1) phishing case, Alice knows she wants to visit her bank. The device knows Alice is not visiting her bank, but doesn't know that Alice thinks she is. Making appearance profiles: hashing HTML, CSS and images with ssdeep fuzzy hashing, and matching using a threshold. Very good numbers; the talk was too fast to grok. (2) looked at the stylometry problem (who wrote this?), dominated by AI techniques. Students tried to imitate the work of Cormac McCarthy: they wrote 500-word essays imitating him, and their work was identified as his 60-80% of the time. Think about imitating someone else's style to hide authorship. Papers: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections.
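
A hedged sketch of the appearance-profile idea: fuzzy-hash a page and compare it to a stored profile of the real site, treating a low similarity score as a mismatch. This assumes the Python ssdeep bindings; the file names and the threshold of 60 are illustrative guesses, not values from Greenstadt's system.

```python
import ssdeep

def matches_profile(profile_hash: str, page_html: bytes, threshold: int = 60) -> bool:
    """True if the page fuzzily matches the stored appearance profile."""
    return ssdeep.compare(profile_hash, ssdeep.hash(page_html)) >= threshold

# Hypothetical files: a saved snapshot of the real bank login page and a
# newly fetched page the user is about to enter credentials into.
profile = ssdeep.hash(open("bank_login_snapshot.html", "rb").read())
candidate = open("fetched_page.html", "rb").read()
print("looks like the real bank:", matches_profile(profile, candidate))
```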

Mike Roe pitched a research question for someone else: what proportion of transsexuals are getting their drugs from black market sources? Methodology: ask survey questions. Looking for an academic collaborator. Why public choice economists might be interested: cheap drugs... oops, slides gone.

Mike's own research is about what attacks actually take place in the real world (of games). Initial assumption: virtual goods are worth real money and programs are full of bugs, so someone will exploit bugs to steal money. The assumption is OK, but those people aren't causing a lot of problems that way. Bartle has a taxonomy of gamers: explorers, socializers, achievers, killers/griefers. Griefers try to annoy socializers or achievers.

Comments/questions:

Peter Neumann had a comment for Bashar: Lampson (1974) pointed out that if it's not secure, it's not reliable, and vice versa; QED, Bashar's contrast isn't sensible. Bashar comments on intent. (I'm skeptical.)

Dave Clark pushes on economics versus griefers. “To me griefers are a constant, but economic fraud grows over time.” Discussion of hacking for glory versus money. Schneier and I suggest it’s different people.

Sasha Romanosky mentions behavioral work that looks at willpower and choice. One theory: willpower gets depleted over time; alternately, it's like a muscle that strengthens. Has Angela Sasse looked at compliance budgets in these lights? She says it's more complex. Willpower is only one factor. Cues from the behavior of others are hugely important: people are more likely to put on weight if they're with others putting on weight. Insecure behavior spreads through organizations.

Andrew (?) asks about alternate methods and whether compliance budgets change. A Passfaces mechanism that people really liked, but that took a while to log in with, led to people logging in 70% less often. Rob Reeder asked if the perceived value of assets impacted compliance thresholds. Angela said yes.

Richard John asked about correlations and risk perceptions. Commented that Slovic's work averaged across populations. Richard John was shocked the correlations were so small. Slovic has also shown that different demographics show different risk perceptions. Markus said he didn't want to ask about ethnicity.

[Update: Bruce Schneier’s blog post.]

SHB Session 3: Usability

Caspar Bowden chaired session 3, on usability.

Andrew Patrick, NRC Canada (until Tuesday), spoke about there being two users of biometric systems: the purchaser or system operator, and the subject. Argues that biometrics are being rolled out without a lot of thought for why they're being used, and when they make sense and when not. Canada has announced that it will be fingerprinting all visitors soon, and is experimenting with remote (6-10 feet) iris scanners. NIST has data from US-VISIT on fingerprints & photos. Both are poor quality, fingerprints especially in the case of older folks. Mentions the paper "The Perception of Biometric Technology," which shows acceptance is correlated with purpose. Fingerprints are left all over; Andrew asserts we should publish our fingerprints. (Adam adds: we are.) Relates biometrics to nuclear waste in the lack of public discussion. Papers: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems.

Luke Church, Cambridge, discussed HCI and security. The transformation from HCI to User-Centered Design. Values: concretization (talk about personas and scenarios), direct manipulation of objects, economy of usability (make the common easy), the SER cycle. Zittrain (not present) argues in "The Future of the Internet" that security decisions are being embedded in tech, driven by a desire for less interference. Alma Whitten's abstraction and feedback. Original goal: control over information and technology. Discusses control by appliance, and how it gives too much power to the technologist (DRM, perfect enforcement, norming). We need tools for expressing meaningful end-user control. Shows a Facebook privacy dialog; it looks simple, but says there are 86 of them. Seek ways to leave the "last mile" of design to users. (Adam adds: it's the first mile, dammit. Put the user at the center.) Papers: The User Experience of Computer Security; Usability and the Common Criteria.

Diana Smetters, PARC, talked about meeting the user halfway. You can teach users, you just can't teach them very much; it requires careful design of what you teach them. Users want to be secure and need to get their job done. Ask questions like: what should the model be for user accounts in the home? (Not timeshare/business/mainframe.) How should web sites authenticate themselves? How do we make it safe to click links in emails? Why is this hard? You have to give up on what you think would be good for them... everybody else is conspiring against you. Discussed phishing as a "mismatch problem." Built a set of protected bookmarks and delivers single-site browser instances. What should a user do when they see the browser give a cert warning? Ignore it: most certs on the internet are broken. (Didn't catch the citation.) People are rational: if attacks are rare, people will ignore them. With anti-virus, we increased the number of attacks until people ran anti-virus. Claims that it might be a good idea to throw out the baby with the bathwater. Some papers: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule.

Rob Reeder, Microsoft, talked about "social recovery" for lost passwords. Users pre-designate trustees who can provide for recovery; you can use arbitrary k-of-n groups of trustees. Ideally, the trustee talks to the person who lost the password. Showed slides of increasingly trustworthy reasons to think it's the right person, and of various attacks and defenses. Good recovery, good resistance to email attacks, medium resistance to highly-personalized attacks (spouses, etc). Paper: It's Not What You Know, but Who You Know: A Social Approach to Last-Resort Authentication.
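
The k-of-n trustee rule itself is simple; here is a minimal sketch of the threshold check, with hypothetical trustee names. The deployed scheme in the paper adds recovery codes and verification steps that this illustration omits.

```python
def recovery_allowed(trustees: set, vouches: set, k: int) -> bool:
    """Allow a password reset once at least k designated trustees have vouched."""
    return len(vouches & trustees) >= k

trustees = {"alice", "bob", "carol", "dave"}  # hypothetical trustee set, n = 4
print(recovery_allowed(trustees, {"alice", "carol"}, k=3))                      # False: only 2 of 3
print(recovery_allowed(trustees, {"alice", "carol", "mallory", "dave"}, k=3))   # True: non-trustees don't count
```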

Lorrie Cranor, CMU, talked about warnings. Best thing to do: fix the problem, then guard, then warn. Too often we warn the user, it turns out not to be a problem, and people get habituated. Has a great dialog: "you need to click OK to get on doing work." Need A Framework for Reasoning About the Human in the Loop. Suggests blocking real dangers, allowing low dangers, and requiring the user to decide only when there's a middle level. Did a study with real and new browser warnings. Asked "what type of site are you trying to reach?" Firefox 3 warnings: only half were able to figure out how to override the warnings. Conclusion: warnings are only so effective; we need to think beyond warnings. Papers: Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You've Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings.

Jon Callas agreed to be the pessimist. (The optimist thinks this is the best of all possible worlds; the pessimist is afraid she's right.) Talks about cliffs we need to scale versus ramps that we can walk up. Maybe dictates are OK. Can users really make effective decisions? Do they want to? Discusses his uncle, who was an aerospace engineer but is now in his 80s and doesn't grok computers. The risks of unencrypted email are small and hard to describe or estimate. Seatbelts as a metaphor: easy to visualize, and people die for not wearing them. Left to themselves, people wear seatbelts at a rate of 17-20%. So if people don't wear seatbelts, why would they encrypt their email? Agrees: you can teach them, but you can't teach them much. Coming to the opinion that we're going to have to start making decisions for others, because it doesn't work otherwise. Paper: Improving Message Security With a Self-Assembling PKI.

Questions: Markus Jakobsson asked Rob if people would use themselves as trustees with an alternate email account. Yes, but that’s ok. People should have a range of account recovery options.

Peter Neumann commented that the literal interpretation of Descartes is "the better is the enemy of the good," and that by inversion the very bad is the enemy of the worst. Asked what's good enough. Jon Callas says we want to move up from the bottom of the cliff. Luke asserts that the number of people who get hurt is small enough that it makes the newspapers. Chris Soghoian says different sites have different motivations, and cites Ross, who says Facebook wants to dump hard questions on users so they can blame them. Luke takes issue: it's not clear that Facebook knows what to do, or how to do it. Usability is hard.

John Mueller says it's OK to read his email, it's boring. Jon Callas responds that people often don't know what data is on their systems, but that it may be OK to have insecure email.

Jean Camp says it's fascinating in 2009 to hear that the happy libertarian market pony is going to solve the problem. Addresses John Mueller & Luke. Says "I don't know how bad it is." We're only just now starting to look at security as a macro problem. Luke responds: agreed, we don't know how bad the problem is. Worries about trading away future uses of technology when we don't know how bad the problem is.

(Didn't catch the questioner's name.) Whitelist question: how long until the RIAA imposes whitelists? How long until banks restrict market entry? Diana responds that you can choose your own whitelist provider. Jon Callas adds that 40% of malware is signed code, and EV-signed phishing sites are starting to crop up. If our education really worked, we'd have a catastrophic failure.

Dave Clark says he's thinking about the topic, which is supposed to be usability. Maybe people are happy without their seatbelts. Says the conversation was half about usability, and asks "do we have a taxonomy of what we're talking about?" Diana Smetters responds (missed it).

Allan Friedman comments that general whitelists are hard. I ask why we should expect whitelists to work. Ross suggests that we create physical devices or ceremonies. (When you put your phone within a foot of your computer, you can only go to a whitelisted site.) (Missed some questions.)

Jeff Friedberg asks Lorrie about the guidance given to test subjects: do you tell them what to do when they contact a site? Lorrie's experiment included an alternate route, which involved making a phone call.

Joseph Bonneau argues that social networking sites can't use privacy as a selling point. Says that sites don't want users to be confused, but do want to steer people towards openness. No incentives to do the wrong thing, but nudges to avoid bothering users. Luke responds that there's ongoing social negotiation over use of these sites.

Jeff Friedberg asks for more on trusted advisors: what’s worked, what hasn’t, and what research is ongoing?

[Update: fixed links. Bruce Schneier has notes. Matt Blaze has audio.]

SHB Session 1: Deception

Frank Stajano: Understanding Victims: Six principles for systems security

Real systems don't follow the logic that we think about. Fraudsters understand victims really well. Working with the UK TV show "The Real Hustle." Draft paper on the SHB site.

Principles: distraction, social compliance, herd principle, deception, greed, dishonesty.

David Livingstone Smith

What are we talking about? Theoretical definitions: that which something has to have to deploy a term. Deception is difficult to define properly: not just false belief, but induced false belief which is known to be false. It can't be based entirely on human deception; animals deceive. Mirror orchids deceive wasps with chemical signals & flowers that look like wasps. Deception is an evolved or learned behavior which causes a victim to fail to behave the way it has learned or evolved to behave.

Bruce Schneier

Three short things. (1) How we buy security: we buy things out of fear or greed, but security sales based on greed don't seem to work. (2) Conficker didn't take over very much: dates mattered a lot (April 1); the news media could hook a story on the date, but an update a week later wasn't noticed. (3) Science fiction writers: the US government hires them to imagine threats. A paper on risk analysis showed that formal analyses didn't get the right risks. Control bias, availability heuristic, peak-end rule.

Dominic Johnson: Paradigm Shifts in Security Strategy

Book: Natural Security: A Darwinian Approach to a Dangerous World. Evolution is 3.5 billion years of security problems & solutions. The 9/11 threat was recognized but no preparation was executed. Slow or no adaptation over time; sudden adaptation after disaster. Ties in Kuhn's paradigm shifts, Foucault's moments of rupture, punctuated equilibrium, "economics progresses with each funeral." Hypothesis: adaptation after disaster. Predictions: policy changes follow disasters. Selected "policy watersheds since 1945" from the Army War College. Causes: a set of biases: sensory, psychological, leadership, organizational, political.
Dominic Johnson‘s page

Jeff Hancock

Psychologist; studies interpersonal deception. Interested in how tech shapes the way we lie, and whether it can help detect lies. Most people lie for reasons; few lie for fun. Studied online dating: men lie about height, women lie about weight. People on social networking sites are "ridiculously honest." A study of resumes showed people were more honest when they expected the resumes to be posted to LinkedIn. "Warrants" concept (didn't capture well, sorry). Many major scandals of the last 5 years involve email. Tech shapes the ways in which we lie.

Detection: everything is becoming textual, and processing advances allow us to process more & more text faster. Are there ways to detect lies in text? Examined some corpora. Lies often involve "social distancing": use of the first person singular drops as lies increase. Dating sites: more lies, less "I." Looked at Bush administration statements about Iraq, compared to other statements on the same day. If belief states were the same, then first-person-singular use should be the same. The effect size was so large that they re-checked the data. Causative complexity and other measures all indicate knowing deception. Politicians tend to be highly aware of language; maybe these markers are hard to control.
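
A toy illustration of one marker mentioned above: the rate of first-person-singular pronouns in a text. The word list and example sentences are mine, and real analyses of this kind use tools like LIWC over large corpora.

```python
import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text: str) -> float:
    """Fraction of words that are first-person-singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in FIRST_PERSON_SINGULAR for w in words) / len(words) if words else 0.0

print(first_person_rate("I think I left my keys in my office."))           # high "I"/"my" rate
print(first_person_rate("The keys were left in the office, apparently."))  # no first person
```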

Update: Bruce Schneier’s notes are here.