TSA: Let us Take Nekkid Pics of You Or You Get "Bad Touch"

Apparently, the TSA is now protecting us so well that they make women cry by touching them inappropriately.

According to (CNN Employee Rosemary) Fitzpatrick, a female screener ran her hands around her breasts, over her stomach, buttocks and her inner thighs, and briefly touched her crotch.

“I felt helpless, I felt violated, and I felt humiliated,” Fitzpatrick said, adding that she was reduced to tears at the checkpoint. She particularly objected to the fact that travelers were not warned about the new procedures.

What I really like is that they’re giving us a choice.

“passengers who opt out of advanced imaging technology screening will receive alternate screening to include a thorough pat-down.”

So basically, either we get pics of your private bits, or we get to touch them.

(btw – if you’re not at work, you may want to search for “TSA body scan invert”)

It's not TSA's fault

October 18th’s bad news for the TSA includes a pilot declining the choice between aggressive frisking and a nudatron. He blogs about it in “Well, today was the day”:

On the other side I was stopped by another agent and informed that because I had “opted out” of AIT screening, I would have to go through secondary screening. I asked for clarification to be sure he was talking about frisking me, which he confirmed, and I declined. At this point he and another agent explained the TSA’s latest decree, saying I would not be permitted to pass without showing them my naked body, and how my refusal to do so had now given them cause to put their hands on me as I evidently posed a threat to air transportation security (this, of course, is my nutshell synopsis of the exchange). I asked whether they did in fact suspect I was concealing something after I had passed through the metal detector, or whether they believed that I had made any threats or given other indications of malicious designs to warrant treating me, a law-abiding fellow citizen, so rudely. None of that was relevant, I was told. They were just doing their job.

It’s true. TSA employees are just doing their job, which is to secure transportation systems. The trouble is, their job is impossible. We all know that it’s possible to smuggle things past the nudatrons and the frisking. Unfortunately, TSA’s job is defined narrowly as securing the transportation system, and every failure leads to them getting blamed. All their hard work is ignored. And so they impose measures that a great many American citizens find unacceptable. They’re going to keep doing this because their mission and jobs are defined wrong. It’s not the fault of the TSA; it’s the fault of Congress, which defined that mission.

It’s bad enough that the chairman of British Airways has come out and said “Britain has to stop ‘kowtowing’ to US demands on airport checks.”

The fix has to come from the same place the problem comes from. We need a travel security system integrated into a national transportation policy that encourages travel. As long as we have a Presidential appointee whose job is transportation security alone, we’ll have these problems.

Let’s stop complaining about TSA and start working for a proper fix.

So how do we get there? Normally, a change of this magnitude in Washington requires a crisis. Unfortunately, we don’t have an acute crisis right now; we have more of a slow-burning destruction of the privacy and dignity of the traveling public. We have massive contraction of the air travel industry. We have the public withdrawing from regional air travel because of the bother. We may be able to use international pressure, and we may be able to use the upcoming elections and a large number of lame-duck legislators who no longer need to fear doing the right thing.

TSA is bleeding and bleeding us because of structural pressures. We should fix those if we want to restore dignity, privacy and liberty to our travel system.

Collective Smarts: Diversity Emerges

Researchers in the United States have found that putting individual geniuses together into a team doesn’t add up to one intelligent whole. Instead, they found, group intelligence is linked to social skills, taking turns, and the proportion of women in the group.
“We didn’t expect that the proportion of women would be a significant influence, but we found that it was,” Prof. Woolley, an organizational psychologist, said in an interview. “The effect was linear, meaning the more women, the better.”

The Globe and Mail, “If you want collective smarts…” In her interview with Quirks & Quarks, Woolley was careful to say that it wasn’t gender per se but social awareness, and that such awareness correlates strongly with gender.

A Letter from Sid CRISC-ious

In the comments to “Why I Don’t Like CRISC”, where I challenge ISACA to show us, using valid scales and publicly available models, the risk reduction from COBIT adoption, reader Sid starts to get it, but then kinda devolves into a defense of COBIT or something. But it’s a great comment, and I wanted to address it and clarify my position a bit. Sid writes:


Just imagine (or try at your own risk) this –

Step 1. Carry out risk assessment
Step 2. In your organisation, boycott all COBiT recommendations / requirements for 3-6 months
Step 3. Carry out risk assessment again

Do you see increase in risk? If Yes, then you will agree that adoption of Cobit has reduced the risk for you so far.

You might argue that its ‘a Control’ that ultimately reduces risk & not Cobit.. however I sincerely feel that ‘effectiveness’ of the control can be greatly improved by adopting cobit governance framework & Improvement of controls can be translated into reduced risk.

I can go on writting about how cobit also governs your risk universe, but I am sure you are experienced enough to understand these overlapping concepts without getting much confused.

Nice try, Sid!  However, remember my beef is that Information Risk Management isn’t mature enough.  Thus I’ve asked for “valid scales” (i.e. not multiplication or division using ordinal values) and publicly available models (because the state of our public models best mirrors the maturity of the overall population of risk analysts).

And that’s my point: even if I *give* you the fact that we can make proper point predictions for a complex adaptive system (which I would argue we can’t, thus nullifying every IT risk approach I’ve ever seen), there isn’t a publicly available model that can do Steps One and Three in a defensible manner. Yet ISACA seems hell-bent on pushing forth some sort of certification (money talks?). This despite the inability of our industry to even use the correct numerical scales in risk assessment, much less actually perform risk assessment in a manner that can be used to govern at a strategic level, or even show an ability to identify key determinants in a population.

Seriously, if you can’t put two analysts with the same information in two separate rooms and have them arrive at the same conclusions given the same data – how can you possibly “certify” anything other than “this person is smart enough to know there isn’t an answer”?
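To make the “valid scales” point concrete, here’s a minimal Python sketch (the scores and the relabeling are invented for illustration). Ordinal labels only promise order, so any strictly increasing relabeling carries exactly the same information – yet multiplying likelihood by impact lets the relabeling flip which risk “ranks” higher:

```python
# A monotone relabeling of an ordinal scale preserves every ordinal
# comparison, so a valid method must rank risks the same way under any
# such relabeling. Multiplication of ordinal values does not.

relabel = {1: 1, 2: 2, 3: 3, 4: 4, 5: 10}  # strictly increasing: still ordinal

def score(likelihood, impact):
    # The classic (invalid) "risk = likelihood x impact" heat-map arithmetic.
    return likelihood * impact

risk_a = (2, 5)  # low likelihood, top impact
risk_b = (4, 3)  # higher likelihood, middling impact

# Under the original labels, B outranks A: 2*5 = 10 < 4*3 = 12.
assert score(*risk_a) < score(*risk_b)

# Same ordinal data, relabeled, and the ranking reverses:
a2 = tuple(relabel[x] for x in risk_a)  # (2, 10) -> 20
b2 = tuple(relabel[x] for x in risk_b)  # (4, 3)  -> 12
assert score(*a2) > score(*b2)
```

Two analysts using the “same” five-point scale with different mental spacings of the levels are effectively using different relabelings, which is one reason they won’t arrive at the same conclusions from the same data.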


I want to make one thing clear. My beef isn’t with ISACA, it’s not with COBIT, it’s not with audit. I think all three of these things are awesome to some degree, for some reasons. And especially, Sid, my beef isn’t with COBIT – I’m a big process weenie these days, because the data we do have (see Visible Ops for Security) suggests that process maturity is a risk-reducing determinant. However, this is like a doctor telling a fat person that they should exercise, based on a vague association in a published study of some bias. How much, what kind, and the absolute effectiveness compared to their existing lifestyle (and especially how to change that lifestyle, if it is a cause) are still very much guesses. It’s an expert opinion (if I can call myself an expert), not a scientific fact.

In the same way, your assertion about COBIT fails reasoned scrutiny. First, there is the element of “luck”. In what data we do have, there is a pretty even split in data breach frequency between determined and random attackers. That latter aspect means it’s entirely likely that we could dump COBIT and NOT see worse outcomes (whether that would be an increase in risk is another philosophical argument for another day).

Second, maybe it’s my “lack of experience”, but I will admit that I am very confused these days as to a proper definition of IT Security Governance. Here’s why: there are many definitions (formal and informal) I’ve read about what ITSec G is. If you argue that it is simply the assignment of responsibility, that’s fine. If you want to call it a means to mature an organization in order to reduce risk (as you do above), then we have to apply proper scrutiny to maturity models, and to how the outcomes of those models influence risk assessment outcomes (the wonderful part of your comment there is the absolute recognition of this). And if ITSec G is an enumeration of the actual processes that “must” be done, then we get to ask “why”. And once that happens, well, I’m from Missouri – you have to show me. And then we’re back into risk modeling, which, of course, we’re simply very immature at.

Any way I look at it, Sid, I can’t see how we’re ready for a certification around Information Risk Management.

Side Note: My problem with IT Security Governance is this: If at any point there needs to be some measuring and modeling done to create justification of codified IT Security Governance, then the Governance documentation is really just a model that says “here’s how the world should work” and as a model requires measuring, falsification, and comparative analytics. In other words, it’s just management.  In this case, the management of IT risk, which sounds like a nice way of saying “risk management”.

Seriously? Are We Still Doing this Crap? (RANT MODE = 1)

These days I’m giving a DBIR presentation that highlights the fact that SQLi is 10 years old, and yet is still one of the favorite vectors for data breaches.
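For the non-developers in the room, the whole decade-old problem fits in a few lines. This is a minimal sketch using Python and an in-memory SQLite database (the table and payload are invented for illustration): string concatenation lets a classic payload rewrite the query, while a bound parameter is treated strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "' OR '1'='1"  # the classic, decade-old injection payload

# Vulnerable: concatenation turns the payload into part of the SQL, so
# the WHERE clause becomes name = '' OR '1'='1' and matches every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()
assert len(rows) == 1  # returns alice despite no user with that "name"

# Safe: a bound parameter can never escape into the SQL text.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
assert rows == []
```

Parameterized queries have been in every mainstream database API for longer than SQLi has been a top breach vector, which is rather the point.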

And while CISOs love it when I bring this fact up in front of their dev teams, with all deference to software developers and any ignorance of secure coding, we (the security industry) are just as guilty of equal (and perhaps equally damaging) stupidity.

Take, as an example, this Computerworld article. Please.

Ostensibly, this article is about “Six Enterprise Security Leaks You Should Plug Right Now”. Now when I see “Six Enterprise Security Leaks You Should Plug Right Now” I think: wow, these are going to be the most common and serious causes of data breaches and security incidents. Pull the emergency ripcord, scramble the F-5s for interception, and send launch codes to the subs. These are going to be proven attack vectors and techniques, enumerated in a no-BS manner for the CIO.

Unfortunately, this is a great piece of circa 1999 FUD horseshit.

First example:  The Bluetooth Gun

Things I think when I see this pic:

  • 55% “Compensating for something?”
  • 33% “Because pointing a 4.5 foot gun at someone is COMPLETELY inconspicuous!”
  • 12%  “It’s not just Bluetooth, it shoots PSYCHIC BULLETS!!!!!!1!1”

Like really? Bluetooth rifles? In all fairness, this first “panic and plug” piece of the article is ostensibly about “smartphone security”. And I guess RSnake is giving fine advice. But really, trying to tell execs that they can’t have an iPhone because some corn-fed, hand-spanked Insane Clown Posse fan with a “bluetooth rifle” might “steal their address book” is going to make you look like, well, an insane clown.

Hilarious, of course, is the recommendation that organizations hand out company-sanctioned “robust” platforms like Android. Because when I think “robust”, time-tested, and controlled platform for development and enterprise adoption, I think “Android”. (Not a knock on Android per se, fans of the platform. But seriously, you have to acknowledge that it’s still a pretty new smartphone platform, and not exactly one under Google’s control in terms of quality to market.)

Second is “Open Ports on a Network Printer.”  Really.

I’m gonna be frank and honest with you, dear reader. My pentest experience is about 5 years out of date, but I’m willing to bet that things haven’t changed that much, and I can still say with all certainty and seriousness that if your internal security is tight enough that you have to worry about “open ports on networked printers”, you’re already in the 98th percentile of capable security organizations. Take a break, pat yourself on the back, have an Oktoberfest or something, and then tomorrow you can worry about something like solving log management issues. Forget “network printers”.

“One of the reasons you do not hear about it is because there is no effective way to shut them down,” says Jay Valentine, a security expert.

“Another one of the reasons you do not hear about them is because  in terms of security issues within the network perimeter, printers are about as important as, say, the possibility that some mentally unstable SEO/ Web analytics employee has a 4.5 foot bluetooth gun in his cube and is using it to capture screen shots of your CFO playing Angry Birds on her iPad, posting them to Facebook, Yahoo Forums, and otherwise embarrassing the CFO’s 14 year old son because his Mom plays Angry Birds at work.” – Alex Hutton, not an expert at anything

Custom-developed Web apps with bad code are, actually, at least according to the DBIR, something to worry about. I’ll limit the snarkiness on this one; they got it right.

Next is something I really have an issue with.  They label it “Social Network Spoofing” and I have to ask – is this an enterprise “leak” that IT can “plug”?

I mean, RSnake’s example is a phone-based social attack where someone impersonates monster.com. And phone attacks do make up something like 21% of social attacks in the DBIR data set. That’s fine. But we’re dealing with a phone-based attack here, not something having to do with Facebook. And really, after the whole stupidity and non-story of Robin Sage – is it a good idea to even bring this up? We can add to the craziness the fact that there isn’t a lot of evidence for this attack vector, and that the remediation enterprises should take, most confoundingly (yes, it’s a Yosemite Sam word, deal with it), is, according to this article, “email verification that confirm the identity of the sender”. Yeah. Because that shit’s ubiquitous. What you really want is to limit your users’ ability to interact with customers, vendors, and other business silos because they don’t use compatible “email verification” platforms.

Look, I know awareness programs are much maligned. And I’m completely aware that most of them suck. But really, there’s no way you’re going to combat phone-based scams with technology. It’s called SOCIAL ENGINEERING for a reason: you’re not manipulating systems, you’re manipulating people. This may be a leaky hole or something (proportionally, it’s not, really, if you take data breach stats with any seriousness), but the remediation is, well, strange. In my social engineering experience, I’ve gotten tons of information without ever needing email as a follow-up vector to a phone call.

Employees Downloading Illegal Movies and Music

Winn Schwartau is right: there’s no reason folks should be putting this crap on work boxes, and P2P is a filth pit of code.

But I’m looking at the DLDB/Verizon/USSS data sets, and you know what? I’m finding a lot more basic crap people need to worry about, in terms of both frequency and impact. P2P is bad; get Marimba or something and keep that crap off user endpoints, sure. But P2P just doesn’t seem to be a top-six, enterprise-leaky-hole, stop-the-world-and-panic, write-a-Computerworld-article-about-it thingy.

Finally, we have SMS text messaging spoofs and malware infections.

You know how often SMS text messaging is a vector in the data sets I’ve seen? Really? Zero. None. I’ve never seen an incident of any magnitude that was more than “proof-of-concept work” (Schwartau’s words, not mine).

Seriously, folks. Look at hacking and malware paths in the DBIR. And be very, very concerned about SQLi and other remote access. Draw on botnet stats from the Microsoft SIR and be wholly uncomfortable with the complexity and size of your network. Read the Visible Ops for Security book (now on Kindle!) and understand how far from process maturity your IT and IT security processes are, and weep accordingly. Sort through DLDB and be afraid of what evil lurks in the laptops of end users. There are real problems with the state of corporate IT security.

But SMS text messages getting “business intelligence” (these words you use, I don’t think they mean what you think they mean) or SMS text messages installing mobile malcode? Not one of the big problems. Not even close. Hell, I’d love for the industry to be in such a secure state that malcode installed by SMS *is* a big “enterprise leak thing” to panic about and plug. Same with network printers. But right now, every ounce of data says that going to “your carrier and work with them to make sure that they’re using malware blocking software” is not only a complete waste of energy, it’s basically bat-shit insane.

They’re coming to take your cell phones away, haha

What bothers me most about this article is that two people I esteem, RSnake and Winn, are feeding the FUD. Focusing on the possible (seriously, that bluetooth gun pic cracks me up every time I see it) and ignoring what’s actually causing breaches, for the sake of media sensationalism, is a complete FAIL.

Let’s see if our own FUD makes the Defcon FAIL panel this year.

Sorry for the Nick Cage with crazy eyes.

Re-architecting the internet?

Information Security.com reports that:

[Richard Clarke] controversially declared “that spending more money on technology like anti-virus and IPS is not going to stop us losing cyber-command. Instead, we need to re-architect our networks to create a fortress. Let’s spend money on research to create a whole new architecture, which will cost just a fraction of what we spend on all of the technology crap that doesn’t work”, he said, to a loud round of applause.

In the book, we wrote:

Given the nature of these issues, perhaps we should consider the radical step of rebuilding our information technologies from the ground up to address security problems more effectively.

The challenge is that building complex systems such as global computer networks and enterprise software is hard. There are valid comparisons to the traditional engineering disciplines in this respect. Consider the first bridge built across the Tacoma Narrows in Washington state. It swayed violently in light winds and ultimately collapsed because of a subtle design flaw. The space shuttle is an obvious example of a complex system within which minor problems have resulted in catastrophic outcomes. At the time this book was written, the Internet Archive project had 85 billion web objects in its database, taking up 1.5 million gigabytes of storage. During the 1990s, such statistics helped people understand or just be awed by the size of the internet, but the internet is undoubtedly one of the largest engineering projects ever undertaken. Replacing it would be challenging.

Even if we “just” tried to recreate the most popular pieces of computer software in a highly secure manner, how likely is it that no mistakes would creep in? It seems likely that errors in specification, design, and implementation would occur, all leading to security problems, just as with other software development projects. Those problems would be magnified by the scale of an effort to replace all the important internet software. So, after enormous expense, a new set of problems would probably exist, and there is no reason to expect any fewer than we have today, or that they would be any easier to deal with.

Given how much we’ve learned about security in development, I think that we’d likely start with fewer bugs than the current stack started with. Would we have fewer bugs than we have today, after 30 years of testing? Not so obvious.

I’m curious why Richard (or anyone else) thinks that developing, testing and deploying a whole new architecture and converting over all the myriad services which have been built on the extant technologies would cost “just a fraction of what we spend” today. To go further, I think that’s an extraordinary claim (and an extraordinary applause line) and it requires extraordinary proof.

Another personal data invariant that varies

Just about anything a database might store about a person can change. People’s birthdays change (often because they’re incorrectly reported or recorded). People’s gender can change. One thing I thought didn’t change was blood type, but David Molnar pointed out to me that I’m wrong:

Donors for allogeneic stem-cell transplantation are selected based on their HLA type (tissue type), and not on their blood type. Therefore, it is quite common that the donor and patient have different blood types. The blood type is determined by the red cells. After transplant and bone-marrow recovery the red cells will come from the donor and have the donor’s blood type. As an example, if the patient is blood type A, and the donor is blood type O, the patient after transplant will become blood type O. The long-term outcome of an allogeneic stem-cell transplant is affected only to a small degree by the blood types of the donor and recipient. If an ABO difference exists, the transplant itself may create some technical difficulties, but these can be easily overcome. Red-cell recovery may be delayed after such transplants, and the patient may need support with red-cell transfusions for a prolonged period of time. More importantly, the patient should be aware that the blood type has changed or will change, and that old blood type cards are no longer valid. IBMT will provide you with a laminated card that indicates that your blood type may have changed. After your bone-marrow function has fully recovered, you may receive red cells of your new blood type. During the transplant process, usually red cells of blood type O are used, since these can be used for any patient (universal donor).
(“Indiana Blood and Marrow Transplantation“)

David continues:

The Seattle Cancer Care Alliance is #1 by volume in the U.S. and does several thousand per year. So that means several people per day are having their blood type changed right here in Seattle.

Do your database and e-health record support updating your blood type record?
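For what it’s worth, supporting this doesn’t require anything exotic. Here’s a hypothetical sketch (the schema and names are mine, not drawn from any real e-health system) that treats blood type as a dated history rather than an immutable column, so a post-transplant change is an insert rather than a destructive update:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
-- Blood type lives in its own history table instead of a fixed column,
-- so a change leaves the old (audit-relevant) value intact.
CREATE TABLE blood_type_history (
    patient_id INTEGER REFERENCES patient(id),
    blood_type TEXT,
    recorded_at TEXT
);
""")

db.execute("INSERT INTO patient VALUES (1, 'example patient')")
db.execute("INSERT INTO blood_type_history VALUES (1, 'A', '2009-03-01')")
# After an allogeneic stem-cell transplant from a type-O donor:
db.execute("INSERT INTO blood_type_history VALUES (1, 'O', '2010-11-04')")

# The current blood type is simply the most recent entry.
current = db.execute("""
    SELECT blood_type FROM blood_type_history
    WHERE patient_id = 1 ORDER BY recorded_at DESC LIMIT 1
""").fetchone()[0]
assert current == "O"
```

The same history-table pattern works for any of the other “invariants” above – birthdays, gender, names – which is rather the moral of the post.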