Signalling by Counting Low Hanging Fruit?
I've been thinking a lot about signaling software security quality. Recall that a good signal should be easy to send, and should be easier to send for a higher-quality product.
I'd like to consider how running a tool like RATS might work as a signal. RATS, the Rough Auditing Tool for Security, is a static source code analyzer. Would it work to provide a copy of the results of RATS, run across your code? First, this is pretty easy to do. You run rats -R * > report.txt and you get a report. A company could give this report to customers, who could weigh it, and have more information than they have today. (Literally: a longer report, taking more pages, means worse software. At least, it means worse software as seen through a RATS filter.)
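To make that weighing concrete, here is a minimal sketch that counts the findings in report.txt and treats the total as the signal. The severity labels it matches ("High:", "Medium:", "Low:") are an assumption about how the report happens to be laid out, not a format RATS guarantees; the point is only that the signal boils down to a simple count.

#include <stdio.h>
#include <string.h>

int main(void) {
    /* the file produced by: rats -R * > report.txt */
    FILE *report = fopen("report.txt", "r");
    if (!report) {
        perror("report.txt");
        return 1;
    }

    char line[1024];
    unsigned findings = 0;
    while (fgets(line, sizeof line, report)) {
        /* Assumed heuristic: count any line carrying a severity label. */
        if (strstr(line, "High:") || strstr(line, "Medium:") || strstr(line, "Low:"))
            findings++;
    }
    fclose(report);

    printf("findings (the crude weight of the report): %u\n", findings);
    return 0;
}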
That filter is imperfect. First, it rewards worthless behavior such as changing strcpy(dest, "foo") to strncpy(dest, "foo", 3) so that RATS won't complain (sketched below). Next, it rewards writing code in languages that RATS doesn't scan. This is somewhat useful: code written in C will have more string-management errors than code written in a language that doesn't have string-manipulation problems. Given the number of such errors, the added incentive to move away from C is not economically perverse.
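Here is a sketch of that worthless change, assuming a small fixed-size buffer (the function names are just illustrative). The second version exists only to quiet the scanner; with a length of 3, strncpy copies 'f', 'o', 'o' and never writes the terminating NUL, so it adds no safety and is arguably worse than the original.

#include <stdio.h>
#include <string.h>

/* The call RATS flags: no bound at all. */
static void noisy(char *dest) {
    strcpy(dest, "foo");
}

/* The "fix" made only to quiet the scanner: bounded, but the bound of 3
 * means the terminating NUL is never copied. */
static void quiet(char *dest) {
    strncpy(dest, "foo", 3);
}

int main(void) {
    char buf[8] = {0};   /* zero-filled, so the missing NUL is masked here */
    noisy(buf);
    printf("%s\n", buf);
    quiet(buf);
    printf("%s\n", buf);
    return 0;
}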
It would be fascinating to know if the items that RATS detects are predictive of other bug density. On the one hand, much research into quality assurance and testing indicates that bugs do cluster. On the other, the use of a library call that sometimes has security problems may not cluster the way other types of bugs do. Knowing if RATS is predictive would allow us to judge how useful a signal it is. There may be other useful things to do with the data, too.
If RATS output became accepted in the marketplace, would it be easy to forge the signals? Unfortunately, it would be. Generating a report that is 2 pages shorter than the competition's is easy: just cut lines from the file. Simple inspection won't reveal that. There are ways to examine binaries, but they require skill and a little time. I don't think this is likely behavior. A company that certifies that it ran a test and then alters the results of that test is engaging in deceptive trade practices. And yes, there may well be used car dealers who offer fake warranties, but they're few and far between. The downside is too large.
Finally, I'd like to run this through the 5-step process proposed by Schneier in the April 2002 Crypto-Gram, to see what we learn. (Read the article for clarification on why this is a fine evaluation framework. I'm abusing it slightly by looking at a signal rather than at a security measure.)
1. What problem does the security measure solve?
2. How well does the security measure solve the problem?
3. What other security problems does the measure cause?
4. What are the costs of the security measure?
5. Given the answers to steps two through four, is the security measure worth it?
Distributing RATS output helps to solve the question of how a customer should evaluate software. How well it does so, as noted, is an open question, and there are some clear problems. The technique causes no new security problems, and it's cheap to do. And so, even though it's not a great signal, it's probably worthwhile.