Empirical Evaluation of Secure Development Processes

Earlier this year, I helped organize a workshop at Schloss Dagstuhl on the Empirical Evaluation of Secure Development Processes. I think the workshop was a tremendous success: we've already seen publications inspired by it, such as "Moving Fast and Breaking Things: How to stop crashing more than twice," and I know more are forthcoming.

I’m also pleased to say that the workshop report is now available at https://dx.doi.org/10.4230/DagRep.9.6.1. The framing of the workshop (from the announcement) was:

The problem of how to design and build secure systems is long-standing. For example, as early as 1978, Bisbey and Hollingworth [6] complained that there was no method of determining what an appropriate level of security for a system actually was. In the early years, various design principles, architectures, and methodologies were proposed: in 1972 Anderson described the "reference monitor" concept, in 1974 Saltzer described the principle of least privilege, and in 1985 the US Department of Defense issued the Trusted Computer System Evaluation Criteria.

Since then, although much progress has been made in software engineering, cybersecurity, and industrial practice, the fundamental scientific foundations have largely not been addressed: there is little empirical data quantifying the effects that these principles, architectures, and methodologies have on the resulting systems.

This leaves developers and industry in a rather undesirable position. Without such data, it is difficult for organizations to choose practices that will cost-effectively reduce security vulnerabilities in a given system and help development teams achieve their security objectives. There has been much work creating security development lifecycles…

Also, I am quite pleased that Dagstuhl takes a very open approach: the report is licensed under CC BY 3.0.