Re-architecting the internet?
Information Security.com reports that:
[Richard Clarke] controversially declared “that spending more money on technology like anti-virus and IPS is not going to stop us losing cyber-command. Instead, we need to re-architect our networks to create a fortress. Let’s spend money on research to create a whole new architecture, which will cost just a fraction of what we spend on all of the technology crap that doesn’t work”, he said, to a loud round of applause.
In the book, we wrote:
Given the nature of these issues, perhaps we should consider the radical step of rebuilding our information technologies from the ground up to address security problems more effectively.
The challenge is that building complex systems such as global computer networks and enterprise software is hard. There are valid comparisons to the traditional engineering disciplines in this respect. Consider the first bridge built across the Tacoma Narrows in Washington state. It swayed violently in light winds and ultimately collapsed because of a subtle design flaw. The space shuttle is an obvious example of a complex system in which minor problems have resulted in catastrophic outcomes. At the time this book was written, the Internet Archive project had 85 billion web objects in its database, taking up 1.5 million gigabytes of storage. During the 1990s, such statistics helped people grasp, or simply be awed by, the size of the internet. Whatever the measure, the internet is undoubtedly one of the largest engineering projects ever undertaken. Replacing it would be challenging.
Even if we “just” tried to recreate the most popular pieces of computer software in a highly secure manner, how likely is it that no mistakes would creep in? Errors in specification, design, and implementation would almost certainly occur, all leading to security problems, just as with other software development projects. Those problems would be magnified by the scale of an effort to replace all the important internet software. So, after enormous expense, we would probably face a new set of problems, with no reason to expect they would be fewer than today’s, or any easier to deal with.
Given how much we’ve learned about security in development, I think that we’d likely start with fewer bugs than the current stack started with. Would we have fewer bugs than we have today, after 30 years of testing? Not so obvious.
I’m curious why Richard (or anyone else) thinks that developing, testing, and deploying a whole new architecture, and converting over the myriad services built on the extant technologies, would cost “just a fraction of what we spend” today. To go further, I think that’s an extraordinary claim (and an extraordinary applause line), and it requires extraordinary proof.