So in recent news, the United States was ‘p0wned’ by itself during a virtual cyber-attack exercise. The outcome more or less showed we have quite a journey ahead of us to properly prepare politically … and technically. The latter is noted not so much because our CNO (Computer Network Operations) capability is in question, but because of the vulnerabilities that require a CNO response in the first place. MITRE and SANS have jointly assembled the 2010 enumeration of common programming weaknesses, a list that changes very little from year to year, showing how systemic many of the problems really are. For example, the list is still peppered with buffer overruns, improper bounds checking and integer overflows, amongst a slew of web vulnerabilities.
At first, one might think, “but who’s really going to find that little bug?” Hacking culture has always had plenty of people hunting for that super 0-day that takes down the world. An exposé on the underground hacking-for-profit culture in China shows just how prevalent, numerous and competitive the crowd hunting for these bugs may be. One of open source’s claims to security is the principle that many eyes will inevitably discover the bugs (just as the many eyes of hackers do) and therefore get them closed more successfully. An interesting post on the Microsoft Developer Network blog challenges that notion quite well, defending a concept called the Security Development Lifecycle instead. It alludes to the Sardonix project, where despite a large pool of involved programmers, nobody audited anything because everyone was waiting for someone else to do it. Before anybody falls for Microsoft’s take wholesale, though, consider security patch MS10-015, which caused blue screens of death because it patched code already “patched” by a rootkit infestation.