A biological approach to security

Analysis
Jul 17, 2008

Nervous marmots and masturbating monkeys: We can learn a lot about how to address software vulnerabilities by studying how animals react to threats in the wild.

Over at the Open Sources blog, Savio Rodrigues calls attention to two critical security vulnerabilities in the Spring Framework for Java. They were discovered by security consultancy Ounce Labs, which disclosed the exploits in a detailed report. If you use Spring for critical business applications, you’ll definitely want to be aware of the threats and take appropriate measures.

While awareness of security is always important, however, not everyone agrees that vocal public disclosure of vulnerabilities, as Ounce Labs and the Spring developers have done, is the right approach. For example, when working on the Linux kernel, Linus Torvalds prefers to keep security-related chatter to a minimum.

“I personally consider security bugs to be just ‘normal bugs,’” Torvalds writes on the Linux kernel development mailing list. “I don’t cover them up, but I also don’t have any reason whatsoever to think it’s a good idea to track them and announce them as something special.” If nothing else, he says, doing so only gives would-be attackers an advantage when developing their exploits.

This is a perennial debate, and one that’s likely to go on indefinitely. We should note, however, that it is by no means limited to software development. Security is a constant concern throughout the world — not merely in other aspects of human society, but in the animal kingdom, as well. In an interview with New Scientist magazine, marine biologist Raphael Sagarin proposes that humans can gain a lot of insight into how to best address security issues by studying animal models.

“You can look at virtually any question about security through a biological lens,” Sagarin says. “You look at what the most successful organisms do to solve their security problems, and then you try to use that.”

Like organisms in nature, businesses want to be successful. One generally accepted means of getting ahead in business is to mitigate risk wherever possible. That’s what companies are doing when they subscribe to security alerts about their software: By staying informed about the latest vulnerabilities, they hope to minimize the risk that they will fall victim to unknown exploits.

“But organisms inherently understand that there is risk in life,” Sagarin says. “The idea that we can eliminate these risks would be selected against quickly in the natural world, since any organism that tried to do so would not have enough resources left for reproduction, or feeding itself.”

Apparently, Torvalds agrees — quite explicitly. “I think the OpenBSD crowd is a bunch of masturbating monkeys,” he says by way of example, “in that they make such a big deal about concentrating on security to the point where they pretty much admit that nothing else matters to them.”

Torvalds’ jibes against rival operating systems aside, he makes a good point. According to Sagarin, humans are easily tempted to pay too much attention to specific threat signals, regardless of the overall level of danger. We sometimes call such signals “crying wolf” — a phrase that undoubtedly hits home for marmot populations in the wild.

“One of our working-group members, Dan Blumstein at the University of California, Los Angeles, looks at how marmots respond to predators. He has noticed there are marmots he calls ‘nervous nellies’ that signal all the time. Rather than ignore them, the others spend more time on the nervous nellies’ signals because they’re trying to find out if they are honest or not,” Sagarin says.

Like the nervous marmots, software vulnerability bulletins can raise awareness of real danger, but they can also distract from other activities that could be more productive. In the case of a large-scale software project, such as the Linux kernel, all that wasted effort can become a serious drain.

“One reason I refuse to bother with the whole security circus is that I think it glorifies — and thus encourages — the wrong behavior,” Torvalds says. “It makes ‘heroes’ out of security people, as if the people who don’t just fix normal bugs aren’t as important.” In other words, it draws attention to the people doing the signaling — the so-called security experts — rather than the overall process of improving software quality.

“In fact,” Torvalds goes on to say, “all the boring normal bugs are way more important, just because there’s a lot more of them.”

Could Torvalds and Ounce Labs both be right? In nature, different organisms approach security through different means. Similarly, what works for the Linux kernel may not actually work for the Spring Framework, and vice versa. As Savio Rodrigues points out in the Open Sources blog, Spring is largely a single-vendor project, while the Linux kernel’s governance model is much more community- and merit-based.

As developers, then, how do we find the right balance? How can we make sure that users stay informed of the security risks associated with our software, while at the same time maintaining an orderly and holistic development process? Ultimately, we may be forced to confront an uncomfortable thought: If software ecosystems really do resemble their biological equivalents, then the process of natural selection may play a greater role than we suspect.