Reports of Internet bugs like Heartbleed and the recent Shellshock are growing more frequent, and the problems they pose are increasingly dangerous.
Why? For two reasons that aren’t going to change anytime soon.
The Internet was never meant for this. We use the Internet for banking, business, education and national defense. These things require privacy and the assurance that you are actually who you say you are.
The Internet, as it was designed, offers neither. When the World Wide Web was built 25 years ago, it existed as a channel for physicists to pass research back and forth. It was a small, closed community. The scientists at Stanford trusted the researchers at the University of California, Los Angeles.
In 2014, it’s still standard to send Internet communication in plain text, so anyone who taps into a connection can observe what you’re saying. Engineers developed HTTPS nearly 20 years ago to protect conversations by encrypting them, but major email providers and social media sites are only now enabling it. And sites like Instagram and Reddit still don’t use it by default.
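To see why plain text matters, consider what a login form actually puts on the wire over unencrypted HTTP. Here is a minimal Python sketch; the host, username, and password are invented for illustration:

```python
# What a browser sends when you log in over plain HTTP: readable bytes.
# (example.com, "alice", and "hunter2" are made-up illustration values.)
request = (
    "POST /login HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "\r\n"
    "user=alice&password=hunter2"
)
wire_bytes = request.encode("ascii")

# Anyone on the path -- a coffee-shop wifi, an ISP, a proxy -- sees the
# password verbatim in the traffic. HTTPS wraps these same bytes in an
# encrypted tunnel, so an eavesdropper sees only ciphertext.
print(b"hunter2" in wire_bytes)
```

The point of HTTPS is not to change the request, only to make those bytes unreadable in transit.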
The Internet was also built on a set of rules that requires every packet of sent information to carry a source address, kind of like a phone number. But the rules aren’t strict about validating that address, so it can be spoofed: hackers can fake a return address. When millions of fraudulent packets are “returned to sender” all at once, the flood of illegitimate traffic can shut a website down, an attack known as a denial of service.
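The spoofing problem comes down to the packet format itself: the source address is just a field the sender fills in, and nothing in the header proves the sender owns it. A short Python sketch that builds (but never sends) an IPv4 header with a forged source address, using reserved documentation-range addresses:

```python
import socket
import struct

def ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Build a minimal 20-byte IPv4 header (no options).

    The checksum is left at 0 for brevity; the point is that the source
    address is an ordinary field the sender writes, not something verified.
    """
    version_ihl = (4 << 4) | 5          # IPv4, header length = 5 words
    total_len = 20 + payload_len
    return struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl,
        0,                               # type of service
        total_len,
        0, 0,                            # identification, flags/fragment
        64,                              # TTL
        socket.IPPROTO_UDP,
        0,                               # checksum (normally computed)
        socket.inet_aton(src),           # source: whatever we claim
        socket.inet_aton(dst),
    )

# Forge a header claiming to come from 203.0.113.7 (a documentation address).
hdr = ipv4_header("203.0.113.7", "198.51.100.1", 0)
claimed_src = socket.inet_ntoa(hdr[12:16])
print(claimed_src)  # nothing checked whether we actually own that address
```

Actually injecting such a packet requires raw-socket privileges, and many networks now filter obviously forged sources, but the protocol itself still takes the field on faith.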
“When the Internet evolved, the climate was friendly. That’s not true now,” said Paul Vixie, who was instrumental in developing how we connect to websites today. “A trusted network of academics is not a global network for all of humanity.”
Software is a hodgepodge of flawed Lego blocks. The big, ugly secret in the world of computer science is that developers don’t check their apps closely enough for bugs.
Today, software is so profitable that developers are under intense pressure to churn out apps as quickly as possible.
When developer Peter Welch wrote a frightening essay revealing the sausage-making process, he explained how modern-day developers rapidly stack together building blocks of code — without reviewing them for mistakes or ensuring the whole thing won’t collapse or let in a hacker.
“People will start cutting corners and speeding up,” Welch said in an interview. “It’s less about understanding the academic value of code and more about producing the product. We’ve lost some safety for speed.”
Sometimes, that flawed code becomes widespread. Most of the world relies on open-source software that’s built to be shared and maintained by volunteers and used by everyone — startups, banks, even governments.
There’s an illusion of safety. The thinking goes: So many engineers see the code, they’re bound to find bugs. Therefore, open-source software is safe, even if no one is directly responsible for reviewing it.
Nope. Last week’s Shellshock bug is the perfect example of that flawed thinking. Bash, a program so popular it’s installed on millions of machines worldwide, was found to have a fatal flaw more than 20 years old. Eyes were on it, but no one caught it until now.
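The widely circulated one-line probe for Shellshock shows how simple the flaw is: a vulnerable Bash, handed an environment variable that looks like a function definition, also executes whatever code is smuggled in after it. A Python sketch that runs the probe (this assumes bash lives at /bin/bash; a patched bash prints only the echo line):

```python
import subprocess

# The classic Shellshock probe. A vulnerable bash parses the environment
# variable as a function definition AND executes the trailing command,
# printing "vulnerable". A patched bash prints only "this is a test".
env = {"x": "() { :;}; echo vulnerable"}
result = subprocess.run(
    ["/bin/bash", "-c", "echo this is a test"],  # path is an assumption
    env=env,
    capture_output=True,
    text=True,
)
print(result.stdout)
is_vulnerable = "vulnerable" in result.stdout
```

Note that the attacker never needed a login or an exploit payload in the usual sense; any service that passed untrusted input into Bash’s environment (web servers via CGI, for instance) became a remote-code-execution vector.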
“It’s not Toyota having a recall,” explained Scott Hanselman, a programmer and former college professor in Oregon. “It’s like tires as a concept have been recalled and someone says, ‘Holy crap, tires?! We’ve been using tires for years!’ It’s that level of bad.”