The first key event that could be called the beginning of the net neutrality debate was the publication of a paper entitled End-to-End Arguments in System Design by three computer scientists, Jerome Saltzer, David Reed and David Clark. For the real nerds among us I’ve included a link to that paper. This paper was written for a conference and was not intended as a scholarly piece, and yet it shaped the thinking of the early public Internet.
In the paper the authors argued that the only sensible way to design a network with limited resources, one that had to serve a large number of users with widely different interests, was to perform logical operations at the edges rather than in the core. What they meant was that the core of the Internet should consist only of fast but dumb pipes, and that any manipulation of data (the paper used error correction as its example) should happen at or near the edge of the network, with the last-mile ISP or the user.
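For the real nerds, that division of labor can be sketched in a few lines of Python. This is purely illustrative and not from the paper itself: the endpoints attach and verify a checksum, while the "network" does nothing but forward bytes.

```python
import zlib

def send(payload: bytes) -> bytes:
    # Edge responsibility: attach a CRC32 checksum before handing
    # the frame to the network.
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return crc + payload

def dumb_pipe(frame: bytes) -> bytes:
    # Core responsibility: just forward bytes. The core neither
    # inspects nor repairs what passes through it.
    return frame

def receive(frame: bytes):
    # Edge responsibility: verify integrity and decide what to do on
    # failure (in real protocols, ask the sender to retransmit).
    crc, payload = frame[:4], frame[4:]
    if zlib.crc32(payload).to_bytes(4, "big") != crc:
        return None  # corrupted in transit; detected at the edge
    return payload

frame = send(b"hello, edge")
assert receive(dumb_pipe(frame)) == b"hello, edge"

# A bit flipped in transit is caught at the receiving edge, not in the core.
corrupted = frame[:-1] + bytes([frame[-1] ^ 0x01])
assert receive(corrupted) is None
```

The point of the sketch is that the pipe stays simple no matter how the endpoints evolve, which is exactly why the authors argued intelligence belongs at the edge.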
This paper had a big influence on how the Internet was operated, and for many years the Internet ran in a way consistent with it. Everything done on the Internet was done near the edge. For instance, the servers for large services like CompuServe or AOL sat at the edge. The functions ISPs provided to receive and reconstruct files were at the edge. And end-user software lived on computers at the edge. In the middle were a handful of large carriers that transmitted data from hub to hub.
As the general public was introduced to the Internet, the idea arose that the Internet ought somehow to be regulated. People who used the Internet liked its wide-open feel and worried that commercial uses would change the nature of the experience for everybody. During the 1990s we started seeing things like early versions of VPNs, where large corporate data was given priority over other data. There was talk of creating priority bits for real-time traffic like voice calls and video. And so the discussion began over whether the government ought to intervene and regulate the Internet in some fashion.
In 1999 Harvard law professor Lawrence Lessig published the book Code and Other Laws of Cyberspace. This was a scholarly work that explored the topic of Internet regulation. Lessig said that the end-to-end principle was one of the most important reasons the Internet had produced growth and innovation, and that a free and open Internet ought to be maintained. Lessig argued that there was a role for government: to maintain the end-to-end principle. He thought that without government regulation of some sort, commercial interests would chip away at the freedom and function of the Internet until it lost the characteristics that make it so beneficial to society.
He used the word 'code' as a surrogate for software, meaning that whoever controls the software of the Internet can control what happens on it. He thought, rightfully so, that either commercial or government code could eventually interfere with the operation of the Internet. Today it's obvious that both kinds of control are going on. Entire countries have been carved away from the open Internet by their governments, and other countries like Russia are considering doing the same. US carriers want to create Internet fast lanes, and some in Europe have already done so. And we find ourselves being spied upon by governments and by commercial entities who either record everything we do or plant spyware on our computers.
Tim Wu, then a law professor at the University of Virginia, built on the ideas in Lessig's book and in 2002 published an article, A Proposal for Network Neutrality. Wu argued for the same end-to-end principle and said that an open Internet created a Darwinian competition among every conceivable use of the network, in which only the best uses would survive. He said that network neutrality (he coined the phrase) was necessary to make sure there was no bias against any use of the Internet.
Wu understood that some network bias is unavoidable, such as giving priority to voice packets so that calls can be carried over the Internet. But he thought there should be a clearly defined dividing line between permissible bias and impermissible bias. And that dividing line, almost by definition, has to be drawn by regulators.
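The kind of permissible bias Wu had in mind is visible even at the socket level today. The sketch below (illustrative only, not from Wu's article) marks a UDP socket with the Differentiated Services "Expedited Forwarding" code point, the conventional marking for real-time voice. Whether routers along the path actually honor that marking, and for whom, is exactly the policy question.

```python
import socket

# DSCP "Expedited Forwarding" (EF, value 46) is the conventional marking
# for real-time voice traffic. The IP TOS byte carries the DSCP value in
# its upper six bits, so the byte value is 46 << 2 = 184 (0xB8).
DSCP_EF = 46
TOS_VOICE = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VOICE)

# The marking is just a request; each network along the path decides
# whether to give these packets priority.
assert sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS) == TOS_VOICE
sock.close()
```

The application can only ask; the carriers decide. That gap between the request and the treatment is where the permissible/impermissible line would have to be drawn.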
And so today we are still at the point where Wu left the argument. Sadly, much of the debate about network neutrality has wandered off in political directions and no longer has much to do with how we manage packets. But absent some sort of regulation it seems clear to me that commercial and government uses of the Internet will continue to chip away at it, a little at a time, until the Internet is a controlled environment, and any user's experience will be subject to the whims of whoever controls their local part of the Internet.