When Metallica Sued Napster

Anybody who reads this blog knows that I am intrigued by the history of technology, and I look back periodically at past events when they seem relevant to something happening today. This past week I saw an article that mentioned that April was the fifteenth anniversary of the lawsuit between Metallica and Napster. In retrospect, that was a very important case that has had implications for much of what we allow or don’t allow on the web today.

So let me look back at a few of the facts of that case and then discuss why it was so important. The first thing that surprised me is that this was only fifteen years ago. I remember the case vividly, but in my memory it was older than that, dating back to the very beginning of the Internet (and in many ways it was).

The case was very straightforward. If you recall, Napster was the first major peer-to-peer file-sharing service. It was very simple in operation: it let you see MP3 files on other Napster users’ computers as long as you agreed to make your own files available. Napster users were then free to download any file they found in the Napster system, and you could navigate it with simple searches by either song name or artist.
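
To make that model concrete, here is a conceptual sketch of the central-index arrangement Napster used: peers register the files they are sharing with an index, searches hit the index, and the download itself happens directly between peers. The names and data structures below are purely illustrative, not Napster’s actual protocol.

```python
# A conceptual sketch of Napster's central-index model: peers register the MP3s
# they share with a central index, searches hit the index, and the actual file
# transfer then happens directly between peers. Illustrative only.

index = {}  # (artist, song) -> list of peers offering that file


def share(peer, artist, song):
    """A user joins the system and makes one of their own files searchable."""
    index.setdefault((artist.lower(), song.lower()), []).append(peer)


def search(artist=None, song=None):
    """Simple search by artist or song name, the two ways Napster let you browse."""
    return [
        (a, s, peers)
        for (a, s), peers in index.items()
        if (artist is None or artist.lower() == a)
        and (song is None or song.lower() == s)
    ]


share("alice-pc", "Metallica", "I Disappear")
print(search(artist="Metallica"))  # the download itself would go straight to alice-pc
```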

Of course, Napster put the whole music industry into a tizzy because people were using it to download millions of music files free every day. This was illegal for anybody who downloaded songs since they were violating copyrights and getting music without paying the musicians or the record companies.

The industry railed loudly against Napster, but I’m not sure they knew entirely what to do about them. While users of Napster were breaking the law, it was not quite so clear that Napster was doing anything wrong, and the industry feared a court case that might give a legal go-ahead to Napster. The industry was looking at legislative fixes to the problem.

But then along came Metallica. The band got incensed when their song ‘I Disappear’ from the Mission: Impossible II soundtrack appeared on Napster before it was even officially released. The band decided to sue Napster to stop the practice and in the process became the spokespeople for the whole industry. The Recording Industry Association of America (RIAA) and Metallica offered settlement alternatives to Napster, such as scrubbing its system of all copyrighted material, but that was impossible at the time (and probably still is). In the process of trying to negotiate a settlement, Napster went bankrupt paying to defend itself.

But this lawsuit sparked an ongoing and major debate about the ownership rights of content creators versus the rights of Internet companies to make content available. While it became clear that blatant file-sharing of the sort Napster enabled was illegal, there are plenty of more nuanced fights today in the ongoing battle between artists and internet companies.

The fight moved on from Napster to Apple’s battle against Digital Rights Management (DRM), the practice of locking music files to a particular player or platform. From there the fight migrated to Congress with attempts to pass the Stop Online Piracy Act (SOPA) and the Protect IP Act (PIPA), which were pushed by the music and movie industries to give them a leg up over internet companies.

You still see this same fight today when artists like Taylor Swift battle Spotify to be justly paid for their content. You see the same tension between authors and Amazon over whether writers are properly rewarded for their works. And there is a never-ending battle between video content owners and sites like YouTube that let people easily upload copyrighted material.

It’s likely that the battle is going to be ongoing. Some visionaries foresee a day when micropayments are widely accepted and users can easily buy content directly from artists. But that is never going to be a perfect solution because people love the convenience of services like Spotify or Beats that put the content they like in an easy-to-use format. And as we saw with Napster, millions of people will grab copyrighted content for free if they are allowed to.

Predictions Ten Years Later

I often report on how industry experts see the future of our industry. It’s an interesting thought experiment, if nothing else, to speculate about where technology is moving. In 2004 the Pew Internet Project asked 1,286 industry experts to look ten years forward and predict what the Internet would be like in 2014. I found it really interesting to see that a significant percentage of the experts got many of the predictions wrong. Here are some of the specific predictions made in 2004:

66% of the experts thought that there would be at least one devastating cyberattack within the following ten years. While there have been some dramatic hacks against companies, mostly to steal credit card numbers and related information, there have been no cyberattacks that could be categorized as crippling. The experts at the time predicted that terrorists would be able to take over power plants or do other drastic things that have never materialized.

56% thought that the internet would lead to a widespread expansion of home schooling and telecommuting. There certainly has been growth in telecommuting, but not nearly to the extent predicted by the experts. It’s the same with home schooling: while it has grown, there is not yet a huge and obvious advantage of home schooling over traditional schooling. The experts predicted that the quality and ease of distance learning would make home schooling an easy choice for parents, and that has not yet materialized.

50% of them thought that there would be free peer-to-peer music sharing networks. Instead the recording industry has been very successful in shutting down peer-to-peer sites and there are instead services like Spotify that offer a huge variety of free music legally, paid for by advertising.

Only 32% thought that people would use the Internet to support their political biases and filter out information they disagree with. Studies now show that this is one of the major consequences of social networking, in that people tend to congregate with others who share their world view. This is related to another miss: only 39% thought that social networks would be widespread by 2014. The experts en masse did not foresee the wild success that would be enjoyed by Facebook, Twitter, and other social sites.

52% said that by 2014, 90% of households would have broadband that was much faster than what was available in 2004. At the end of 2013 Leichtman Research reported that 83% of homes had some sort of broadband connection. That number was lower than predicted by the majority of experts, but what was even lower is the average speed that people actually purchase. Akamai reports that the average connection speed in the US at the end of 2013 was 8.7 Mbps. But speeds were not distributed in the expected bell curve: that average consists of a small percentage of homes with very fast connections (largely driven by Verizon FiOS and other fiber providers) along with many homes with speeds that are not materially faster than what was available in 2004. For example, Time Warner just announced this past week that they are finally increasing the speed of their base product from 3 Mbps to 6 Mbps.

32% thought that online voting would be secure and widespread by 2014. A number of states now allow online voter registration, but only a tiny handful of communities have experimented with online voting, and it has become obvious that there is real potential for hacking and fraud with it.

57% of them thought that virtual classes would become widespread in mainstream education. This has become true in some cases, but general K-12 education has not moved to virtual classes. Many schools have adopted distance learning to bring distant teachers into the classroom, but there has been no flood of K-12 students moving to virtual education. Virtual classes have, however, become routine for many advanced degrees. For example, there are hundreds of master’s degree programs that are almost entirely online and self-paced.

But the experts did get a few things right. 59% thought that there would be a significant increase in government and business surveillance. This has turned out to be true in spades. It seems everybody is now spying on us, and not just on the Internet, but through our smartphones, our smart TVs, and even our cars and the IoT devices in our homes.

The Pew Internet Project continues to conduct similar surveys every few years, and it will be interesting to see if the experts of today can do better than the experts of 2004. What those experts failed to recognize were things like the transformational nature of smartphones, the widespread phenomenon of social networking, and the migration from desktops to smaller and more mobile devices. Those trends are what drove us to where we are today. In retrospect, if more experts had foreseen those few major trends correctly, they probably would have also guessed more of the details correctly. Within the sample there were undoubtedly some experts who guessed really well, but the results were not broken out by individual respondent, so we can’t see who had the best crystal ball.

A New Internet

A Wikimedia server room. (Photo credit: Wikipedia)

A research group in Europe has proposed to overhaul the way the Internet looks for data. The group was funded as part of a project called ‘Pursuit’ and their ideas are described in this Pursuit Fact Sheet.

The proposal involves changing the way that the Internet searches for data. Today searches are done by URL, or Uniform Resource Locator. A URL identifies the original server that holds the desired data. When you do a Google search, that is what you find – the address of the original server. The problem with looking for data this way is that everybody looking for that same data is sent to that same server.
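
As a minimal illustration of how a URL ties every request to a single origin host, here is a short Python sketch; the URL shown is just a placeholder, not a real piece of content.

```python
# A minimal sketch of how a URL directs every request to one origin server.
# The URL below is a placeholder, not real content.
from urllib.parse import urlparse
import socket

url = "https://example.com/videos/show-101.mp4"
host = urlparse(url).hostname           # the single server named by the URL
origin_ip = socket.gethostbyname(host)  # every requester resolves to this one origin

print(f"All requests for {url} are directed at {host} ({origin_ip})")
```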

There are several problems with searches based upon finding the original server that holds a piece of data. Everybody looking for that data is sent to the same server, and if enough people look for it at the same time the original server might crash. The original server can also be effectively shut down by denial-of-service attacks. And sending everybody to the original server is inefficient: if the content everybody is looking for is a video, then that video is downloaded separately to each person who asks to see it. If you and your neighbors all decide to watch the same video, it is sent through the Internet many times, once for each of you.

The Pursuit proposal suggests that we instead change the Internet to use URIs (Uniform Resource Identifiers) to search for data. This kind of search looks for the content itself rather than for the server that originally stored it. So if you are looking for a TV show, it will look to see where that show is currently stored. If somebody in your local network has recently watched that show, then the data is already available locally and you will be able to download it much faster, without initiating a new download from the original server.
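
Here is a minimal sketch of that idea, assuming content is named by a hash of its bytes and checked against a nearby cache before falling back to the origin. The cache and the fetch callback are hypothetical stand-ins, not part of the actual Pursuit design.

```python
# A sketch of information-centric retrieval, assuming content is named by a
# hash of its bytes rather than by the server that first published it.
import hashlib


def content_id(data: bytes) -> str:
    """Name the content by what it is, not by where it lives."""
    return hashlib.sha256(data).hexdigest()


local_cache = {}  # content id -> bytes held somewhere nearby on the network


def retrieve(cid, fetch_from_origin):
    """Serve from the nearest copy if one exists; only then go back to the origin."""
    if cid in local_cache:
        return local_cache[cid]      # a neighbor already has it
    data = fetch_from_origin(cid)    # fall back to the original server
    local_cache[cid] = data          # keep a copy for the next requester
    return data


video = b"bytes of a popular TV show"
cid = content_id(video)
origin = {cid: video}
first = retrieve(cid, origin.get)    # fetched from the origin once
second = retrieve(cid, origin.get)   # now served from the nearby copy
```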

This is somewhat akin to the way that file-sharing sites work, and you might be given a menu of sites that hold the data you are looking for. By choosing the nearest site you will be retrieving the data from somewhere other than the original server, and the closer it is to you (network-wise, not geographically), the faster and more efficiently you will be able to retrieve it.

But more likely the retrieval will be automated, and you may download the content from many locations – grabbing a piece of the desired video from the different networks that currently hold that data.
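
A hedged sketch of what that automated retrieval might look like: the video is split into chunks, and each chunk is pulled from whichever nearby holder currently has it. The holders here are hard-coded for illustration; a real system would discover them dynamically.

```python
# A sketch of automated multi-source retrieval: the video is split into chunks
# and each chunk is grabbed from whichever nearby holder currently has it.

def fetch_chunks(chunk_ids, holders):
    """Pull each chunk from any holder that has it, then reassemble the video."""
    pieces = []
    for chunk_id in chunk_ids:
        holder = next(h for h, chunks in holders.items() if chunk_id in chunks)
        pieces.append(holders[holder][chunk_id])  # take this piece from that holder
    return b"".join(pieces)


holders = {
    "neighbor-A": {0: b"first ", 1: b"half "},
    "neighbor-B": {2: b"and second ", 3: b"half of the video"},
}
video = fetch_chunks([0, 1, 2, 3], holders)  # b"first half and second half of the video"
```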

This is not a new concept and networks that use switched digital video have been using the same concept. In those systems, the first person in a neighborhood node that watches a certain channel will open up a connection for that channel. But the second person then shares the already-open channel and does not initiate a new request back to the TV server. This means that a given channel is opened only once for a given node on the network.
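
That sharing logic can be sketched as a simple reference count at the neighborhood node: the first viewer opens the upstream channel, later viewers share it, and the stream is released when the last viewer tunes away. The class and method names below are illustrative only.

```python
# A sketch of the switched digital video idea as a simple reference count.
# Class and method names are illustrative only.

class NeighborhoodNode:
    def __init__(self):
        self.open_channels = {}  # channel name -> number of viewers sharing it

    def tune_in(self, channel):
        if channel not in self.open_channels:
            self.open_channels[channel] = 0
            print(f"Opening one upstream connection for {channel}")
        self.open_channels[channel] += 1  # later viewers reuse the open stream

    def tune_away(self, channel):
        self.open_channels[channel] -= 1
        if self.open_channels[channel] == 0:
            del self.open_channels[channel]
            print(f"Nobody left watching {channel}; closing the upstream connection")


node = NeighborhoodNode()
node.tune_in("channel-5")    # first viewer opens the channel
node.tune_in("channel-5")    # second viewer shares it; no new request upstream
node.tune_away("channel-5")
node.tune_away("channel-5")  # last viewer gone, connection closed
```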

There are huge advantages to this kind of shift in the Internet. Today the vast majority of data being sent through the Internet is video, and one has to imagine that very large numbers of people watch the same content. So changing to a system where a given video is sent to your local node only one time is a huge improvement in efficiency. It takes the strain off of content servers and also relieves a lot of the congestion on the Internet backbone. In fact, once the data has been dispersed across the Internet, the original server could be taken out of service and the content would live on.

There are some downsides to this kind of system. For example, today one often hears of somebody pulling down content that they don’t want viewed any longer. In an information-centric network, removing data from the original server would not accomplish much: as long as somebody had recently watched the content, it would live on, independent of the original server.

There are a lot of changes that need to be made to transition to an information-centric web. It will take changes to transport protocols, caching systems, error control, flow control, and other core processes involved in retrieving data. But the gigantic increase in efficiency means that it is inevitable this change will eventually come to pass.