New European Copyright Laws

I’ve always kept an eye on European Union regulations because anything that affects big web companies or ISPs in Europe always ends up bleeding over into the US. Recently the EU has been contemplating new rules about online copyrights, and in September the European Parliament took the first step by approving two new sets of copyright rules.

Article 11 is being referred to as a link tax. This legislation would require that anybody who carries headlines or snippets of longer articles online must pay a fee to the creator of the original content. Proponents of Article 11 argue that big companies like Google, Facebook and Twitter are taking financial advantage of content publishers by listing headlines of news articles with no compensation for the content creators. They argue that these snippets are one of the primary reasons that people use social media and browse articles suggested by their friends. Opponents of the new law argue that it will be extremely complicated for a web service to track the millions of headlines listed by users and that services will react to this rule by only allowing headline snippets from large publishers. This would effectively shut small or new content creators out of the big platforms – articles would come from only a handful of content sources rather than from tens of thousands of them.

Such a law would certainly squash small content originators like this blog. Many readers find my daily blog articles via short headlines that are posted on Twitter and LinkedIn every time I release a blog or when one of my readers reposts one. It’s extremely unlikely that the big web platforms would create a relationship with somebody as small as me, and I’d lose my primary way to distribute content on the web. I guess, perhaps, that the WordPress platform where I publish could make arrangements with the big web services – otherwise its value as a publishing platform would be greatly diminished.

This would also affect me as a user. I mostly follow other people in the telecom and the rural broadband space by browsing through my feed on Twitter and LinkedIn to see what those folks are finding to be of interest. I skip over the majority of headlines and snippets, but I stop and read news articles I find of interest. The beauty of these platforms is that I automatically select the type of content I get to browse by deciding who I want to follow on the platforms. If the people I follow on Twitter can’t post small and obscure articles, then I would have no further interest in being on Twitter.

The second law, Article 13, is being referred to as the upload filter law. Article 13 would make a web platform liable for any copyright infringements in content posted by users. This restriction would theoretically not apply to content posted by users as long as they are acting non-commercially.

No one is entirely sure how the big web platforms would react to this law. At one extreme a platform like Facebook or Reddit might block all postings of content, such as video or pictures, for which the user can’t show ownership. This would mean the end of memes and kitten videos and much of the content posted by most Facebook users.

At the other extreme, this might mean that the average person could still post such content since they gain no commercial benefit from posting a cute cat video. But the law could stop commercial users from posting content that is not their own – a movie reviewer might not be able to include pictures or snippets from a film in a review. I might not be able to post a link to a Washington Post article as CCG Consulting but perhaps I could post it as an individual. While I don’t make a penny from this blog, I might be stopped by web platforms from including links to news articles in my blog.

In January the approval process was halted when 11 countries including Germany, Italy, and the Netherlands said they wouldn’t support the final language in these articles. EU law has an interesting difference from US law in that for many EU directives each country gets to decide, within reason, how it will implement the law.

The genesis of these laws comes from the observation that the big web companies are making huge money from the content created by others and not fairly compensating content creators. We are seeing a huge crisis for content creators – they used to be compensated through web advertising ‘hits’, but these revenues are disappearing quickly. The EU is trying to rebalance the financial equation and make sure that content creators are fairly compensated – which is the entire purpose of copyright laws.

The legislators are finding out how hard it will be to make this work in the online world. Web platforms will always try to work around laws to minimize payments, and the platforms’ lawyers are going to advise them to act cautiously to minimize the risk of massive class action suits.

But there has to be a balance. Content creators deserve to be paid for creating content. Platforms like Facebook, Twitter, Reddit, Instagram, Tumblr, etc. are popular to a large degree because users of the platforms upload content that they didn’t create – the value of the platform is that users get to share things of interest with their friends.

We haven’t heard the end of these efforts and the parties are still looking for language that the various EU members can accept. If these laws eventually pass they will raise the same questions here because the policies adopted by the big web platforms will probably change to match the European laws.

Facebook Takes a Stab at Wireless Broadband

Facebook has been exploring two technologies in its labs that it hopes will make broadband more accessible for the many communities around the world that have poor or zero broadband. The technology I’m discussing today is Terragraph, which uses an outdoor 60 GHz network to deliver broadband. The other is Project ARIES, which is an attempt to beef up the throughput on low-bandwidth cellular networks.

The Terragraph technology was originally intended as a way to bring street-level WiFi to high-density urban downtowns. Facebook looked around the globe and saw many large cities that lack basic broadband infrastructure – it’s nearly impossible to fund fiber in third world urban centers. The Terragraph technology uses 60 GHz bandwidth and the 802.11ay standard – this technology combination was originally called WiGig.

Using 60 GHz and 802.11ay together is an interesting choice for an outdoor application. On a broadcast basis (hotspot) this frequency only carries between 35 and 100 feet depending upon humidity and other factors. The original intended use of WiGig was as an indoor gigabit wireless network for offices. The 60 GHz spectrum won’t pass through walls or much of anything else, so it was intended to be a wireless gigabit link within a single room. 60 GHz faces problems as an outdoor technology since the frequency is absorbed by both oxygen and water vapor. But numerous countries have released 60 GHz as unlicensed spectrum, making it available without costly spectrum licenses, and the channels are large enough to still be able to deliver bandwidth even with the physical limitations.
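
The range limits come straight from the physics of the spectrum. As a rough illustration (my own sketch, not anything from Facebook’s engineering), the standard free-space path loss formula plus an approximate 15 dB/km oxygen absorption figure for 60 GHz shows how quickly the signal fades with distance:

```python
import math

OXYGEN_LOSS_DB_PER_KM = 15.0  # rough figure near the 60 GHz O2 absorption peak

def path_loss_db(distance_m: float, freq_hz: float = 60e9) -> float:
    """Free-space path loss plus approximate oxygen absorption at 60 GHz."""
    # Standard FSPL formula: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)
    fspl = (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / 3e8))
    absorption = OXYGEN_LOSS_DB_PER_KM * (distance_m / 1000)
    return fspl + absorption

for d in (10, 30, 100, 250):  # meters; ~30 m is roughly the 100-foot hotspot limit
    print(f"{d:4d} m: {path_loss_db(d):6.1f} dB")
```

Every doubling of distance costs 6 dB before absorption is even counted, and the oxygen loss adds roughly another 4 dB by 250 meters – which is why this spectrum works for short, focused links but not for wide-area coverage.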

It turns out that a focused beam of 60 GHz spectrum will carry up to about 250 meters when used as backhaul. The urban Terragraph network planned to mount 60 GHz units on downtown poles and buildings. These units would act as hotspots and also form a backhaul mesh network between units. This is similar to the WiFi networks we saw being tried in a few US cities almost twenty years ago. The biggest downside to the urban idea is the lack of cheap handsets that can use this frequency.

Facebook took a right turn on the urban idea and completed a trial of the technology deployed in a different network design. Last May Facebook worked with Deutsche Telekom to deploy a fixed Terragraph network in Mikebuda, Hungary. This is a small town of about 150 homes covering 0.4 square kilometers – about 100 acres. This is drastically different from a dense urban deployment, with a housing density far lower than US suburbs – it’s similar to many small rural towns in the US, with large lots and empty spaces between homes. The only broadband in the town was DSL, with about 100 customers.

In a fixed mesh network every unit deployed is part of the mesh – each unit can deliver bandwidth into its own home as well as bounce the signal to the next home. In Mikebuda the two companies decided that the ideal network would serve 50 homes (I’m not sure why they couldn’t serve all 100 of the DSL customers). The network delivers about 650 Mbps to each home, although each home is limited to about 350 Mbps due to the limitations of the 802.11ac WiFi routers inside the home. This is a big improvement over the 50 Mbps DSL that is being replaced.

The wireless mesh network is quick to install, and the network was up and running to homes within two weeks. The mesh network configures itself and can instantly reroute and heal around a failed mesh unit. The biggest local drawback is the need for pure line-of-sight, since 60 GHz can’t tolerate any foliage or other impediments, and tree trimming was needed to make this work.
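
The self-healing behavior is easiest to see with a toy example. Here’s a minimal sketch using a handful of hypothetical nodes (this illustrates the concept, not Terragraph’s actual routing protocol): when a unit fails, traffic simply recomputes along the next-best path through the mesh.

```python
import networkx as nx

# Hypothetical mesh: a fiber-fed gateway can reach home C two ways
mesh = nx.Graph()
mesh.add_edges_from([
    ("gateway", "A"), ("A", "C"),              # short path: 2 hops
    ("gateway", "D"), ("D", "E"), ("E", "C"),  # backup path: 3 hops
])

print(nx.shortest_path(mesh, "gateway", "C"))  # ['gateway', 'A', 'C']

# Unit A fails; the mesh heals by routing around it
mesh.remove_node("A")
print(nx.shortest_path(mesh, "gateway", "C"))  # ['gateway', 'D', 'E', 'C']
```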

Facebook envisions this fixed deployment as a way to bring bandwidth to the many smaller towns that surround most cities. However, they admit that in the third world the limitation will be backhaul bandwidth, since the third world doesn’t typically have much middle mile fiber outside of cities – so figuring out how to get the bandwidth to the small towns is a bigger challenge than serving the homes within a town. Even in the US, the cost of bandwidth to reach a small town is often the limiting factor on affordably building a broadband solution. In the US this will be a direct competitor to 5G for serving small towns. The Terragraph technology has the advantage of using unlicensed spectrum, but ISPs are going to worry about the squirrelly nature of 60 GHz spectrum.

Assuming that Facebook can find a way to standardize the equipment and get it into mass production, this is another interesting wireless technology to consider. Current point-to-multipoint wireless networks don’t work as well in small towns as they do in rural areas, and this might provide a different way for a WISP to serve a small town. In the third world, however, the limiting factor for many of the candidate markets will be getting backhaul bandwidth to the towns.

Regulating Digital Platforms

It seems like one of the big digital platforms is in the news almost daily – and not in a positive way. Yet there has been almost no talk in the US of trying to regulate digital platforms like Facebook and Google. Europe has taken some tiny steps, but regulation there is still in its infancy. In this country the only existing regulations that apply to the big digital platforms are antitrust laws, some weak privacy rules, and general corporate regulation from the Federal Trade Commission that protects against consumer fraud.

Any time there has been the slightest suggestion of regulating these companies we instantly hear the cry that the Internet must be free and unfettered. This argument harkens back to the early days of the Internet when the Internet was a budding industry and seems irrelevant now that these are some of the biggest corporations in the world that hold huge power in our daily lives.

For example, small businesses can thrive or die due to a change in an algorithm on the Google search engine. Search results are so important to businesses that the billion-dollar SEO industry has grown to help companies manipulate their search results. We’ve recently witnessed the damage that can be done by nefarious parties on platforms like Facebook to influence voting or to shape public opinion around almost any issue.

Our existing weak regulations are of little use in trying to control the behavior of these big companies. For example, in Europe there have been numerous penalties levied against Google for monopoly practices, but the fines haven’t been very effective in controlling Google’s behavior. In this country our primary antitrust tool is to break up monopolies – an extreme remedy that doesn’t make much sense for the Google search engine or Facebook.

Regulating digital platforms would not be easy because one of the key concepts of regulation is understanding a business well enough to craft sensible rules that can throttle abuses. We generally regulate monopolies, and the regulatory rules are intended to protect the public from the worst consequences of monopoly power. It’s not hard to make a case that both Facebook and Google are near-monopolies – but it’s not easy to figure out what we would do to regulate them in any sensible way.

For example, the primary regulation we have for electric companies controls the profits of the monopolies to keep rates affordable. In the airline industry we regulate issues of safety to force the airlines to do the needed maintenance on planes. It’s hard to imagine how to regulate something like a search engine in the same manner when a slight change in a search engine algorithm can have big economic consequences across a wide range of industries. It doesn’t seem possible to somehow regulate the fairness of a web search.

Regulating social media platforms would be even harder. The FCC has occasionally in the past been required by Congress to try to regulate morality issues – such as monitoring bad language or nudity on the public airwaves. Most of the attempts by the FCC to follow these congressional mandates were ineffective and often embarrassing for the agency. Social platforms like Facebook are already struggling to define ways to remove bad actors from their platform and it’s hard to think that government intervention in that process can do much more than to inject politics into an already volatile situation.

One of the problems with trying to regulate digital platforms is defining who they are. The FCC today has separate rules that can be used to regulate telecommunications carriers and media companies. How do you define a digital platform? Facebook, LinkedIn and Snapchat are all social media – they share some characteristics but also have wide differences. Just defining what needs to be regulated is difficult, if not impossible. For example, all of the social media platforms gain much of their value from user-generated content. Would that mean that a site like WordPress that houses this blog is a social media company?

Any regulations would have to start in Congress because there is no other way for a federal agency to be given the authority to regulate the digital platforms. It’s not hard to imagine that any effort out of Congress would concentrate on the wrong issues, much like the rules that made the FCC the monitor of bad language. I know as a user of the digital platforms that I would like to see some regulation in the areas of privacy and use of user data – but beyond that, regulating these companies is a huge challenge.

Should We Regulate Google and Facebook?

I started to write a blog a few weeks ago asking the question of whether we should be regulating big web companies like Google and Facebook. I put that blog on hold due to the furor about Cambridge Analytica and Facebook. The original genesis for the blog was comments made by Michael Powell, the President and CEO of NCTA, the lobbying arm for the big cable companies.

At a speech given at the Cable Congress in Dublin, Ireland, Powell said that edge providers like Facebook, Google, Amazon and Apple “have the size, power and influence of a nation state”. He said that there is a need for antitrust rules to rein in the power of the big web companies. Powell put these comments into a framework of arguing that net neutrality is a weak attempt to regulate web issues and that regulation ought to instead focus on the real problems with the web, like data privacy, technology addiction and fake news.

It was fairly obvious that Powell was trying to deflect attention away from the lawsuits and state legislation that are trying to bring back net neutrality and Title II regulation. Powell did make some good points about the need to regulate big web companies. But in doing so I think he also focuses attention back on the ISPs, which engage in some of the same behavior he sees at the big web providers.

I believe that Powell is right that there needs to be some regulation of the big edge providers. The US has adopted almost no regulations concerning these companies. It’s easy to contrast our lack of laws here with the regulation of these companies in the European Union. While the EU hasn’t tackled everything, it has regulations in place in a number of areas.

The EU has tackled the monopoly power of Google as a search engine and advertiser. I think many people don’t understand the power of Google ads. I recently stayed at a bed and breakfast and the owner told me that his Google ranking had become the most important factor in his ability to function as a business. Any time they change their algorithms and his ranking drops in searches he sees an immediate drop-off in business.

The EU also recently introduced strong privacy regulations for web companies. Under the new rules consumers must opt in to having their data collected and used. In the US web companies are free to use customer information in any manner they choose – and we just saw from the example of Cambridge Analytica how big web companies like Facebook monetize consumer data.

But even the EU regulations are going to have little impact if people grant the big companies the ability to use their data. One thing that these companies know about us is that we willingly give them access to our lives. People take Facebook personality tests without realizing that they are providing a detailed portrait of themselves to marketers. People grant permissions to apps to gather all sorts of information about them, such as a log of every call made from their cellphone. Recent revelations show that people even unknowingly grant the right to some apps to read their personal messages.

So I think Powell is right in that there needs to be some regulations of the big web companies. Probably the most needed regulation is one of total transparency where people are told in a clear manner how their data will be used. I suspect people might be less willing to sign up for a game or app if they understood that the app provider is going to glean all of the call records from their cellphone.

But Powell is off base when he thinks that the actions of the edge providers somehow lets ISPs off the hook for similar regulation. There is one big difference between all of the edge providers and the ISPs. Regardless of how much market power the web companies have, people are not required to use them. I dropped off Facebook over a year ago because of my discomfort from their data gathering.

But you can’t avoid having an ISP. For most of us the only ISP options are one or two of the big ISPs. Most people are in the same boat as me – my choice for ISP is either Charter or AT&T. There is some small percentage of consumers in the US who can instead use a municipal ISP, an independent telco or a small fiber overbuilder that promises not to use their data. But everybody else has little option but to use one of the big ISPs and is then at the mercy of their data gathering practices. We have even fewer choices in the cellular world since four providers serve almost every customer in the country.

I was never convinced that Title II regulation went far enough – but it was better than nothing as a tool to put some constraints on the big ISPs. When the current FCC killed Title II regulation they essentially set the ISPs free to do anything they want – broadband is nearly totally unregulated. I find it ironic that Powell wants to see some rules to curb market abuse by Google and Facebook while saying at the same time that the ISPs ought to be off the hook. The fact is that they all need to be regulated unless we are willing to live with the current state of affairs where ISPs and edge providers are able to use customer data in any manner they choose.

AT&T and Net Neutrality

The big ISPs know that the public is massively in favor of net neutrality. It’s one of those rare topics that polls positively across demographics and party lines. Largely through lobbying efforts of the big ISPs, the FCC not only killed net neutrality regulation but they surprised most of the industry by walking away from regulating broadband at all.

We now see states and cities that are trying to bring back net neutrality in some manner. A few states like California are creating state laws that mimic the old net neutrality rules. Many more states are limiting purchasing for state telecom to ISPs that don’t violate net neutrality. Federal Democratic politicians are creating bills that would reinstate net neutrality and force it back under FCC jurisdiction.

This all has the big ISPs nervous. We certainly see this in the way that the big ISPs are talking about net neutrality. Practically all of them have released statements talking about how much they support the open Internet. These big companies already all have terrible customer service ratings and they don’t want to now be painted as the villains who are trying to kill the web.

A great example is AT&T. The company’s blog posted a letter from Chairman Randall Stephenson that makes it sound like AT&T is pro net neutrality. It fails to mention how the company went to court to overturn the FCC’s net neutrality decision or how much they spent lobbying to get the ruling overturned.

AT&T also took out full-page ads in many major newspapers making the same points. In those ads the company added a new talking point that net neutrality ought to also apply to big web companies like Facebook and Twitter. That is a red herring because web companies, by definition, can’t violate net neutrality since they don’t control the pipe to the customers. Many would love to see privacy rules that stop the web companies from abusing customer data – but that is a separate issue than net neutrality. AT&T seems to be making this point to confuse the public and deflect the blame away from themselves.

Stephenson says that AT&T is in favor of federal legislation that would ensure net neutrality. But what he doesn’t say is that AT&T favors a bill the big companies are pushing that would implement a feel-good, watered-down version of net neutrality. Missing from that proposed law (and from all of AT&T’s positions) is any talk of paid prioritization – one of the three net neutrality principles. AT&T has always wanted paid prioritization. They want to be able to charge Netflix or Google extra to access their networks since those two companies are the largest drivers of web traffic.

In my mind, abuse of paid prioritization can break the web. ISPs already charge their customers enough money to fully cover the cost of the network needed to support broadband. Customers with unlimited data plans, like most landline connections, have the right to download as much content as they want. The idea of AT&T then also charging the content providers for the privilege of reaching customers is a terrible idea for a number of reasons.

Consider Netflix. It’s likely that they would pass any fees paid to AT&T on to customers. And in doing so, AT&T has violated the principle of non-discrimination of traffic, albeit indirectly, by making it more expensive for people to use Netflix. AT&T will always say that they are not the cause of a Netflix rate increase – but AT&T is able to influence the market price of web services, and in doing so discriminate against web traffic.

The other problem with paid prioritization is that it is a barrier to the next Netflix. New companies without Netflix’s huge customer base could not afford the fees to connect to AT&T and other large ISPs. And that barrier will stop the next big web company from launching.

I’ve been predicting for a while that the ISPs are not going to do anything that drastically violates net neutrality. They are going to be cautious about riling up the public and legislators since they understand that Congress could reinstate both net neutrality and broadband regulation at any time. The ISPs are enjoying the most big-company friendly FCC there has ever been, and they are getting everything they want out of it.

But big ISPs like AT&T know that the political and regulatory pendulum can and will likely swing the other way. Their tactic for now seems to be to say they are for net neutrality while still working to make sure it doesn’t actually come back. So we will see more blogs and newspaper ads and support for watered-down legislation. They are clearly hoping the issue loses steam so that the FCC and administration don’t reinstate rules they don’t want. But they realistically know that they are likely to be judged by their actions rather than their words, so I expect them to ease into practices that violate net neutrality in subtle ways that they hope won’t be noticed.

Facebook’s Gigabit WiFi Experiment

Facebook and the city of San Jose, California have been trying for several years to launch a gigabit wireless WiFi network in the downtown area of the city. Branded as Terragraph, the Facebook technology is a deployment of 60 GHz WiFi hotspots that promises data speeds as fast as a gigabit. The delays in the project are a good example of the challenges of launching a new technology and a warning to anybody working on the cutting edge.

The network was first slated to launch by the end of 2016, but is now over a year late. Neither the City nor Facebook will commit to when the network will launch, and they are also no longer making any guarantees of the speeds that will be achieved.

This delayed launch highlights many of the problems faced by a first-generation technology. Facebook first tested an early version of the technology on their Menlo Park campus, but has been having problems making it work in a real-life deployment. The deployment on light and traffic poles has gone much slower than anticipated, and Facebook is having to spend time after each deployment to make sure that traffic lights still work properly.

There are also business factors affecting the launch. Facebook has had turnover on the Terragraph team. The company has also gotten into a dispute over payments with an installation vendor. It’s not unusual to have business-related delays on a first-generation technology launch since the development team is generally tiny and subject to disruption and the distribution and vendor chains are usually not solidified. There is also some disagreement between the City and Facebook on who pays for the core electronics supporting the network.

Facebook had touted that the network would be significantly less expensive than deploying fiber. But the 60 GHz spectrum gets absorbed by oxygen and water vapor, so Facebook is having to deploy transmitters no more than 820 feet apart – a dense network deployment. Without fiber feeding each transmitter the backhaul is being done using wireless spectrum, which is likely to be contributing to the complication of the deployment as well as the lower expected data speeds.
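
A back-of-the-envelope calculation shows what that spacing implies. Assuming a simple square grid (my simplification, not San Jose’s actual pole layout), covering even one square mile takes dozens of pole-mounted nodes:

```python
SPACING_FT = 820   # maximum transmitter spacing reported for the deployment
MILE_FT = 5280     # feet in a mile

# Nodes along one edge of a square mile, then the full grid
nodes_per_edge = MILE_FT // SPACING_FT + 1   # 7
nodes_per_sq_mile = nodes_per_edge ** 2      # 49
print(f"About {nodes_per_sq_mile} nodes per square mile")
```

That’s roughly 50 nodes per square mile before accounting for buildings and other obstructions, which helps explain the 250-node count for a single business district and why the economics depend on the gear being far cheaper than fiber.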

For now, this deployment is in the downtown area and involves 250 pole-mounted nodes to serve a heavy-traffic business district which also sees numerous tourists. The City hopes to eventually find a way to deploy the technology citywide since 12% of the households in the City don’t currently have broadband access – mostly attributed to affordability. The City was hoping to get Google Fiber, but Google canceled plans last year to build in the City.

Facebook says they are still hopeful that they can make the technology work as planned, but that there is still more testing and research needed. At this point there is no specific planned launch date.

This experiment reminds me of other first-generation technology trials in the past. I recall several cities including Manassas, Virginia that deployed broadband over powerline. The technology never delivered speeds much greater than a few Mbps and never was commercially viable. I had several clients that nearly went bankrupt when trying to deploy point-to-point broadband using the LMDS spectrum. And I remember a number of failed trials to deploy citywide municipal WiFi, such as a disastrous trial in Philadelphia, and trials that fizzled in places like Annapolis, Maryland.

I’ve always cautioned my smaller clients to never be guinea pigs for a first-generation technology deployment. I can’t recall a time when a first-generation deployment did not come with scads of problems. I’ve seen clients suffer through first-generation deployments of all of the technologies that are now common – PON fiber, voice softswitches, IPTV, you name it. Vendors are always in a hurry to get a new technology to market and the first few ISPs that deploy a new technology have to suffer through all of the problems that crop up between a laboratory and a real-life deployment. The real victims of a first-generation deployment are often the customers using the network.

The San Jose trial won’t have all of the issues experienced by commercial ISPs since the service will be free to the public. But the City is not immune from the public spurning the technology if it doesn’t work as promised.

The problems experienced by this launch also provide a cautionary tale for the many 5G technology launches promised in 2018 and 2019. Every new launch is going to experience significant problems, which is to be expected when a wireless technology bumps up against the myriad of issues experienced in a real-life deployment. If we have learned anything from the past, we can expect a few of the new launches to fizzle and die while a few of the new technologies and vendors will plow through the problems until the technology works as promised. But we’ve also learned that it’s not going to go smoothly and customers connected to an early 5G network can expect problems.

The End of the Free Web

The web model of using advertising revenues to pay those who create content is quickly breaking down and it’s going to drastically change the free web we are all used to. It feels like a lot longer, but the advertising web model has now been operating for only twenty years. Before that people and companies built web sites and posted content they thought was interesting, but nobody got compensated for anything on the web.

But then a few companies like AOL discovered that companies were willing to pay to place advertising on web pages and the web advertising industry was born. Today news articles and other content on the web are plastered with ads of various kinds. And it is these ads that have funded the new industry of web content providers. There are now numerous web magazines and other websites that are largely funded by the revenues from ads. Most of the news articles you read on the web have been funded by ad revenues.

But ad revenues of this kind are disappearing and this is likely going to mean a major transformation of the web in the near future. Here are some of the main reasons that ad revenues are changing:

  • People have changed the way that they find and read content. Twenty years ago we all had a list of our favorite bookmarked sites and we would peruse those web sites from time to time to catch up on their content. But today the majority of people get their content through an intermediate platform like Facebook, Twitter or Google. These platforms learn about your tastes and they direct articles of interest to you. We no longer search for content, but rather content finds us.
  • And that means that the big platforms like Facebook control the flow of content. A few years ago Facebook responded to user complaints that their feeds were too long and busy by only flowing a percentage of potential content to users. That meant that a person might not see that an old high school friend bought a new puppy, but it also meant that each user on Facebook saw fewer web articles. The impact from this change was dramatic to web publishers, who on average saw an immediate 50% drop in their revenue from Facebook.
  • Meanwhile the big platforms decided that they should keep more advertising revenue and they are now promoting content directly on their platform. For example, Facebook now pays people to create content and Facebook favors this over content created elsewhere – which has further decreased ad revenues.
  • Advertisers have also gotten leery about the web advertising environment. Web advertising has worked using instantaneous auctions where advertisers bid for ad slots on web pages. The highest bidders win the most desirable slots, but the automated selling platforms strive to place every ad somewhere on the web. This resulted in large companies getting grief after finding their ads on unsavory web sites. Big companies were not happy to find that they were advertising on sites promoting racism or radical political views. So the big companies have been redirecting their advertising dollars away from the auction-driven ad system and have instead been placing ads directly on ‘safer’ sites or directly on the big web platforms. Google and Facebook together now collect the majority of web advertising. (There’s a simplified sketch of how these auctions work after this list.)
  • There has also been a huge growth in ad blockers. People use ad blockers in an attempt to block the most obnoxious ads – those that pop up and interrupt reading content. But ad blockers also deprive the sites a user most values of revenue. While only minuscule amounts of money flow from each ad view, it all adds up, and ad blockers are killing huge numbers of views.
  • The last straw is that web browsers are starting to block ads automatically. For example, the new version of Chrome will block ads by default. Soon, anybody using these browsers will be free of auction-generated ads, which will kill even more of the ad revenues that pay those who create the content people want to read.
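
As promised above, here’s a stripped-down sketch of the kind of instantaneous auction an ad exchange runs for each page view. Real exchanges add price floors, audience targeting, and fraud checks, so treat this as a cartoon of the mechanics; the bidder names and numbers are invented:

```python
def run_ad_auction(bids: dict) -> tuple:
    """Second-price auction: the highest bidder wins the ad slot
    but pays the runner-up's bid (a common ad-exchange design)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# One page view, three advertisers bidding (hypothetical dollar amounts)
bids = {"shoe_brand": 2.40, "car_dealer": 1.90, "streaming_app": 1.10}
print(run_ad_auction(bids))  # ('shoe_brand', 1.9)
```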

We are already seeing what this shift means. We are seeing content providers now asking readers to directly contribute to help keep them in business. More drastically we are seeing a lot of the quality content on the web go behind paywalls. That content is only being made available to those willing to subscribe to the content. And we are seeing a drop in new quality content being created since many content creators have been forced to make a living elsewhere.

But the quiet outcome of this is that a huge chunk of web content is going to disappear. This probably means the death of content like “The ten cutest puppies we found this week”, but it also means that writers and journalists that have been compensated from web advertising will disappear from the web. We’ll then be left with the content sponsored by the big platforms like Facebook or content behind paywalls like the Washington Post. And that means the end of the free web that we all love and have come to expect.

Regulating Edge Providers

The year is only half over and already it seems like this might be the most interesting year for regulations we’ve had in my lifetime. It seems like a lot of the telecom regulations we’ve lived with for decades are being reconsidered and that nothing is guaranteed to stay the same.

Perhaps the most novel new idea I’ve heard comes from Steve Bannon in the White House. He believes that Google and Facebook have become so dominant that they should be regulated as utilities. He envisions this being done in much the same manner as is done with telephone and cable companies.

It’s not an entirely novel concept and the European Union has kicked around ideas for curbing the power of big software companies like Microsoft, Google and Facebook. I find the concept to be a little strange coming out of this administration since they seem to be largely anti-regulation and seem to be intent on lowering regulations for both telephone and cable companies. Trying to regulate these companies would have to mean a lot of new regulations.

The first question that popped into my head when I heard this was to ask what a government might regulate with these companies. The European Union went after Google in 2016 for their practices of requiring that cellphones default to the Google search engine and to the Chrome browser. In 2015 they objected that Google used its market power to insist that cellphones use the Android operating system. But these kinds of issues are related to abuse of monopoly power and there are already rules in the US that can tackle these issues, should the government care to do so. I don’t think this is what Bannon has in mind.

It seems like it would be a real challenge to regulate the main business lines of the two companies. You can’t regulate prices because Google and Facebook are free to users. They don’t directly sell anything to their users on their core platforms. If these companies are large it’s because they have a platform that a lot of people want to use. People have a lot of options for using alternate social media platforms or search engines. People seem to use these two companies because they offer something people want – and I really can’t imagine how you can regulate that.

It’s also hard to envision a single country really regulating these entities. We already know what that looks like today by seeing how these big companies operate in China. Probably lesser known is that there are many other countries where the companies offer something different from what we see in the US. My guess is that regulation wouldn’t fundamentally change these companies – but it could force them to modify the public face of the company, something that their many users would probably strongly resent.

I think perhaps the best argument against regulating these two companies is that there is no guarantee that they are going to maintain their current market dominance, or even survive as companies for the long-haul.

The online world has proven to be fickle and people’s collective tastes change over time. Already today US teenagers have largely bailed on Facebook and view it as a platform for their parents and grandparents. I know my daughter only maintains a presence on the platform to communicate with older relatives and that she communicates with her friends elsewhere. Facebook may have over a billion users today, but that doesn’t mean that something better won’t come along over a few decades and erode a lot of that market power.

Google faces an even bigger long-term problem. Google relies on people making searches on computers and cellphones. There are a lot of tech experts predicting that search engines will be passé within only a few decades. They predict that people will begin talking directly to an AI-based personal assistant to perform most of the tasks that cellphones do today.

Both Google and Facebook make most of their money today from advertising. But in a future world where everybody communicates through a smart personal assistant the direct interface between people and web platforms like Google or Facebook might nearly disappear. The advertising aspect of the Google search engine will disappear if your smart personal assistant is making choices for you based upon your preferences. In an AI-driven future both search engines and social media are likely to be replaced by something drastically different.

The conclusion I reach is that government is not really in a position to regulate the ever-changing world of the big edge providers. Facebook or Google may have a dominant position in their market niches today but in a decade could be in a different place. Just go back and make a list of the big technology players of twenty years ago. It would have been a waste of time to regulate AOL, CompuServe or the other platforms that dominated the web then. Those companies were usurped by something people found to be of more value. Regulation, by definition, assumes a predictable world – something that is unlikely in the edge provider world.

Net Neutrality and the Digital Divide

There is an interesting idea floating around the industry that is bound to annoy fans of net neutrality. The idea comes from Roslyn Layton who does telecom research at Aalborg University in Denmark. She served on the FCC Transition team for the new administration.

She envisions zero-rating as the best way to solve the digital divide and to finally bring Internet access to everybody. She says that after decades of not finding any other solutions this might be the only reasonable path to get Internet access to people who can’t afford a monthly subscription.

The idea is simple – there are companies who will provide an advertising-driven broadband connection for free to customers, particularly on a cellphone. It’s not hard to envision big companies like Facebook or Google sponsoring cellphone connections and providing data access to customers who would be a captive audience for their ads and content.

This idea is already working elsewhere. Facebook offers this same service in other countries today under the brand name “Free Basics.” While it certainly costs Facebook to buy the wholesale data connections, they must have done the math and figured that having a new customer on their platform is worth more than the cost. Facebook’s stated goal is to serve most of the billions of people on earth and this is a good way to add a lot of customers. With Free Basics customers get full use of the Facebook platform along with the basic ability to surf the web. However, the free basic service does not allow a user to freely watch streaming video or to do other data-intensive activities that are not part of the Facebook universe – it’s not an unlimited data plan. I can remember similar products in the US back in the dial-up days, when several dial-up providers gave free connections as long as the customers didn’t mind being bombarded by ads.

There are certainly upsides to this. Such a service would provide enough bandwidth for people to use the web for the basics like hunting for a job or doing school work. And users would get unlimited use of the Facebook platform for functions such as messaging or watching Facebook-sponsored video and content. There are still a substantial number of people in the US who can’t afford a broadband subscription and this would provide a basic level of broadband to anybody willing to deal with the ad-heavy environment.

But there are downsides. This idea violates net neutrality. Even if the current FCC does away with net neutrality one has to think that a future FCC will institute something similar. But even with net neutrality rules in place the FCC could make an exception for a service that tackles the digital divide.

The real downside is that this is not the same as the real internet access that others enjoy. Users would be largely trapped inside whatever platform sponsors their product. That could be Facebook or Google, but it could also be an organization with a social or political agenda. Anybody using this kind of free platform would have something less than unfettered Internet access, and they would be limited to whatever the platform sponsor allows them to see or do outside the base platform. At best this could be called curated Internet access, but realistically it’s a platform to give sponsors unlimited access to users.

But I think we have to be realistic that nobody has yet found a solution to the digital divide. The FCC’s Lifeline program barely makes a dent in it. And I’m not aware of any major ISP who has ever found any mechanism to solve the digital divide issue.

While Facebook offers this in many countries around the globe, they received massive pushback when they tried to bring this to India. The Indian government did not want a class of people given a clearly inferior class of Internet connectivity – but the Indian government is also working hard itself to solve the digital divide. There is nobody in the US giving the issue any more than lip service. The issue has been with us since the dial-up days and there has been little progress in the decades since then.

I read some persuasive articles a few years ago when the net neutrality debate was being discussed about this kind of product. There were arguments made that there would be long-term negative ramifications from having a second-class kind of Internet access. The articles worried about the underlying sponsors heavily influencing people with their particular agenda.

But on the flip side, somebody who doesn’t have broadband access probably thinks this is a great idea. It’s unrealistic to think that people have adequate broadband access when they can only get it at the library or a coffee shop. For broadband to benefit somebody it needs to be available when and where they need to use it.

I lean towards thinking this is an idea worth trying. I would hope that there would be more than one or two companies willing to sponsor it, in which case any provider who is too obnoxious or restrictive would not retain customers. People who go to sites like Facebook today are already voluntarily subjected to ads, so this doesn’t seem like too steep a price to pay to get more people connected to the Internet.

The End of Data Privacy?

Congress just passed a law that reverses the privacy rules that were ordered by the prior FCC. Those rules were recently put on hold by the current FCC and this new law makes sure the privacy rules never go into effect. Congress did this to ensure that a future FCC cannot implement privacy rules without Congressional approval. It’s important to note that this law applies equally to both terrestrial and cellular broadband.

On paper this law doesn’t change anything since the FCC privacy rules never went into effect. However, even before the prior FCC adopted the privacy rules they had been confronting ISPs over privacy issues which kept the biggest ISPs from going too far with using customer data. Just the threat of regulation has curbed the worst abuses.

How will the big ISPs be likely to now use customer data? We don’t have to speculate too hard because some of them have already used customer data in various ways in the recent past, all of which seem to be allowable under this new law.

Selling Data to Marketers. This is the number one opportunity for big ISPs. Companies like Facebook and Google have been mining customer data, but they can only do that when somebody is inside their platforms – they have no idea what else you do outside their domains. But your ISP can know every keystroke you make, every email you write, every website you visit, and with a cellphone, every place you’ve been. With deep data mining ISPs can know everything about your on-line life.

We know some of the big ISPs have already been mining customer data. For example, last year AT&T offered, for a premium price, connections that were not monitored. AT&T also has a product that sells masses of customer phone and data usage to federal and local law enforcement. Probably other ISPs have been doing this as well, but this has been a well-guarded secret.

Inserting Ads. This is another big revenue opportunity for the ISPs. The companies will be able to create detailed profiles of customers and then sell targeted advertising to reach specific customers. Today Google and a few other large advertising companies dominate the online advertising business of inserting ads into web sites. With the constraints off, the big ISPs can enter this business since they will have better customer profiles than anybody else. We know that both AT&T and Charter have already been doing this.

Hijacking Customer Searches. Back in 2011 a number of large ISPs including Charter and Frontier were caught hijacking customer DNS searches. When customers hit buttons on web sites or embedded links in articles, the ISPs would sometimes send users to a different web site than the one they thought they were selecting. The FCC told these companies to stop the practice then, but the new law probably allows the practice again.
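
Anyone can check for the DNS flavor of this behavior. An honest DNS resolver answers a lookup for a nonexistent name with an error, while a hijacking resolver “helpfully” returns the address of its own ad or search page instead. A minimal sketch (the probe domain is made up and effectively guaranteed not to exist):

```python
import socket
import uuid

# A random label that should never resolve on an honest DNS resolver
probe = f"{uuid.uuid4().hex}.example-hijack-probe.com"

try:
    addr = socket.gethostbyname(probe)
    print(f"Suspicious: nonexistent name resolved to {addr} - "
          "the resolver may be rewriting failed lookups")
except socket.gaierror:
    print("Normal: the resolver correctly reported the name as nonexistent")
```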

Inserting Supercookies. Verizon Wireless inserted supercookies on cellphones back in 2014. AT&T started to do this as well but quickly backed off when the FCC came down hard on Verizon. These were undetectable and undeletable identifiers that allowed the company to track customer behavior. The advantage of the supercookies was that they bypassed most security schemes since they grabbed customer info before it could be encrypted or sent through a secure connection. For example, this let the company easily track customers with iPhones.
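
The Verizon supercookie was widely reported to be an HTTP header (X-UIDH) injected into unencrypted web requests as they crossed the carrier’s network – which is why nothing on the phone itself could remove it, and why it only worked on plain HTTP traffic. Here’s a sketch of how a user could look for that kind of injection by asking a public header-echo service (httpbin.org is one) what the request looked like when it arrived; the header names checked are the ones reported in press coverage at the time:

```python
import json
import urllib.request

# Plain HTTP on purpose: the injection only happened on unencrypted traffic
with urllib.request.urlopen("http://httpbin.org/headers", timeout=10) as resp:
    seen_by_server = json.load(resp)["headers"]

# Case-insensitive check for carrier tracking headers reported at the time
lowered = {name.lower(): value for name, value in seen_by_server.items()}
for suspect in ("x-uidh", "x-acr"):
    if suspect in lowered:
        print(f"Injected tracking header found: {suspect}={lowered[suspect]}")
    else:
        print(f"No {suspect} header seen by the server")
```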

Pre-installing Tracking Software on Cellphones. And even better than supercookies is putting software on all new phones that directly snags data before it can be encrypted. AT&T, T-Mobile and Sprint all did this in the past – just using a different approach than supercookies. The pre-installed software would log things like every website visited and send the data back to the cellular carriers.