Can the FCC Regulate Facebook?

At the urging of FCC Chairman Ajit Pai, FCC General Counsel Tom Johnson announced in a recent blog that he believes the FCC has the authority to redefine the immunity shield provided by Section 230 of the FCC’s rules, which was added to the code by the Communications Decency Act of 1996.

Section 230 of the FCC rules is one of the clearest and simplest rules in the FCC code: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In non-legalese, this means that a web company is not liable for third-party content posted on its platform. It is this rule that enables public comments on the web. All social media consists of third-party content. Sites like Yelp and Amazon thrive because the public posts reviews of restaurants and products. Third-party comments appear in many more places on the web, such as the comment section of your local newspaper, or even here on my blog.

Section 230 is essential if we are going to give the public a voice on the web. Without Section 230 protections, Facebook could be sued by somebody who doesn’t like specific content posted on the platform. That’s dangerous because there is somebody who hates every possible political position. If Facebook could be sued for content posted by its billions of users, the platform would quickly fold – there is no viable business model that can sustain the defense of a huge volume of lawsuits.

Section 230 was created when web platforms started to allow comments from the general public. The biggest early legal challenge to web content came in 1995 when Wall Street firm Stratton Oakmont sued Prodigy over a posting on the platform by a user that accused the president of Stratton Oakmont of fraud. Stratton Oakmont won the case when the New York Supreme Court ruled that Prodigy was a publisher because the platform exercised some editorial control by moderating content and because Prodigy had a clearly stated set of rules about what was allowable content on the Prodigy platform. As might be imagined, this court case had a chilling impact on the burgeoning web industry, and fledgling web platforms worried about getting sued over content posted by the public. This prompted Representatives Ron Wyden and Chris Cox to sponsor the bill that became the current Section 230 protections.

Tom Johnson believes the FCC has the authority to interpret Section 230 due to Section 201(b) of the Communications Act of 1934, which confers on the FCC the power to issue rules necessary to carry out the provisions of the Act. He says that when Congress instructed that the Section 230 rules be added to the FCC code, it implicitly gave the FCC the authority to interpret those rules.

But then Mr. Johnson does an interesting tap dance. He distinguishes between interpreting the Section 230 rules and regulating companies that are protected by these rules. If the FCC ever acts to somehow modify Section 230, the legal arguments will concentrate on this nuance.

Congress has basically authorized the FCC to regulate common carriers of telecommunications services, along with a few other responsibilities specifically assigned to the agency.

There is no possible way that the FCC could ever claim that companies like Facebook or Google are common carriers. If they can’t make that argument, then the agency likely has no authority to impose any obligations on these companies, even should it have the authority to ‘interpret’ Section 230. Any such interpretation would be meaningless if the FCC has no authority to impose such interpretations on the companies that rely on Section 230 protections.

What is ironic about this effort is that the current FCC spent a great deal of effort to reclassify ISPs so that they are no longer common carriers. The agency has gone as far as possible to wash its hands of any responsibility for regulating broadband provided by companies like AT&T and Comcast. It will require an amazing set of verbal gymnastics to claim the authority to extend FCC jurisdiction to companies like Facebook and Twitter, which clearly have zero characteristics of a common carrier, while at the same time claiming that ISPs are not common carriers.

They’re Back

Facebook recently announced it will be introducing smart glasses in collaboration with Ray-Ban. This will be the second major attempt to introduce the technology, after Google’s failed attempt with Google Glass in 2011. For those who might not remember, Google Glass was shunned by the general public, and people who wore the glasses in public were quickly deemed to be glassholes. People were generally uncomfortable talking to somebody who could be recording the conversation.

It will be interesting to see if the public is any more forgiving now. Pictured with this blog is Glass 2.0, which is being used in factories, but the first-generation public version was equally obvious as a piece of technology.

In terms of technology, 2011 is far behind us, and since then it’s become common for anything done in public to end up being recorded by somebody’s smartphone. But that still doesn’t mean that people like the idea of being secretly recorded, particularly if the new glasses aren’t as obvious as Google Glass was.

We still don’t know what the technology will look like, but Facebook will try to brand the new glasses as cool. Consider the video ad that accompanied the announcement of the new glasses – who doesn’t want to wear smart glasses like the glasses worn in the past by James Dean, Marilyn Monroe, and Muhammad Ali? Facebook says the new glasses will function by being paired with a smartphone, so perhaps they’ll be a lot less obvious than Google Glass was.

The glasses are the first step towards virtual presence. Facebook CEO Mark Zuckerberg says his vision is to virtually invite friends into your home to play cards. However, this first set of glasses isn’t going to include an integrated display capable of generating or viewing holograms. That means the new glasses will likely include the same sorts of features as Google Glass, such as recording what’s in front of you, browsing the web for facts, or dipping into the web to call up information about people you meet. With the advances we’ve made in facial recognition since 2011, that last item is a lot scarier today than it was a decade ago.

I recall the tech industry excitement about Google Glass and other proposed wearables back in 2010. The vision was to seamlessly be able to carry tech with you to create a constant human-computer interface. Google was stunned when the public universally and loudly rejected the idea, because to most people the technology meant an invasion of privacy. Nobody wanted to have a casual conversation with a stranger and then later find it posted on social media.

It’s hard to imagine that won’t be the reaction again today. Of course, as a baby boomer, I am a lot leerier of technology than the younger generations are. Generation Z seems a lot less concerned about privacy, and it will be interesting to see if young people take to the new technology. We may see one of the biggest generational rifts ever between the first generation that finally embraces wearables and everybody older.

Google Glass never died – it morphed into a pair of glasses used in factories. It allows workers to pull up schematics in real time to compare against the work-in-progress in front of them. The technology is said to have greatly improved complex tasks like wiring a new jetliner – something we all want to be 100% correct.

I will likely remain leery of the technology. What might eventually bring me around is Zuckerberg’s vision of playing poker with distant friends. I’ve been predicting that telepresence will be the technology that finally takes advantage of gigabit fiber connections. I’m not sure we need glasses that secretly hide the technology to make this work – but I guess this is an early step towards that vision.

Can the FCC Regulate Social Media?

There has been a lot of talk lately from the White House and Congress about having the FCC regulate online platforms like Facebook, Twitter, and Google. From a regulatory perspective, it’s an interesting question whether current law allows for the regulation of these companies. It would be ironic if the FCC somehow tried to regulate Facebook after it went through a series of legal gyrations to remove itself from regulating ISPs for the delivery and sale of broadband – something that is more clearly in its regulatory wheelhouse.

All of the arguments for regulating the web companies center around Section 230 of the FCC rules. Congress had the nascent Internet companies in mind when it wrote Section 230. The view of Congress was that the newly formed Internet needed to be protected from regulation and interference in order to grow. Congress was right about this at the time, and the Internet is possibly the single biggest driver of our current economy. Congress specifically spelled out how web companies should be viewed from a regulatory perspective.

There are two sections of the statute that are most relevant to the question of regulating web companies. The first is Section 230(c)(1), which states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This section of the law is unambiguous and states that an online platform can’t be held liable for content posted by users. This holds true regardless of whether a platform allows users free access to say anything or heavily moderates what can be said. When Congress wrote Section 230 this was the most important part of the statute, because they realized that new web companies would never get off the ground or thrive if they had to constantly respond to lawsuits filed by parties that didn’t like the content posted on their platforms.

Web platforms are protected by First Amendment rights as publishers if they provide their own content, in exactly the same manner as a newspaper or magazine – but publishers can be sued for violating laws like defamation. Most of the big web platforms, however, don’t create content – they just provide a place for users to publish content. As such, the language cited above completely shields Facebook and Twitter from liability, and also seemingly from regulation.

Another thing that must be considered is the current state of FCC regulation. The courts have given the FCC wide latitude in interpreting its regulatory role. In the latest court ruling that upheld the FCC’s deregulation of broadband and the repeal of net neutrality, the court said that the FCC had the authority to deregulate broadband since the agency could point to Congressional laws that supported that position. However, the court noted that the FCC could just as easily have adopted almost the opposite position, as had been done by the Tom Wheeler FCC, since there was also Congressional language that supports regulating broadband. The court said that an agency like the FCC is only required to find language in Congressional rules that support whatever position they take. Over the years there have been enough conflicting rules from Congress to give the FCC a lot of flexibility in interpreting Congressional intent.

It’s clear that the FCC still has to regulate carriers, which is why landline telephone service is still regulated. In killing Title II regulation, the FCC went through legal gymnastics to declare that broadband is an ‘information service’ and not a carrier service.

Companies like Facebook and Google are clearly also information services. This current FCC would be faced with a huge dilemma if they tried to somehow regulate companies like Facebook or Twitter. To do so would mean declaring that the agency has the authority to regulate information service providers – a claim that would be impossible to make without also reasserting jurisdiction over ISPs and broadband.

The bottom line is that the FCC could assert some limited form of jurisdiction over the web companies. However, the degree to which they could regulate them would be seriously restricted by the language in Section 230(c)(1). And any attempt to regulate the web companies would give major heartburn to FCC lawyers. It would force them to make a 180-degree turn from everything they’ve said and done about regulating broadband since Ajit Pai became Chairman.

The odds are pretty good that this concept will blow over because the FCC is likely to quietly resist any push to regulate web companies if that means they would have to reassert jurisdiction over information service providers. Of course, Congress could resolve this at any time by writing new bills that would explicitly regulate Google without regulating AT&T. But as long as we have a split Congress, that’s never going to happen.

Privacy in the Age of COVID-19

The Washington Post reports that a recent poll it conducted shows that 3 out of 5 Americans are unable or unwilling to use the infection-alerting app being developed jointly by Google and Apple. About 1 in 6 adults can’t use the app because they don’t own a smartphone – with the lowest ownership levels among those 65 and older. People with smartphones are evenly split between those willing and unwilling to use such an app.
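
The poll’s fractions roughly reconcile with each other. A quick back-of-the-envelope check (treating the reported “1 in 6” and “evenly split” figures as exact, which they surely are not):

```python
# Back-of-the-envelope check of the poll fractions (assumed, not exact survey figures).
no_smartphone = 1 / 6          # adults who can't use the app (no smartphone)
owners = 1 - no_smartphone     # adults with smartphones
unwilling_owners = owners / 2  # owners split evenly: half are unwilling

unable_or_unwilling = no_smartphone + unwilling_owners
print(f"unable or unwilling: {unable_or_unwilling:.1%}")  # → unable or unwilling: 58.3%
```

That lands at about 58%, close to the 3-in-5 headline figure.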

The major concern among those unwilling to use such an app is distrust of the ability or willingness of the two tech companies to protect the privacy of their health data. This unwillingness, particularly after people have already seen the impact the virus is having on the economy, is disturbing to scientists, who have said that 60% or more of the public would need to use such an app for it to be effective.

This distrust of tech companies is nothing new. In November the Pew Research Center published the results of a survey showing how Americans feel about online privacy. The study’s topline finding was that more than 60% of Americans think it’s impossible to go through daily life without being tracked by tech companies or the government.

To make that finding worse, almost 70% of adults think that tech companies will use their data in ways they are uncomfortable with. Almost 80% believe that tech companies won’t publicly admit guilt if they are caught misusing people’s data. People don’t feel that data collected about them is secure and 70% believe data is less secure now than it was five years ago.

Almost 80% of people are concerned about what social media sites and advertisers know about them. Probably the most damning result of the survey is that 80% of Americans feel that they have no control over how data is collected about them.

Almost 97% of respondents to the poll said they have been asked to agree to a company’s privacy policy. But only 9% say they always read the privacy policies and 36% have never read them. This is not surprising since the legalese included in most privacy policies requires reading comprehension at a college level.

There is no mystery about why people are worried about the collection of personal data. There have been headlines for several years about how personal data has been misused. The Facebook / Cambridge Analytica data scandal showed a giant tech company selling personal data that was used to sway voters. The big cellular companies were caught several times selling customer location data that lets whoever buys it track where people travel throughout the day. Phone apps of all sorts report back location data, web browsing data, and shopping habits, and nobody seems able to tell us where that data is sold. Even the supposed privacy advocate Apple lets contractors listen to Siri recordings.

It’s no surprise, given this level of distrust of tech companies, that it’s becoming common for politicians to react to privacy breaches. For example, a bill introduced in the House last year would authorize the Federal Trade Commission to fine tech companies as much as 4% of their gross revenues for privacy violations.

California recently enacted a new privacy law with strict requirements on web companies that mimic the regulations used in Europe. Web companies must provide California consumers the ability to opt out of having their personal information sold to others. Consumers must be given the option to have their data deleted from a site. Consumers must be provided the opportunity to view the data collected about them. Consumers also must be shown the identity of third parties that have purchased their data.

The unwillingness to use the COVID-tracking app is probably the societal signal that the hands-off approach we’ve had for regulating the Internet needs to come to an end. Most hands-off policies were developed twenty years ago when AOL was conquering the business world and legislators didn’t want to tamp down a nascent industry. The tech companies are now among the biggest and richest companies in the world, and there is no reason not to regulate some of their worst practices. This won’t be an easy genie to put back in the bottle, but we have to try.

New European Copyright Laws

I’ve always kept an eye on European Union regulations because anything that affects big web companies or ISPs in Europe always ends up bleeding over into the US. Recently the EU has been contemplating new rules about online copyrights, and in September the European Parliament took the first step by approving two new sets of copyright rules.

Article 11 is being referred to as the link tax. The legislation would require anybody who carries headlines or snippets of longer articles online to pay a fee to the creator of the original content. Proponents of Article 11 argue that big companies like Google, Facebook, and Twitter take financial advantage of content publishers by listing headlines of news articles with no compensation for the content creators. They argue that these snippets are one of the primary reasons people use social media and browse articles suggested by their friends. Opponents of the new law argue that it would be extremely complicated for a web service to track the millions of headlines posted by users, and that platforms would react to the rule by only allowing headline snippets from large publishers. This would effectively shut small or new content creators out of the big platforms – articles would come from only a handful of content sources rather than from tens of thousands of them.

Such a law would certainly squash small content originators like this blog. Many readers find my daily blog articles via short headlines posted on Twitter and LinkedIn every time I release a blog or when one of my readers reposts it. It’s extremely unlikely that the big web platforms would create a relationship with somebody as small as me, and I’d lose my primary way to distribute content on the web. Perhaps the WordPress platform where I publish could make arrangements with the big web services – otherwise its value as a publishing platform would be greatly diminished.

This would also affect me as a user. I mostly follow other people in the telecom and the rural broadband space by browsing through my feed on Twitter and LinkedIn to see what those folks are finding to be of interest. I skip over the majority of headlines and snippets, but I stop and read news articles I find of interest. The beauty of these platforms is that I automatically select the type of content I get to browse by deciding who I want to follow on the platforms. If the people I follow on Twitter can’t post small and obscure articles, then I would have no further interest in being on Twitter.

The second law, Article 13, is being referred to as the upload filter law. Article 13 would make a web platform liable for any copyright infringement in content posted by users. The restriction would theoretically not apply to content posted by users acting non-commercially.

No one is entirely sure how the big web platforms would react to this law. At one extreme a platform like Facebook or Reddit might block all postings of content, such as video or pictures, for which the user can’t show ownership. This would mean the end of memes and kitten videos and much of the content posted by most Facebook users.

At the other extreme, this might mean that the average person could post such links since they have no commercial benefit from posting a cute cat video. But the law could stop commercial users from posting content that is not their own – a movie reviewer might not be able to include pictures or snippets from a film in a review. I might not be able to post a link to a Washington Post article as CCG Consulting but perhaps I could post it as an individual. While I don’t make a penny from this blog, I might be stopped by web platforms from including links to news articles in my blog.

In January the approval process was halted when 11 countries including Germany, Italy, and the Netherlands said they wouldn’t support the final language in these articles. EU law has an interesting difference from US law in that for many EU ordinances each country gets to decide, within reason, how they will implement the law.

The genesis of these laws comes from the observation that the big web companies are making huge money from the content created by others and not fairly compensating content creators. We are seeing a huge crisis for content creators – they used to be compensated through web advertising ‘hits’, but these revenues are disappearing quickly. The EU is trying to rebalance the financial equation and make sure that content creators are fairly compensated – which is the entire purpose of copyright laws.

The legislators are finding out how hard it is to make this work in the online world. Web platforms will always try to work around laws to minimize payments. Platform lawyers are going to be cautious and advise their companies to minimize exposure to massive class action suits.

But there has to be a balance. Content creators deserve to be paid for creating content. Platforms like Facebook, Twitter, Reddit, Instagram, Tumblr, etc. are popular to a large degree because users of the platforms upload content that they didn’t create – the value of the platform is that users get to share things of interest with their friends.

We haven’t heard the end of these efforts and the parties are still looking for language that the various EU members can accept. If these laws eventually pass they will raise the same questions here because the policies adopted by the big web platforms will probably change to match the European laws.

Facebook Takes a Stab at Wireless Broadband

Facebook has been exploring two technologies in its labs that it hopes will make broadband more accessible for the many communities around the world that have poor or zero broadband. The technology I’m discussing today is Terragraph, which uses an outdoor 60 GHz network to deliver broadband. The other is Project ARIES, an attempt to beef up the throughput of low-bandwidth cellular networks.

The Terragraph technology was originally intended as a way to bring street-level WiFi to high-density urban downtowns. Facebook looked around the globe and saw many large cities that lack basic broadband infrastructure – it’s nearly impossible to fund fiber in third world urban centers. The Terragraph technology uses 60 GHz spectrum and the 802.11ay standard – a technology combination originally branded as WiGig.

Using 60 GHz and 802.11ay together is an interesting choice for an outdoor application. On a broadcast basis (hotspot), this frequency only carries between 35 and 100 feet, depending upon humidity and other factors. The original intended use of this technology was as an indoor gigabit wireless network for offices. The 60 GHz spectrum won’t pass through walls or other obstacles, so it was intended as a wireless gigabit link within a single room. 60 GHz faces problems as an outdoor technology since the frequency is absorbed by both oxygen and water vapor. But numerous countries have released 60 GHz as unlicensed spectrum, making it available without costly spectrum licenses, and the channels are large enough to deliver real bandwidth even with the physical limitations.
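
The short distances come straight from the physics. A rough sketch using the standard free-space path loss formula plus oxygen absorption (the 15 dB/km absorption figure is a typical sea-level value near the 60 GHz peak, assumed here for illustration):

```python
import math

def free_space_path_loss_db(distance_km: float, freq_ghz: float) -> float:
    """Standard free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

OXYGEN_ABSORPTION_DB_PER_KM = 15.0  # rough sea-level value near the 60 GHz O2 absorption peak

for meters in (35, 100, 250):
    km = meters / 1000
    loss = free_space_path_loss_db(km, 60) + OXYGEN_ABSORPTION_DB_PER_KM * km
    print(f"{meters:>4} m: {loss:.1f} dB total loss")
```

At 250 meters the total loss is already around 120 dB, which is why the technology leans on highly focused beams and short hops.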

It turns out that a focused beam of 60 GHz spectrum will carry up to about 250 meters when used for backhaul. The urban Terragraph network planned to mount 60 GHz units on downtown poles and buildings. These units would act both as hotspots and as nodes in a backhaul mesh network between units. This is similar to the municipal WiFi networks tried in a few US cities almost twenty years ago. The biggest downside to the urban idea is the lack of cheap handsets that can use this frequency.

Facebook took a right turn on the urban idea and trialed the technology in a different network design. Last May Facebook worked with Deutsche Telekom to deploy a fixed Terragraph network in Mikebuda, Hungary. This is a small town of about 150 homes covering 0.4 square kilometers – about 100 acres. This is drastically different from a dense urban deployment, with a housing density far lower than US suburbs – it’s similar to many small rural towns in the US, with large lots and empty spaces between homes. The only existing broadband in the town was DSL, serving about 100 customers.

In a fixed mesh network, every unit deployed is part of the mesh – each unit can deliver bandwidth into a home as well as bounce the signal onward to the next home. In Mikebuda the two companies decided that the ideal network would serve 50 homes (it’s not clear why they couldn’t serve all 100 of the DSL customers). The network delivers about 650 Mbps to each home, although each home is limited to about 350 Mbps by the 802.11ac WiFi routers inside the home. This is a big improvement over the 50 Mbps DSL being replaced.

The wireless mesh network is quick to install – the network was up and running to homes within two weeks. The mesh configures itself and can instantly reroute around a failed mesh unit. The biggest local drawback is the need for pure line-of-sight, since 60 GHz can’t tolerate foliage or other impediments, and tree trimming was needed to make this work.
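
The self-healing behavior can be pictured as shortest-path routing over a graph of radio links, recomputed when a unit fails. A toy sketch (the topology is invented for the example, not Mikebuda’s actual layout):

```python
from collections import deque

def shortest_path(links, start, goal, failed=frozenset()):
    """BFS shortest path (fewest hops) over an undirected mesh, skipping failed nodes."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable

# Hypothetical town mesh: a fiber gateway plus rooftop units A-F.
links = [("gateway", "A"), ("A", "B"), ("B", "E"),
         ("gateway", "C"), ("C", "D"), ("D", "F"), ("F", "E")]
print(shortest_path(links, "gateway", "E"))                # normal route: via A and B
print(shortest_path(links, "gateway", "E", failed={"B"}))  # reroutes via C, D, F
```

Real mesh protocols weigh link quality rather than just hop count, but the rerouting idea is the same.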

Facebook envisions this fixed deployment as a way to bring bandwidth to the many smaller towns that surround most cities. However, the company admits that in the third world the limitation will be backhaul bandwidth, since there typically isn’t much middle-mile fiber outside of cities – so figuring out how to get bandwidth to a small town is a bigger challenge than serving the homes within it. Even in the US, the cost of bandwidth to reach a small town is often the limiting factor in affordably building a broadband solution. In the US this will be a direct competitor to 5G for serving small towns. The Terragraph technology has the advantage of using unlicensed spectrum, but ISPs are going to worry about the squirrelly nature of 60 GHz spectrum.

Assuming that Facebook can standardize the equipment and get it into mass production, this is another interesting wireless technology to consider. Current point-to-multipoint wireless networks don’t work as well in small towns as they do in rural areas, and this might provide a different way for a WISP to serve a small town. In the third world, however, the limiting factor for many of the candidate markets will be getting backhaul bandwidth to the towns.

Regulating Digital Platforms

It seems like one of the big digital platforms is in the news almost daily – and not in a positive way. Yet there has been almost no talk in the US of regulating digital platforms like Facebook and Google. Europe has taken some tiny steps, but regulation there is still in its infancy. In this country the only existing regulations that apply to the big digital platforms are antitrust laws, some weak privacy rules, and general corporate oversight from the Federal Trade Commission that protects against consumer fraud.

Any time there is the slightest suggestion of regulating these companies, we instantly hear the cry that the Internet must be free and unfettered. This argument harkens back to the early days when the Internet was a budding industry, and it seems irrelevant now that these are some of the biggest corporations in the world, holding huge power in our daily lives.

For example, small businesses can thrive or die due to a change in an algorithm on the Google search engine. Search results are so important to businesses that the billion-dollar SEO industry has grown to help companies manipulate their search results. We’ve recently witnessed the damage that can be done by nefarious parties on platforms like Facebook to influence voting or to shape public opinion around almost any issue.

Our existing weak regulations are of little use in trying to control the behavior of these big companies. For example, in Europe there have been numerous penalties levied against Google for monopoly practices, but the fines haven’t been very effective in controlling Google’s behavior. In this country our primary anti-trust tool is to break up monopolies – an extreme remedy that doesn’t make much sense for the Google search engine or Facebook.

Regulating digital platforms would not be easy, because one of the key requirements of regulation is understanding a business well enough to craft sensible rules that can throttle abuses. We generally regulate monopolies, and the regulatory rules are intended to protect the public from the worst consequences of monopoly power. It’s not hard to make the case that both Facebook and Google are near-monopolies – but it’s not easy to figure out how to regulate them in any sensible way.

For example, the primary way we regulate electric companies is to control the profits of the monopolies to keep rates affordable. In the airline industry we regulate safety to force the airlines to do the needed maintenance on planes. It’s hard to imagine regulating something like a search engine in the same manner, when a slight change in a search algorithm can have big economic consequences across a wide range of industries. It doesn’t seem possible to somehow regulate the fairness of a web search.

Regulating social media platforms would be even harder. The FCC has occasionally in the past been required by Congress to try to regulate morality issues – such as monitoring bad language or nudity on the public airwaves. Most of the attempts by the FCC to follow these congressional mandates were ineffective and often embarrassing for the agency. Social platforms like Facebook are already struggling to define ways to remove bad actors from their platform and it’s hard to think that government intervention in that process can do much more than to inject politics into an already volatile situation.

One of the problems with trying to regulate digital platforms is defining who they are. The FCC today has separate rules that can be used to regulate telecommunications carriers and media companies. How do you define a digital platform? Facebook, LinkedIn and Snapchat are all social media – they share some characteristics but also have wide differences. Just defining what needs to be regulated is difficult, if not impossible. For example, all of the social media platforms gain much of their value from user-generated content. Would that mean that a site like WordPress that houses this blog is a social media company?

Any regulations would have to start in Congress because there is no other way for a federal agency to be given the authority to regulate the digital platforms. It’s not hard to imagine that any effort out of Congress would concentrate on the wrong issues, much like the rules that made the FCC the monitor of bad language. I know as a user of the digital platforms that I would like to see some regulation in the areas of privacy and use of user data – but beyond that, regulating these companies is a huge challenge.

Should We Regulate Google and Facebook?

I started to write a blog a few weeks ago asking the question of whether we should be regulating big web companies like Google and Facebook. I put that blog on hold due to the furor about Cambridge Analytica and Facebook. The original genesis for the blog was comments made by Michael Powell, the President and CEO of NCTA, the lobbying arm for the big cable companies.

At a speech given at the Cable Congress in Dublin, Ireland Powell said that edge providers like Facebook, Google, Amazon and Apple “have the size, power and influence of a nation state”. He said that there is a need for antitrust rules to rein in the power of the big web companies. Powell put these comments into a framework of arguing that net neutrality is a weak attempt to regulate web issues and that regulation ought to instead focus on the real problems with the web, like data privacy, technology addiction and fake news.

It was fairly obvious that Powell was trying to deflect attention away from the lawsuits and state legislation that are trying to bring back net neutrality and Title II regulation. Powell did make some good points about the need to regulate big web companies. But in doing so, I think he also focused attention back on ISPs for some of the same behavior he sees in the big web providers.

I believe that Powell is right that there needs to be some regulation of the big edge providers. The US has made almost no regulations concerning these companies. It’s easy to contrast our lack of laws here to the regulations of these companies in the European Union. While the EU hasn’t tackled everything, they have regulations in place in a number of areas.

The EU has tackled the monopoly power of Google as a search engine and advertiser. I think many people don’t understand the power of Google ads. I recently stayed at a bed and breakfast and the owner told me that his Google ranking had become the most important factor in his ability to function as a business. Any time Google changes its algorithms and his ranking drops in searches, he sees an immediate drop-off in business.

The EU also recently introduced strong privacy regulations for web companies. Under the new rules consumers must opt in to having their data collected and used. In the US web companies are free to use customer information in any manner they choose – and we just saw from the example of Cambridge Analytica how big web companies like Facebook monetize consumer data.

But even the EU regulations are going to have little impact if people grant the big companies the ability to use their data. One thing that these companies know about us is that we willingly give them access to our lives. People take Facebook personality tests without realizing that they are providing a detailed portrait of themselves to marketers. People grant permissions to apps to gather all sorts of information about them, such as a log of every call made from their cellphone. Recent revelations show that people even unknowingly grant the right to some apps to read their personal messages.

So I think Powell is right that there needs to be some regulation of the big web companies. Probably the most needed regulation is one of total transparency where people are told in a clear manner how their data will be used. I suspect people might be less willing to sign up for a game or app if they understood that the app provider is going to glean all of the call records from their cellphone.

But Powell is off base when he thinks that the actions of the edge providers somehow lets ISPs off the hook for similar regulation. There is one big difference between all of the edge providers and the ISPs. Regardless of how much market power the web companies have, people are not required to use them. I dropped off Facebook over a year ago because of my discomfort from their data gathering.

But you can’t avoid having an ISP. For most of us the only ISP options are one or two of the big ISPs. Most people are in the same boat as me – my choice for ISP is either Charter or AT&T. There is some small percentage of consumers in the US who can instead use a municipal ISP, an independent telco or a small fiber overbuilder that promises not to use their data. But everybody else has little option but to use one of the big ISPs and is then at the mercy of their data-gathering practices. We have even fewer choices in the cellular world since four providers serve almost every customer in the country.

I was never convinced that Title II regulation went far enough – but it was better than nothing as a tool to put some constraints on the big ISPs. When the current FCC killed Title II regulation they essentially set the ISPs free to do anything they want – broadband is nearly totally unregulated. I find it ironic that Powell wants to see some rules that curb market abuse by Google and Facebook while saying at the same time that the ISPs ought to be off the hook. The fact is that they all need to be regulated unless we are willing to live with the current state of affairs where ISPs and edge providers are able to use customer data in any manner they choose.

AT&T and Net Neutrality

The big ISPs know that the public is massively in favor of net neutrality. It’s one of those rare topics that polls positively across demographics and party lines. Largely through lobbying efforts of the big ISPs, the FCC not only killed net neutrality regulation but they surprised most of the industry by walking away from regulating broadband at all.

We now see states and cities that are trying to bring back net neutrality in some manner. A few states like California are creating state laws that mimic the old net neutrality rules. Many more states are limiting purchasing for state telecom to ISPs that don’t violate net neutrality. Federal Democratic politicians are creating bills that would reinstate net neutrality and force it back under FCC jurisdiction.

This all has the big ISPs nervous. We certainly see this in the way that the big ISPs are talking about net neutrality. Practically all of them have released statements talking about how much they support the open Internet. These big companies already all have terrible customer service ratings and they don’t want to now be painted as the villains who are trying to kill the web.

A great example is AT&T. The company’s blog posted a letter from Chairman Randall Stephenson that makes it sound like AT&T is pro net neutrality. It fails to mention how the company went to court to overturn the FCC’s net neutrality decision or how much they spent lobbying to get the ruling overturned.

AT&T also took out full-page ads in many major newspapers making the same points. In those ads the company added a new talking point that net neutrality ought to also apply to big web companies like Facebook and Twitter. That is a red herring because web companies, by definition, can’t violate net neutrality since they don’t control the pipe to the customers. Many would love to see privacy rules that stop the web companies from abusing customer data – but that is a separate issue from net neutrality. AT&T seems to be making this point to confuse the public and deflect the blame away from themselves.

Stephenson says that AT&T is in favor of federal legislation that would ensure net neutrality. But what he doesn’t say is that AT&T favors a bill the big companies are pushing that would implement a feel-good, watered-down version of net neutrality. Missing from that proposed law (and from all of AT&T’s positions) is any talk of paid prioritization – one of the three net neutrality principles. AT&T has always wanted paid prioritization. They want to be able to charge Netflix or Google extra to access their networks since those two companies are the largest drivers of web traffic.

In my mind, abuse of paid prioritization can break the web. ISPs already charge their customers enough money to fully cover the cost of the network needed to support broadband. Customers with unlimited data plans, like most landline connections, have the right to download as much content as they want. The idea of AT&T then also charging the content providers for the privilege of getting to customers is a terrible idea for a number of reasons.

Consider Netflix. It’s likely that they would pass any fees paid to AT&T on to customers. And in doing so, AT&T has violated the principle of non-discrimination of traffic, albeit indirectly, by making it more expensive for people to use Netflix. AT&T will always say that they are not the cause of a Netflix rate increase – but AT&T is able to influence the market price of web services, and in doing so discriminate against web traffic.

The other problem with paid prioritization is that it is a barrier to the next Netflix. New companies without Netflix’s huge customer base could not afford the fees to connect to AT&T and other large ISPs. And that barrier will stop the next big web company from launching.

I’ve been predicting for a while that the ISPs are not going to do anything that drastically violates net neutrality. They are going to be cautious about riling up the public and legislators since they understand that Congress could reinstate both net neutrality and broadband regulation at any time. The ISPs are enjoying the most big-company friendly FCC there has ever been, and they are getting everything they want out of it.

But big ISPs like AT&T know that the political and regulatory pendulum can and will likely swing the other way. Their tactic for now seems to be to say they are for net neutrality while still working to make sure it doesn’t actually come back. So we will see more blogs and newspaper ads and support for watered-down legislation. They are clearly hoping the issue loses steam so that the FCC and administration don’t reinstate rules they don’t want. But they realistically know that they are likely to be judged by their actions rather than their words, so I expect them to ease into practices that violate net neutrality in subtle ways that they hope won’t be noticed.

Facebook’s Gigabit WiFi Experiment

Facebook and the city of San Jose, California have been trying for several years to launch a gigabit WiFi network in the downtown area of the city. Branded as Terragraph, the Facebook technology is a deployment of 60 GHz WiFi hotspots that promises data speeds as fast as a gigabit. The delays in the project are a good example of the challenges of launching a new technology and are a warning to anybody working on the cutting edge.

The network was first slated to launch by the end of 2016, but is now over a year late. Neither the City nor Facebook will commit to when the network will be launched, and they are also no longer making any guarantees of the speeds that will be achieved.

This delayed launch highlights many of the problems faced by a first-generation technology. Facebook first tested an early version of the technology on their Menlo Park campus, but has been having problems making it work in a real-life deployment. The deployment on light and traffic poles has gone much slower than anticipated, and Facebook is having to spend time after each deployment to make sure that traffic lights still work properly.

There are also business factors affecting the launch. Facebook has had turnover on the Terragraph team. The company has also gotten into a dispute over payments with an installation vendor. It’s not unusual to have business-related delays on a first-generation technology launch since the development team is generally tiny and subject to disruption and the distribution and vendor chains are usually not solidified. There is also some disagreement between the City and Facebook on who pays for the core electronics supporting the network.

Facebook had touted that the network would be significantly less expensive than deploying fiber. But the 60 GHz spectrum gets absorbed by oxygen and water vapor, so Facebook is having to deploy transmitters no more than 820 feet apart – a dense network deployment. Without fiber feeding each transmitter the backhaul is being done using wireless spectrum, which is likely to be contributing to the complication of the deployment as well as the lower expected data speeds.
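To get a rough feel for why the nodes have to sit so close together, here is a back-of-the-envelope calculation of my own (not Facebook's engineering numbers) that combines standard free-space path loss at 60 GHz with the commonly cited oxygen absorption of roughly 15 dB/km near that frequency:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 3.0e8  # speed of light, m/s
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# Oxygen absorption near the 60 GHz resonance peak, roughly 15 dB/km at sea level
OXYGEN_ABSORPTION_DB_PER_KM = 15.0

def total_path_loss_db(distance_m: float, freq_hz: float = 60e9) -> float:
    """Free-space loss plus the extra oxygen absorption over the path."""
    return fspl_db(distance_m, freq_hz) + OXYGEN_ABSORPTION_DB_PER_KM * distance_m / 1000.0

# 820 feet is about 250 meters, the maximum node spacing cited for the trial
print(round(total_path_loss_db(250), 1))  # → 119.7 dB
```

At that 250-meter spacing the oxygen absorption only adds a few dB, but the free-space loss at 60 GHz is already about 116 dB – roughly 28 dB worse than the same distance at 2.4 GHz – which is why the technology needs dense node spacing and high-gain antennas.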

For now, this deployment is in the downtown area and involves 250 pole-mounted nodes to serve a heavy-traffic business district which also sees numerous tourists. The City hopes to eventually find a way to deploy the technology citywide since 12% of the households in the City don’t currently have broadband access – mostly attributed to affordability. The City was hoping to get Google Fiber, but Google canceled plans last year to build in the City.

Facebook says they are still hopeful that they can make the technology work as planned, but that there is still more testing and research needed. At this point there is no specific planned launch date.

This experiment reminds me of other first-generation technology trials in the past. I recall several cities including Manassas, Virginia that deployed broadband over powerline. The technology never delivered speeds much greater than a few Mbps and never was commercially viable. I had several clients that nearly went bankrupt when trying to deploy point-to-point broadband using the LMDS spectrum. And I remember a number of failed trials to deploy citywide municipal WiFi, such as a disastrous trial in Philadelphia, and trials that fizzled in places like Annapolis, Maryland.

I’ve always cautioned my smaller clients to never be guinea pigs for a first-generation technology deployment. I can’t recall a time when a first-generation deployment did not come with scads of problems. I’ve seen clients suffer through first-generation deployments of all of the technologies that are now common – PON fiber, voice softswitches, IPTV, you name it. Vendors are always in a hurry to get a new technology to market and the first few ISPs that deploy a new technology have to suffer through all of the problems that crop up between a laboratory and a real-life deployment. The real victims of a first-generation deployment are often the customers using the network.

The San Jose trial won’t have all of the issues experienced by commercial ISPs since the service will be free to the public. But the City is not immune from the public spurning the technology if it doesn’t work as promised.

The problems experienced by this launch also provide a cautionary tale for the many 5G technology launches promised in 2018 and 2019. Every new launch is going to experience significant problems, which is to be expected when a wireless technology bumps up against the myriad of issues experienced in a real-life deployment. If we have learned anything from the past, we can expect a few of the new launches to fizzle and die while a few of the new technologies and vendors will plow through the problems until the technology works as promised. But we’ve also learned that it’s not going to go smoothly, and customers connected to an early 5G network can expect problems.