Categories
The Industry

Broadband for Communities

When talking about the benefits of broadband, it’s easy to overlook how broadband has become the glue that brings people and communities together. This is becoming particularly important for rural communities but matters to people everywhere.

Rural communities have been rapidly losing the other forms of media that were their focal point in the past. The newspaper business peaked in 2004 in terms of readership and revenues. Since then, the number of journalists has been cut in half. In the last fifteen years, we’ve lost more than 20% of all newspapers, and many remaining papers are just barely hanging on financially. Over half of the 3,143 counties in the country now have only one newspaper, which in the majority of cases means a small weekly paper. In a recent count, over 200 counties have no newspaper at all.

We’ve also lost a huge number of local radio stations. Lost is not entirely the right word, since many stations haven’t gone off the air but have stopped being local. Local radio stations became endangered when Congress deregulated the radio business in the Telecommunications Act of 1996. Since then, ten parent companies have gobbled up local stations and now own two-thirds of all radio stations. Instead of local news and content, the conglomerates pipe in national content, which also lets them eliminate almost all of the staff at local stations. The big companies seek national advertisers, and communities no longer even hear ads for local businesses.

The Internet has stepped in to fill some of this void. I see this directly when I help communities conduct broadband surveys and watch how they get the word out. A lot of rural communities now have local Facebook forums (I note that I haven’t yet met anybody who has started to use the new name Meta). These local social media groups are popular, and I’ve seen communities drive a thousand survey responses through a local Facebook page. But not every community has taken this approach, probably due to some of the downsides of social media.

In one community I worked in recently, local life revolved around a website created by the local radio station. I’ve worked in others where church websites seem to be the predominant forum for local news.

In cities, we have more ways to keep up with local events. My own city has several newspapers, a few local radio stations, and one local TV station. The trend in cities is for the Internet to deliver hyperlocal news directly to the neighborhood. A lot of people use the Nextdoor app. While it is like other social media in that there’s a lot of gossip and squabbling, it’s also the place to find out about crimes or events that you’d never know about otherwise.

One of the benefits of the Internet that is rarely talked about is the ability to become part of a larger community. I have a good friend I met strictly through the Internet who lives in Salt Lake City. I have a friend whose son is a competitive gamer, and his daily community is other gamers in Japan, Korea, Thailand, Vietnam, China, and Ukraine. My wife has developed friendships across the country while participating in forums on her various hobbies and interests. I’m not sure why we don’t mention ‘finding one’s tribe’ as one of the most important aspects of the Internet.

It’s easy to be cynical and write off social media as mere entertainment, but doing so ignores the real connections people make on the Internet. And yet, I’ve never seen a list of Internet benefits that includes the power of the Internet to provide local news and a sense of community.

Categories
The Industry

Big Internet Outages

Last year I wrote about big disruptive outages on the T-Mobile and CenturyLink networks. Those outages demonstrated how a single circuit failure on a transport route or a single software error in a data center can spread quickly and cause a big outage. I join much of the industry in blaming the spread of these outages on the concentration and centralization of networks, where the nationwide routing of big networks is now controlled by only a handful of technicians in a few locations.

In early October, we saw the granddaddy of all network outages when Facebook, WhatsApp, and Instagram all crashed for much of a day. This was a colossal crash because the Facebook apps have billions of users worldwide. It’s easy to think of Facebook as just a social media company, but its suite of apps is far more than that. Much of the third world uses WhatsApp instead of text messaging to communicate. Small businesses all over the world communicate with customers through Facebook and WhatsApp. The Facebook crash also affected many other apps. Anybody who logs into other apps using Facebook credentials was locked out since Facebook couldn’t verify those credentials.

Facebook blamed the outage on what it called routine software maintenance. I had to laugh the second I saw that announcement and the word ‘routine’. Facebook would have been well advised to have hired a few grizzled telecom technicians when it set up its data centers. We learned in the telecom industry many decades ago that there is no such thing as a routine software upgrade.

The telecom industry has long been at the mercy of telecom vendors that rush hardware and software into the real world without fully testing it. An ISP expects issues and glitches when it is taking part in a technology beta test. But during the heyday of the telecom industry throughout the 80s and 90s, practically every system small telcos operated was in beta test mode. Technology was changing quickly, and vendors rushed new and improved features onto the market without first testing them in real-life networks. The telcos and their end-user customers were the guinea pigs for vendor testing.

I feel bad for the Facebook technician who introduced the software problem that crashed the network. But I can’t blame him for making a mistake – I blame Facebook for not having basic protocols in place that would have made it impossible for the technician to crash the network.

I bet that Facebook has world-class physical security in its data centers. I’m sure the company has redundant fiber transport, layers of physical security to keep out intruders, and fire suppression systems to limit the damage if something goes wrong. But Facebook didn’t learn the basic Telecom 101 lesson that any general manager of a small telco or cable company could have told them. The biggest danger to your network is not from physical damage – that happens only rarely. The biggest danger is from software upgrades.

We learned in the telecom industry never to trust vendor software upgrades. Instead, we implemented protocols where we created a test lab and tried each software upgrade on a tiny piece of the network before risking a faulty upgrade on the whole customer base. (The even better lesson most of us learned was to let the telcos with the smartest technicians in the state tackle an upgrade first before the rest of us considered it.)
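
To make that concrete, here’s a minimal sketch of a staged-rollout gate in Python – the kind of protocol I’m describing. Everything in it is illustrative: the stage sizes, the soak time, and the node methods (apply, error_rate, rollback) are my own assumptions, not anybody’s actual tooling.

import time

STAGES = [0.001, 0.01, 0.10, 1.00]   # fraction of the network per stage: lab-sized first
SOAK_SECONDS = 3600                  # let each stage run a while before judging it

def deploy(upgrade, nodes, fraction):
    # Apply the upgrade to a small slice of nodes; return the nodes touched.
    subset = nodes[: max(1, int(len(nodes) * fraction))]
    for node in subset:
        node.apply(upgrade)
    return subset

def staged_rollout(upgrade, nodes):
    for fraction in STAGES:
        touched = deploy(upgrade, nodes, fraction)
        time.sleep(SOAK_SECONDS)              # soak period before widening
        if any(node.error_rate() > 0.001 for node in touched):
            for node in touched:
                node.rollback(upgrade)        # revert before the fault can spread
            raise RuntimeError(f"upgrade failed at {fraction:.1%} of the network")

The specific numbers don’t matter – the point is that no change reaches the whole network until a small slice has proven it out.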

Shame on Facebook for having a network where a technician can implement a software change directly without first testing it and verifying it a dozen times. It was inevitable that a network without a prudent upgrade and testing process would eventually suffer the big crash we saw. It’s not too late for Facebook – there are still a few telco old-timers around who could teach them to do this right.

Categories
Regulation - What is it Good For?

Can the FCC Regulate Facebook?

At the urging of FCC Chairman Ajit Pai, FCC General Counsel Tom Johnson announced in a recent blog that he believes the FCC has the authority to redefine the immunity shield provided by Section 230 of the FCC’s rules, which comes from the Communications Decency Act of 1996.

Section 230 of the FCC rules is one of the clearest and simplest rules in the FCC code: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In non-legalese, this means that a web company is not liable for third-party content posted on its platform. It is this rule that enables public comments on the web. All social media consists of third-party content. Sites like Yelp and Amazon thrive because the public posts reviews of restaurants and products. Third-party comments appear in many more places on the web, such as the comment section of your local newspaper, or even here on my blog.

Section 230 is essential if we are going to give the public a voice on the web. Without Section 230 protections, Facebook could be sued by anybody who doesn’t like specific content posted on the platform. That’s dangerous because there is somebody who hates every possible political position. If Facebook could be sued for content posted by its billions of users, the platform would quickly have to fold – there is no viable business model that can sustain the defense of huge volumes of lawsuits.

Section 230 was created when web platforms started to allow comments from the general public. The biggest early legal challenge to web content came in 1995 when the Wall Street firm Stratton Oakmont sued Prodigy over a posting by a user that accused the president of Stratton Oakmont of fraud. Stratton Oakmont won the case when the New York Supreme Court ruled that Prodigy was a publisher because the platform exercised some editorial control by moderating content and had a clearly stated set of rules about what was allowable content on the platform. As might be imagined, this court case had a chilling impact on the burgeoning web industry, and fledgling web platforms worried about getting sued over content posted by the public. This prompted Representatives Ron Wyden and Chris Cox to sponsor the bill that became the current Section 230 protections.

Tom Johnson believes the FCC has the authority to interpret Section 230 due to Section 201(b) of the Communications Act of 1934, which confers on the FCC the power to issue rules necessary to carry out the provisions of the Act. He argues that when Congress instructed that the Section 230 rules be added to the FCC code, it implicitly gave the FCC the authority to interpret those rules.

But then Mr. Johnson does an interesting tap dance. He distinguishes between interpreting the Section 230 rules and regulating companies that are protected by these rules. If the FCC ever acts to somehow modify Section 230, the legal arguments will concentrate on this nuance.

The FCC has basically been authorized by Congress to regulate common carriers of telecommunications services, along with a few other responsibilities specifically assigned to the agency.

There is no possible way that the FCC could ever claim that companies like Facebook or Google are common carriers. If they can’t make that argument, then the agency likely has no authority to impose any obligations on these companies, even should it have the authority to ‘interpret’ Section 230. Any such interpretation would be meaningless if the FCC has no authority to impose such interpretations on the companies that rely on Section 230 protections.

What is ironic about this effort is that the current FCC spent a great deal of effort to declassify ISPs from being common carriers. The agency has gone as far as possible to wash its hands of any responsibility for regulating broadband provided by companies like AT&T and Comcast. It will require an amazing set of verbal gymnastics to claim authority over companies like Facebook and Twitter, which have zero characteristics of a common carrier, while at the same time claiming that ISPs are not common carriers.

Categories
Current News

They’re Back

Facebook recently announced it will be introducing smart glasses in collaboration with Ray-Ban. This will be the second major attempt at the technology since Google’s failed introduction of Google Glass in 2012. For those who might not remember, Google Glass was shunned by the general public, and people who wore the glasses in public were quickly dubbed glassholes. People were generally uncomfortable talking to somebody who could be recording the conversation.

It will be interesting to see if the public is any more forgiving now. Pictured with this blog is the Glass 2.0 now being used in factories, but the first-generation public version was equally obvious as a piece of technology.

In terms of technology, 2012 is far behind us, and since then it’s become common for anything done in public to end up recorded by somebody’s smartphone. But that still doesn’t mean people like the idea of being secretly recorded, particularly if the new glasses aren’t as obvious as Google Glass was.

We still don’t know what the technology will look like, but Facebook will try to brand the new glasses as cool. Consider the video ad that accompanied the announcement – who doesn’t want to wear smart glasses like the glasses worn in the past by James Dean, Marilyn Monroe, and Muhammad Ali? Facebook says the new glasses will work by pairing with a smartphone, so perhaps they’ll be a lot less obvious than Google Glass was.

The glasses are a first step towards virtual presence. Facebook CEO Mark Zuckerberg says his vision is to let you virtually invite friends into your home to play cards. However, this first set of glasses won’t include an integrated display capable of generating or viewing holograms. That means the new glasses will likely include the same sorts of features as Google Glass, such as recording what’s in front of you, browsing the web for facts, or dipping into the web to call up information about people you meet. With the advances we’ve made in facial recognition since then, that last item is a lot scarier today than it was a decade ago.

I recall the tech industry excitement about Google Glass and other proposed wearables in the early 2010s. The vision was to be able to seamlessly carry tech with you to create a constant human-computer interface. Google was stunned when the public universally and loudly rejected the idea, because to most people the technology meant an invasion of privacy. Nobody wanted to have a casual conversation with a stranger and then later find it posted on social media.

It’s hard to think that won’t be the reaction again today. Of course, as a baby boomer, I am a lot leerier of technology than the younger generations are. Generation Z seems a lot less concerned about privacy, and it will be interesting to see if young people take to the new technology. We may see one of the biggest generational rifts ever between the first generation that finally embraces wearables and everybody older.

Google Glass never died – it morphed into a pair of glasses used in factories. It allows workers to pull up schematics in real time to compare to the work in progress in front of them. The technology is said to have greatly improved complex tasks like wiring a new jetliner – something we all want to be 100% correct.

I will likely remain leery of the technology. What might eventually bring me around is Zuckerberg’s vision of playing poker with distant friends. I’ve been predicting that telepresence will be the technology that finally takes advantage of gigabit fiber connections. I’m not sure we need glasses that hide their technology to make that work – but I guess this is an early step towards that vision.

Categories
Regulation - What is it Good For?

Can the FCC Regulate Social Media?

There has been a lot of talk lately from the White House and Congress about having the FCC regulate online platforms like Facebook, Twitter, and Google. From a regulatory perspective, it’s an interesting question whether current law allows for the regulation of these companies. It would be ironic if the FCC somehow tried to regulate Facebook after it went through a series of legal gyrations to remove itself from regulating ISPs for the delivery and sale of broadband – something that is far more clearly in its regulatory wheelhouse.

All of the arguments for regulating the web companies center on Section 230 of the FCC rules. Congress had the nascent Internet companies in mind when it wrote Section 230. The view of Congress was that the newly formed Internet needed to be protected from regulation and interference in order to grow. Congress was right about this at the time, and the Internet is possibly the single biggest driver of our current economy. Congress specifically spelled out how web companies should be viewed from a regulatory perspective.

There are two sections of the statute that are most relevant to the question of regulating web companies. The first is Section 230(c)(1), which states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This section of the law is unambiguous and says that an online platform can’t be held liable for content posted by users. This holds true regardless of whether a platform gives users free rein to say anything or heavily moderates what can be said. When Congress wrote Section 230 this was the most important part of the statute, because legislators realized that new web companies would never get off the ground or thrive if they had to constantly respond to lawsuits filed by parties that didn’t like the content posted on their platforms.

Web platforms that provide their own content are protected by the First Amendment as publishers, in exactly the same manner as a newspaper or magazine – but publishers can be sued for violations like defamation. Most of the big web platforms don’t create content, though – they just provide a place for users to publish content. As such, the language cited above completely shields Facebook and Twitter from liability, and also seemingly from regulation.

Another thing that must be considered is the current state of FCC regulation. The courts have given the FCC wide latitude in interpreting its regulatory role. In the latest court ruling that upheld the FCC’s deregulation of broadband and the repeal of net neutrality, the court said that the FCC had the authority to deregulate broadband since the agency could point to Congressional laws that supported that position. However, the court noted that the FCC could just as easily have adopted almost the opposite position, as the Tom Wheeler FCC had done, since there is also Congressional language that supports regulating broadband. The court said that an agency like the FCC is only required to find language in Congressional rules that supports whatever position it takes. Over the years there have been enough conflicting rules from Congress to give the FCC a lot of flexibility in interpreting Congressional intent.

It’s clear that the FCC still has to regulate carriers, which is why landline telephone service is still regulated. In killing Title II regulation, the FCC went through legal gymnastics to declare that broadband is an ‘information service’ and not a carrier service.

Companies like Facebook and Google are clearly also information services. The current FCC would face a huge dilemma if it tried to somehow regulate companies like Facebook or Twitter. To do so would mean declaring that the agency has the authority to regulate information service providers – a claim that would be impossible to make without also reasserting jurisdiction over ISPs and broadband.

The bottom line is that the FCC could perhaps assert some limited form of jurisdiction over the web companies. However, the degree to which it could regulate them would be seriously restricted by the language of Section 230(c)(1). And any attempt to regulate the web companies would give FCC lawyers major heartburn – it would force them to make a 180-degree turn from everything they’ve said and done about regulating broadband since Ajit Pai became Chairman.

The odds are pretty good that this concept will blow over, because the FCC is likely to quietly resist any push to regulate web companies if it means reasserting jurisdiction over information service providers. Of course, Congress could resolve this at any time by writing new bills that explicitly regulate Google without regulating AT&T. But as long as we have a split Congress, that’s never going to happen.

Categories
Current News

Privacy in the Age of COVID-19

The Washington Post reports that a recent poll it conducted shows that 3 out of 5 Americans are unable or unwilling to use the infection-alerting app being developed jointly by Google and Apple. About 1 in 6 adults can’t use the app because they don’t own a smartphone – with the lowest ownership levels among those 65 and older. People with smartphones are evenly split between those willing and unwilling to use such an app.
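
Those three figures hang together, as a quick back-of-the-envelope check shows:

no_phone = 1 / 6                   # adults without a smartphone
unwilling = (1 - no_phone) / 2     # smartphone owners split evenly
print(f"{no_phone + unwilling:.0%}")   # 58% - roughly 3 out of 5 unable or unwilling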

The major concern among those unwilling to use such an app is distrust of the ability or willingness of the two tech companies to protect the privacy of their health data. This unwillingness, particularly after everybody has seen the impact the virus is having on the economy, is disturbing to scientists, who have said that 60% or more of the public would need to use such an app for it to be effective.

This distrust of tech companies is nothing new. In November the Pew Research Center published the results of a survey showing how Americans feel about online privacy. The study’s top-line finding was that more than 60% of Americans think it’s impossible to go through daily life without being tracked by tech companies or the government.

To make that finding worse, almost 70% of adults think that tech companies will use their data in ways they are uncomfortable with. Almost 80% believe that tech companies won’t publicly admit guilt if they are caught misusing people’s data. People don’t feel that data collected about them is secure and 70% believe data is less secure now than it was five years ago.

Almost 80% of people are concerned about what social media sites and advertisers know about them. Probably the most damning result of the survey is that 80% of Americans feel that they have no control over how data is collected about them.

Almost 97% of respondents to the poll said they have been asked to agree to a company’s privacy policy. But only 9% say they always read the privacy policies and 36% have never read them. This is not surprising since the legalese included in most privacy policies requires reading comprehension at a college level.
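
That ‘college level’ claim is easy to put a number on using the standard Flesch-Kincaid grade-level formula. Here’s a rough sketch – the syllable counter is a crude approximation of my own, and real readability tools are more careful:

import re

def syllables(word):
    # Crude estimate: count runs of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    # Flesch-Kincaid: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syl = sum(syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syl / len(words)) - 15.59

legalese = "The parties hereto irrevocably consent to the exclusive jurisdiction thereof."
print(f"grade level: {fk_grade(legalese):.1f}")   # scores well past grade 12

Run a typical privacy policy through something like this and the grade level lands squarely in college territory, which goes a long way toward explaining why so few people read them.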

There is no mystery about why people are worried about the collection of personal data. There have been headlines for several years about the misuse of personal data. The Facebook / Cambridge Analytica scandal showed personal data harvested from a giant tech platform being used to sway voters. The big cellular companies were caught several times selling customer location data that lets whoever buys it track where people travel throughout the day. Phone apps of all sorts report back location data, web browsing data, and shopping habits, and nobody seems able to tell us where that data is sold. Even the supposed privacy advocate Apple lets contractors listen to Siri recordings.

Given that level of distrust of tech companies, it’s not surprising that politicians are reacting to privacy breaches. For example, a bill was introduced in the House last year that would authorize the Federal Trade Commission to fine tech companies as much as 4% of gross revenues for privacy violations.

California recently enacted a new privacy law with strict requirements on web companies that mimic the regulations used in Europe. Web companies must give California consumers the ability to opt out of having their personal information sold to others. Consumers must be given the option to have their data deleted from a site. Consumers must be provided the opportunity to view the data collected about them. And consumers must be shown the identity of third parties that have purchased their data.
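
As a sketch of what compliance means in software terms, those four consumer rights map onto an interface something like this – the names here are mine, not the statute’s:

from abc import ABC, abstractmethod

class ConsumerPrivacyRights(ABC):
    # Hypothetical interface covering the four rights described above.

    @abstractmethod
    def opt_out_of_sale(self, consumer_id: str) -> None:
        # Stop selling this consumer's personal information to third parties.
        ...

    @abstractmethod
    def delete_data(self, consumer_id: str) -> None:
        # Erase the personal data held about this consumer.
        ...

    @abstractmethod
    def export_data(self, consumer_id: str) -> dict:
        # Return the data collected about this consumer for their review.
        ...

    @abstractmethod
    def list_data_buyers(self, consumer_id: str) -> list:
        # Identify the third parties that purchased this consumer's data.
        ...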

The unwillingness to use the COVID-tracking app is probably the societal signal that the hands-off approach we’ve taken to regulating the Internet needs to come to an end. Most hands-off policies were developed twenty years ago, when AOL was conquering the business world and legislators didn’t want to clamp down on a nascent industry. The tech companies are now among the biggest and richest companies in the world, and there is no reason not to regulate some of their worst practices. This won’t be an easy genie to put back in the bottle, but we have to try.

Categories
Regulation - What is it Good For?

New European Copyright Laws

I’ve always kept an eye on European Union regulations because anything that affects big web companies or ISPs in Europe always ends up bleeding over into the US. Recently the EU has been contemplating new rules about online copyrights, and in September the European Parliament took the first step by approving two new sets of copyright rules.

Article 11 is being referred to as the link tax. The legislation would require anybody who carries headlines or snippets of longer articles online to pay a fee to the creator of the original content. Proponents of Article 11 argue that big companies like Google, Facebook, and Twitter take financial advantage of content publishers by listing headlines of news articles with no compensation for the content creators. They argue that these snippets are one of the primary reasons people use social media, where they browse articles suggested by their friends. Opponents of the new law argue that it would be extremely complicated for a web service to track the millions of headlines posted by users, and that the platforms would react by allowing headline snippets only from large publishers. This would effectively shut small or new content creators out of the big platforms – articles would come from only a handful of content sources rather than from tens of thousands of them.

Such a law would certainly squash small content originators like this blog. Many readers find my daily blog articles via short headlines posted on Twitter and LinkedIn every time I release a blog or when one of my readers reposts one. It’s extremely unlikely that the big web platforms would create a relationship with somebody as small as me, and I’d lose my primary way to distribute content on the web. Perhaps the WordPress platform where I publish could make arrangements with the big web services – otherwise its value as a publishing platform would be greatly diminished.

This would also affect me as a user. I mostly follow other people in the telecom and rural broadband space by browsing my feeds on Twitter and LinkedIn to see what those folks find to be of interest. I skip over the majority of headlines and snippets, but I stop and read the news articles I find interesting. The beauty of these platforms is that I select the type of content I browse by deciding who to follow. If the people I follow on Twitter can’t post small and obscure articles, then I would have no further interest in being on Twitter.

The second law, Article 13, is being referred to as the upload filter law. Article 13 would make a web platform liable for any copyright infringement in content posted by users. The restriction would theoretically not apply to content posted by users acting non-commercially.

No one is entirely sure how the big web platforms would react to this law. At one extreme a platform like Facebook or Reddit might block all postings of content, such as video or pictures, for which the user can’t show ownership. This would mean the end of memes and kitten videos and much of the content posted by most Facebook users.

At the other extreme, this might mean that the average person could post such links since they have no commercial benefit from posting a cute cat video. But the law could stop commercial users from posting content that is not their own – a movie reviewer might not be able to include pictures or snippets from a film in a review. I might not be able to post a link to a Washington Post article as CCG Consulting but perhaps I could post it as an individual. While I don’t make a penny from this blog, I might be stopped by web platforms from including links to news articles in my blog.

In January the approval process was halted when 11 countries, including Germany, Italy, and the Netherlands, said they wouldn’t support the final language in these articles. EU law has an interesting difference from US law in that for many EU ordinances each country gets to decide, within reason, how it will implement the law.

The genesis of these laws comes from the observation that the big web companies are making huge money from the content created by others and not fairly compensating content creators. We are seeing a huge crisis for content creators – they used to be compensated through web advertising ‘hits’, but these revenues are disappearing quickly. The EU is trying to rebalance the financial equation and make sure that content creators are fairly compensated – which is the entire purpose of copyright laws.

The legislators are finding out how hard it is to make this work in the online world. Web platforms will always try to work around laws to minimize payments. And platform lawyers are going to be cautious, advising the platforms to act conservatively to avoid massive class action suits.

But there has to be a balance. Content creators deserve to be paid for creating content. Platforms like Facebook, Twitter, Reddit, Instagram, Tumblr, etc. are popular to a large degree because users of the platforms upload content that they didn’t create – the value of the platform is that users get to share things of interest with their friends.

We haven’t heard the end of these efforts, and the parties are still looking for language that the various EU members can accept. If these laws eventually pass, they will raise the same questions here, because the policies the big web platforms adopt will probably change to match the European laws.

Categories
Technology

Facebook Takes a Stab at Wireless Broadband

Facebook has been exploring two technologies in its labs that it hopes will make broadband more accessible for the many communities around the world that have poor or zero broadband. The technology I’m discussing today is Terragraph, which uses an outdoor 60 GHz network to deliver broadband. The other is Project ARIES, an attempt to beef up the throughput of low-bandwidth cellular networks.

The Terragraph technology was originally intended as a way to bring street-level WiFi to high-density urban downtowns. Facebook looked around the globe and saw many large cities that lack basic broadband infrastructure – it’s nearly impossible to fund fiber in third-world urban centers. Terragraph uses 60 GHz spectrum and the 802.11ay standard – a combination originally known as WiGig.

Using 60 GHz and 802.11ay together is an interesting choice for an outdoor application. On a broadcast basis (as a hotspot) the frequency only carries between 35 and 100 feet, depending upon humidity and other factors. The original intended use of WiGig was as an indoor gigabit wireless network for offices. The 60 GHz spectrum won’t pass through anything, so it was intended as a wireless gigabit link within a single room. 60 GHz faces problems as an outdoor technology since the frequency is absorbed by both oxygen and water vapor. But numerous countries have released 60 GHz as unlicensed spectrum, making it available without costly spectrum licenses, and the channels are large enough to deliver real bandwidth even with the physical limitations.
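
Some rough numbers show why the ranges are so short. A simple 60 GHz link budget combines standard free-space path loss with the oxygen absorption peak near 60 GHz, which is around 15 dB per kilometer at sea level – these are textbook approximations, not Terragraph’s actual radio parameters:

import math

C = 3e8  # speed of light in m/s

def free_space_loss_db(distance_m, freq_hz):
    # Friis free-space path loss in dB.
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

def oxygen_loss_db(distance_m, db_per_km=15.0):
    # Atmospheric oxygen absorption near 60 GHz, ~15 dB/km at sea level.
    return db_per_km * distance_m / 1000

for d in (30, 100, 250):   # meters: hotspot ranges vs. a focused backhaul link
    total = free_space_loss_db(d, 60e9) + oxygen_loss_db(d)
    print(f"{d:>4} m: {total:.0f} dB of path loss")

Every extra hundred meters piles on both spreading loss and absorption, which is why even focused antennas only stretch the technology to a couple hundred meters.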

It turns out that a focused beam of 60 GHz spectrum will carry up to about 250 meters when used as backhaul. The urban Terragraph network planned to mount 60 GHz units on downtown poles and buildings. These units would act both as hotspots and as nodes in a backhaul mesh network between units. This is similar to the WiFi networks we saw tried in a few US cities almost twenty years ago. The biggest downside to the urban idea is the lack of cheap handsets that can use this frequency.

Facebook pivoted from the urban idea and completed a trial of the technology in a different network design. Last May Facebook worked with Deutsche Telekom to deploy a fixed Terragraph network in Mikebuda, Hungary. This is a small town of about 150 homes covering 0.4 square kilometers – about 100 acres. That is drastically different from a dense urban deployment, with a housing density far lower than US suburbs – similar to many small rural towns in the US with large lots and empty spaces between homes. The only existing broadband in the town was DSL, serving about 100 customers.

In a fixed mesh network, every unit deployed is part of the mesh – each unit can deliver bandwidth into its home as well as bounce the signal onward to the next home. In Mikebuda the two companies decided that the ideal network would serve 50 homes (I’m not sure why they couldn’t serve all 100 of the DSL customers). The network delivers about 650 Mbps to each home, although each home is limited to about 350 Mbps by the 802.11ac WiFi routers inside the home. This is a big improvement over the 50 Mbps DSL being replaced.

The wireless mesh network is quick to install – the Mikebuda network was up and running to homes within two weeks. The mesh configures itself and can instantly reroute and heal around a bad mesh unit. The biggest local drawback is the need for pure line-of-sight, since 60 GHz can’t tolerate foliage or other impediments, and tree trimming was needed to make this work.
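
As a toy illustration of what ‘reroute and heal’ means, here’s a sketch of a mesh recomputing a path when a unit fails – the topology and the breadth-first search are purely illustrative and far simpler than Terragraph’s real routing:

from collections import deque

def shortest_path(mesh, src, dst, failed=frozenset()):
    # Breadth-first search over healthy nodes; None means the mesh is cut.
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in mesh[path[-1]] - seen - failed:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# Illustrative topology: a fiber point-of-presence plus four homes.
mesh = {
    "pop": {"a", "b"},
    "a": {"pop", "b", "c"},
    "b": {"pop", "a", "d"},
    "c": {"a", "d"},
    "d": {"b", "c"},
}

print(shortest_path(mesh, "pop", "d"))                 # ['pop', 'b', 'd']
print(shortest_path(mesh, "pop", "d", failed={"b"}))   # heals: ['pop', 'a', 'c', 'd']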

Facebook envisions this fixed deployment as a way to bring bandwidth to the many smaller towns that surround most cities. However, the company admits that in the third world the limitation will be backhaul bandwidth, since there typically isn’t much middle-mile fiber outside of cities – figuring out how to get bandwidth to a small town is a bigger challenge than serving the homes within it. Even in the US, the cost of bandwidth to reach a small town is often the limiting factor in affordably building a broadband solution. In the US this will be a direct competitor to 5G for serving small towns. Terragraph has the advantage of using unlicensed spectrum, but ISPs are going to worry about the squirrelly nature of 60 GHz spectrum.

Assuming that Facebook can find a way to standardize the equipment and get it into mass production, this is another interesting wireless technology to consider. Current point-to-multipoint wireless networks don’t work as well in small towns as they do in rural areas, and this might give a WISP a different way to serve a small town. In the third world, however, the limiting factor for many candidate markets will be getting backhaul bandwidth to the towns.

Categories
Regulation - What is it Good For?

Regulating Digital Platforms

It seems like one of the big digital platforms is in the news almost daily – and not in a positive way. Yet there has been almost no talk in the US of trying to regulate digital platforms like Facebook and Google. Europe has taken some small steps, but regulation there is still in its infancy. In this country the only existing regulations that apply to the big digital platforms are antitrust laws, some weak privacy rules, and general corporate regulation from the Federal Trade Commission that protects against consumer fraud.

Any time there is the slightest suggestion of regulating these companies, we instantly hear the cry that the Internet must be free and unfettered. This argument harkens back to the early days of the Internet, when it was a budding industry, and seems irrelevant now that these are some of the biggest corporations in the world, holding huge power over our daily lives.

For example, small businesses can thrive or die due to a change in an algorithm on the Google search engine. Search results are so important to businesses that the billion-dollar SEO industry has grown to help companies manipulate their search results. We’ve recently witnessed the damage that can be done by nefarious parties on platforms like Facebook to influence voting or to shape public opinion around almost any issue.

Our existing weak regulations are of little use in trying to control the behavior of these big companies. For example, in Europe there have been numerous penalties levied against Google for monopoly practices, but the fines haven’t done much to change Google’s behavior. In this country our primary antitrust tool is to break up monopolies – an extreme remedy that doesn’t make much sense for the Google search engine or Facebook.

Regulating digital platforms would not be easy because one of the key concepts of regulation is understanding a business well enough to craft sensible rules that can throttle abuses. We generally regulate monopolies and the regulatory rules are intended to protect the public from the worst consequences of monopoly use. It’s not hard to make a case that both Facebook and Google are near-monopolies – but it’s not easy to figure out what we would do to regulate them in any sensible way.

For example, the primary regulation of electric companies controls the profits of the monopolies to keep rates affordable. In the airline industry we regulate issues of safety to force the airlines to do needed maintenance on planes. It’s hard to imagine regulating something like a search engine in the same manner when a slight change in a search algorithm can have big economic consequences across a wide range of industries. It doesn’t seem possible to somehow regulate the fairness of a web search.

Regulating social media platforms would be even harder. The FCC has occasionally in the past been required by Congress to try to regulate morality issues – such as monitoring bad language or nudity on the public airwaves. Most of the attempts by the FCC to follow these congressional mandates were ineffective and often embarrassing for the agency. Social platforms like Facebook are already struggling to define ways to remove bad actors from their platform and it’s hard to think that government intervention in that process can do much more than to inject politics into an already volatile situation.

One of the problems with trying to regulate digital platforms is defining who they are. The FCC today has separate rules that can be used to regulate telecommunications carriers and media companies. How do you define a digital platform? Facebook, LinkedIn and Snapchat are all social media – they share some characteristics but also have wide differences. Just defining what needs to be regulated is difficult, if not impossible. For example, all of the social media platforms gain much of their value from user-generated content. Would that mean that a site like WordPress that houses this blog is a social media company?

Any regulations would have to start in Congress because there is no other way for a federal agency to be given the authority to regulate the digital platforms. It’s not hard to imagine that any effort out of Congress would concentrate on the wrong issues, much like the rules that made the FCC the monitor of bad language. I know as a user of the digital platforms that I would like to see some regulation in the areas of privacy and use of user data – but beyond that, regulating these companies is a huge challenge.

Categories
Current News, Regulation - What is it Good For?

Should We Regulate Google and Facebook?

I started to write a blog a few weeks ago asking the question of whether we should be regulating big web companies like Google and Facebook. I put that blog on hold due to the furor about Cambridge Analytica and Facebook. The original genesis for the blog was comments made by Michael Powell, the President and CEO of NCTA, the lobbying arm for the big cable companies.

At a speech given at the Cable Congress in Dublin, Ireland, Powell said that edge providers like Facebook, Google, Amazon, and Apple “have the size, power and influence of a nation state”. He said that antitrust rules are needed to rein in the power of the big web companies. Powell framed these comments by arguing that net neutrality is a weak attempt to regulate web issues and that regulation ought to instead focus on the real problems of the web, such as data privacy, technology addiction, and fake news.

It was fairly obvious that Powell was trying to deflect attention from the lawsuits and state legislation that are trying to bring back net neutrality and Title II regulation. But Powell did make some good points about the need to regulate big web companies. In doing so, I think he also focused attention back on ISPs, which engage in some of the same behavior he sees at the big web providers.

I believe that Powell is right that there needs to be some regulation of the big edge providers. The US has made almost no rules concerning these companies. It’s easy to contrast our lack of laws here with the regulation of these companies in the European Union. While the EU hasn’t tackled everything, it has regulations in place in a number of areas.

The EU has tackled the monopoly power of Google as a search engine and advertiser. I think many people don’t understand the power of Google ads. I recently stayed at a bed and breakfast, and the owner told me that his Google ranking had become the most important factor in his ability to function as a business. Any time Google changes its algorithms and his search ranking drops, he sees an immediate drop-off in business.

The EU also recently introduced strong privacy regulations for web companies. Under the new rules, consumers must opt in to having their data collected and used. In the US, web companies are free to use customer information in any manner they choose – and we just saw from the example of Cambridge Analytica how big web companies like Facebook monetize consumer data.

But even the EU regulations are going to have little impact if people simply grant the big companies the right to use their data. One thing these companies know about us is that we willingly give them access to our lives. People take Facebook personality tests without realizing that they are providing a detailed portrait of themselves to marketers. People grant permissions to apps to gather all sorts of information about them, such as a log of every call made from their cellphone. Recent revelations show that people even unknowingly grant some apps the right to read their personal messages.

So I think Powell is right that there needs to be some regulation of the big web companies. Probably the most needed regulation is one of total transparency, where people are told clearly how their data will be used. I suspect people might be less willing to sign up for a game or app if they understood that the app provider is going to glean all of the call records from their cellphone.

But Powell is off base when he thinks that the actions of the edge providers somehow lets ISPs off the hook for similar regulation. There is one big difference between all of the edge providers and the ISPs. Regardless of how much market power the web companies have, people are not required to use them. I dropped off Facebook over a year ago because of my discomfort from their data gathering.

But you can’t avoid having an ISP. For most of us, the only options are one or two of the big ISPs. Most people are in the same boat as me – my choice of ISP is either Charter or AT&T. Some small percentage of consumers in the US can instead use a municipal ISP, an independent telco, or a small fiber overbuilder that promises not to use their data. But everybody else has little option but to use one of the big ISPs and is then at the mercy of their data-gathering practices. We have even fewer choices in the cellular world, where four providers serve almost every customer in the country.

I was never convinced that Title II regulation went far enough – but it was better than nothing as a tool to put some constraints on the big ISPs. When the current FCC killed Title II regulation, it essentially set the ISPs free to do anything they want – broadband is nearly totally unregulated. I find it ironic that Powell wants rules that curb market abuses by Google and Facebook while saying at the same time that the ISPs ought to be off the hook. The fact is that they all need to be regulated, unless we are willing to live with the current state of affairs where ISPs and edge providers can use customer data in any manner they choose.
