Reauthorizing the FCC

Senator John Thune (Republican from South Dakota and chairman of the Senate Commerce Committee) has introduced a reauthorization bill for the FCC. This hasn’t been done at the FCC for nearly 25 years but is a routine process for most federal agencies. The bill would establish a fresh basis for how the FCC operates and is funded. As long as controversial riders don’t get attached – such as an attempt to undo net neutrality – the bill ought to sail through Congress and get signed by the president.

The reauthorization is only for two years, which signals Congress’s intent to tackle a new telecommunications act in the near future. Here are a few things the bill will do if passed:

Examines FCC Regulatory Fees. The FCC charges regulatory fees and the bill would authorize the GAO to take a look at how those fees line up with the costs of running the FCC. Since this hasn’t been examined in a long time, it’s easy to imagine fees being added, deleted, or modified to bring them in line with the agency’s current activities and costs.

The FCC charges a number of different kinds of fees. For instance there are fees for processing applications for equipment approval, tariff filings, antenna site registration and other similar functions. The FCC also charges a host of annual regulatory fees to cable television systems, wireless carriers, satellite providers, interstate telco providers, media companies and submarine cable network owners. The agency also charges fees to participate in spectrum auctions.

Protects E-Rate Funding. The bill would shield E-Rate payments made to schools and libraries from being lowered due to any other government funding action, such as last year’s sequester.

Changes FCC Transparency Practices. The Commerce Committee oversees a number of other government agencies such as the Consumer Product Safety Commission, the Surface Transportation Board, and the Federal Energy Regulatory Commission. Over the last decade those agencies have changed how their documents are made public and how they report to Congress. The FCC is the only one of these agencies that has not been subject to these transparency best practices, although earlier this year the FCC started to voluntarily make some of these changes on its own.

Clarifies USF Support Rules. In 2004 the USF Joint Board recommended that a household be counted only once when calculating universal service funding. This means that a household can’t count toward a subsidy for both a wireline and a wireless connection. Congress has been renewing this requirement in the FCC’s annual funding appropriation every year since then, and the bill would make the restriction permanent.

Clarifies Terms of Office for Commissioners. The bill would allow a Commissioner to stay in office after their 5-year term until a successor has been appointed. This has sometimes been done in practice, but the bill would make it a permanent rule.

Continues Funding for Spectrum Auctions. The bill assures that the FCC will be sufficiently funded to operate spectrum auctions, which often bring in huge revenues to the general government coffers.

The bill is expected to be voted on in the Commerce Committee by the end of March. The danger between now and becoming a law is the temptation for Republicans to use the bill as a way to change a few FCC rules that they don’t like. The two recent FCC rulings they most hate are net neutrality and the rules that ban states from restricting municipal participation in broadband. Both of those FCC rules are under appeal in federal courts, but a number of congressmen have never stopped publicly complaining about the rulings. If the bill gets riders that try to change those rulings it would probably be impossible to get a presidential signature, since President Obama strongly supports both of those FCC rulings.

New Life for Open Access Networks?

Google Fiber and Huntsville, Alabama just announced an interesting public-private partnership. This is something that’s new for Google. In this partnership Huntsville is going to build and own the network and Google will lease connections on it. Other ISPs will also be able to get onto the network, making this an open access network.

The details of the arrangement were not announced, but a couple of interested parties have already made public records requests about the deal, so we ought to know more soon about how it will work.

There are a number of different ways to operate an open access network. For instance, a city can own only the fiber network and leave it up to ISPs to install the needed fiber drops and the customer electronics. At the other extreme, a city could pay for everything. Since it’s been widely reported that Google uses some proprietary electronics, my guess is that Google will be responsible for the electronics and the city for the rest. But we’ll have to wait a bit to see those details.

If Google does utilize a custom set of electronics it will be interesting to see how the city proposes to handle adding other ISPs to the network. A lot of networks would have a hard time handling different kinds of electronics mixed throughout the network.

The real question everybody is going to want answered is whether the city can make enough revenue from this arrangement to pay for the network. I’ve modeled open access networks many times and about the only way I can see for the network owner to break even with open access is if there are a lot of customers using the network.

And that is the biggest dilemma for owning an open access network. The big open access networks in Europe have a very high overall penetration rate because there are literally a dozen quality ISPs that compete on each network – basically multiple Googles. But if customer penetration rates fall below 50% it gets harder to see a path towards profitability for the network owner.

Fairly simple math can be used to demonstrate the dilemma for open access, as the sketch below illustrates. If the network has a high penetration rate, say 70% or higher as happens in Europe, then the network owner can charge a relatively small fee per connection and still break even. But should that same network have a small penetration, say 30% or 40%, the network owner would have to charge twice as much per connection to recover their costs.
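Here is a minimal sketch of that arithmetic in Python. The cost per home passed and the monthly cost factor are illustrative assumptions, not figures from Huntsville or any actual network; the point is only how the required connection fee scales with the take rate.

```python
# Illustrative open access break-even math. All figures are assumptions,
# not numbers from any actual network.

def required_monthly_fee(cost_per_passing, monthly_cost_factor, penetration):
    """Connection fee the network owner must charge per subscriber to cover
    the monthly cost of the network at a given take rate."""
    monthly_cost_per_passing = cost_per_passing * monthly_cost_factor
    return monthly_cost_per_passing / penetration

# Assume $3,000 of network cost per home passed and a combined monthly cost
# (debt service, operations, replacement) of 0.7% of that capital.
for penetration in (0.70, 0.40, 0.30):
    fee = required_monthly_fee(3000, 0.007, penetration)
    print(f"{penetration:.0%} take rate -> ~${fee:.0f} per connection per month")

# 70% take rate -> ~$30 per connection per month
# 40% take rate -> ~$52 per connection per month
# 30% take rate -> ~$70 per connection per month
```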

The dilemma for network owners is that charging a high connection rate naturally leads the ISPs to cherry pick – that is, to skip customers with low revenues that don’t create a good enough margin over and above the cost of the network connection. To give an example, if a network has a connection charge of $15 per customer, then some ISP in the market is probably going to be willing to use that connection to sell relatively low-price broadband, perhaps at $35 to $40 per month. But if the connection charge is instead $30 per customer, then no ISP is likely to chase those same $40 revenue opportunities and will instead pursue only customers willing to pay more.

This puts network owners in an economic bind. If they charge a low rate but don’t get a lot of customers they don’t make enough revenue to recover their costs. But if they raise the connection charge they force the ISPs to cherry pick and only sell more expensive products, and the network owner still might not sell enough connections to break even. The higher the connection charge, the fewer the potential connections that can be sold. It’s an interesting economic dynamic and one that puts all of the risk on the network owner.

I’m sure the deal is good for Google or they wouldn’t have signed it. It certainly relieves Google of a huge capital outlay. What other cities are going to be most interested in is whether this is a good deal for Huntsville. Most of the open access networks in the country have not done well for the network owner and it will be interesting to see if having a premier tenant like Google will make a difference in the open access dynamic.

Wi-FM

Anytime there are too many WiFi networks in close proximity there is inevitably contention between networks. Such contention will cause a WiFi network to slow down, since the current WiFi standards tell a WiFi device to back off whenever it sees interference, meaning that two neighboring WiFi networks will both back off when there is contention. People with slow WiFi tend to blame their router for their problems, but often it is this contention that is slowing them down.

Using research first done at MIT and recently revived at Northwestern University, engineers have figured out a way to greatly reduce the contention between neighboring WiFi networks with a technology they are dubbing Wi-FM, since it uses a tiny slice of FM spectrum to resolve conflicts.

It’s not hard to imagine situations where WiFi can become congested. For instance, consider somebody living in an apartment building who has other WiFi routers over, under and on all sides, all relatively close. We tend to think of WiFi as being a pretty reliable transmission medium, but when there are many networks all trying to work at the same time there can be a tremendous amount of interference, and a major degradation of throughput.

The Wi-FM technology uses the tiny slice of FM radio spectrum that is reserved for the Radio Data System (RDS). This is the spectrum that is used to transmit the content information about the FM radio programming and is used in your car radio, for example, to tell you the name of the song and the artist you are listening to.

Along with the broadcast information the RDS system also utilizes a time slot technology that allows it to sync up the broadcast information with songs as they change. The Wi-FM technology takes advantage of these time slots and uses the quiet times when there is no broadcast information being sent to monitor the WiFi signals and to direct packets so that they don’t interfere.

WiFi utilizes multiple channels, and if all of the channels are used efficiently then much of the interference between neighboring networks can be avoided. But there is no easy way to direct WiFi devices to change channels on the fly from inside the WiFi spectrum without eating up a lot of the available spectrum in the effort. Using the slice of FM frequency as an external traffic cop allows for the rapid routing of contending packets to different channels and can greatly reduce contention and interference.
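To make the coordination idea concrete, here is a toy simulation of the concept – not the actual Wi-FM protocol or any real driver API. The shared slot counter stands in for the RDS timing signal, and the transmit probability and all-or-nothing collision rule are simplifying assumptions.

```python
# Toy model of the coordination idea behind Wi-FM: a shared out-of-band time
# reference (a slot counter standing in for the FM RDS time slots) lets
# neighboring networks take turns instead of colliding and backing off.

import random

SLOTS = 1000
random.seed(1)

def uncoordinated(num_networks, p_transmit=0.6):
    """Each network transmits independently; if two or more transmit in the
    same slot they collide and every frame in that slot is lost."""
    delivered = 0
    for _ in range(SLOTS):
        senders = [n for n in range(num_networks) if random.random() < p_transmit]
        if len(senders) == 1:  # exactly one sender -> the frame gets through
            delivered += 1
    return delivered

def coordinated(num_networks, p_transmit=0.6):
    """Networks follow a shared schedule: network (slot % N) owns each slot,
    so at most one network transmits per slot and nothing collides."""
    delivered = 0
    for _ in range(SLOTS):
        if random.random() < p_transmit:  # does the slot's owner have a frame?
            delivered += 1
    return delivered

print("uncoordinated:", uncoordinated(3), "frames delivered out of", SLOTS)
print("coordinated:  ", coordinated(3), "frames delivered out of", SLOTS)
```

With three neighbors each offering traffic 60% of the time, the uncoordinated case delivers frames in roughly 29% of slots while the scheduled case delivers about 60% – the same flavor of gain the researchers are after, though the real system has to discover its schedule from the FM quiet periods rather than from a shared counter.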

This technology would probably be best used today in places like apartment buildings where there are multiple WiFi networks. But we are moving into a future where there is likely to be a lot more WiFi interference. For example, there are plans to use a continuous WiFi signal to power cellphones and small IoT sensors. And the cellular industry wants to use WiFi as overflow for LTE calls.

So however busy WiFi is today, chances are that it’s going to get a lot busier in the future. And that means there will be a lot more interference between packets. Wi-FM is just one of many techniques that are probably going to be needed if we want to keep the public spectrum usable in busy places. Otherwise, the interference will just accumulate until it shuts the spectrum down at the busiest times of the day.

PPPs – Issues to Consider

One of the hottest topics in the broadband industry today is Public Private Partnerships (PPPs), where a commercial ISP partners in some manner with a city or county to provide broadband. The trend is probably being nudged forward by the many communities that are becoming desperate for better broadband and are waking up to the fact that they are going to have to put some skin in the game if they want somebody to build it.

Many carriers are used to creating partnerships or joint ventures with other carriers. But many carriers have never considered working with a government entity – be that a city, a county, or perhaps a school district. Working with government entities is definitely different from working with commercial companies, and below I highlight some of the differences to be prepared for.

This list might sound negative and drive a carrier away from thinking about a PPP. But there are strategies for dealing with each of these issues. And generally, the smaller the government entity, the fewer of these issues probably apply. Working with small towns can be fairly easy while big cities might have every issue listed below, and even more. There are some great PPPs in the country, and today there are more communities willing to commit some funding towards paying for a broadband network. So the rewards for working with a PPP can well be worth the extra effort needed to create a successful partnership. I use the word ‘city’ below very generically, and these same things can be true for any municipal entity.

Politics. Government entities, by definition, are political. The main issue with politics is not that you can’t negotiate a good deal with a willing city, but rather the fear that the city can change over time and might eventually turn into a partner very different from the one you originally signed with.

Decision Making. Cities cannot make decisions very quickly. Municipal decision-making follows a very specific process that often requires public meetings and time for public comment. This is not much of a hurdle in getting a new partnership started, but after it is up and running, a city will not be a nimble partner that can make a quick decision when needed.

Public Disclosure. In most places there are public disclosure laws that mean that almost everything you do with a city will be subject to disclosure to the public should somebody want to see it. There are states where ‘commercially sensitive information’ can be protected to a degree, but in some states everything can be made public. Generally even any records of negotiating a PPP might be publicly discoverable.

Purchasing Process. Even when you negotiate a PPP with a city, some of them will feel obligated to then send the whole deal out to the public on an RFP or an RFI to make sure there isn’t a better deal available. Cities are often cautious about agreeing to sole-source deals without having gone through the process to see if they could have negotiated a better arrangement with somebody else. This is often a case of CYA in case the deal ever goes sour later.

Different Goals. It’s always important to remember that a city partner will not have the same goals as a commercial partner. They might care, for example, about making sure that broadband is brought to the poorest parts of the city while the commercial partner cares most about profits and cash flows.

Ownership. Most cities cannot own a share of a corporation or a for-profit partnership. This means that if the city is to be a true partner that some alternate mechanism must be found to compensate them for their contribution to the partnership.

The “Anti-Voices”. Since the process is usually at least somewhat public, you must expect that there will be some citizens who will be loudly vocal against whatever the PPP is doing. This is inevitable because there are some citizens that are against almost everything. This is something that governments are all used to but which might be an eye-opener for an ISP.

You need to keep all of these things in mind when negotiating or working with a municipal partner. At the end of the day a city can be a great partner, and there is at least anecdotal evidence that a broadband venture with a city partner will get more customers than a pure commercial venture – probably because many people like their city governments and trust them to do the right thing.

Our Degrading Networks

Lately I’ve been hearing a lot of stories about rural broadband with a common theme. People say that their broadband has been okay for years and is now suddenly terrible. This seems to be happening more on DSL networks than with other technologies, but you hear this about rural cable networks as well.

There are several issues which contribute to the problem – more customers sharing a local network, increasing data usage by the average customer, and a data backbone feeding the neighborhood that has grown too small for the current usage.

Broadband adoption rates have continued to grow as more and more households find it mandatory to use broadband. And so neighborhoods that once had 50% of homes using a local network will have grown to more than 70%. That alone can stress a local network.

Household broadband usage has also been increasing. A lot of the new usage is streaming video. This video doesn’t just come from Netflix; there is now video all over the web and social media. It’s hard to go to the web today and not encounter video. As more and more customers watch video at the same time they can quickly demand more aggregate data than the network can supply. Where demand has outstripped network capability there is a remedy for most situations – increasing the size of the bandwidth pipe feeding a neighborhood will typically fix the problem.

Let’s look at an example. Consider a neighborhood that has 100 DSL customers and that is fed by a DS3 (45 Mbps). In the days before a lot of streaming video such a neighborhood probably felt like it had good broadband. The odds were against more than a few customers trying to download something really large at exactly the same time, so there was almost always enough bandwidth for everybody.

But today people want to watch streaming video. Netflix recommends at least a 1.5 Mbps continuous stream to watch a video, so up to about 30 households in this theoretical neighborhood could watch Netflix at the same time. The math is not quite that linear, as I will explain below, but you can see how it works. The problem is that it’s not hard to imagine that with 100 homes there would be demand for more than 30 video streams at the same time, particularly since some households want to watch more than one Netflix stream at once.

The problems in this theoretical neighborhood are made worse by what is called packet loss. Packet loss occurs when a busy network is asked to carry more packets than it can handle at one time. When that happens some packets get through, but some are simply dropped. Our current web protocols correct this by having the receiving router ask for retransmission of the missing packets, which are then sent again. As a network gets busy the amount of contention and packet loss increases, and so does the percentage of packets that have to be sent multiple times. Busy networks therefore grow increasingly less efficient. Where this theoretical neighborhood network can theoretically accommodate 30 Netflix streams, in real life it might only handle 20 due to the extra traffic caused by resending lost packets.
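The arithmetic behind those numbers is simple enough to write down. The 25% retransmission overhead below is an illustrative assumption for a badly congested evening, not a measured figure.

```python
# Back-of-the-envelope math for the neighborhood described above.
backbone_mbps = 45.0        # the DS3 feeding the neighborhood
stream_mbps = 1.5           # Netflix's minimum recommended stream
retransmit_overhead = 0.25  # assumed share of capacity lost to resent packets

ideal_streams = backbone_mbps / stream_mbps
effective_streams = backbone_mbps * (1 - retransmit_overhead) / stream_mbps

print(f"Ideal simultaneous streams:    {ideal_streams:.0f}")     # 30
print(f"Streams at a busy, lossy hour: {effective_streams:.0f}")  # ~22
```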

This theoretical network has grown over time from being efficient to being totally inadequate. Customers who were once happy with their speeds are now unable to watch Netflix on an average evening. The network will still function great at 4:00 AM when nobody is trying to use it, but during the times when people want to use it, it will fail more often than not. The only way to fix this theoretical neighborhood is to increase the backbone from 45 Mbps to something much larger. And that requires capital – and we all know that the large telcos are not putting capital into copper neighborhoods.

Cellular companies have been dealing with these growth issues for a number of years now. Cellular networks are seeing annual growth between 60% and 120% per year, meaning that any improvement in the network is quickly eaten up by increased demand. But it’s a much bigger issue to keep upgrading all landline networks. While there are just over 200,000 cell towers in the US, there must be several million local broadband backbone connections into neighborhoods. These range from tiny backbones with a few T1s feeding a few homes up to networks with a few hundred people sharing a larger backbone. Upgrading that many backbone connections means a huge capital outlay is needed to maintain acceptable levels of service.
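To put that growth rate in perspective, a quick compounding calculation shows how fast traffic doubles; the two growth rates are just the endpoints of the range cited above.

```python
import math

# Doubling time implied by 60%-120% annual traffic growth.
for annual_growth in (0.60, 1.20):
    doubling_years = math.log(2) / math.log(1 + annual_growth)
    print(f"{annual_growth:.0%} annual growth -> traffic doubles in "
          f"about {doubling_years:.1f} years")
# 60% growth  -> doubles in about 1.5 years
# 120% growth -> doubles in about 0.9 years
```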

Unfortunately my theoretical neighborhood is not really all that theoretical. The big increase in landline broadband demand is now starting to max out the bandwidth utilization in many neighborhoods. The FCC says that there are 34 million people in the country that don’t have adequate broadband today. But at the rate that neighborhood networks are degrading, the number of households with inadequate broadband is growing rapidly – not getting smaller as the FCC is hoping.

An Idea for Funding Rural Fiber

I’ve been working for a long time with rural broadband and it has become clear to me that there is no way on our current path that we can build fiber everywhere in the US. The borrowing capacity of all of the small telcos and coops is not nearly large enough to fund fiber everywhere. It’s often difficult to have a business case for rural fiber that you can get funded at a bank. It certainly doesn’t look like the federal government has any plans to fund fiber, and even if they did they would probably spend too much by imposing unreasonable rules that would drive up construction costs.

But there are other ways that we could fund fiber everywhere. For example, consider the utility model. Utilities are generally able to get funded because they are guaranteed a rate of return of something like 10% on their investment. It has been these guaranteed returns that have allowed rural telephone companies to borrow the money needed to operate.

If fiber networks had a guaranteed return there would be many commercial lenders and other investors willing to provide the money to build rural fiber networks. There is a huge amount of money available from pension funds, insurance companies, and other large pots of money that would be attracted to a steady 10% return.
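As a rough illustration of how the regulated utility model works, here is the classic revenue-requirement arithmetic. The network size, operating cost, and depreciation life are illustrative assumptions, not figures from any actual rural build.

```python
# Regulated "rate of return" arithmetic: the allowed revenue is the cost of
# running the network plus a guaranteed return on the invested capital.
# All figures are illustrative assumptions.

def revenue_requirement(rate_base, allowed_return, operating_expenses, depreciation):
    """Annual revenue a regulator would allow the network owner to collect."""
    return operating_expenses + depreciation + rate_base * allowed_return

rate_base = 10_000_000           # assumed cost of a rural fiber network
annual_revenue = revenue_requirement(
    rate_base=rate_base,
    allowed_return=0.10,         # the guaranteed ~10% return
    operating_expenses=600_000,  # assumed yearly operating cost
    depreciation=rate_base / 25, # assumed 25-year depreciation life
)
print(f"Allowed annual revenue: ${annual_revenue:,.0f}")  # $2,000,000
```

Any shortfall between that allowed revenue and what local customers actually pay is what the pooled revenues and a broadband USF, described below, would have to cover.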

There is no reason the utility model can’t be applied to rural fiber. The primary characteristic of a regulated utility is that it is a monopoly, or nearly a monopoly. In rural America today there is no real broadband competition. Rural areas are served by a combination of dial-up, satellite, cell phone data, very poor DSL or fixed wireless systems. There are many millions of households with no other options other than dial-up or satellite.

There are two keys to making this idea work – how to pay for it and how to monitor and regulate the earnings of these new broadband fiber monopolies. We already have a historical model of how to pay for this. Rural telephone companies were regulated in this manner for years and a combination of funding mechanisms was used to fund rural telephony. Most obvious were the local revenues collected from customers. I know of rural fiber networks today with 70% to 80% broadband penetration rates, so these networks would get a significant number of customers and local revenue.

Telephone companies have also pooled some of their revenues nationally in a process called cost separations. Phone companies throw all of their interstate revenues into a pot and divvy up the money according to need. There is no reason that some of the broadband revenues couldn’t be pooled the same way.

Finally, the telcos had direct subsidies from the Universal Service Fund to make up any shortfall. The biggest complaint about the Universal Service Fund is that it isn’t paid to companies on the same basis as other revenues, and thus it enriches some companies unfairly and doesn’t give enough to others. But if USF revenues were put into the same pool as other revenues then it would be allocated each year where the financial need warrants it.

The process of pooling revenues has always been a bit complicated in practice, but simple in concept. Each company in a pool calculates its costs according to a specific formula – in the case of telcos, using rules prescribed by the FCC. Then an external pooling body examines those calculations and administers the collection and distribution of the pooled monies. Administering a pool adds only a tiny fractional cost onto the process – and is necessary to make sure everybody plays fair and that there is no fraud.

The basic concept of pooled revenues and some kind of broadband USF could provide the economic basis for obtaining the funding needed to build rural broadband. I would expect that rural telcos and cooperatives would jump onto this idea immediately and build more rural fiber. And there is no reason that the large telcos wouldn’t at least consider this. But since they are driven by Wall Street earnings they might pass on regulated returns.

We know this process can work because it has been in place for decades for rural telephony. The only alternative to this that I can think of is for the federal government to hand out grants to build rural broadband. But I just don’t see that happening in today’s political environment. So rather than wait for the federal government to finally decide to hand out huge grants we ought to take the model we know and get going with it.

This concept would need a boost from the federal government to get going. It would probably require an act of Congress, but that is probably easier to get enacted than a huge grant program. And it is a lot more attractive politically since it provides a way for private investment to fund rural broadband rather than the government. We have to try something, because rural areas are falling quickly onto the wrong side of the digital divide, and that is not good for any of us.

Who Controls Access to Poles?

AT&T has sued the City of Louisville, KY over a recent ordinance that amends the rules for giving pole access to a carrier that wants to build fiber. Louisville is hoping to attract Google or some other fiber overbuilder to the city.

But there has been no announcement that any such deal is in place. It seems the city is trying to make it more attractive for a fiber overbuilder to come to the city and so they passed an ordinance that allows a new fiber builder relatively fast access to poles. The ordinance gives a new fiber builder the right to rearrange or relocate existing wires on poles if the other wire owners on the poles don’t act to do so within 30 days.

AT&T opposes the measure, and their court case says, “The Ordinance thus purports to permit a third party… to temporarily seize AT&T’s property, and to alter or relocate AT&T’s property, without AT&T’s consent and, in most circumstances, without prior notice to AT&T.” They argue that a new attacher will cause service outages and create other problems with their network.

The real issue at hand in the case is whether a city has the right to make rules concerning poles. Today there are basic pole rules issued by the FCC that lay out the requirement that a competitive telecom provider must be given access to existing poles, ducts and conduits. Such rights were provided by the Telecommunications Act of 1996. In reading the FCC rules you might think that a new attacher already has the rights that are being granted by Louisville. The FCC rules allow a new attacher to go ahead and put their wires on poles if the pole owners don’t act quickly enough to process the needed paperwork.

But the rub comes in when there is not a clear space on an existing pole. There are FCC and national electrical standards that require that there be certain spacing between different kinds of cables on poles, mostly to protect the safety of those that have to work in that space. If you’ve ever looked up at poles much you’ll notice that it’s not unusual for the distances between the different utilities to vary widely from pole to pole, meaning that whoever hung the cables was not paying a lot of attention to the spacing.

In the industry, when there is not enough of a gap to accommodate a new attacher, the existing wire owners have to move their wires to create the needed space. If there is not enough space after such a rearrangement then a new taller pole must be erected and the wires all moved to the new pole. The new attacher is on the hook for all rearrangement costs. This process is called ‘make ready’ work and is one of the major costs of getting onto poles in busy urban environments.

The FCC has granted states the right to make additional rules concerning pole attachments, and many states have done so. This lawsuit asks if a city has the same right to make pole attachment rules as is granted to the states – and so this is basically a jurisdiction issue. It’s the kind of issue that probably is going to have to eventually go to the Supreme Court if the loser of this first suit doesn’t like the court’s answer.

To put all of this into perspective, pole issues have often been one of the biggest problems for new telecom providers. Back in the late 1990s I had one client that wanted to get on about 10,000 poles and was told by the local electric company that they were only willing to process paperwork for about a hundred poles per week. I had another client back in that same time frame that was told by a rural electric company that they just didn’t have the time to process any pole attachment requests.

And as you can imagine, when getting on poles bogs down, a new fiber project also bogs down. This can be extremely costly for the company making the expansion because they will have already begun spending the money to build the new network and they will have a pressing need to start generating revenues to pay for it.

Across the country the conditions of poles vary widely. In some cities the poles are relatively short and they are crammed full of wires. In other cities the poles are taller and do not require much make ready work for a new attacher. But when the poles are not ready for a new attacher this can be a costly and time-consuming process. It’s going to be interesting to see if the courts allow a city to get involved in this issue in the same way that states can.

Issues Facing Cellular Networks

Most networks today are under stress due to growing broadband traffic. The networks that are easily the most stressed are cellular networks, and I think there are lessons to be learned in looking at how mobile providers are struggling to keep up with demand. Consider the following current issues faced by cellular network owners:

Traffic Volume Growth. Around the world cellular networks are seeing between 60% and 120% annual growth in data volumes. The problem with that kind of growth is that as soon as any upgrade is made to a part of the network it is consumed by the growth. This kind of growth means constant choke points in the network and problems encountered by customers.

The large cellular companies like Verizon and AT&T are handling this with big annual capital budgets for network improvements. But they will be the first to tell you that even with those expenditures they are only putting band-aids on the problem and are not able to get ahead of the demand curve.

WiFi Offload Not Effective. For years cellular networks have talked about offloading data to WiFi. But the industry estimates are that only between 5% and 15% of cellphone data is being handled by WiFi. This figure does not include usage in homes and offices where the phone user elects to use their own local network, but rather is the traffic that is offloaded when users are outside of their base environment. Finding ways to increase WiFi offload would lower the pressure on mobile networks.

Traffic has Moved Indoors. An astounding 75% of mobile network traffic originates from inside buildings. Historically mobile traffic came predominantly from automobiles and people outside, but the move indoors looks like a permanent new phenomenon driven by video and data usage.

The biggest impact of this shift is that most cellular networks were designed and the towers spaced for outdoor customers and so the towers and radios are in the wrong places to best serve where the volume is greatest today. This trend is the number one driver of micro cell sites that are aimed at relieving congestion for specific locations.

Network Problems Can be Extremely Local. The vagaries of wireless delivery mean that there can be network congestion at a location but no network issues as close as 50 yards away. This makes it very hard to diagnose and fix network issues. Problems can pop up and disappear quickly. A few more large data users than normal can temporarily cripple a given cell site.

Network owners are investigating technologies that will allow customers to pick up a more distant cell site when their closest one is full. Wireless networks have always allowed for this but it has never worked very well in practice. The carriers are looking for a more dynamic process that will find the best way to serve each customer quickly in real time.

Networks are Operating too Many Technologies. It’s not unusual to find a given cell site operating several versions of 3G and 4G and sometimes still even 2G. The average cell site carries 2.2 different technologies, provided by 1.3 different vendors.

Cellular operators are working quickly towards software defined networks that will allow them to upgrade huge numbers of cell sites to a new version of software at the same time. They are also working to separate voice and data to different frequencies making it easier to handle each separately. Finally, the large cellular carriers are looking to develop and manufacture their own custom equipment to cut down on the number of vendors.

Still Too Many Failures. There are still a lot of dropped voice calls, and 80% of them are caused by mobility failures, meaning a failure of the network to handle a customer on the move. 50% of dropped data sessions are due to capacity issues.

Cellular providers are looking for the capacity to more dynamically assign radio resources on the fly at different times of the day. It’s been shown that there are software techniques that can optimize the local network and can reduce failures by as much as 25%.

Two Tales of DSL

I had to chuckle the other day when I saw two articles about DSL that were going in opposite directions. First, AT&T announced that they are phasing the TV product out of U-verse. The same day I saw an announcement from Frontier that they are entering the video-over-DSL business in a big way.

The technology that is being used in both cases is paired DSL. This means putting DSL onto two copper phone lines and then using them together to create one data path. Under ideal conditions, meaning perfect copper, the technology can deliver about 40 Mbps through about 7,000 feet of copper. But of course, there is very little perfect copper in the real world and so actual speeds are typically somewhat slower than that.

In AT&T’s case this change makes sense. They purchased DirecTV and they are going to use the satellite platform to deliver the cable TV signal. This will free up the DSL pipe to be used strictly for data and VoIP, and this will extend the competitive ability of the DSL technology. In most cases the company can deliver 20 Mbps – 40 Mbps to homes that are close enough to a DSLAM. I’m sure that AT&T has been finding it increasingly difficult to deliver data and cable together on one DSL pipe.

The downside for AT&T is that not everybody can get DirecTV. Some people live where they can’t see the satellite and many people in apartments aren’t allowed to stick up a dish. So this isn’t a perfect solution for AT&T, but the increased data speeds probably mean a bigger potential customer base for the U-verse product.

Frontier is coming at this from a different direction. The company has seen declines in revenue as voice customers continue to drop off the network and as they continue to lose DSL customers to cable companies. The company saw a 1% decline in revenue just in the fourth quarter of 2015.

To try to generate new sales the company just announced this week that they are entering the business that AT&T is abandoning. The company launched IPTV in the 4th quarter of last year and announced that they are going to extend this to 40 other markets and pass 3 million customers with the product. They are going to use the same paired DSL as AT&T U-verse and will offer video on the DSL.

Frontier is hoping that this move, which will give them a triple play bundle, will bring in more broadband customers and bolster both revenues and the bottom line. The company also expects to get a nice bump from finally closing on their purchase of Verizon properties in Texas, Florida and California. It is going to be a busy year for the company as they also hope to add 100,000 new broadband customers this year for the first of six years of an expansion funded by the CAF II funds from the FCC.

I have a lot of sympathy for a company like Frontier. They have purchased a lot of rural markets that have been neglected for years by Verizon and which don’t have very good copper. Where many smaller telcos are converting all of their rural areas to fiber, Frontier does not have access to the capital needed to do that, nor would they want to suffer through the earnings hit that comes from spending huge amounts on capital.

But the problem for all DSL providers is that within a few years the demand for broadband speed is going to exceed their capabilities. The statistic that I always like to quote is that household demand for broadband speeds doubles about every three years. This has happened since the earliest days of dial-up. One doesn’t have to chart out too many years in the future when the speeds that can be delivered on DSL are not going to satisfy anybody. The CAF II money is only requiring DSL that will be at least 10 Mbps download, which is already inadequate today for most families. But even the 20 – 40 Mbps paired-DSL is going to feel very slow when cable companies have upgraded to minimum speeds of 100 Mbps or faster. And if that DSL is also carrying video along with the data it’s going to feel really slow. I would not want to be one of the companies still trying to make copper work for broadband a decade from now.
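A quick projection shows why that matters. The 25 Mbps starting point for household demand is an illustrative assumption; the 40 Mbps ceiling is the top of the paired-DSL range discussed above.

```python
# If household demand doubles roughly every three years, a connection that
# feels adequate today falls behind quickly.
need_today_mbps = 25     # assumed typical household need today
dsl_ceiling_mbps = 40    # top end of paired DSL under ideal conditions

for years in range(0, 13, 3):
    need = need_today_mbps * 2 ** (years / 3)
    verdict = "within DSL range" if need <= dsl_ceiling_mbps else "exceeds 40 Mbps DSL"
    print(f"Year {years:>2}: ~{need:>4.0f} Mbps needed ({verdict})")
```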

The 5G Hype

Both AT&T and Verizon have had recent press releases about how they are currently testing 5G cellular data technology, touting how wonderful it’s going to be. The AT&T press release on 5G included the following statements:

Technologies such as millimeter waves, network function virtualization (NFV), and software-defined networking (SDN) will be among the key ingredients for future 5G experiences. AT&T Labs has been working on these technologies for years and has filed dozens of patents connected with them. . . . We expect 5G to deliver speeds 10-100 times faster than today’s average 4G LTE connections. Customers will see speeds measured in gigabits per second, not megabits.

AT&T went on to say that they are testing the technology now and plan to start applying it in a few applications this year in Austin, TX.

This all sounds great, but what are the real facts about 5G? Consider some of the following:

Let’s start with the standard for 5G. It has not yet been written and is expected to be completed by 2018. The Next Generation Mobile Network Alliance (the group developing the standard) says that the standard is going to be aimed at enabling the following:

  • Data rates of several tens of megabits per second should be supported for tens of thousands of users;
  • 1 gigabit per second can be offered simultaneously to workers on the same office floor;
  • Several hundreds of thousands of simultaneous connections to be supported for massive sensor deployments

How does this stack up against AT&T’s claims? First, let’s talk about what 4G does today. According to OpenSignal (which studies the speeds from millions of cellular connections), the average LTE download speeds in the 3rd quarter of last year for the major US carriers were 6 Mbps for Sprint, 8 Mbps for AT&T, and 12 Mbps for both Verizon and T-Mobile.

The standard is aimed at improving average speeds for regular outdoor usage to ‘several tens of megabits per second’, which means speeds of maybe 30 Mbps. That is a great data speed on a cellphone, but it is not 10 to 100 times faster than today’s 4G speeds – it’s a nice incremental bump upward.
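It is worth doing the simple division the press releases skip, using the OpenSignal averages cited above and a ~30 Mbps reading of ‘several tens of megabits per second’.

```python
# Today's average LTE download speeds (OpenSignal, as cited above) versus a
# ~30 Mbps reading of the draft standard's outdoor target.
lte_average_mbps = {"Sprint": 6, "AT&T": 8, "Verizon": 12, "T-Mobile": 12}
outdoor_5g_target_mbps = 30

for carrier, mbps in lte_average_mbps.items():
    print(f"{carrier:>8}: {outdoor_5g_target_mbps / mbps:.1f}x today's average")
# Roughly a 2.5x to 5x improvement -- a nice bump, not 10 to 100 times faster.
```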

Where the hype comes from is the part of the standard that talks about delivering speeds within an office. With 5G that is going to be a very different application, and that very well might achieve gigabit speeds. This is where the millimeter waves come into play. As it turns out, AT&T and Verizon are talking about two totally different technologies and applications, but are purposefully making people think there will be gigabit cellular data everywhere.

The 5G standard is going to allow for the combination of multiple very high frequencies to be used together to create a very high bandwidth data path of a gigabit or more. But there are characteristics of millimeter wavelengths that limit this to indoor use inside the home or office. For one, these frequencies will hardly pass through anything and are blocked by walls, curtains, and to some extent even clear windows. And the signal at these frequencies can only carry large bandwidth a very short distance – at the highest bandwidth perhaps sixty feet. This technology is really going to be a competitor to WiFi, but using cellular frequencies and standards. It will allow the fast transfer of data within a room or an office and would provide a way to distribute something like Google’s gigabit broadband around an office without wires.

But these millimeter waves are not going to bring the same benefits outdoors that they bring indoors. There certainly can be places where somebody could get much faster speeds from 5G outdoors – if they are close to a tower and there are not many other users. But these much faster speeds are not going to work, for example, for somebody in a moving car. The use of multiple antennas for multiple high frequencies is going to require an intricate and complicated antenna array at both the transmitter and the receiver. And in any case the distance limitations and the poor penetration of millimeter frequencies mean this application will never be of much use for widespread outdoor cellphone coverage.

So 5G might mean that you will be able to get really fast speeds inside your home, at a convention center or maybe a hotel, assuming that those places have a very fast internet backbone connection. But the upgrade to what you think of as cellular data is going to be a couple-fold increase in data speeds for the average user. And even that is going to mean slightly smaller coverage circles from a given cell tower than 4G.

The problem with this kind of hype is that it convinces non-technical people that we don’t need to invest in fiber because gigabit cellular service is coming very soon. And nothing could be further from the truth. There will someday be gigabit speeds, but just not in the way that people are hoping for. And both big companies make this sound like it’s right around the corner. There is no doubt that the positive press over this is great for AT&T and Verizon. But don’t buy the hype – because they are not promising what people think they are hearing.