Femtocells Instead of Small Cells?

I have just seen the future of broadband and it does not consist of building millions of small 5G cell sites on poles. CableLabs has developed a femtocell technology that might already have made outdoor 5G small cell site technology obsolete. Femtocells have been around for many years and have been deployed in rural areas to provide a connection to the cellular network through a landline broadband connection. That need has largely evaporated due to the ability of cellphone apps to make WiFi calls directly.

The concept of a femtocell is simple – it’s a small box that uses cellular frequencies to communicate with cellular devices and then hands off calls to a landline data connection. Functionally, a femtocell is a tiny cell site that can handle a relatively small number of simultaneous cellular calls.

According to CableLabs, deploying a femtocell inside a household is far more efficient than trying to communicate with the household from a nearby pole-mounted transmitter. Femtocells eliminate one of the biggest weaknesses of outdoor small cell sites – much of the power of 5G is lost in passing through the external walls of a home. Deploying the cellular signal from within the house means a much stronger 5G signal throughout the home, allowing for more robust 5G applications.

This creates what I think is the ultimate broadband network – one that combines a powerful landline data pipe with both 5G and WiFi wireless delivery within a home. This is the vision I’ve had for over a decade as the ultimate network – a big landline data pipe for the last mile and powerful wireless networks for connecting to devices.

It’s fairly obvious that a hybrid femtocell / WiFi network has a huge cost advantage over the deployment of outdoor small cell sites on poles. It would eliminate the need for expensive pole-mounted transmitters – and that would eliminate the battles we’re having over the proliferation of wireless devices. It’s also more efficient to deploy a femtocell network – you would deploy only to those homes that want the 5G features, meaning you don’t waste an expensive outdoor network to reach just one or two customers. It’s not hard to picture an integrated box that contains both a WiFi modem and a cellular femtocell, meaning the cost to get 5G into the home would be a relatively cheap upgrade to WiFi routers rather than deploying a whole new separate 5G network.

There are significant benefits for a home to operate both 5G and WiFi. Each standard has advantages in certain situations within the home. As much as we love WiFi, it has big inherent weaknesses. A WiFi network, by design, bogs down when too many devices contend for a connection. Shifting some devices in the home to 5G would reduce WiFi collisions and make WiFi perform better.

5G also has inherent advantages. An in-home 5G network could use network slicing to deliver exactly the right amount of bandwidth to each device. It’s not hard to picture a network where 5G is used to communicate with cellphones and small sensors of various types while WiFi is reserved for large-bandwidth devices like TVs and computers.

One huge advantage of a femtocell network is that it could be deployed anywhere. The cellular companies are likely to cherry-pick outdoor 5G deployments, limiting them to neighborhoods where the cost of backhaul is affordable – meaning that many neighborhoods will never get 5G, just as many neighborhoods in the northeast never got Verizon FiOS. You could deploy a hybrid femtocell to a single customer on a block and still be profitable. Femtocells also eliminate the problem of homes that don’t have line-of-sight to a pole-mounted network.

This technology obviously favors those who have built fast broadband – the cable companies that have upgraded to DOCSIS 3.1 and the fiber overbuilders. For those businesses this is an exciting new product and another new revenue stream to help replace shrinking cable TV and telephone revenues.

One issue that would need to be solved is spectrum, since most of it is licensed to cellular companies. The big cable companies now own some spectrum, but smaller cable companies and fiber overbuilders own none. There is no particular reason why 5G inside a home couldn’t coexist with WiFi, with both using unlicensed spectrum and with some channels dedicated to each wireless technology. That would become even easier if the FCC goes through with plans to release 6 GHz spectrum as the next unlicensed band. A femtocell network could also utilize unlicensed millimeter wave frequencies.

We’ll obviously continue to need outdoor cellular networks to accommodate voice and data roaming, but these are already in place today. Rather than spend tens of billions to upgrade those networks to carry 5G data to homes, far less expensive upgrades could augment those networks only where needed, rather than putting multiple small cells on every city block.

It’s been my experience over forty years of watching the industry that in the long run the most efficient technology usually wins. If CableLabs develops the right home boxes for this technology, then the cable companies will be able to blitz the market with 5G much faster, and at a far lower cost, than Verizon or AT&T.

It would be ironic if the best 5G solution also happens to need the fastest pipe into the home. The decisions by big telcos to not deploy fiber over the last few decades might start looking like a huge tactical blunder. It looks to me like CableLabs and the cable companies might have found the winning 5G solution for residential service.

Do We Need 10 Gbps?

We are just now starting to see a few homes nationwide being served by a 1 Gbps data connection. But the introduction of DOCSIS 3.1 cable modems and a slow but steady increase in fiber networks will soon make these speeds available to millions of homes.

Historically we saw home Internet speeds double about every three years, dating back to the 1980s. But Google Fiber and others leapfrogged that steady technology progression with the introduction of 1 Gbps for the home.

There are not a whole lot of home uses today that require a full gigabit of speed – but there will be. Home usage of broadband is still doubling about every three years, and homes will easily catch up to that speed within a few years. Cisco recently said that the average home today needs 24 Mbps but by 2019 will need over 50 Mbps. It won’t take many doublings of those numbers for homes to expect a lot more speed than we are seeing today.
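As a rough sketch of what that doubling rule implies, here is the arithmetic starting from Cisco’s 24 Mbps figure (the three-year doubling period is this article’s rule of thumb, not an exact forecast):

```python
# Project household bandwidth needs, assuming demand doubles every 3 years.
# The 24 Mbps starting point is Cisco's estimate cited above.
def projected_need_mbps(start_mbps, years, doubling_period_years=3):
    """Exponential growth: one doubling per doubling period."""
    return start_mbps * 2 ** (years / doubling_period_years)

for years in (0, 3, 6, 9, 12):
    print(f"Year +{years:>2}: {projected_need_mbps(24, years):.0f} Mbps")
```

Three years out this lands at 48 Mbps, which matches Cisco’s “over 50 Mbps by 2019” estimate reasonably well; twelve years out the same arithmetic is already near 400 Mbps.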

There is a decent chance that the need for speed is going to accelerate. Phil McKinney of CableLabs created this video that shows what a connected home might look like in the near future. The home owns a self-driving car. The video shows a mother working at home with others using a collaboration wall, with documents suspended in the air. It shows one daughter getting a holographic lecture from Albert Einstein while another daughter is talking with her distant grandmother, seemingly in a meadow somewhere. And it shows the whole family using virtual / enhanced reality goggles to engage in a delightful high-tech game.

This may seem like science fiction, but all of these technologies are already being developed. I’ve written before about how we are at the start of a perfect storm of technology innovation. Our past century was dominated by a few major new technologies, and the last forty years have been dominated by the computer chip. But there are now literally dozens of potentially transformational technologies all being developed at the same time. It’s impossible to predict which ones will have the biggest influence on daily life – but many of them will.

Most of these new technologies are going to require a lot of bandwidth. Whether it’s enhanced reality, video collaboration, robots, medical monitoring, self-driving cars or the Internet of Things, we are going to see a lot of needs for bandwidth much greater than today’s surge due to video. The impact of video, while huge today, will pale against the bandwidth needs of these new technologies – particularly when they are used together as implied in this video.

So it’s not far-fetched to think that homes are going to need bandwidth beyond the 1 Gbps data speeds we are just now starting to see. I’m always disappointed when I see ISP executives talking about how their latest technology upgrades make them future-proof. There are only two technologies that can meet the kinds of speeds envisioned in McKinney’s video – fiber and cable networks. These speeds are not going to be delivered over telephone copper or wirelessly, and to think so is to ignore the basic physics underlying each technology.

Some of the technologies shown in McKinney’s video are going to start becoming popular within five years, and within twenty years they will all be mature technologies that are part of everyday life. We need to have policies and plans that look towards building the networks we are going to need to achieve that future. We have to stop having stupid government programs that throw away money on expanding DSL, and we need to build networks that will be useful for more than just a few years.

McKinney’s video is more than just an entertaining glimpse into the near-future; it’s also meant to prod us into making sure that we are ready for that future. There are many companies today investing in technologies that can’t deliver gigabit speeds – and such companies will grow obsolete and disappear within a decade or two. And policies that do anything other than promote gigabit networks are a waste of time and resources.

A New Cable Network Architecture

There seems to be constant press about the big benefits that are going to come when cable coaxial networks upgrade to DOCSIS 3.1. Assuming a network can meet all of the requirements for a DOCSIS 3.1 upgrade the technology is promising to allow gigabit download speeds for cable networks and provide cable companies a way to fight back against fiber networks. But the DOCSIS 3.1 upgrade is not the only technological path that can increase bandwidth on cable networks.

All of the techniques that can increase speeds have one thing in common – the network operator needs to have first freed up channels on the cable system. This is the primary reason that cable systems have converted to digital – so that they could create empty channel slots on the network that can be used for broadband instead of TV.

The newest technology that offers an alternative to DOCSIS 3.1 is being called Distributed Access Architecture (DAA). This solution moves some or all of the broadband electronics from the core headend into the field. In a traditional DOCSIS cable network the broadband paths are generated to customers using a device called a CMTS (cable modem termination system) at the core. This is basically a router that puts broadband onto the cable network and communicates with the cable modems.

In the most extreme versions of DAA, the large CMTS in the headend would be replaced by numerous small neighborhood CMTS units dispersed throughout the network. In the less extreme version, a smaller number of CMTS units would be placed at existing neighborhood nodes. Both versions provide for improved broadband. For example, in a traditional HFC network a large CMTS might feed broadband to tens of thousands of customers, but dispersing smaller CMTS units throughout the network means fewer customers share the bandwidth. In fact, if the field CMTS units can be made small enough and cheap enough, a cable network could start to resemble a fiber PON network, which typically shares bandwidth among up to 32 customers.
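The effect of shrinking the serving group is easy to see with back-of-the-envelope math. The 1,000 Mbps of shared capacity below is purely an illustrative assumption, not the spec of any particular CMTS:

```python
# Average bandwidth share per customer as the CMTS serving group shrinks.
def share_per_customer_mbps(shared_capacity_mbps, customers_sharing):
    return shared_capacity_mbps / customers_sharing

# Headend CMTS vs. neighborhood-node CMTS vs. a PON-like 32-way split.
for group in (10_000, 500, 32):
    print(f"{group:>6} customers sharing: "
          f"{share_per_customer_mbps(1000, group):.2f} Mbps each on average")
```

The absolute numbers are made up; the point is that the per-customer share scales inversely with the size of the serving group, which is why a dispersed DAA network starts to behave like a PON.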

There are several major advantages to the DAA approach. First, moving the CMTS into the field carries the digital signal much deeper into the network before it gets converted to analog. This reduces interference, which strengthens the signal and improves quality. Sending digital signals deeper into the network also supports higher orders of QAM, the modulation scheme used to squeeze more bits per hertz out of the network. Finally, the upgrade to DAA is the first step towards migrating to an all-digital network – something that is the end game for every large cable company.
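The payoff from higher QAM is straightforward to compute: bits per symbol grow with the base-2 log of the constellation size. A sketch, assuming the roughly 5.36 Msym/s symbol rate of a 6 MHz North American cable channel (these are raw rates, before FEC and other overhead):

```python
import math

# Raw bit rate of a QAM-modulated channel:
# bits per symbol = log2(constellation size), times the symbol rate.
def qam_raw_mbps(constellation_size, symbol_rate_msym_s):
    return math.log2(constellation_size) * symbol_rate_msym_s

for qam in (64, 256, 1024):
    print(f"{qam:>4}-QAM: {qam_raw_mbps(qam, 5.36):.1f} Mbps raw")
```

Stepping from 64-QAM (6 bits/symbol) to 256-QAM (8 bits/symbol) buys a third more capacity out of the same 6 MHz of spectrum – which is why a cleaner signal deeper in the network matters so much.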

There is going to be an interesting battle between fans of DOCSIS 3.1 and those who prefer the DAA architecture. DOCSIS 3.1 was created by CableLabs, and the large cable companies that jointly fund CableLabs tend to follow its advice on an upgrade path. Today DOCSIS 3.1 is still in first-generation deployment and just starting to be field tested, and there is already a backlog for ordering DOCSIS 3.1 core routers. This opens the door for the half dozen vendors that have developed a DAA solution as an alternative.

While CableLabs didn’t invent DAA, they have blessed three different variations of network design for the technology. The technology has already been trialed in Europe and the Far East and is now becoming available in the US. It’s been rumored that at least one large US cable company is running a trial of the equipment, but there doesn’t seem to be any press on this.

Cable networks are interesting in that you can devise a number of different migration paths to get to an all-digital network. But in this industry the path that is chosen by the largest cable companies tends to become the de facto standard for everybody else. As the large companies buy a given solution the hardware costs drop and the bugs are worked out. As attractive as DAA is, I suspect that as Comcast and others choose the DOCSIS 3.1 path that it will become the path of choice for most cable companies.

New CableLabs Standard will Improve Cable Networks

CableLabs just announced a new set of specifications that is going to improve cable HFC networks and their ability to deliver data services. They announced a new distributed architecture that they are calling the Converged Cable Access Platform (CCAP).

This new platform separates functions that have always been performed at the headend, which will allow for a more robust data network. Today, the cable headend is where all video is inserted, where all cable management is done, where QAM and RF modulation are performed, and most importantly where the CMTS (cable modem termination system) function resides.

The distributed CCAP allows these functions to be separated and geographically distributed as needed throughout the cable network. The main benefit of this is that a cable operator will be able to push pure IP to the fiber nodes. Today, the data path between the headend and the neighborhood nodes needs to carry two separate paths – both a video feed and a DOCSIS data feed. By moving the CMTS and the QAM modulators to the fiber node the data path to the node becomes a single all-IP path that contains both IPTV and IP data. The new CCAP node can then convert everything to RF frequencies as needed at the node.

We’ve been expecting this change, since Chinese cable networks have implemented distributed network functions over the last few years. Probably the biggest long-term potential of this change is that it sets the stage for a cable company to offer IPTV over DOCSIS frequencies, although there is more development work to be done in this area.

There are several immediate benefits to a cable system. First, this improves video signal strength, since TV signals now originate at the neighborhood nodes rather than back at the headend. This will be most noticeable to customers at the outer fringes of a cable node. The change will also boost the overall amount of data delivered to a neighborhood node by 20–40%. It’s not likely this means faster advertised speeds; instead it will provide more bandwidth for busy times and make it less likely that customers lose speed during peak hours. Finally, it means a cable company can get more life out of existing nodes and wait longer before having to ‘split’ nodes to provide faster data to customers.

Cable companies are not likely to rush to implement this everywhere. It would mean an upgrade at each node, and most cable companies have a node for every 200–400 customers – that’s a lot of nodes. But one would think this will quickly become the standard for new nodes and that cable companies will implement it over time in the existing network.

This is the first step of what is being called the IP transition for cable companies. Most of my readers are probably aware that the telcos are working feverishly towards making a transition to all-IP. But cable companies want to do this for a different reason. There is a huge amount of bandwidth capability in coaxial cable, and if the entire cable network becomes IP from end to end, that capacity could be fully realized. Today cable companies use a broadcast system where they send all cable channels to every home and then provide data services on whatever bandwidth is left. But in an all-IP system they would send a customer only the channels they are watching, meaning that most of the bandwidth on the system would be available for high-speed Internet services.

So think of this as the first step in a transition to an all-IP cable network. There are a number of additional steps needed to get there, but this pushes IP out to the neighborhood nodes and starts the transition.

What’s Next?

I had the opportunity this week to visit CableLabs. CableLabs is a non-profit research laboratory, founded in 1988, that is funded by the largest cable companies in the US and Europe. CableLabs works on practical applications for cable networks while also looking ahead to see what is coming next. CableLabs developed the DOCSIS standards that are now the basis for cable modems on coaxial networks. They hold numerous patents and have developed such things as orthogonal frequency division multiplexing and VoIP.

I also had the opportunity over the years to visit Bell Labs a few times. Bell Labs has a storied history. They trace their origins to Alexander Graham Bell’s Volta Laboratory and eventually became part of AT&T, where they became known as Bell Labs. They are credited with developing some of the innovations that shaped our electronic world, such as the transistor, the laser and radio astronomy. They developed information theory, which underpins the ability to encode and send data and is a foundation of the Internet. They also developed a lot of software, including UNIX, C and C++. Bell Labs employed scientists who went on to win seven Nobel Prizes for their inventions.

Both of these organizations are full of really bright, really innovative people. In visiting both you can feel the energy, which I think comes from the fact that the scientists and engineers who work there are free to follow good ideas.

When you visit places like these labs it makes you think about what is coming in the future. It’s a natural human tendency to get wrapped up in what is happening today and to not look into the future, but these places are tasked with looking both five years and twenty years into the future and trying to develop the networking technologies that are going to be needed then.

Some of the work done in these labs is practical. For example, both labs today are working on ways to distribute fast internet throughout existing homes and businesses using the existing wires. Google has helped push the world into looking at delivering a gigabit of bandwidth to homes, businesses and schools, and yet the wiring in those places is not capable, with today’s technology, of delivering that much bandwidth short of expensive rewiring with category 5 cable. So both labs are looking at technologies that will allow the existing wires to carry more data.

It’s easy at times to take for granted the way that new technologies work. What the general public probably doesn’t realize is the hard work that goes into solving the problems associated with any new technology. The process of electronic innovation is two-fold. First, scientists develop new ideas and work in the lab to create a working demonstration. Then the hard work comes when the engineers are tasked with turning a good lab idea into practical products. That means finding ways to solve all the little bugs and challenges that are part of every complicated electronic medium. There are always interference issues, unexpected harmonics and all sorts of problems that must be tweaked and fixed before a new technology is ready to hit the street.

And then there are the practical issues associated with making new technology affordable. It’s generally much easier to make something work when there are no constraints of size or materials. But in the world of electronics we always want to make things smaller, faster, cheaper to manufacture and more reliable. And so engineers work on turning good ideas into workable products that can be profitable in the real world.

There are several big trends that we know will be affecting our industry over the next decade and these labs are knee-deep in looking at them. Yesterday I talked about how the low price of the cloud is bringing much of our industry to a tipping point where functions that were done locally will all move to the cloud. Everyone also predicts a revolution in the interface between people and technology due to the Internet of Things. And as mentioned earlier, we are on the cusp of bringing really fast Internet speeds to most people. Each of these three changes are transformational, and collectively they are almost overwhelming. Almost everything that we have taken for granted in the electronic world is going to change over the next decade. I for one am glad that there are some smart scientists and engineers who are going to help to make sure that everything still works.

Primer on DOCSIS

Anybody who uses a cable modem at home has probably heard the word DOCSIS. This is a set of standards that define how data is transmitted over coaxial cable networks. DOCSIS stands for Data Over Cable Service Interface Specification. It was developed by CableLabs, a research and standards organization created by the large cable companies. CableLabs is to the large cable TV companies what Bell Labs has been to the large telephone companies.

DOCSIS 1.0 was first issued as a standard in 1997 and created the basis for cable modems. It established a data network that starts with a CMTS (cable modem termination system) that talks to cable modems in each home. DOCSIS 1.0 was limited to a single data channel, which meant that speeds were capped at a usable 38 Mbps download and 9 Mbps upload for everybody on a cable node combined. Because that bandwidth was shared among up to 200 homes, speeds on DOCSIS 1.0 were practically limited to a maximum of about 7 Mbps, and could be much slower at peak times.
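The shared-channel arithmetic can be sketched as follows. The 3% concurrency figure – the fraction of homes actively pulling data at the same moment – is an illustrative assumption chosen to land near the practical ceiling described above:

```python
# DOCSIS 1.0: one downstream channel (~38 Mbps usable) shared by every
# modem on the node. Practical per-home speed depends on how many homes
# are active at once.
def per_home_speed_mbps(channel_mbps, homes_on_node, concurrency=0.03):
    """Rough per-home speed; concurrency is an assumed fraction of
    homes active simultaneously."""
    active_homes = max(1, round(homes_on_node * concurrency))
    return channel_mbps / active_homes

print(f"{per_home_speed_mbps(38, 200):.1f} Mbps")  # a 200-home node
```

With 200 homes on the node and 3% of them active, each active home sees about 6.3 Mbps – in the ballpark of the ~7 Mbps practical maximum – and it only gets worse as concurrency rises at peak times.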

The standard was updated in 1999 to DOCSIS 1.1, which added QoS (Quality of Service) and enabled cable systems to carry voice calls, with priority, on the cable modem data path. There are still a significant number of field deployments using DOCSIS 1.0 and 1.1, particularly in smaller and rural cable systems.

DOCSIS 2.0 came out in 2001, and the major improvement was increased upload speeds. Version 2.0 also improved the ability to transmit VoIP. The standard still kept the single downstream channel. As cable companies lowered node sizes, there were DOCSIS 2.0 systems that supported download speeds of up to 15 Mbps.

The biggest improvement to DOCSIS came with version 3.0, released in August 2006. This standard allows cable channels to be bonded together to make larger data paths to each node. Cable companies that have deployed DOCSIS 3.0 offer much faster speeds than in the past. Comcast in the US offers 107 Mbps download in urban markets using the newer modems. In Canada, Shaw Cable and Videotron have used DOCSIS 3.0 to offer products over 200 Mbps download. Virgin Media in Britain announced a speed of 1.5 Gbps download and 150 Mbps upload.
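The speed gains come from simple channel arithmetic: each bonded downstream channel contributes roughly 38 Mbps of usable capacity, the single-channel figure cited earlier. A sketch:

```python
# DOCSIS 3.0 bonds multiple channels into one larger data path.
# 38 Mbps usable per downstream channel is the commonly cited US figure.
def bonded_downstream_mbps(channels, per_channel_mbps=38):
    return channels * per_channel_mbps

for n in (4, 8, 16, 32):
    print(f"{n:>2} bonded channels = {bonded_downstream_mbps(n)} Mbps")
```

A 4-channel bond yields about 152 Mbps of shared capacity – enough to support Comcast’s 107 Mbps tier – while Virgin Media’s 1.5 Gbps announcement implies on the order of 40 bonded channels by this arithmetic.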

Why don’t US cable companies offer speeds that fast? There is a trade-off in any cable system between the number of channels that are used for programming versus data. While US cable companies have undergone digital conversion to free up channels, they have used most of that new space to add high definition channels to their network rather than dedicate the extra space to data. In the future, cable companies will be able to free up even more space for data by converting their cable channels to IPTV. Today they multicast every channel in the system to every home, but with IPTV they would send only the channels people are watching.

The CEOs of the largest cable companies have often been quoted saying that they are providing the bandwidth that people need. And I am sure that they believe this. But we have a very long way to go just to convert all of the cable systems in the US to DOCSIS 3.0 and increase speeds. I work every day in markets where speeds are far slower than in upgraded urban markets. But it’s good to know that the tools are there for cable systems to increase speeds when they finally decide the time is right.