Categories
Technology

The Impending Cellular Data Crisis

There is one industry statistic that isn’t getting a lot of press – the fact that cellular data usage is more than doubling every two years. You don’t have to plot that growth rate very many years into the future to realize that existing cellular networks will be inadequate to handle the demand. What’s even worse for the cellular industry is that this growth figure is a nationwide average. Many of my clients tell me there isn’t nearly that much growth at rural cellular towers – meaning growth is likely even faster at some urban and suburban towers.
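To see how quickly that doubling compounds, here’s a small sketch of the arithmetic. The two-year doubling period is the statistic cited above; the projection horizons are just examples:

```python
# Project the traffic multiplier implied by "usage doubles every two years".
# The doubling period comes from the statistic above; the horizons are examples.
def demand_multiplier(years, doubling_period=2):
    """Growth factor on today's traffic after `years`."""
    return 2 ** (years / doubling_period)

for years in (2, 4, 6, 10):
    print(f"After {years} years: {demand_multiplier(years):.0f}x today's traffic")
```

A decade of this growth means engineering for more than thirty times today’s traffic, which is why the choke points below matter.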

Much of this growth is a self-inflicted wound for the cellular industry. Carriers have raised monthly data allowances and are often bundling free video with cellular service, thus driving up usage. The public is responding to these changes by using the extra bandwidth made available to them.

There are a few obvious choke points that will be exposed by this kind of growth. Current cellphone technology limits the number of simultaneous connections that can be made from any given tower. As customers watch more video they eat up slots on the cell tower that could otherwise have been used to process numerous short calls and text messages. The other big choke point is going to be the broadband backhaul feeding each cell site. When usage grows this fast it’s going to get increasingly expensive to buy leased backbone bandwidth – which explains why Verizon and AT&T are furiously building fiber to cell sites to avoid huge increases in backhaul costs.

5G will fix some, but not all, of these issues. The growth is so explosive that cellular companies need to use every technique possible to make cell towers more efficient. Probably the best fix is to use more spectrum, since adding a new band of spectrum to a cell site immediately adds capacity. However, this can’t happen overnight. New spectrum is only useful if customers can use it, and it takes a number of years to modify cell sites and cellphones to work on a new band. The need to meet growing demand is the primary reason the CTIA recently told the FCC the industry needs an eye-popping 400 MHz of new mid-range spectrum for cellular use. The industry painted that as being needed for 5G, but it’s needed now for 4G LTE.

Another fix for cell sites is to use existing frequency more efficiently. The most promising way to do this is with MIMO antenna arrays – a technology that uses multiple antennas, at the cell site and in cellphones, to combine several spectrum paths into a larger data pipe. MIMO technology makes it easier to respond to a request from a large bandwidth user – but it doesn’t relieve the overall pressure on a cell tower. If anything, it might do the exact opposite and let cell towers prioritize video watchers over smaller users, who might then be blocked from making voice calls or sending text messages. MIMO is also not an immediate fix and needs to work through the cycle of getting the technology into cellphones.

The last strategy is what the industry calls densification – adding more cell sites. This is the driving force behind placing small cell sites on poles in areas with big cellular demand. However, densification might create as many problems as it solves. Most of the current frequencies used for cellular service travel a decent distance, and placing cell sites too close together will create a lot of interference and noise between neighboring towers. While adding new cell sites adds local capacity, it also decreases the efficiency of all nearby cell sites using traditional spectrum – so the overall improvement from densification is going to be a lot less than might be expected. The worst thing about this is that interference is hard to predict and is very much a local issue. This is the primary reason the cellular companies are interested in millimeter wave spectrum for cellular – the spectrum travels a short distance and won’t interfere as much between cell sites placed close together.

5G will fix some of these issues. The ability of 5G to do frequency slicing means that a cell site can provide just enough bandwidth for every user – a tiny slice of spectrum for a text message or IoT signal and a big pipe for a video stream. 5G will vastly expand the number of simultaneous users that can share a single cell site.

However, 5G doesn’t provide any additional advantages over 4G in terms of the total amount of backhaul bandwidth needed to feed a cell site. And that means that a 5G cell site will get equally overwhelmed if people demand more bandwidth than a cell site has to offer.

The cellular industry has a lot of problems to solve over a relatively short period of time. I expect that in the middle of the much-touted 5G roll-out we are going to start seeing some spectacular failures in cellular networks at peak times. I feel sympathy for cellular engineers, because it’s nearly impossible to have a network ready to handle data usage that doubles every two years. Even if engineers figure out strategies to handle five or ten times more usage, usage will catch up to those fixes in only a few years.

I’ve never believed that cellular broadband can be a substitute for landline broadband. Every time somebody at the FCC or a politician declares that the future is wireless, I roll my eyes, because anybody who understands networks and the physics of spectrum can easily demonstrate that there are major limits on the total bandwidth capacity at a given cell site, along with a limit on how densely cell sites can be packed in an area. The cellular networks carry only about 5% of the total broadband traffic in the country, and it’s ludicrous to think they could be expanded to carry most of it.

Categories
Technology

Massive MIMO

One of the technologies that will bolster 5G cellular is the use of massive MIMO (multiple-input, multiple-output) antenna arrays. Massive MIMO is an extension of the smaller MIMO antennas that have been in use for several years. For example, home WiFi routers now routinely use multiple antennas to allow for easier connections to multiple devices, and basic forms of MIMO technology have been deployed in LTE cell sites for several years.

Massive MIMO differs from current technology in its use of big arrays of antennas. For example, Sprint, along with Nokia, demonstrated a massive MIMO transmitter in 2017 that used 128 antennas – 64 for receive and 64 for transmit. Sprint is in the process of deploying a much smaller array in cell sites using its 2.5 GHz spectrum.

Massive MIMO can be used in two different ways. First, multiple transmitter antennas can be focused together to reach a single customer (who also needs to have multiple receivers) to increase throughput. In the Sprint trial mentioned above Sprint and Nokia were able to achieve a 300 Mbps connection to a beefed-up cellphone. That’s a lot more bandwidth than can be achieved from one transmitter, which at the most could deliver whatever bandwidth is possible on the channel of spectrum being used.

The extra bandwidth is achieved in two ways. First, using multiple transmitters means that multiple channels of the same frequency can be sent simultaneously to the same receiving device. Both the transmitter and receiver must have enough computing power to coordinate and combine the multiple signals.

The bandwidth is also boosted by what’s called precoding, or beamforming. This technology coordinates the signals from multiple transmitters to maximize the received signal gain and to reduce what is called the multipath fading effect. In simple terms, beamforming sets the power level and gain for each separate antenna to maximize data throughput. Every channel of every frequency operates a little differently, and beamforming favors the channels performing best in a given environment. Beamforming also allows the cellular signal to be concentrated in a portion of the receiving area – to create a ‘beam’. This is not the same kind of highly concentrated beam used in microwave transmitters, but concentrating the radio signals into the general area of the customer means a more efficient delivery of data packets.
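As a toy illustration of why phase alignment matters, the snippet below (a deliberately simplified model, not a real radio simulation) compares the received power when eight unit-amplitude transmitters are phase-aligned at the receiver versus randomly phased:

```python
import cmath
import random

# Toy model: N transmitters each radiate a unit-amplitude carrier.
# When precoding aligns the phases at the receiver, the amplitudes add
# coherently and power scales as N**2; random phases largely cancel.
def received_power(phases):
    total = sum(cmath.exp(1j * p) for p in phases)
    return abs(total) ** 2

N = 8
aligned = received_power([0.0] * N)  # beamformed: all phases matched
random.seed(1)
scattered = received_power([random.uniform(0, 2 * cmath.pi) for _ in range(N)])

print(f"Beamformed power: {aligned:.1f}")  # N**2 = 64 in this model
print(f"Unaligned power (one random draw): {scattered:.1f}")
```

The coherent case lands on the full array gain, while a random draw typically hovers near N, which is the intuition behind steering energy toward the customer.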

The cellular companies, though, are focused on the second use of MIMO – the ability to connect to more devices simultaneously. One of the key parameters of the 5G cellular specifications is the ability of a cell site to make up to 100,000 simultaneous connections. The carriers envision 5G as the platform for the Internet of Things and want to use cellular bandwidth to connect to the many sensors envisioned in our near-future world. This first generation of massive MIMO won’t bump cell sites to 100,000 connections, but it’s a first step toward increasing the number of connections.

Massive MIMO is also going to facilitate the coordination of signals from multiple cell sites. Today’s cellular networks are based upon a roaming architecture. That means that a cellphone or any other device that wants a cellular connection will grab the strongest available cellular signal. That’s normally the closest cell site but could be a more distant one if the nearest site is busy. With roaming a cellular connection is handed from one cell site to the next for a customer that is moving through cellular coverage areas.

One of the key aspects of 5G is that it will allow multiple cell sites to connect to a single customer when necessary. That might mean combining the signal from a MIMO antenna in two neighboring cell sites. In most places today this is not particularly useful since cell sites today tend to be fairly far apart. But as we migrate to smaller cells the chances of a customer being in range of multiple cell sites increases. The combining of cell sites could be useful when a customer wants a big burst of data, and coordinating the MIMO signals between neighboring cell sites can temporarily give a customer the extra needed bandwidth. That kind of coordination will require sophisticated operating systems at cell sites and is certainly an area that the cellular manufacturers are now working on in their labs.

Categories
Current News Technology

Spectrum and 5G

All of the 5G press has been talking about how 5G is going to be bringing gigabit wireless speeds everywhere. But that is only going to be possible with millimeter wave spectrum, and even then it requires a reasonably short distance between sender and receiver as well as bonding together more than one signal using multiple MIMO antennae.

It’s a shame that we’ve let the wireless marketeers equate 5G with gigabit speeds, because that’s what the public is going to expect from every 5G deployment. As I look around the industry I see a lot of other uses for 5G that are going to produce speeds far slower than a gigabit. 5G is a standard that can be applied to any wireless spectrum and which brings some benefits over earlier standards. 5G makes it easier to bond multiple channels together for reaching one customer. It also increases the number of connections that can be made from any given transmitter – with the biggest promise being that the technology will eventually allow connections to large numbers of IoT devices.

Anybody who follows the industry knows about the 5G gigabit trials. Verizon has been loudly touting its gigabit 5G connections using the 28 GHz frequency and plans to launch the product in up to 28 markets this year. They will likely use this as a short-haul fiber replacement to allow them to more quickly add a new customer to a fiber network or to provide a redundant data path to a big data customer. AT&T has been a little less loud about their plans and is going to launch a similar gigabit product using 39 GHz spectrum in three test markets soon.

But there are also a number of announcements for using 5G with other spectrum. For example, T-Mobile has promised to launch 5G nationwide using its 600 MHz spectrum. This is a traditional cellular spectrum that is great for carrying signals for several miles and for going around and through obstacles. T-Mobile has not announced the speeds it hopes to achieve with this spectrum. But the data capacity of 600 MHz is limited, and bonding numerous signals together for one customer will create something faster than LTE, but not spectacularly so. It will be interesting to see what speeds they can achieve in a busy cellular environment.

Sprint is taking a different approach and is deploying 5G using its 2.5 GHz spectrum. They have been testing massive MIMO antennas that contain 64 transmit and 64 receive channels. This spectrum doesn’t travel far when used for broadcast, so the technology will work best with small cell deployments. The company claims to have achieved speeds as fast as 300 Mbps in trials in Seattle, but that would require bonding together a lot of channels, so a commercial deployment is going to be a lot slower in a congested cellular environment.

Outside of the US there seems to be growing consensus around using 3.5 GHz – the Citizens Broadband Radio Service spectrum. That raises the interesting question of which frequencies will end up winning the 5G race. In every new wireless deployment the industry needs to reach an economy of scale in the manufacture of both the radio transmitters and the cellphones or other receivers. Only then can equipment prices drop to the point where a 5G-capable phone will be similar in price to a 4G LTE phone. So the industry at some point soon will need to reach a consensus on the frequencies to be used.

In the past we rarely saw a consensus; rather, some manufacturer and wireless company won the race to get customers and dragged the rest of the industry along. This has practical implications for early adopters of 5G. For instance, somebody buying a 600 MHz phone from T-Mobile is only going to be able to use that data function when near a T-Mobile tower or mini-cell. Until industry consensus is reached, phones that use a unique spectrum are not going to be able to roam on other networks the way LTE phones do today.

Even phones that use the same spectrum might not be able to roam on other carriers if they are using the frequency differently. There are now 5G standards, but we know from practical experience with other wireless deployments in the past that true portability between networks often takes a few years as the industry works out bugs. This interoperability might be sped up a bit this time because it looks like Qualcomm has an early lead in the manufacture of 5G chip sets. But there are other chip manufacturers entering the game, so we’ll have to watch this race as well.

The word of warning to buyers of first generation 5G smartphones is that they are going to have issues. For now it’s likely that the MIMO antennae are going to use a lot of power and will drain cellphone batteries quickly. And the ability to reach a 5G data signal is going to be severely limited for a number of years as the cellular providers extend their 5G networks. Unless you live and work in the heart of one of the trial 5G markets it’s likely that these phones will be a bit of a novelty for a while – but will still give a user bragging rights for the ability to get a fast data connection on a cellphone.

Categories
Technology What Customers Want

Gigabit LTE

Samsung just introduced Gigabit LTE in the newest Galaxy S8 phone. This technology can significantly increase cellular speeds, and it makes me wonder if the cellular carriers will really be rushing to implement 5G for cellphones.

Gigabit LTE still operates under the 4G standards and is not an early version of 5G. There are three components of the technology:

  • Each phone has a 4X4 MIMO antenna, which is an array of four tiny antennas. Each antenna can make a separate connection to the cell tower.
  • The network must implement frequency aggregation. Both the phone and the cell tower must be able to combine the signals from the various antennas into one coherent data path.
  • Finally, the new technology utilizes the 256 QAM (Quadrature Amplitude Modulation) protocol which can cram more data into the cellular data path.

The data speed that can be delivered to a given cellphone with this technology depends on a number of factors:

  • The nearest cell site to a customer needs to be upgraded to the technology. I would speculate that this new technology will be phased in at the busiest urban cell sites first, then to busy suburban sites and then perhaps to less busy sites. It’s possible that a cellphone could make connections to multiple towers to make this work, but that’s a challenge with 4G technology and is one of the improvements promised with 5G.
  • The amount of data speed that can be delivered is going to vary widely depending upon the frequencies being used by the cellular carrier. If this uses existing cellular data frequencies, then the speed increase will be a combination of the impact of adding four data streams together, plus whatever boost comes from using 256 QAM, less the new overheads introduced during the process of merging the data streams. There is no reason that this technology could not use the higher millimeter wave spectrum, but that spectrum will use different antennae than lower frequencies.
  • The traffic volume at a given cell site is always an issue. Cell sites that are already busy with single-antenna connections won’t have the spare connections available to give a cellphone more than one channel. Thus, a given connection could consist of one to four channels at any given time.
  • Until the technology gets polished, I’d have to bet that this will work a lot better with a stationary cellphone rather than one moving in a car. So expect this to work better in downtowns, convention centers, etc.
  • And as always, the strength of a connection to a given customer will vary according to how far a customer is from the cell site, the amount of local interference, the weather and all of those factors that affect radio transmissions.

I talked to a few wireless engineers and they guessed that this technology using existing cellular frequencies might create connections as fast as a few hundred Mbps in ideal conditions. But they could only speculate on the new overheads created by adding together multiple channels of cellular signal. There is no doubt that this will speed up cellular data for a customer in the right conditions, with the right phone near the right cell site. But adding four existing cellular signals together will not get close to a gigabit of speed.
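A back-of-envelope version of that engineering guess, using illustrative numbers of my own: the baseline per-stream rate and the overhead factor are assumptions, not carrier specs, but the shape of the arithmetic holds:

```python
# Rough sketch of a Gigabit LTE peak-speed estimate. All numbers here are
# illustrative assumptions, not published carrier figures.
baseline_mbps = 75   # assumed single-stream rate on a 20 MHz LTE channel at 64-QAM
mimo_streams = 4     # 4X4 MIMO: up to four parallel streams
qam_boost = 8 / 6    # 256-QAM carries 8 bits/symbol vs 6 bits for 64-QAM
overhead = 0.8       # assume 20% lost to coordinating and combining the streams

estimate_mbps = baseline_mbps * mimo_streams * qam_boost * overhead
print(f"Rough peak estimate: {estimate_mbps:.0f} Mbps")
```

Under these assumptions the answer lands in the low hundreds of Mbps – consistent with the engineers’ guess, and nowhere near a gigabit.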

It will be interesting to see how the cellular companies market this upgrade. They could call this gigabit LTE, although the speeds are likely to fall far short of a gigabit. They could also market this as 5G, and my bet is that at least a few of them will. I recall back at the introduction of 4G LTE that some carriers started marketing 3.5G as 4G, well before there were any actual 4G deployments. There has been so much buzz about 5G now for a year that the marketing departments at the cellular companies are going to want to tout that their networks are the fastest.

It’s always an open question about when we are going to hear about this. Cellular companies run a risk in touting a new technology if most bandwidth hungry users can’t yet utilize it. One would think they will want to upgrade some critical mass of cell sites before really pushing this.

It’s also going to be interesting to see how faster cellphone speeds affect the way people use broadband. Today it’s miserable to surf the web on a cellphone. In a city environment most connections are more than 10 Mbps today, but they don’t feel that fast because of shortfalls in cellphone operating systems. Unless those operating systems get faster, there might not be much noticeable difference with a faster connection.

Cellphones today are already capable of streaming a single video stream, although with more bandwidth the streaming will get more reliable and will work under more adverse conditions.

The main impediment to faster cellphones really changing user habits is the data plans of the cellular carriers. Most ‘unlimited’ plans have major restrictions on using a cellphone to tether data for other devices, and it’s that tethering that could make cellular data a realistic substitute for a home landline connection. My guess is that until we reach a time when mini-cell sites are spread ubiquitously, the cellular carriers are not going to let users treat cellular data the same as landline data. Until cellphones are allowed to utilize the broadband available to them, faster cellular data speeds might not have much impact on the way we use our cellphones.

Categories
Technology

The Future of WiFi

There are big changes coming over the next few years with WiFi. At the beginning of 2017 a study by Parks Associates showed that 71% of broadband homes now use WiFi to distribute the signal – a percentage that continues to grow. New home routers now use the 802.11ac standard, although there are still plenty of homes running the older 802.11n technology.

But there is still a lot of dissatisfaction with WiFi and many of my clients tell me that most of the complaints they get about broadband connections are due to WiFi issues. These ISPs deliver fast broadband to the home only to see WiFi degrading the customer experience. But there are big changes coming with the next generation of WiFi that ought to improve the performance of home WiFi networks. The next generation of WiFi devices will be using the 802.11ax standard and we ought to start seeing devices using the standard by early 2019.

There are several significant changes in the 802.11ax standard that will improve the customer WiFi experience. First is the use of a wider spectrum channel at 160 MHz, which is four times larger than the channels used by 802.11ac. A bigger channel means that data can be delivered faster, which will solve many of the deficiencies of current WiFi home networks. This will improve the network performance using the brute strength approach of pushing more data through a connection faster.
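Assuming throughput scales roughly linearly with channel width (holding modulation and stream count fixed), the brute-strength gain is simple to sketch. The 200 Mbps baseline below is an illustrative number, not a spec:

```python
# All else equal, throughput scales roughly linearly with channel width.
# Real rates also depend on modulation, spatial streams, and range.
def scaled_rate(base_rate_mbps, base_width_mhz, new_width_mhz):
    return base_rate_mbps * (new_width_mhz / base_width_mhz)

# Assumed 200 Mbps on a 40 MHz channel; a 160 MHz channel is four times wider.
print(scaled_rate(200, 40, 160))  # 800.0 Mbps in this idealized model
```

This is the "brute strength" lever: four times the channel width gives roughly four times the data rate before any of the smarter scheduling improvements kick in.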

But probably more significant is the use in 802.11ax of 4X4 MIMO (multiple input / multiple output) antennas. These new antennas will be combined with orthogonal frequency division multiple access (OFDMA). Together these new technologies will provide for multiple, separate data streams within a WiFi network. In layman’s terms, think of the new technology as operating four separate WiFi networks simultaneously. By distributing the network load to separate channels, the interference on any given channel will decrease.

Reducing interference is important because that’s the cause of a lot of the woes of current WiFi networks. The WiFi standard allows for unlimited access to a signal and every device within the range of a WiFi network has an equal opportunity to grab the WiFi network. It is this open sharing that lets us connect lots of different devices easily to a WiFi network.

But the sharing has a big downside. A WiFi network shares signals by shutting down when it gets more than one request for a signal. The network pauses for a short period of time and then bursts energy to the first device it notices when it restarts. In a busy WiFi environment the network stops and starts often, causing the total throughput on the network to drop significantly.

But with four separate networks running at the same time there will be far fewer stops and starts, and a user on any one channel should have a far better experience than today. Further, with the OFDMA technology the data from multiple devices can coexist better, meaning that a WiFi router can better handle more than one device at the same time, further reducing the negative impacts of competing signals. The technology lets the network smoothly mix signals from different devices to avoid network stops and starts.

The 802.11ax technology ought to greatly improve the home WiFi experience. It will have bigger channels, meaning it can send and receive data to WiFi connected devices faster. And it will use the MIMO antennas to make separate connections with devices to limit signal collision.

But 802.11ax is not the last WiFi improvement we will see. Japanese scientists have made recent breakthroughs in using what is called the terahertz range – frequencies above 300 GHz. They’ve used the 500 GHz band to create a 34 Gbps WiFi connection. Until now, work at these higher frequencies has been troublesome because transmission distances have been limited to a few centimeters.

But the scientists have created an 8-array antenna that they think can extend the practical reach of fast WiFi to as much as 30 feet – more than enough to create blazingly fast WiFi in a room. These frequencies will not pass through barriers and would require a small transmitter in each room. But the scientists believe the transmitters and receivers can be made small enough to fit on a chip – making it possible to affordably put the chips into any device, including cellphones. Don’t expect multi-gigabit WiFi for a while. But it’s good to know that scientists are working a generation or two ahead on technologies that we will eventually want.

Categories
Technology

More Pressure on WiFi

As if we really needed more pressure put onto our public WiFi spectrum, both Verizon and AT&T are now launching Licensed Assisted Access (LAA) broadband for smartphones. This is the technology that allows cellular carriers to mix LTE spectrum with the unlicensed 5 GHz spectrum for providing cellular broadband. The LAA technology allows for the creation of ‘fatter’ data pipes by combining multiple frequencies, and the wider the data pipe the more data that makes it to the end-user customer.

When carriers combine frequencies using LAA they can theoretically create a data pipe as large as a gigabit while only using 20 MHz of licensed frequency. The extra bandwidth for this application comes mostly from the unlicensed 5 GHz band and is similar to the fastest speeds that we can experience at home using this same frequency with 802.11ac. However, such high-speed bandwidth is only useful for a short distance of perhaps 150 feet, and the most practical use of LAA is to boost cellphone data signals for customers closest to a cell tower. That’s going to make LAA technology most beneficial in dense customer environments like busy downtown areas, stadiums, etc. LAA isn’t going to provide much benefit to rural cellphone towers or those along interstate highways.

Verizon recently did a demonstration of the LAA technology that achieved a data speed of 953 Mbps. They did this using three 5 GHz channels combined with one 20 MHz channel of AWS spectrum. Verizon used a 4X4 MIMO (multiple input / multiple output) antenna array and 256 QAM modulation to achieve this speed. The industry has coined the term four-carrier aggregation for the technology, since it combines four separate bands of bandwidth into one data pipe. A customer would need a specialized MIMO antenna to receive the signal and would also need to be close to the transmitter to receive this kind of speed.
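The four-carrier arithmetic behind that demo can be sketched as follows. The per-carrier rates are my own assumption, chosen so that four roughly equal 20 MHz carriers sum to the demonstrated total; Verizon did not publish a per-carrier breakdown:

```python
# Sketch of four-carrier aggregation: one licensed AWS carrier plus three
# unlicensed 5 GHz carriers. Per-carrier rates are assumed for illustration,
# not published figures.
carriers_mbps = {
    "AWS 20 MHz (licensed)": 238,
    "5 GHz carrier 1 (unlicensed)": 238,
    "5 GHz carrier 2 (unlicensed)": 238,
    "5 GHz carrier 3 (unlicensed)": 239,
}
total = sum(carriers_mbps.values())
print(f"Aggregated pipe: {total} Mbps across {len(carriers_mbps)} carriers")
```

The point of the sketch is that three quarters of the demo’s capacity comes from the unlicensed band, which is why the WiFi community is paying attention.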

Verizon is starting to update selected cell sites with the technology this month. AT&T has announced that they will start introducing LAA technology along with four-carrier aggregation by the end of this year. It’s important to note that there is a big difference between the Verizon test at 953 Mbps and what customers will really achieve in the real world. Numerous factors will limit the benefits of the technology. There aren’t yet any handsets with the right antenna arrays, and it’s going to take a while to introduce them. These antennas look like they will be big power eaters, meaning that handsets that try to use this bandwidth all of the time will have short battery lives. There are also more practical limitations: many customers will be out of range of the strongest LAA signals, and a cellular company is not going to try to make the full data connection using all four channels to one customer for several reasons – the primary one being the availability of the 5 GHz frequency.

And that’s where the real rub comes in with this technology. The FCC approved the use of this new technology last year, essentially giving the carriers access to the WiFi spectrum for free. The whole point of unlicensed spectrum is to provide data pipes for all of the many uses not served by licensed wireless carriers. WiFi is clearly the most successful achievement of the FCC over the last few decades; providing big data pipes for public use has spawned gigantic industries, and it’s hard to find a house these days without a WiFi router.

The cellular carriers have paid billions of dollars for spectrum that only they can use. The rest of the public uses a few bands of ‘free’ spectrum, and uses it very effectively. Allowing the cellular carriers to dip into the WiFi spectrum runs the risk of killing that spectrum for all of the other uses. The FCC supposedly is requiring that the cellular carriers not grab the 5 GHz spectrum when it’s already busy in use. But to anybody who understands how WiFi works that seems like an inadequate protection, because any use of this spectrum causes interference by definition.

In practical use if a user can see three or more WiFi networks they experience interference, meaning that more than one network is trying to use the same channel at the same time. It is the nature of this interference that causes the most problems with WiFi performance. When two signals are both trying to use the same channel, the WiFi standard causes all competing devices to go quiet for a short period of time, and then both restart and try to grab an open channel. If the two signals continue to interfere with each other, the delay time between restarts increases exponentially in a phenomenon called backoff. As there are more and more collisions between competing networks, the backoff increases and the performance of all devices trying to use the spectrum decays. In a network experiencing backoff the data is transmitted in short bursts between the times that the connection starts and stops from the interference.
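The backoff behavior described above can be sketched in a few lines. This is a simplified model of 802.11’s binary exponential backoff (real stations count down these slots only while the channel is idle; the window sizes used here match common 802.11 defaults):

```python
import random

# Simplified 802.11 binary exponential backoff: after each collision the
# contention window doubles (up to a cap) and the station waits a random
# number of slots drawn from the enlarged window before retrying.
def backoff_slots(collisions, cw_min=15, cw_max=1023):
    cw = min(cw_max, (cw_min + 1) * 2 ** collisions - 1)
    return random.randint(0, cw)

random.seed(0)
for c in range(5):
    window = min(1023, 16 * 2 ** c - 1)
    print(f"after {c} collision(s): window 0-{window} slots, drew {backoff_slots(c)}")
```

Each collision doubles the expected wait, which is exactly why a congested channel spends an increasing share of its airtime idle rather than carrying data.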

And this means that when the cellular companies use the 5 GHz spectrum they will be interfering with the other users of that frequency. That’s how WiFi was designed to work, so the interference is unavoidable. Other WiFi users in the immediate area around an LAA transmitter will experience more interference, and the cellular users of the technology will see a degraded signal as well – the reason they won’t get speeds even remotely close to Verizon’s demo speeds. But the spectrum is free for the cellular companies and they are going to use it, to the detriment of all of the other uses of the 5 GHz spectrum. With this decision the FCC might well have nullified the tremendous benefits that we’ve seen from the 5 GHz WiFi band.

Categories
Technology

A New WiFi Standard

There is a new version of WiFi coming soon that ought to solve some of the major problems with using WiFi in the home and in busy environments. The new standard has been labeled as 802.11ax and should start shipping in new routers by the end of this year and start appearing in devices in early 2018.

It’s the expected time for a new standard since there has been a new one every four or five years. 802.11a hit the market in 1999, 802.11g in 2003, 802.11n in 2009 and 802.11ac in 2013.

One of the most interesting things about this new standard is that it’s primarily a hardware upgrade rather than a radical change to the protocol. It will be backwards compatible with earlier versions of 802.11, but both the router and the end devices must be upgraded to use the new standard. This means that business travelers are going to get frustrated when visiting hotels without the new routers.

One improvement is that the new routers will treat the 2.4 GHz and 5 GHz bands as one big block of spectrum, making it more likely to find an open channel. Most of today's routers make you pick one band or the other.

Another improvement in 802.11ax is that the routers will have more antennas in the array, making it possible to connect with more devices at the same time. It will also use MIMO (multiple-input, multiple-output) antenna arrays, allowing it to identify individual users and to establish fixed links to them. A lot of the problems with current WiFi routers come when they get overwhelmed with more requests for service than the number of antennas available.

In addition to more signal paths, the biggest improvement will be that the new 802.11ax routers will better handle simultaneous requests for use of a single channel. The existing 802.11 standards are designed to share spectrum, and when a second request is made to use a busy channel, the first transmission is stopped while the router decides which stream to satisfy – and this keeps repeating as the router bounces back and forth between the two users. This is not a problem when there are only a few requests for simultaneous use, but in a crowded environment the constant stopping and starting results in a lot of the available spectrum going unused and in nobody receiving a sustained signal.

The new 802.11ax routers will use OFDMA (orthogonal frequency division multiple access) to allow multiple users to simultaneously use the same channel without the constant stopping and starting at each new request for service. A hotel with a 100 Mbps backbone might theoretically be able to allow 20 users to each receive a 5 Mbps stream from a single WiFi channel. No wireless system will be quite that efficient, but you get the idea. A router with 802.11ax can still get overwhelmed, but it takes a lot more users to reach that condition.
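
The hotel arithmetic above is simple to sketch. This toy function (its name and the `efficiency` knob are illustrative assumptions, not part of any standard) captures the difference OFDMA makes: users receive concurrent slices of the channel rather than taking turns:

```python
def per_user_rate_mbps(channel_mbps, users, efficiency=1.0):
    """Ideal OFDMA-style sharing: the channel is divided into concurrent
    slices, one per user, instead of users contending one at a time.

    `efficiency` < 1.0 models real-world overhead, since no wireless
    system delivers the full theoretical rate.
    """
    if users == 0:
        return 0.0
    return channel_mbps * efficiency / users

# The article's example: a 100 Mbps backbone shared by 20 users.
print(per_user_rate_mbps(100, 20))                  # ideal: 5.0 Mbps each
print(per_user_rate_mbps(100, 20, efficiency=0.8))  # with 20% overhead: 4.0 Mbps
```

Contrast this with the pre-OFDMA case, where the same 20 users would spend much of the time in the stop-and-restart cycling described earlier and a large share of the 100 Mbps would go to nobody.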

We’ll have to wait and see how that works in practice. Today, if you visit a busy business hotel where there might be dozens of devices trying to use the bandwidth, the constant stopping and starting of the WiFi signal usually results in a large percentage of the bandwidth not being given to any user – it’s lost during the on/off sequences. But the new standard will give everybody an equal share of the bandwidth until all of the bandwidth is used or until it runs out of transmitter antennas.

The new standard also allows for scheduling connections between the router and client devices. This means more efficient use of spectrum, since the devices will be ready to burst data when scheduled. It will also allow devices like cellphones to save battery power by 'resting' when not transmitting, since they no longer make unneeded requests for a connection.
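
The scheduling idea can be sketched as the router handing each client a staggered wake time. This is a toy model only; the function name and parameters are hypothetical, not the actual 802.11ax scheduling API:

```python
def wake_schedule(clients, interval_ms, burst_ms):
    """Toy model of router-scheduled access: each client is told when to
    wake and burst its data within a repeating interval, and can sleep
    the rest of the time instead of polling the router.

    Returns each client's wake offset and the fraction of each interval
    a client radio must stay awake.
    """
    # Stagger the clients so their bursts never overlap on the channel.
    offsets = {client: i * burst_ms for i, client in enumerate(clients)}
    awake_fraction = burst_ms / interval_ms
    return offsets, awake_fraction

offsets, awake = wake_schedule(["phone", "tablet", "sensor"],
                               interval_ms=1000, burst_ms=10)
print(offsets)  # staggered wake offsets in ms for each client
print(awake)    # each radio is awake for only 1% of every second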

All of these changes together also mean that the new routers will use only about one-third the energy of current routers. Because the router can establish fixed streams with a given user, it can avoid the constant on/off sequences.

The most interesting downside to the new devices will be that their biggest benefits only kick in when most of the connected devices are using the new standard. This means that the benefits on public networks might not be noticeable for the first few years until a significant percentage of cellphones, tablets, and laptops have been upgraded to the new standard.


More on MIMO

One of the technologies that is going to be needed to make the Internet of Things work better is MIMO. MIMO stands for multiple-input, multiple-output and refers to communicating with an array of antennas instead of a single antenna. MIMO technology applies to different kinds of wireless, including WiFi and cellular.

MIMO has been around for a few years, and the latest high-performance WiFi routers include first-generation MIMO technology. These routers include multiple antennas that work together, and the purpose of the multiple antennas is to establish separate wireless paths to different devices.

When done smartly, MIMO dynamically sets up a different wireless path to each device – a separate path to your cell phone, your TV, and your speaker system. Current MIMO routers can only establish a few separate paths at a time, so if you have more than a few wireless devices running at once (which many of us now do), the remaining devices are served by a general broadcast signal that can be picked up by any device within range.
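
The dedicated-path-plus-broadcast behavior can be sketched in a few lines. The function below is a hypothetical illustration of the allocation logic, not how any real router firmware works:

```python
def assign_streams(devices, num_streams):
    """Toy model of a MIMO router with a limited number of spatial streams:
    the first `num_streams` devices get dedicated wireless paths, and any
    remaining devices fall back to the shared broadcast signal."""
    dedicated = devices[:num_streams]
    shared = devices[num_streams:]
    return dedicated, shared

# A router with 3 spatial streams facing 5 active devices:
dedicated, shared = assign_streams(
    ["tv", "phone", "laptop", "camera", "thermostat"], num_streams=3)
print(dedicated)  # these devices get their own wireless paths
print(shared)     # these contend on the general broadcast signal
```

As the device count grows past the number of streams, everything in the shared pool is back to contending for spectrum, which is exactly the bottleneck the paragraph below describes for an IoT-filled home.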

As you can imagine, establishing separate paths and doing it well is a challenge. Some devices like cell phones and tablets are mobile within the environment, and the router has to keep track of where each device is. Done well, the router will determine the right amount of power and bandwidth to give to each device.

But fast forward a few years to when you also have a host of IoT devices in your home. Today in my house we are often running seven WiFi devices, but add to this an array of smart appliances, smoke detectors, security cameras, medical monitors and various toys, and it's easy to see that the normal home router could get overwhelmed in a hurry.

Scientists are already working on more sophisticated MIMO devices in order to understand the challenges of handling large numbers of devices simultaneously. Scientists at Rice University have constructed an array of 96 MIMO antennas that is giving them a look into our future. They have named the array Argos, and it gives them a tool for exploring ways to process and integrate inputs and outputs from many sources. They are calling their application massive MIMO.

Massive MIMO antenna arrays are more efficient than a collection of single antennas. The large array Rice is studying can do a whole lot more than connect to 96 devices, and they claim the multiplicative efficiency makes the large array as much as ten times more efficient than a host of individual routers.

That kind of efficiency is going to be necessary in the future in two circumstances. First, this technology could be used immediately in crowded environments. We are all aware of how hard it is to get a cell phone signal when there are a lot of people together in a convention center or stadium. Massive MIMO could enable many more connections.

But the more widespread use will be in a world where the normal home or business is filled with scores of IoT devices all wanting to make connections to the network. Without improved MIMO this is not going to be possible.

Massive MIMO is going to require massive processing power to make sense of the huge inflow of simultaneous signals. That will require more computational power and data storage locally just to process and make sense of IoT data. I have several friends who work in the field of artificial intelligence, and they think their technology is going to be needed to help make sense of the massive data flood that will flow out of IoT.
