
Update on DOCSIS 4.0

LightReading recently reported on a showcase at CableLabs where Charter and Comcast demonstrated the companies’ progress in testing the concepts behind DOCSIS 4.0. This is the big cable upgrade that will allow the cable companies to deploy fast upload speeds – the one area where they have a major disadvantage compared to fiber.

Both companies demonstrated hardware and software that could deliver a lot of speed. But the demos also showed that the cable industry is probably still four to five years away from having a commercially viable product that cable companies can use to upgrade networks. That’s a long time to wait to get better upload speeds.

Charter’s demonstration used frequencies within the coaxial cables up to 1.8 GHz. That’s a big leap up from today’s maximum frequency utilization of 1.2 GHz. As a reminder, a cable network operates as a giant radio system that is captive inside the coaxial copper wires. Increasing the range of frequencies used opens up a big swath of additional bandwidth capacity inside the transmission path. The breakthrough is akin to G.Fast, which harnesses higher frequencies inside telephone copper wires. While engineers can theoretically predict how the higher frequencies will behave, the reason for these early tests is to find all of the unexpected quirks of how the various frequencies interact inside a coaxial network under real-life conditions. A coaxial cable is not a sealed environment, and interference leaking in from the outside world can disrupt parts of the transmission path unexpectedly.
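
As a rough illustration of why the wider band matters, here is a back-of-envelope sketch in Python. The spectral efficiency figure of 8 bits/s/Hz is purely an assumption for the example – roughly what high-order QAM can achieve under good plant conditions – and not a published DOCSIS 4.0 number.

```python
# Back-of-envelope estimate of the raw capacity gained by raising the top
# frequency of a coaxial plant from 1.2 GHz to 1.8 GHz.
# ASSUMPTION: ~8 bits/s/Hz spectral efficiency, a rough illustrative figure
# for high-order QAM under good conditions - not a DOCSIS spec value.

SPECTRAL_EFFICIENCY_BPS_PER_HZ = 8  # assumed, for illustration only

def added_capacity_gbps(old_top_hz: float, new_top_hz: float) -> float:
    """Estimate the extra raw capacity (Gbps) from widening the usable band."""
    added_bandwidth_hz = new_top_hz - old_top_hz
    return added_bandwidth_hz * SPECTRAL_EFFICIENCY_BPS_PER_HZ / 1e9

print(f"~{added_capacity_gbps(1.2e9, 1.8e9):.0f} Gbps of additional raw capacity")
```

Under those assumptions, the extra 600 MHz works out to roughly 5 Gbps of new raw capacity, which helps explain the multi-gigabit speeds in the demos.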

Charter used equipment supplied by Vicma for the node, Teleste for amplifiers, and ATX Networks for taps. The node is the electronics that sits in a neighborhood and converts the signal from fiber onto the coaxial network. Amplifiers are needed because signals in a coaxial system don’t travel very far before they must be amplified and refreshed. Taps are the devices that peel signals off the coaxial distribution network to feed individual homes. A cable company will have to replace all of these components, plus install new modems, to upgrade to a higher-frequency network – which means the DOCSIS 4.0 upgrade will be expensive.

One of the impressive claims from the Charter demo was that the company could overlay the new DOCSIS system on top of an existing cable network without respacing. That’s a big deal because respacing would mean moving existing channels to make room for the new bandwidth allocation.

Charter achieved speeds of 8.9 Gbps download and 6.2 Gbps upload and feels confident it can push the download speed over 10 Gbps. Comcast achieved 8.2 Gbps download and 5.1 Gbps upload in its test. In addition to researching DOCSIS 4.0, Comcast is also looking for ways to use the new technology to beef up existing DOCSIS 3.1 networks to provide faster upload speeds sooner.

Both companies face a market dilemma. They are under pressure to provide faster upload speeds today, and if they don’t find a way to do that soon, they will lose customers to fiber overbuilders and even to the FWA wireless ISPs. It will be devastating news for cable stock prices in the first quarter after Charter or Comcast loses broadband customers – and the current market trajectory shows that’s likely to happen.

Both companies are still working on lab demos and are using a breadboard chip designed specifically for these tests. The normal lab development process means fiddling with the chip and trying new versions until the engineers are satisfied. That process always takes a lot longer than executives want, but it’s necessary to roll out a product that works right. Still, I have to wonder whether cable executives are really in a big hurry to make an expensive upgrade to DOCSIS 4.0 so soon after upgrading to DOCSIS 3.1.


7G – Really?

I thought I’d check in on the progress that laboratories have made in considering 6G networks. The discussion of what will replace 5G kicked off with a worldwide meeting hosted by the University of Oulu in Levi, Lapland, Finland, in 2019.

6G technology will explore the frequencies between 100 GHz and 1 THz – the range that lies between radio waves and infrared light. These spectrum bands could support almost unimaginable wireless data transmission rates of up to one terabit per second, with the tradeoff that such transmissions will only be effective over extremely short distances.
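
To see why such wide swaths of spectrum translate into terabit-class headline numbers, here is a purely illustrative Shannon-capacity sketch in Python. The 50 GHz channel width and 30 dB signal-to-noise ratio are hypothetical values chosen for the example, not parameters from any actual 6G proposal.

```python
import math

def shannon_capacity_gbps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR), returned in Gbps."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

# Hypothetical example: a 50 GHz channel somewhere above 100 GHz
# received with a 30 dB signal-to-noise ratio.
print(f"~{shannon_capacity_gbps(50e9, 30):.0f} Gbps theoretical ceiling")
```

The point of the sketch is simply that capacity scales with channel width – a 50 GHz channel has a theoretical ceiling near half a terabit per second – which is why researchers talk about terabit speeds even though such signals only carry over extremely short distances.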

Scientists have already said that 5G will be inadequate for some computing and communication needs. There is definitely a case to be made for applications that need huge amounts of data in real time. For example, a 5G wireless signal at a few gigabits per second cannot transmit enough data to support complex real-time manufacturing processes. A 5G network also doesn’t carry enough data to support things like realistic 3D holograms or the future metaverse.

Scientists at the University of Oulu say they are hoping to have a lab demonstration of the ability to harness the higher spectrum bands by 2026, and they expect the world will start gelling on 6G standards around 2028. That all sounds reasonable and is in line with what they announced in 2019. One of the scientists at the University was quoted earlier this year saying that he hoped that 6G wouldn’t get overhyped as happened with both 4G and 5G.

I think it’s too late for that. You don’t need to do anything more than search for 6G on Google to find a different story – you’ll have to wade through a bunch of articles declaring we’ll have commercial 6G by 2030 before you even find any real information from those engaged in 6G research. There is even an online 6G magazine with news about everything 6G. These folks are already hyping that there will be a worldwide scramble as governments fight to be the first ones to master and integrate 6G – an upcoming 6G race.

I just shake my head when I see this – but it is nothing new. It seems every new technology these days spawns an industry of supposed gurus and prognosticators who try to monetize the potential of each new technology. The first technology I recall seeing this happen with was municipal WiFi in the early 2000s. There were expensive seminars and even a paper monthly magazine touting the technology – which, by the way, barely worked and quickly fizzled. Since then, we’ve seen the guru industry pop up for every new technology like 5G, blockchain, AI, bitcoin, and now the metaverse and 6G. Most cutting-edge technologies find their way into the economy eventually, but at a much slower pace than touted by the so-called early experts.

But before the imaginary introduction of 6G by 2030, we first need to integrate 5G into the world. Half of the cellphones in the world still connect using 3G. While 3G is being phased out in the U.S., it’s going to be a slower process elsewhere. And while there are hundreds of Google links to articles predicting huge numbers of 5G customers this year – there aren’t any. At best, we’re currently at 4.1G or 4.2G, but the engineering reality is obviously never going to deter the marketers. We’ll probably see a fully compliant 5G cell site before the end of this decade, and it will be drastically different, and better, than what we’re calling 5G today. It will take another few years after that for real 5G technology to spread across U.S. urban areas. There will then be a major discussion among cellular carriers about whether 5G capabilities make any sense in rural areas, since the technology is mostly aimed at relieving overcrowded urban cellular networks.

Nobody is going to see a 6G cellphone in their lifetime, except perhaps as a gimmick. We’re going to need several generations of better batteries before any handheld device can process terabit data streams without draining the battery within minutes. That may not deter Verizon from showing a cellular speed test at 100 Gbps – but marketers will be marketers.

Believe it or not, there are already discussions about 7G – although nobody can define it. It seems it will have something to do with AI and the Internet of Things. It’s a little fuzzy how something after 6G will even be related to the evolution of cellular technology – but that won’t stop the gurus from making money off the gullible.


What Happened to AirGig?

You might remember press releases from AT&T in 2018 that promised a revolution in rural broadband from a technology called AirGig. The technology was described as using millimeter-wave spectrum to shoot focused radio beams along power lines, with the electric field of the power lines somehow acting to keep the transmissions focused along the wires.

AT&T said at the time that the technology could deliver hundreds of megabits of data to rural homes using a network built from inexpensive plastic components mounted on power lines. The last I heard of the technology was this AT&T video released in 2019.

There had been a field trial of the technology conducted with Georgia Power, and the CEO of the electric company was enthusiastic at the time about the technology. AT&T talked about starting the process of manufacturing hardware. And then . . . crickets. There hasn’t been a word on the web about the technology since then.

I saw articles published by the IEEE in 2019 that talked about a different broadband-over-powerline (BPL) technology developed by Panasonic. The IEEE amended the standard for BPL to recognize Panasonic’s HD-PLC technology. Panasonic claims to have reached 60 Mbps transmissions using the technology but thinks it can goose that to several hundred Mbps.

I’ve always wondered how much of the AT&T announcement of AirGig was hype. Timing-wise, the AirGig announcement came in the middle of the 5G craze, when the cellular carriers were trying to gain major concessions from the government to promote 5G. AT&T and the other carriers wanted a lot more spectrum – and they’ve largely gotten it. Perhaps AT&T was using AirGig to justify more spectrum. But the video shows that AT&T has received a pile of patents for the technology, so it seems to be the real deal.

Today’s blog asks what happened, and I hope somebody who knows will chime in. Did field trials reveal a fatal flaw in the technology? That’s always possible with any wireless technology. Did the technology simply underperform and fail to deliver the promised broadband speeds? Or will AT&T spring a finished technology on the world one of these days?


Explaining SDN

Somebody asked me to explain software-defined networking (SDN), and I thought a good way to answer the question would be to send them to an article that explains the concept. But I couldn’t find anything on the web that explains SDN in plain English. That’s not unusual for technical topics, since tech folks generally have problems explaining what they do to laypeople. They hate boiling things down to simple language because a simple description doesn’t capture the nuances of the technology. I’ve always challenged the engineers I work with to explain what they do in a way their mother could understand – and most look at me like I’m an alien. I won’t promise that this is in plain English, but here is my shot at explaining SDN to a non-technical person.

At its most basic, SDN is a technology that allows networks to be centrally and intelligently controlled, or programmed. What does that mean?

There was a time in early computing when a network owner purchased all of the network gear from one vendor. Doing so made it possible to control the network with one set of software as long as the network owner could master the protocols used by the vendor. This sent a whole generation of IT technicians to become Cisco certified to prove that they had mastered Cisco network gear.

But it’s no longer reasonable today to have a complex network provisioned from one vendor. For one thing, most networks now use the cloud to some extent as part of the network – meaning they use computing power that is outside the direct control of the network owner. The pandemic has also forced most companies into allowing their network to communicate with remote employees – something that many companies refused to consider in the past. Networks have also gotten more complex due to the need to control Internet of Things devices – networks don’t just communicate with computers anymore.

The first goal of SDN is to bring everything under one big software umbrella. SDN provides a software platform that lets a network owner visualize the entire network. What does that mean in plain English? The goal of a network owner is to efficiently flow data to where it needs to go and to do so safely. It’s incredibly challenging to understand the flow of data in a network comprised of multiple devices, multiple feeds to and from the outside world, and constantly shifting demand from users on how they want to use the data.

SDN is a software platform that enables the network owner to see the data flow between different parts of the platform. Modern SDN technology has evolved from the OpenFlow protocol developed in 2008 in a collaboration between Stanford University and the University of California at Berkeley. The original platform enabled a network owner to measure and manage data traffic between routers and switches, regardless of the brand of equipment.

Over time, SDN has grown in sophistication and can do much more. As an example, with SDN, a network owner can set different levels of security for different parts of the network. A network operator might wall off traffic between remote employees and core data storage so that somebody working remotely can’t get access to some parts of the network. SDN software provides a way to break a network into subsets and treat each of them differently in terms of security protocols, the priority of routing, and access to other parts of the network. This is something that can’t easily be done by tinkering with the software settings of each individual router and switch – which is what network operators tried to do before SDN.
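
As a purely conceptual sketch – not any real SDN controller API such as OpenFlow, and with all names invented for illustration – the toy Python below shows the central idea: one controller holds the policy for each segment of the network and pushes it to every device, regardless of vendor.

```python
from dataclasses import dataclass, field

@dataclass
class SegmentPolicy:
    """A policy the central controller applies to one slice of the network."""
    name: str
    allowed_destinations: set = field(default_factory=set)
    priority: int = 0  # higher numbers get routed first

@dataclass
class Device:
    """A generic (white-box) switch or router from any vendor."""
    device_id: str
    vendor: str
    rules: list = field(default_factory=list)

class ToyController:
    """Central brain: knows every device and every segment policy."""
    def __init__(self):
        self.devices = []
        self.policies = {}

    def register(self, device: Device):
        self.devices.append(device)

    def define_policy(self, policy: SegmentPolicy):
        self.policies[policy.name] = policy

    def push_policies(self):
        # Every device gets the same rules, regardless of who made it.
        for device in self.devices:
            device.rules = list(self.policies.values())

# Example: remote workers can reach collaboration apps but not core storage.
controller = ToyController()
controller.register(Device("sw-01", "VendorA"))
controller.register(Device("sw-02", "VendorB"))
controller.define_policy(SegmentPolicy("remote-workers", {"collab-apps"}, priority=1))
controller.define_policy(SegmentPolicy("core-storage", {"data-center"}, priority=2))
controller.push_policies()
print([rule.name for rule in controller.devices[0].rules])
```

A real controller obviously does far more – it speaks protocols like OpenFlow to the hardware – but the shape of the idea is the same: policy lives in one central place instead of being tinkered into each box.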

There have been huge benefits from SDN. Probably the biggest is that SDN allows a network owner to use generic white-box devices in the network – inexpensive routers and switches that are not pre-loaded with expensive vendor software. The SDN software can direct the generic devices to perform a needed function without the box needing to be pre-programmed. That’s the second big benefit of SDN – the whole network can be programmed as if every device came from the same vendor. The SDN software can tell each part of the network what to do and can even override preset functions from vendors.

It’s not hard to see why this is difficult for a network engineer to explain: they don’t want to describe the primary goals of SDN without dipping into how it accomplishes all of this – and that is something that’s incredibly hard to explain without technical language and jargon. For that, I’d send you to the many articles written on the topic.


Jitter – A Measure of Broadband Quality

Most people have heard of latency, which is a measure of the average delay of data packets on a network. There is another important measure of network quality that is rarely talked about. Jitter is the variation in the delay of packets being delivered through a broadband connection – jitter occurs when the latency increases or decreases over time.

We have a tendency in the industry to oversimplify technical issues. We take a speed test and assume the number that pops out is our speed. Those same speed tests also measure latency, and even network engineers sometimes get mentally lazy and are satisfied to see an expected latency number on a network test. But in reality, the broadband signal coming into your home is incredibly erratic. From millisecond to millisecond, the amount of data hitting your home network varies widely. Measuring jitter means measuring that degree of network chaos.
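
As a sketch of what measuring that chaos looks like in practice, here is a small Python example using the smoothed interarrival-jitter estimator from RFC 3550 (the RTP specification). The packet delay samples are made-up numbers chosen to contrast a steady connection with an erratic one.

```python
def interarrival_jitter(transit_times_ms):
    """Smoothed jitter estimate per RFC 3550: J += (|D| - J) / 16."""
    jitter = 0.0
    for previous, current in zip(transit_times_ms, transit_times_ms[1:]):
        difference = abs(current - previous)
        jitter += (difference - jitter) / 16
    return jitter

# Made-up one-way packet delays (ms) for two connections with the same
# average latency (about 20.6 ms) but very different consistency.
steady = [20, 21, 20, 22, 21, 20, 21, 20]
erratic = [5, 45, 8, 40, 12, 38, 6, 11]
print(f"steady link jitter:  {interarrival_jitter(steady):.1f} ms")
print(f"erratic link jitter: {interarrival_jitter(erratic):.1f} ms")
```

Both connections would report nearly the same latency on a speed test, but the jitter numbers tell a very different story about how evenly the packets arrive.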

Jitter increases when networks get overwhelmed, even temporarily. Delays are caused in any network when the amount of data being delivered exceeds what can be accepted. There are a few common causes of increased jitter:

·  Not Enough Bandwidth. Low-bandwidth connections experience increased jitter when incoming packets exceed the capacity of the broadband connection. The effect can cascade when the network is overwhelmed – being overly busy increases jitter, and the worse jitter then makes it even harder to receive incoming packets.

·  Hardware Limitations. Networks bog down when outdated routers, switches, or modems can’t fully handle the volume of packets. Even issues like old or faulty cabling can cause delays and increase jitter.

·  Network Handoffs. Handoff points create bottlenecks, and bottlenecks are the most vulnerable points in a network. The most common bottleneck in our homes is the device that converts landline broadband into WiFi. Even a slight hiccup at a bottleneck will degrade performance across the entire network.

All of these factors help explain why old technology like DSL performs even worse than might be expected. Consider a home with a 15 Mbps download connection on DSL. If an ISP instead delivered a 15 Mbps connection on fiber, the same customer would see a significant improvement, because the fiber connection avoids the jitter caused by antiquated DSL hardware. We tend to focus on speeds, but a 100 Mbps connection on a fiber network will typically have a lot less jitter than a 100 Mbps connection on a cable company network. Customers who try a fiber connection for the first time commonly say that the network ‘feels’ faster – what they are noticing is the reduced jitter.

Jitter is deadliest to real-time connections – most people aren’t concerned about jitter if it just means a file takes a little longer to download. But increased jitter can play havoc with an important Zoom call or with maintaining a TV signal during a big sporting event. It’s easiest to notice jitter when a real-time function hesitates or fails. Your home might have plenty of download bandwidth, and yet broadband connections still fail because the small problems caused by jitter can accumulate until the connection breaks.

ISPs have techniques that can help control jitter. One of the more interesting ones is a jitter buffer that grabs and holds data packets that arrive too quickly. It may not feel intuitive that slowing a network down can improve quality. But recall that jitter is caused by time delays between different packets in the same transmission. There is no way to make the slowest packets arrive any sooner – so slowing down the fastest ones increases the chance that the packets of a Zoom call can be delivered evenly.
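
Here is a minimal sketch of that jitter-buffer idea: packets that arrive early are held and released on a fixed playout schedule, so the receiver sees an even stream. The 60 ms buffer depth and 20 ms packet interval are arbitrary illustrative values, and the arrival times are made up.

```python
def playout_schedule(arrivals, buffer_ms=60, interval_ms=20):
    """Hold early packets and release one every interval_ms.

    arrivals: list of (sequence_number, arrival_time_ms) tuples.
    Returns (sequence_number, playout_time_ms, late) tuples.
    """
    # The first packet plays out after the buffer delay; every later packet
    # gets a deadline one interval after the previous one.
    base = arrivals[0][1] + buffer_ms
    schedule = []
    for seq, arrived in sorted(arrivals):
        deadline = base + seq * interval_ms
        late = arrived > deadline          # missed its slot entirely
        playout = max(arrived, deadline)   # early packets wait in the buffer
        schedule.append((seq, playout, late))
    return schedule

# Made-up arrival times (ms) for packets that were sent every 20 ms.
arrivals = [(0, 100), (1, 118), (2, 155), (3, 146), (4, 265)]
for seq, playout, late in playout_schedule(arrivals):
    print(f"packet {seq}: play at {playout} ms{' (LATE)' if late else ''}")
```

The buffer trades a little extra latency for smoothness – every packet that arrives within the 60 ms window plays out exactly on schedule, and only the badly delayed packet at the end misses its slot.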

Fully understanding the causes of jitter in any specific network is a challenge because the causes can be subtle. It’s often hard to pinpoint a jitter problem because it can be here one millisecond and gone the next. But it’s something we should be discussing more. A lot of the complaints people have about their broadband connection are caused by too-high jitter.


Deploying 10-Gigabit PON

From a cost perspective, we’re not seeing any practical difference between the price of XGS-PON that offers a 10-gigabit data path and traditional GPON. I have a number of clients now installing XGS-PON, and we now recommend it for new fiber projects. I’ve been curious about how ISPs are going to deploy the technology in residential and small-business neighborhoods.

GPON has been the technology of choice for well over a decade. GPON delivers a download path of 2.4 gigabits of bandwidth to each neighborhood PON. Most of my clients have deployed GPON in groups of up to 32 customers per neighborhood PON, and in practice most of them pack a few fewer than 32 onto the typical GPON card.

I’m curious about how ISPs will deploy XGS-PON. From a pure math perspective, an XGS-PON network delivers four times as much bandwidth to each neighborhood as GPON. An ISP could maintain the same level of service as GPON by packing 128 customers onto each XGS-PON card. But network engineering is never that nicely linear, and there are a number of factors to consider when designing a new network.

All ISPs rely on oversubscription when deciding the amount of bandwidth needed for a given portion of a network. Oversubscription is shorthand for taking advantage of the fact that customers in a given neighborhood rarely use all of the bandwidth they’ve been assigned, and never all at the same time. Oversubscription is what lets an ISP feel safe selling gigabit broadband to 32 customers on a GPON network, knowing that collectively they will not ask to use more than 2.4 gigabits at the same time. For a more detailed description of oversubscription, see this earlier blog. There are ISPs today that put 64 or more customers on a GPON card – the current capacity is up to 128 customers. ISPs understand that putting too many customers on a PON card will start to emulate the poor behavior we see in cable company networks that sometimes bog down at busy times.
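
A quick sketch of the oversubscription arithmetic, using the numbers from the paragraph above (GPON at a nominal 2.4 Gbps, XGS-PON at a nominal 10 Gbps):

```python
def oversubscription_ratio(customers: int, sold_mbps: float, pon_capacity_mbps: float) -> float:
    """Ratio of total bandwidth sold to the shared capacity of one PON."""
    return customers * sold_mbps / pon_capacity_mbps

# 32 gigabit customers sharing a 2.4 Gbps GPON port
print(f"GPON, 32 customers:     {oversubscription_ratio(32, 1000, 2400):.1f}:1")
# 128 gigabit customers sharing a 10 Gbps XGS-PON port
print(f"XGS-PON, 128 customers: {oversubscription_ratio(128, 1000, 10000):.1f}:1")
```

The two ratios come out nearly identical (about 13:1 in both cases), which is the arithmetic behind the idea that 128 customers on XGS-PON delivers roughly the same level of service as 32 customers on GPON.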

Most GPON networks today are not overstressed. Most of my clients tell me they can comfortably fit 32 customers onto a GPON card and only rarely see a neighborhood maxed out on bandwidth. But ISPs do sometimes see a PON get overstretched when there are more than a few heavy users in the same PON. The easiest solution to that issue today is to reduce the number of customers in the busy PON – for example, by splitting it into two 16-customer PONs. This isn’t an expensive issue because over-busy PONs are still a rarity.

ISPs understand that, year after year, customers use more bandwidth and engage in more data-intensive tasks. Certainly, a PON with half a dozen people now working from home is a lot busier than it was before the pandemic. It might be years before a lot of neighborhood PONs get overstressed, but eventually the growth in bandwidth demand will catch up to the GPON capacity. As a reminder, the PON engineering decision is based on the amount of demand at the busiest times of the day. That busy-hour level of traffic is not growing as quickly as the overall bandwidth used by homes – which more than doubled in just the last three years.

There are other considerations in designing XGS-PON. Today, the worst that can happen with a PON failure is for 32 customers to lose bandwidth if a PON card fails. It feels riskier from a business perspective to have 128 customers sharing a PON card – that’s a much more significant network outage.

There is no magic metric for an ISP to use. You can’t fully trust vendors, because they will sell more PON cards if an ISP is extremely conservative and puts only 32 customers on a 10-gigabit PON. But ISP owners might not feel comfortable leaping to 128 or more customers on a PON. There are worse decisions to have to make, because almost any configuration of PON oversubscription will work on a 10-gigabit network. The right solution balances the need to make sure customers get the bandwidth they expect against being so conservative that the PON cards are massively underutilized. Over time, ISPs will develop internal metrics that fit their service philosophy and the demands of their customer base.


The Battle for IoT

There is an interesting battle going on to be the technology that monetizes the control of Internet of Things devices. Like a lot of tech hype, IoT has developed a lot more slowly than originally predicted – but it’s now finally becoming a big business. I think back to a decade ago when tech prognosticators said we’d soon be living in a virtual cloud of small sensors that would monitor everything in our lives. According to those early predictions, our farm fields should already be fully automated, and we should all be living in the smart home envisioned by the Jetsons. Those predictions probably say more about the tech press that hypes new technologies than about IoT.

I’ve been noticing increasing press releases and articles talking about different approaches to monetizing IoT traffic. The one that we’ve all heard the most about is 5G. The cellular companies told Wall Street five years ago that the monitoring of IoT devices was going to fuel the 5G business plan. The wireless companies envisioned households all buying a second cellular subscription to monitor devices.

Except in a few minor examples, this business plan never materialized. I was reminded of it this week when I saw AT&T partnering with Smart Meter to provide patient monitoring for chronic conditions like diabetes and high blood pressure. The monitoring devices worn by patients include a SIM card, and patients can be monitored anywhere within range of a cellular signal. It’s a great way for AT&T to monetize IoT subscriptions – in this case, with monthly fees likely covered by health insurance. It sounds like an awesome product.

Another player in the IoT world is LEO satellites. In August of last year, SpaceX made a rare acquisition when it bought Swarm, a company that envisions using satellites to monitor outdoor IoT devices anywhere in the world. The Swarm satellites weigh less than a pound each, and the Swarm website says the goal is to have three of the small satellites in range of every point on earth by the end of 2022. That timeline slowed due to the acquisition, but this could be a huge additional revenue stream for the company. Swarm envisions putting small receivers in places like farm fields. As with Starlink, customers must buy the receivers, and there is an IoT data monitoring plan that allows the collection of 750 data packets per month for $60 per year.

Also still active in pursuing the market are a number of companies promoting LoRaWAN technology. This technology uses tall towers or blimps and CBRS or some other low-power spectrum to communicate with IoT monitors over a large geographic area. The companies developing this technology can be found at the LoRa Alliance.

Of course, the current king of IoT is WiFi. Charter recently said it has 5 billion devices connected to its WiFi network. WiFi has the advantage of providing a free IoT connection for the price of a broadband subscription.

Each of these technologies has a natural market niche. The AT&T health monitoring system only makes sense on a cellular network since patients need to be monitored everywhere they go during the day. Cellular should be the go-to technology for mobile monitoring. The battle between LoRaWAN and satellites will be interesting and will likely eventually come down to price. Both technologies can be used to reach farm fields where cellular coverage is likely to never be ubiquitous. WiFi is likely to carry the signals from the devices in our homes – the AT&T vision of everybody buying an IoT cellular data plan sounds extremely unlikely since we all can have the same thing for the cost of a WiFi router.


When Will We See Real 5G?

The wireless industry’s non-stop claims that we’ve moved from 4G to 5G finally slowed to the point that I stopped paying attention during the last year. But there is an interesting article in PC Magazine that explains why 5G has dropped off the front burner.

The article cites interviews with Ari Pouttu of Finland’s University of Oulu about the current state and the future of 5G. That university has been at the forefront of the development of 5G technology and is already looking at 6G technology.

Pouttu reminds us that a new ‘G’ generation of wireless technology comes along about every ten years, but that it takes twenty years for the market to fully embrace all of the benefits of each new generation.

We are just now entering the heyday of 4G. The term 4G has been bandied about by wireless marketing folks for so long that it’s hard to believe we didn’t see a fully functional 4G cell site until late in 2018. Since then, the cellular companies have beefed up 4G in two ways. First, the technology has now spread to cell sites everywhere. But more importantly, 4G systems have been bolstered by the addition of new bands of cellular spectrum. The marketing folks have gleefully labeled this new spectrum as 5G, but the new spectrum is doing nothing more than supporting the 4G network.

I venture to guess that almost nobody thinks their life has been drastically improved because 4G cellphone speeds have climbed in cities over the last few years from 30 Mbps to over 100 Mbps. I can see that faster speed on my cellphone if I take a speed test, but I haven’t really noticed much difference between the performance of my phone today compared to four years ago.

There are two major benefits from the beefed-up 4G. The first benefits everybody but has gone unnoticed. The traditional spectrum bands used for 4G were getting badly overloaded, particularly in metropolitan areas. The new bands of spectrum have relieved the pressure on cell sites and are supporting the continued growth in cellular data use. Without the new spectrum, our 4G experience would be deteriorating.

The new spectrum has also enabled the cellular carriers to all launch rural fixed cellular broadband products. Before the new spectrum, there was not enough bandwidth on rural cell sites to support both cellphones and fixed cellular customers. The many rural homes that can finally buy cellular broadband that is faster than rural DSL are the biggest winners.

But those improvements have nothing to do with 5G. The article points out what has always been the case. The promise of 5G has never been about better cellphone performance. It’s always been about applications like using wireless spectrum in complex settings like factories where feedback from huge numbers of sensors needs to be coordinated in real-time.

The cellular industry marketing machine did a real number on all of us – but perhaps most of all on the politicians. We’ve had the White House, Congress, and State politicians all talking about how the U.S. needed to win the 5G war with China – and there is still some of that talk going around today. This hype was pure rubbish. What the cellular carriers needed was more spectrum from the FCC to stave off the collapse of the cellular networks. But no cellular company wanted to crawl to Congress begging for more spectrum, because doing so would have meant the collapse of cellular company stock prices. Instead, we were fed a steady diet of false rhetoric about how 5G was going to transform the world.

The message from the University of Oulu is that most 5G features are probably still five or six years away. But even when they finally get here, 5G is not going to bring much benefit or change to our daily cellphone usage. It was never intended to do that. We already have 100 Mbps cellular data speeds with no idea how to use the extra speed on our cellphones.

Perhaps all we’ve learned from this experience is that the big cellular companies have a huge amount of political and social clout and were able to pull the wool over everybody’s eyes. They told us that the sky was falling and could only be fixed with 5G. I guess we’ll find out in a few years whether we learned any lesson from this, because we can’t be far off from hearing the hype about 6G. This time it will be 100% hype, because 6G relies on extremely high frequencies with such short range that they will never be used in outdoor cellular networks. But I have a feeling that we’ll find ourselves in a 6G war with China before we know it.


Fixed Cellular Broadband Performance

One of the first in-depth reviews I’ve found of T-Mobile’s fixed cellular broadband was published in The Verge. It’s not particularly flattering to T-Mobile, and this particular customer found the performance to be unreliable – fast sometimes and barely functioning at other times. But I’ve seen other T-Mobile customers raving about the speeds they are receiving.

We obviously can’t draw any conclusions from a single review by one customer, but his experience, contrasted with the good reviews from others, prompted me to talk about why performance on cellular broadband networks can vary so significantly.

I’ve always used the word wonky to describe cellular performance. It’s something I’ve tracked at my own house, and for years the reception of the cellular signal in my home office has varied hour-by-hour and day-by-day. This is a basic characteristic of cellular networks that you’ll never find the cellular carriers talking about or admitting.

The foremost issue with cellular signal strength is the distance between the customer and the local cell tower. All wireless data transmissions weaken with distance. This is easy to understand: wireless transmissions spread out after they leave the transmitter. With the same receiver, a customer who is close to the tower will capture a stronger signal, and more data bits, than somebody farther away where the signal has spread and weakened. The customer in the bad review admitted he wasn’t particularly close to a cell tower, and somebody in his own neighborhood who lives closer to the cell site might have a stronger signal and a better opinion of the product.
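
To put a rough number on how quickly a signal fades with distance, here is a hedged sketch using the standard free-space path loss formula. Real cellular propagation is worse than free space (terrain, foliage, and buildings all add loss), so treat these as optimistic illustrative figures; the 600 MHz frequency is just an example band.

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44 dB."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for km in (1, 3, 10):
    loss = free_space_path_loss_db(km, 600)  # example 600 MHz cellular band
    print(f"{km:2d} km: {loss:.0f} dB of path loss")
```

Even in this best-case model, the signal at 10 km is 20 dB weaker than at 1 km – one percent of the power – which is why the customer’s distance from the tower matters so much.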

There are other factors that create variability in a cellular signal. One is basic physics and the way radio waves behave outdoors. The cellular signal emanating from your local cell tower varies with conditions in the atmosphere – temperature, humidity, precipitation, and even wind. Anything that stirs up the air will affect the cellular signal. A wireless signal in the wild is unpredictable and variable.

Another issue is interference. Cellular companies that use licensed spectrum don’t want to talk about interference, but it exists everywhere. Some interference comes from natural sources like sunspots. But the biggest source of interference is the signal from other cell towers. Interference occurs any time there are multiple sources of the same frequency being used in the same area.

The customer in the review talks about performance differing by the time of day. That is a phenomenon that can affect all broadband networks and is specific to the local robustness of the T-Mobile network. Performance drops when networks get too busy. Every DSL or cable broadband customer has witnessed the network slowing at certain times of the day. This can be caused by too many customers sharing the local network – in this case, the number of customers using a cell tower at the same time. The problem can also be caused by high regional usage if multiple cell towers share the same underlying broadband backbone.

The final issue, which is somewhat unique to cellular networks, is carrier priority. It’s highly likely that T-Mobile gives first priority to customers using cell phones. That’s the company’s primary source of revenue, so cell phones get first dibs on the bandwidth. That means that at busy times, the data left over for fixed cellular customers might be greatly pinched. As T-Mobile and other carriers sell more of the fixed product, I predict that having second priority will become a familiar phenomenon.

This blog is not intended to be a slam against fixed cellular broadband. The customer who wrote the review switched to cellular broadband to get a less expensive connection than the one from his cable company. But he clearly bought into the T-Mobile advertising hype, because a cellular broadband signal will never be as reliable as a signal delivered through wires.

We can’t forget the real promise of fixed cellular broadband – bringing broadband to folks who have no alternatives. Somebody that switched to T-Mobile from a 1 Mbps rural DSL product would have written a different and more glowing review of the same product. The bottom line is that anybody buying cellular broadband should recognize that it’s a wireless product – and that means the product comes with the quirks and limitations that are inherent with wireless broadband. I imagine that we’re going to continue to see bad reviews from customers who want to save money but still want the performance that comes with wired broadband. This is another reminder that it’s a mistake to judge a broadband product strictly by the download speed – a 100 Mbps cellular broadband product is not the same as a 100 Mbps cable company connection.


The Future of Data Storage

One of the consequences of our increased use of broadband is a big increase in the amount of data that we store outside our homes and businesses. The numbers are becoming staggering. There are currently about 3.7 billion people using the Internet, and together we generate 2.5 quintillion bytes of online data every day. The trend is that by 2025 we’ll be storing 160 zettabytes of data per year – a zettabyte is one trillion gigabytes.
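
Taking the figures in that paragraph at face value, here is a quick sketch of the arithmetic in Python – the numbers are the ones from the text, and the rounding is mine:

```python
INTERNET_USERS = 3.7e9       # people online (figure from the text)
DAILY_DATA_BYTES = 2.5e18    # 2.5 quintillion bytes generated per day
ZETTABYTE = 1e21             # one trillion gigabytes

per_user_mb_per_day = DAILY_DATA_BYTES / INTERNET_USERS / 1e6
yearly_zettabytes = DAILY_DATA_BYTES * 365 / ZETTABYTE

print(f"~{per_user_mb_per_day:.0f} MB generated per user per day")    # ~676 MB
print(f"~{yearly_zettabytes:.2f} ZB per year at today's daily rate")  # ~0.91 ZB
```

That works out to roughly two-thirds of a gigabyte per person per day, and less than one zettabyte per year at the current pace – which shows just how much growth is baked into the 160-zettabyte projection for 2025.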

I store a lot more data online than I used to. I now store things in the cloud all day long. When I edit a Word or Excel file, my changes are all stored in the cloud. I also back up every change on my computer every day. I write and store these blogs on a WordPress server. Copies of my blogs are automatically posted and stored on Twitter and LinkedIn. My company’s accounting records are stored online. When my car pulls into a driveway, it uploads diagnostics into the cloud. Pictures I take on my cellphone are automatically saved. I have no idea what else is being shared and saved by apps and software that I routinely use. As recently as a few years ago, I had very little interaction with the cloud, but I now seemingly live and work in the cloud.

It may be hard to believe, but in the foreseeable future we’ll be facing a data storage crisis. We can’t afford the resources to keep storing data the way we do today. Data centers now use nearly 20% of the electricity consumed by the technology sector. A single data center can use more electricity than a small town. We’re consuming electric generation resources and spinning off huge amounts of carbon dioxide to save the 45 pictures taken at the birthday party you attended last night.

One of the obvious solutions to the data storage challenge is to throw away data. But who gets to decide what gets kept? The alternative is to find better methods of data storage that don’t require as much energy or take as much space. There are several areas of research into better storage – none is yet ready for prime time, but the ideas are intriguing.

5D Optical Storage.

Researchers at the University of Southampton are exploring data storage that uses lasers to etch data into cubes of silica glass. The technique is called 5D because, in addition to the normal three spatial axes, it uses the size and orientation of each recorded mark as storage parameters. Think of this as a 3D version of the way we used to store data on compact discs. The technology would be used for long-term storage, since something etched into glass is permanent. Storing data in glass requires no power, and the glass cubes are nearly indestructible. One small cube could store hundreds of terabytes of data.

Cold Storage.

Researchers at the University of Manchester are taking a different approach and looking at the benefits of storing data at super-cold temperatures. They have developed man-made molecules that can store several hundred times more data than the equivalent space on current hard drives. The key is keeping the molecules at low temperatures. This is the same research group that discovered graphene and that works with unique molecular structures. Scientists have long known that low-temperature storage can work; the big breakthrough is getting the technology to work at 80 Kelvin using liquid nitrogen, which is significantly warmer than past work near absolute zero using liquid helium. Since our atmosphere is mostly nitrogen, the liquefied gas is inexpensive to produce. Scientists hope the molecules will be able to retain data for a long time, even after losing power.

DNA Storage.

Scientists have been intrigued for over a decade by the idea of using DNA as a storage medium. DNA could be an ideal storage medium because its nucleotide base pairs and convoluted coiled structure provide a lot of storage capacity in a condensed space. A team at Harvard was able to store the code for a video on a strand of bacterial DNA. Since then, the commercial company Catalog has been working to perfect the technology. The company believes it is close to a breakthrough by using a synthetic version of a DNA molecule rather than living tissue. Data can be written to the molecule as it’s being assembled. Like etched glass, this is permanent storage and highly promising. This past summer, the company announced it was able to record the full 16 gigabytes of Wikipedia into a tiny vial of the material.

We need these technologies and others to work if we don’t want to drown in our own data.
