New Technologies, June 2017

Following are some interesting new technologies I’ve run across recently.

WiFi Imaging. Cognitive Systems has a product they call Aura that can detect motion inside a home using WiFi. The underlying technique, called Radio Frequency (RF) Capture, was developed a few years ago at MIT. The device senses subtle changes in wireless signals to determine if something is moving in the home. It can be set to different sensitivities so that it detects people but not animals. It can also be set to track specific cellphones so that you’ll know when a known person has entered or left the home. For now the device does not connect to external security services, but instead sends alerts to a smartphone.

Researchers at the Technical University of Munich have already taken this same idea a lot further. In a paper published in Physical Review Letters they describe a technique that uses WiFi to create 3D holographic images through walls. The lab unit they built can detect objects down to about 4 centimeters in size. It scans ten times per second and can see the outlines of people or pets moving inside another room. The technology is eerily reminiscent of the surveillance machine in The Dark Knight – the one Bruce Wayne destroys at the end of the movie precisely because it’s such a scary invasion of privacy.

Eliminating IoT Batteries. One of the scariest things about the exploding number of IoT devices is the need to power them, and the potentially huge waste, cost and hassle of supplying batteries for tons of devices. Tryst Energy from the Netherlands has developed an extremely efficient solar device that needs only 200 lux of light for four hours per day to operate a small sensor that communicates over Bluetooth or WiFi. That is the amount of light normally found under a desk. The device also ought to last for 75 – 100 years, opening the ability to place small IoT sensors in all sorts of places to monitor things. When you consider the billions of devices expected over the next decade, this could provide a huge boost to the IoT industry and also provide a green solution for powering tiny devices. The device is just starting to go into production.
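
A quick back-of-envelope check shows why 200 lux can be enough. All of the figures below – luminous efficacy, cell size, cell efficiency and per-transmission energy – are my own assumptions for illustration, not Tryst’s specs:

    # Back-of-envelope: can 200 lux for 4 hours/day power a wireless sensor?
    # All values below are assumptions for illustration, not vendor specs.
    LUX = 200                  # light level under a desk
    LUMENS_PER_WATT = 120      # assumed luminous efficacy of the ambient light
    CELL_AREA_M2 = 5e-4        # assumed 5 cm^2 solar cell
    CELL_EFFICIENCY = 0.10     # assumed indoor photovoltaic efficiency
    HOURS = 4

    irradiance_w_m2 = LUX / LUMENS_PER_WATT             # ~1.7 W/m^2
    harvest_w = irradiance_w_m2 * CELL_AREA_M2 * CELL_EFFICIENCY
    joules_per_day = harvest_w * HOURS * 3600           # ~1.2 J/day

    TX_ENERGY_J = 30e-6        # assumed energy per low-power transmission
    print(f"Harvested: {joules_per_day:.2f} J/day")
    print(f"Transmissions supported: {joules_per_day / TX_ENERGY_J:,.0f} per day")

Roughly a joule a day sounds tiny, but at tens of microjoules per low-power transmission it supports tens of thousands of sensor reports per day – the kind of margin that makes battery-free sensors plausible.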

Bots Have Created Their Own Language. A team at OpenAI, the artificial intelligence lab founded by Elon Musk and Sam Altman, has published a paper describing how bots have created their own language to communicate with each other. They accomplished this by presenting bots – computer programs that are taught to accomplish tasks – with simple challenges that require collaboration. Bots are mostly being used these days to learn to communicate with people. But the OpenAI team instead challenged the bots to solve spatial problems, such as devising a way to move together to a specific location inside a simulated world. Rather than tell the bots how to accomplish this, they simply required that the bots collaborate to accomplish the assigned tasks. What they found was that the bots created their own ‘language’ to communicate with each other, and that the language got more efficient over time. This starts sounding a bit like a bad sci-fi world where computers talk to each other in languages we can’t decipher.
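
OpenAI’s actual system used reinforcement learning in a simulated 2D world, which is far more elaborate than anything that fits here. But a toy ‘signaling game’ shows the core mechanic the paper relies on: nobody assigns meanings to symbols, yet two agents rewarded only for task success converge on a shared code. The vocabulary size, learning rate and schedule below are invented for illustration:

    import random
    from collections import defaultdict

    N_TARGETS, N_SYMBOLS = 3, 3
    speaker = defaultdict(float)   # value of (target, symbol) pairs
    listener = defaultdict(float)  # value of (symbol, guess) pairs

    def choose(q, state, n_actions, eps):
        # epsilon-greedy: mostly exploit the best-known action
        if random.random() < eps:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: q[(state, a)])

    for step in range(30000):
        eps = max(0.05, 1.0 - step / 10000)   # decaying exploration
        target = random.randrange(N_TARGETS)  # what the speaker must convey
        symbol = choose(speaker, target, N_SYMBOLS, eps)
        guess = choose(listener, symbol, N_TARGETS, eps)
        reward = 1.0 if guess == target else 0.0
        # both agents nudge their estimates toward the shared reward
        speaker[(target, symbol)] += 0.1 * (reward - speaker[(target, symbol)])
        listener[(symbol, guess)] += 0.1 * (reward - listener[(symbol, guess)])

    # print the emergent 'lexicon' - the mapping was never designed
    for t in range(N_TARGETS):
        s = max(range(N_SYMBOLS), key=lambda a: speaker[(t, a)])
        print(f"target {t} -> symbol {s}")

Run it a few times and a consistent target-to-symbol mapping usually emerges – but, as with the OpenAI bots, the ‘language’ is arbitrary and differs from run to run, which is exactly why such codes are hard for outsiders to decipher.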

Recycling CO2. Liang-shi Li at Indiana University has found a way to recycle CO2 for the production of power. He has created a molecule that, with the addition of sunlight, can turn CO2 from the atmosphere into carbon monoxide. The carbon monoxide can then be burned to create power, with the byproduct being CO2. If scaled, this would provide a method of producing power that adds no net CO2 to the atmosphere (since it recycles the CO2). Li uses a nanographene molecule that has a dark color and absorbs large amounts of sunlight. The molecule also includes rhenium, which acts as the catalyst that turns nearby CO2 into carbon monoxide. He’s hoping to eventually replace the rare and expensive rhenium with more easily obtained magnesium.
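
The ‘no net CO2’ claim is easier to see when the cycle is written out. In simplified form (the published chemistry involves a proton source and several intermediate steps, so treat this as a sketch of the overall cycle):

    sunlight + catalyst:  CO2 + 2 H+ + 2 e-  ->  CO + H2O
    burning the CO:       2 CO + O2          ->  2 CO2

Every molecule of CO2 consumed in the first step reappears when the carbon monoxide is burned, so the loop adds no new carbon to the atmosphere – the sunlight is what supplies the usable energy.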

Liquid Light. It’s common knowledge that light usually acts like a wave, expanding outward until it’s reflected or absorbed by an object. But in recent years scientists have also discovered that, under extreme conditions near absolute zero, light can act like a liquid and flow around objects and join back together on the other side. The materials and processes used to produce liquid light are referred to as Bose-Einstein condensates.

Scientists from CNR Nanotec in Italy, Ecole Polytechnique de Montreal in Canada, and Aalto University in Finland just published an article in Nature Physics showing that light can also exist in a ‘superfluid’ state, where it flows around objects with no friction. Of most interest is that this phenomenon can be produced at normal room temperature and air pressure. The scientists created the effect by sandwiching organic molecules between two highly-reflective mirrors. They believe that the interaction of light with the molecules induces the photons to take on characteristics of the electrons in the molecules.

The potential uses for this technique, if perfected, are huge. It would mean that light could be made to pass through computer chips with no friction, meaning no creation of the heat that is the bane of data centers.

Comparing Streaming and Broadcast Video

One thing that doesn’t get talked about a lot in the battle between broadcast TV and on-line video is video quality. For the most part, broadcast TV today still holds the edge over on-line video.

When I think of broadcast TV over a cable system I can’t help but remember back twenty years ago, when the majority of the channels on a cable system were analog. I remember when certain channels were snowy, when images were doubled with ghosts, and when the first couple of channels in the cable lineup were nearly unwatchable. Today the vast majority of channels on most cable systems are digital, but there are still exceptions. The conversion to digital resulted in a big improvement in transmission quality.

When cable systems introduced HDTV the quality got even better. I can remember flipping back and forth between the HD and SD versions of the same channel on my Comcast system just to see the huge difference.

This is not to say that cable systems have eliminated quality issues. It’s still common on many cable systems to see pixelation, especially during high-action scenes where the background is constantly changing. Not all cable systems are the same, so there are differences in quality from one city to the next. All digital video on cable systems is compressed at the headend and decompressed at the settop box. That process robs a significant amount of quality from a transmission, and one only has to compare any cable movie to the Blu-ray version to realize how much is lost in the translation.

In the on-line world, buffered video can be as good as cable system video. But on-line video distributors tend to compress video even more than cable systems do – something they largely get away with since a lot of on-line video is watched on smaller screens. This means that a side-by-side comparison of SD or HD movies would usually favor the cable system. But Netflix, Amazon and a few others have one advantage today with the spectacular quality of their 4K videos – there is nothing comparable on cable networks.

But on-line live-streamed video still has significant issues. I watch sports on-line and the quality is often poor. The problems with live-streamed video are mostly due to delays in the signal getting to the user. Some of that delay is latency – either latency in the backbone network between the video creator and the ISP, or latency in the connection between the ISP and the end-user. Unlike downloading a data file, where your computer will wait until it has collected all of the needed packets, live-streamed video is rendered with whatever pixels have arrived by the needed time. This creates all sorts of interesting issues when watching live sports. For instance, there is pixelation, but it doesn’t look like the pixelation you see on a cable network. Instead, parts of the screen get fuzzy when they aren’t receiving all the pixels. There are numerous other video glitches as well. And it’s still not uncommon for the entire picture to freeze for a while, which causes an agonizing gap when you are watching sports since it always seems to happen at a critical moment.
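
The difference between a download and a live stream comes down to deadlines. Here is a minimal simulation of that deadline effect – the frame rate is real, but the buffer size, average delay and jitter figures are my own assumptions, not measurements of any actual service:

    import random

    random.seed(1)
    FPS = 30
    BUFFER_MS = 150            # assumed client-side buffer for a 'live' stream
    MEAN_DELAY_MS = 100        # assumed average network delay
    JITTER_MS = 80             # assumed delay variation

    degraded = 0
    frames = 30 * FPS          # simulate 30 seconds of video
    for i in range(frames):
        sent_ms = i * 1000 / FPS
        arrival = sent_ms + random.gauss(MEAN_DELAY_MS, JITTER_MS)
        deadline = sent_ms + MEAN_DELAY_MS + BUFFER_MS
        if arrival > deadline:   # data missed its slot: frame shown incomplete
            degraded += 1

    print(f"{degraded} of {frames} frames degraded "
          f"({100 * degraded / frames:.1f}%)")

A file download would simply wait for the late packets; a live stream has to render something at each deadline, which is where the fuzzy patches and freezes come from. Shrinking the buffer to stay closer to live makes the glitches more frequent.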

Netflix and Amazon have been working with the Internet backbone providers and the ISPs to fix some of these issues. Latency delays in getting to the ISPs are shrinking and, at least for the major ISPs, will probably cease to be an issue. But one issue that still needs to be resolved is what happens when demand overloads a network. We’re already seeing ISPs bog down when showing a hugely popular stream like the NBA finals, compared to a normal NBA game that might only be watched by a hundred thousand viewers nationwide.

One thing in the cable system’s favor is that their quality ought to be improving a lot over the next few years. The big cable providers will be implementing the new ATSC 3.0 video standard that is going to result in a significant improvement in picture quality on HD video streams. The FCC approved the new standard earlier this year and we ought to see it implemented in systems starting in 2018. This new standard will allow cable operators to improve the color clarity and contrast on existing HD video. I’ve seen a demo of a lab version of the standard and the difference is pretty dramatic.

One thing we don’t know, of course, is how much picture quality means to the average video user. I know my teenage daughter seems quite happy watching low-quality video made by other teens on Snapchat, YouTube or Facebook Live. Many people, particularly teens, don’t seem to mind watching video on a smartphone. Video quality makes a difference to many people, but time will tell if improved video quality will stem the tide of cord cutting. It seems that most cord cutters are leaving due to the cost of traditional TV as well as the hassle of working with the cable companies and better video might not be a big enough draw to keep them paying the monthly cable bill.

The End of the MP3?

Last month the Fraunhofer Institute for Integrated Circuits ended its licensing program for the MP3 digital file format. This probably means that the MP3 format will begin fading away, to be replaced over time by newer file formats. MP3 stands for MPEG Audio Layer III, and it was the first standard that allowed audio files to be compressed without a noticeable loss of sound quality. The US patent for MP3 was issued to Fraunhofer in 1996, and since then the institute has collected royalties on all devices able to create or play files in that format.

While it might seem a bit odd to be reading a blog about the end of a file format, MP3 files have had such a huge impact on the tech and music industries that they are partly responsible for the early success of the Internet.

The MP3 file revolutionized the way that people listened to music. In the decade before that there had been a proliferation of portable devices that would play cassette tapes or CDs. But those devices did not really bring freedom to listen to music easily everywhere. I can remember the days when I’d have a pile of tapes or CDs in the car so that I could listen to my favorite music while I drove. But the MP3 file format meant that I could rip all of my music into digital files and could carry my whole music collection along with me.

And the MP3 digital files were small enough that people could easily share files with friends and could send music as attachments to emails. But file-sharing of MP3 files really took off in 1999 when Shawn Fanning, John Fanning, and Sean Parker launched the peer-to-peer network Napster. This service gave people access to the entire music collections of huge numbers of others. Napster was so popular that the traffic generated by the platform crashed broadband networks at colleges and caused havoc with many ISP networks.

In 2001 Apple launched iTunes, which initially organized and played MP3 files. When Apple opened the iTunes Music Store in 2003 it chose the AAC format instead, probably in large part to avoid paying MP3 licensing fees. Internet traffic to iTunes grew to be gigantic. It’s hard to remember when the Internet was so much smaller, but the transfer of MP3 files was as significant to Internet traffic in the early 2000s as Netflix is today.

Napster, along with Apple iTunes, revolutionized the music industry and the two are together credited with ending the age of albums. People started listening to their favorite songs and not to entire albums – and this was a huge change for the music industry. Album sales dropped precipitously and numerous music labels went out of business. I remember the day I cancelled my subscription to Columbia House because I no longer felt the need to buy CDs.

Of course, Napster quickly ran into trouble for helping people violate music copyrights and was driven out of business. But the genie was out of the bottle and the allure of sharing MP3 files was too tempting for music lovers. I remember musician friends who always had several large-capacity external hard drives in their car and would regularly swap music collections with others.

One of the consequences of ending the licensing of the MP3 format is that over time it’s likely that computers and other devices won’t be able to read the format any longer. MP3s are still popular enough that the music players on computers and smartphones all recognize and play MP3 files today. But the history of the Internet has shown us that unsupported formats eventually fizzle away into obscurity. For example, much of the programming behind the first web sites is no longer supported, and many of today’s devices can no longer view old web sites without downloading software capable of opening the old files.

It’s interesting that most people think that once something has been digitized it will last forever. That might be true for important data if somebody makes a special effort to save the digitized files in a place that will keep them safe for a long time. But we’ve learned that digital storage media are not permanent. Old CDs become unreadable. Hard drives eventually stop working. And even when files are somehow kept, the software needed to read the files can fall into obscurity.

A huge amount of the music created since 2000 exists only in digital formats. Music by famous musicians will likely be maintained and replayed as long as people have an interest in those musicians. But music by lesser-known artists will probably fade away and much of it will disappear. It’s easy to envision that in a century or two most of the music we listen to today might have disappeared.

Of course there are the online music streaming services like Spotify that are maintaining huge libraries of music. But if we’ve learned anything in the digital age it’s that companies that make a living peddling digital content don’t themselves have a long shelf life. So we have to wonder what happens to these large libraries when Spotify and similar companies fade away or are replaced by something else.

The WISP Dilemma

For the last decade I have been working with many rural communities seeking better broadband. For the most part these are places that the large telcos have neglected and never provided with any functional DSL. Rural America has largely rejected the current versions of satellite broadband because of the low data caps and because the latency won’t support streaming video or other real-time activities. I’ve found that lack of broadband is at or near the top of the list of concerns in communities without it.

But a significant percentage of rural communities have access today to WISPs (wireless ISPs) that use unlicensed spectrum and point-to-multipoint radios to bring a broadband connection to customers. The performance of WISPs varies widely. There are places where WISPs are delivering solid and reliable connections that average between 20 – 40 Mbps download. But unfortunately there are many other WISPs delivering slow broadband in the 1 – 3 Mbps range.

The WISPs that have fast data speeds share two characteristics. They have a fiber connection directly to each wireless transmitter, meaning that there are no bandwidth constraints. And they don’t oversubscribe customers. Anybody who was on a cable modem five or ten years ago understands oversubscription. When there are too many people on a network node at the same time the performance degrades for everybody. A well-designed broadband network of any technology works best when there are not more customers than the technology can optimally serve.
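
The arithmetic behind oversubscription is worth making explicit. A minimal sketch, with illustrative numbers of my own rather than any particular WISP’s:

    # Illustrative oversubscription math - all numbers are assumptions.
    tower_capacity_mbps = 150      # assumed usable capacity of one sector
    subscribers = 60
    peak_concurrency = 0.30        # assumed share active in the busy hour

    active = subscribers * peak_concurrency
    per_customer = tower_capacity_mbps / active
    print(f"~{per_customer:.0f} Mbps per active customer at peak")  # ~8 Mbps

Double the subscriber count without adding capacity and the busy-hour experience halves – which is exactly what customers on an oversubscribed node feel every evening.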

But a lot of rural WISPs are operating in places where there is no easy or affordable access to a fiber backbone. That leaves them with no alternative but to use wireless backhaul. This means using point-to-point microwave radios to get bandwidth to and from a tower.

Wireless backhaul is not in itself a negative issue. If an ISP can use microwave to deliver enough bandwidth to a wireless node to satisfy the demand there, then they’ll have a robust product and happy customers. But the problems start happening when networks include multiple ‘hops’ between wireless towers. I often see WISP networks where the bandwidth goes from tower to tower to tower. In that kind of configuration all of the towers and all of the customers on those towers are sharing whatever bandwidth is sent to the first tower in the chain.

Adding hops to a wireless network also adds latency – each hop means it takes longer for traffic to get to and from customers at the outer edges of one of these wireless chains. Latency, or time lag, is an important factor in being able to perform real-time functions like streaming video, voice over IP, gaming, or maintaining a connection to an on-line class or a distant corporate WAN.

Depending upon the brand of the radios and the quality of the internet backbone connection, a wireless transmitter that is connected directly to fiber can have a latency similar to that of a cable or DSL network. But when chaining multiple towers together the latency can rise significantly, and real-time applications start to suffer at latencies of 100 milliseconds or greater.
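
A short sketch makes both problems – the shared backhaul and the accumulating latency – concrete. All of the capacities, delays and usage figures below are assumptions for illustration:

    # Daisy-chained WISP towers: shared bandwidth and added latency.
    # All numbers are illustrative assumptions.
    backhaul_mbps = 300        # microwave link feeding the first tower
    per_hop_latency_ms = 8     # assumed radio + processing delay per hop
    base_latency_ms = 25       # assumed latency at the first tower
    customers_per_tower = 40
    busy_hour_share = 0.30     # assumed share of customers active at once

    for hops in range(1, 6):
        active = hops * customers_per_tower * busy_hour_share
        per_cust = backhaul_mbps / active
        latency = base_latency_ms + hops * per_hop_latency_ms
        print(f"{hops} towers: ~{per_cust:5.1f} Mbps/customer, "
              f"~{latency} ms latency at the end of the chain")

The specific numbers matter less than the shape: every tower added to the chain divides the shared pool further while pushing end-of-chain latency toward the point where real-time applications suffer.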

WISPs also face other issues. One is the age of their wireless equipment. There is no part of our industry that has made bigger strides over the past ten years than the manufacture of subscriber microwave radios. The newest radios have significantly better operating characteristics than radios made just a few years ago. WISPs are for the most part relatively small companies and have a hard time justifying upgrades until equipment has reached the end of its useful life. And unfortunately there is not much opportunity for small incremental upgrades. The changes in the technology have been significant enough that upgrading a node often means replacing the transmitters on towers as well as the subscriber radios.

The final dilemma faced by WISPs is that they are often trying to serve customers in locations that are not ideally situated to receive a wireless signal. The unlicensed frequencies require good line-of-sight and suffer degraded signals from foliage, rain and other impediments, and it’s hard to reliably serve customers who are surrounded by trees or who live in places that are somehow blocked by the terrain.

All of these various issues mean that reviews of WISPs vary as widely as you can imagine. I was served by a WISP for nearly a decade, and since I lived a few hundred feet from the tower and had a clear line-of-sight I was always happy with the performance I received. I’ve talked to a few people recently who have WISP speeds as fast as 50 Mbps. But I have also talked to a lot of rural people who have WISP connections that are slow and have high latency, which makes for a miserable broadband experience.

It’s going to be interesting to see what happens to some of these WISPs as rural telcos deploy CAF II money and provide a faster broadband alternative that will supposedly deliver at least 10 Mbps download. WISPs who can beat those speeds will likely continue to thrive while the ones delivering only a few Mbps will have to find a way to upgrade or will lose most of their customers.

Are You Ready for 4K Video?

The newest worry for ISPs is the expansion of 4K video. Already today Netflix and Amazon are offering on-line 4K video to customers. Almost all of the new programming being created by both companies is being shot in 4K.

Why is this a concern for ISPs? Netflix says that in order to enjoy a streaming 4K signal a user ought to have a spare 15 – 20 Mbps of bandwidth available if streaming with buffering. The key word is spare, meaning that any other household activity ought to be using other bandwidth. Netflix says that without buffering a user ought to have a spare 25 Mbps.

When we start seeing a significant number of users stream video at those speeds even fiber networks might begin experiencing problems. I’ve never seen a network that doesn’t have at least a few bottlenecks, which often are not apparent until traffic volumes are high. Already today busy-hour video is causing stress to a lot of networks. I think about millions of homes trying to watch the Super Bowl in 4K and shudder to think what that will mean for most networks.
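
The aggregate math is what should worry network planners. Using Netflix’s per-stream numbers along with some assumed take rates on a hypothetical 200-home node:

    # Aggregate busy-hour demand on one residential node - assumptions noted.
    homes_on_node = 200
    share_streaming = 0.40       # assumed homes streaming in the busy hour
    share_4k = 0.25              # assumed portion of those streams in 4K
    mbps_4k, mbps_hd = 20, 5     # Netflix-style per-stream requirements

    streams = homes_on_node * share_streaming
    demand = streams * (share_4k * mbps_4k + (1 - share_4k) * mbps_hd)
    print(f"~{demand:,.0f} Mbps of busy-hour video demand")   # ~700 Mbps

Push that 4K share toward 100% – a Super Bowl broadcast in 4K – and the same node needs 1.6 Gbps for video alone.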

While 4K video is already on-line, it is not yet being offered by cable companies. The problem for most of the industry is that there is no clear migration path between today’s video and tomorrow’s best video signal. There are alternatives to 4K being explored by the industry that muddy the picture. Probably the most significant new technology is HDR (high-dynamic range) video. HDR has been around for a few years, but the newest version, which captures video in 10-bit samples, improves both the contrast and the color accuracy of the picture. There are other video improvements being explored as well, such as 10-bit HEVC (high-efficiency video coding), which is expected to replace today’s H.264 standard.

The uncertainty over the best migration path has stopped cable companies from upgrading to HDR or 4K. They are rightfully afraid of investing too much in an early implementation of the technology only to face more upgrades in just a few years. But as the popularity of 4K video increases, the pressure is growing for cable companies to introduce something soon. It’s been reported that Comcast’s latest settop box is 4K capable, although the company is not making any public noise about it.

But as we’ve seen in the past, once customers start buying 4K-capable TVs they are going to want to use them. It’s expected that by 2020 almost every new TV will include some version of HDR technology, which means that the quality of watching today’s 1080p video streams will improve. And by then a significant number of TVs will come standard with 4K capabilities as well.

I remember back when HD television was introduced. I have one friend who is a TV buff and once he was able to get HD channels from Comcast he found that he was unable to watch anything that was broadcast in standard definition. He stopped watching any channel that did not broadcast HD and ignored a huge chunk of his Comcast line-up.

The improvements from going to 4K and/or true HDR will be equally dramatic. The improvement in clarity and color is astonishing, as long as you have a TV screen large enough to see the difference. And this means that as people grow to like 4K quality they will migrate towards 4K content.

One thing that is clear is that 4K video will force cable companies to broadcast video over the IP stream. A single 4K signal eats up an entire 6 MHz channel on a cable system making it impossible for any cable system to broadcast more than a tiny number of 4K channels in the traditional way. And, like Comcast is obviously preparing to do, it also means all new settop boxes and a slew of new electronics at the cable headend to broadcast IPTV.
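
The channel arithmetic shows why. A rough sketch using the standard payload of a North American QAM256 cable channel and assumed per-stream bitrates:

    # Why 4K doesn't fit traditional cable channels - rough arithmetic.
    qam256_channel_mbps = 38.8    # payload of one 6 MHz QAM256 channel
    mpeg2_hd_mbps = 12            # assumed MPEG-2 HD stream
    hevc_4k_mbps = 25             # assumed HEVC-compressed 4K stream

    print(f"HD streams per channel: {qam256_channel_mbps // mpeg2_hd_mbps:.0f}")
    print(f"4K streams per channel: {qam256_channel_mbps // hevc_4k_mbps:.0f}")

One 4K stream per 6 MHz channel, versus three HD streams, is why 4K pushes cable operators toward IP video, where the whole downstream spectrum becomes one shared pool.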

Of course, like any technology improvement we’ve seen lately, the improvements in video quality don’t stop with 4K. The Japanese plan to broadcast the 2020 Olympics in 8K video. That requires four times as much bandwidth as 4K video – meaning an 80 – 100 Mbps spare IP path. I’m sure that ways will be found to compress the transmission, but it’s still going to require a larger broadband pipe than what most homes buy today. It’s expected that by 2020 there will only be a handful of users in Japan and South Korea ready to view 8K video, but like anything dramatically new, the demand is sure to increase in the following decade.

The Future of WiFi

There are a lot of near-term improvements planned for WiFi. The IEEE 802.11 Working Group, which develops the standards behind the Wi-Fi Alliance’s certifications, has a number of improvements in the works. Many, but not all, of the improvements look at using the newly available millimeter wave spectrum.

It’s been twenty years since the first WiFi standard was approved. I remember how great it felt about fifteen years ago when Verizon gave me a WiFi modem as part of my new FiOS service. Up until then my computing had always been tied to cables and it was so freeing to use a laptop anywhere in the house (although that first generation WiFi didn’t do a great job of penetrating the plaster walls in my old house).

Here are some of the improvements being considered:

802.11ax. The goal of this next-gen WiFi is to enable speeds up to 10 Gbps using the 5 GHz band of free WiFi spectrum. The standard also seeks to provide more bandwidth in the 2.4 GHz band. The developing standard looks at the use of Orthogonal Frequency Division Multiple Access (OFDMA), multi-user MIMO and other technology improvements to squeeze more bandwidth out of the currently available WiFi spectrum.

Interestingly, this standard only calls for an improvement of about 37% in raw link speed over today’s 802.11ac technology, but the various improvements in the way the spectrum is used should mean about a four times greater delivery of aggregate bandwidth.

Probably the biggest improvement with this standard is the ability to connect efficiently to a greater number of devices. At first this will make 802.11ax WiFi more useful in crowded environments like stadiums and other public places. But the real benefit is to make WiFi the go-to spectrum for the Internet of Things. There is a huge race going on between WiFi and cellular technologies to grab the majority of that exploding market. For now WiFi has the lead for indoor uses, and most IoT devices today are WiFi connected. But today’s WiFi networks can bog down when there are too many simultaneous requests for connections, as the sketch below illustrates. We’ll have to wait to see if the changes to the standard improve WiFi enough to keep it ahead in the IoT race.
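
One way to see why OFDMA helps with hordes of small devices: for tiny IoT packets the per-transmission overhead dominates the airtime, and OFDMA lets a group of devices pay that overhead once. This toy model uses the real 9-user resource-unit split of a 20 MHz 802.11ax channel, but the overhead and payload timings are assumptions:

    # Toy model: overhead, not payload, dominates small IoT transmissions.
    # Timing values are assumptions for illustration.
    devices = 90
    overhead_ms = 0.5        # assumed contention/preamble cost per exchange
    payload_ms = 0.1         # assumed airtime for a tiny packet, full channel
    ru_per_channel = 9       # 26-tone resource units in one 20 MHz channel

    # one-at-a-time: every device pays the overhead separately
    serial = devices * (overhead_ms + payload_ms)

    # OFDMA: 9 devices share each exchange; each gets 1/9 of the tones,
    # so payload airtime stretches ~9x, but overhead is paid once per group
    ofdma = (devices / ru_per_channel) * (overhead_ms + ru_per_channel * payload_ms)

    print(f"Serial access: {serial:.0f} ms")   # ~54 ms
    print(f"OFDMA access:  {ofdma:.0f} ms")    # ~14 ms

The payload airtime is the same either way – sharing the channel stretches each device’s transmission – so the whole win comes from amortizing the per-exchange overhead across nine devices, which is exactly the many-small-devices IoT pattern.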

Of course, the 10 Gbps speed is somewhat theoretical in that it assumes all of the bandwidth going to one device located close to the transmitter – but the overall improvement in bandwidth promises to be dramatic. The new standard is expected to be finalized by 2019, but there will probably be hardware that incorporates some of the planned upgrades by 2018.

802.11ay. 802.11ay is the successor to 802.11ad, which never got any market traction. These two standards utilize the 60 GHz spectrum and are intended to deliver large amounts of bandwidth over short distances, such as inside a room. The new standard promises to improve short-range bandwidth up to 20 Gbps, about a three times improvement over 802.11ad, with the primary improvements being the addition of MIMO antennas supporting up to four simultaneous data streams. The new standard might face the same market acceptance issues if most users are satisfied instead with 802.11ax.

802.11az. The two improvements discussed above are aimed at improving bandwidth to WiFi users. The 802.11az standard instead looks at ways to improve the location and positioning of users on a WiFi network. Since many of the improvements in WiFi use MIMO (multiple input multiple output) antennas, system performance improves significantly if the WiFi router can accurately and quickly keep track of the precise location of each user on the network. That’s a relatively simple task in a static environment of fixed-location devices like TVs or appliances, but much harder to do with mobile devices like smartphones and tablets. Improvements in locating technology allow a WiFi network to more quickly track and connect to a device without having to waste frequency resources finding the device before each transmission.
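
The core mechanic of WiFi positioning is round-trip-time ranging, the fine timing measurement approach introduced in 802.11mc that 802.11az builds on. The math is just the speed of light applied to four timestamps; the example timestamps below are invented for illustration:

    # Round-trip-time ranging: distance from WiFi timestamps.
    C = 299_792_458  # speed of light, m/s

    def rtt_distance_m(t1, t2, t3, t4):
        """t1: request sent, t2: request received,
        t3: response sent, t4: response received (all in seconds).
        (t3 - t2) removes the responder's processing time."""
        rtt = (t4 - t1) - (t3 - t2)
        return C * rtt / 2

    # invented timestamps with ~33 ns of true flight time each way
    print(f"{rtt_distance_m(0.0, 33e-9, 233e-9, 266e-9):.1f} m")  # ~9.9 m

Subtracting the responder’s processing time (t3 − t2) is what makes the measurement practical – the two radios don’t need synchronized clocks, just accurate local timestamps.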

The other big improvement promised by this standard is increased energy efficiency. As the network becomes adroit at identifying and remembering the location of network devices, the standard allows WiFi devices to shut down, go to sleep and drop off the network when not in use, saving energy for devices like IoT sensors. The WiFi hub and sensor devices can be ‘scheduled’ to connect at fixed times, allowing devices to save power by sleeping between connections.
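
Sleep scheduling matters because a duty-cycled device’s battery life is set almost entirely by its sleep current. A back-of-envelope sketch with assumed (but typical-order) figures:

    # Duty-cycled IoT sensor battery life - assumed, typical-order numbers.
    sleep_ua = 5              # assumed sleep current, microamps
    active_ma = 60            # assumed current while the radio is on, mA
    active_s_per_hour = 0.5   # wakes briefly at scheduled times
    battery_mah = 1000        # assumed small-battery capacity

    duty = active_s_per_hour / 3600
    avg_ma = active_ma * duty + (sleep_ua / 1000) * (1 - duty)
    print(f"Average draw: {avg_ma * 1000:.1f} uA; "
          f"life: {battery_mah / avg_ma / 24 / 365:.1f} years")

Leave the radio awake listening for connections instead and the same battery lasts days rather than years – which is why scheduled connection times matter so much for small-battery sensors.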

These changes are necessary to keep WiFi useful and relevant. The number of devices connected to WiFi is expected to continue to grow at exponential rates, and today’s WiFi can bog down under heavy use, as anybody who tries to use WiFi in a business hotel understands. But a lot of the problems with today’s WiFi can be fixed with the combination of faster data throughput and tweaks that reduce the problems caused by interference among devices trying to gain the attention of the WiFi hub. The various improvements planned by the IEEE Working Group address all of these issues.

Cellular Networks and Fiber

We’ve known for a while that the future 5G that the cellular companies are promising is going to need a lot of fiber. Recently Verizon CEO Lowell McAdam confirmed this when he said that the company will be building dense fiber networks for this purpose. The company has ordered fiber cables as large as 1,700 strands for its upcoming build in Boston in order to support the future fiber and wireless network there. That’s a huge contrast to Verizon’s initial FiOS builds, which largely used 6-strand fibers across a lot of the Northeast.

McAdam believes that the future of urban broadband will be wireless and that Verizon intends to build the fiber infrastructure needed to support that future. Of course, with that much fiber in the environment the company will also be able to supply fiber-to-the-premises to those that need the largest amounts of bandwidth.

Boston is an interesting test case for Verizon. The company announced in 2015 that it would be expanding its FiOS network to bring fiber to the city – one of many urban areas it skipped during the first deployment of fiber-to-the-premises. The company has also engaged with the City government in Boston to develop a smart city – meaning using broadband to enhance the livability of the city and to improve the way the government delivers services to constituents. That effort means building fiber to control traffic systems, police surveillance systems and other similar uses.

And now it’s obvious that the company has decided that building for wireless deployment in Boston is part of that vision. It’s clear that Verizon and AT&T are both hoping for a world where most devices are wireless and where the wireless connections use their networks. They both picture a world where their wireless service is not just used for cellphones like today, but also acts as the last-mile broadband connection for homes, for connected cars, and for the billions of devices used for the Internet of Things.

With the kind of money Verizon is talking about spending in Boston this might just become the test case for a connected urban area that is both fiber rich and wireless rich. To the extent that they can do it with today’s technology it sounds like Verizon is hoping to serve homes in the City with wireless connections of some sort.

I’ve discussed several times how millimeter wave radios have become cheap enough to be a viable alternative for bringing broadband to urban apartment buildings. That’s a business plan that is also being pursued by companies like Google. But I still am not aware of hardware that can reasonably be used with this same technology to serve large numbers of single family homes. At this point the electronics are still too expensive and there are other technological issues to overcome (such as having fiber deep in neighborhoods for backhaul).

So it will be interesting to watch how Verizon handles their promise to bring fiber to the homes in Boston. Will they continue with the promised FTTP deployment or will they wait to see if there is a wireless alternative on the horizon?

It’s also worth noting that Verizon is tackling this because of the density of Boston. The city has over 3,000 housing units per square mile, making it, and many other urban centers, a great place to consider wireless alternatives instead of fiber. But I have to contrast this with rural America. I’m working with several rural counties right now in Minnesota that have housing densities of between 10 and 15 homes per square mile.

This contrast alone shows why I don’t think rural areas are ever going to see much of the advantages of 5G. Even though it’s expensive to build fiber in a place like Boston, the potential payback is commensurate with the cost of the construction. I’ve always thought that Verizon made a bad strategic decision years ago when it halted FiOS construction before finishing building in the metropolitan areas on the east coast – FiOS has fared well in its competition with Comcast and the other cable companies, which makes that decision look even worse in hindsight.

But there is no compelling argument for the wireless companies or anybody else to build fiber in rural areas. The cost per subscriber is high and the paybacks on investment are painfully long. If somebody is going to invest in rural fiber they might as well use it to connect directly to customers rather than spend the money on fiber plus a wireless network on top of it.

We are going to continue to see headlines about how wireless is the future, and for some places like Boston it might be. Past experience has shown us that wireless technology often works a lot differently in the field than in the lab, so we need to see if the wireless technologies being considered really work as promised. But even if they do, those same technologies are going to have no relevance to rural America. If anything, the explosion of urban wireless might further highlight the stark differences between urban and rural America.

Ownership of Software Rights

There is an interesting fight currently at the US Copyright Office that involves all of us in the telecom industry. The argument is over who owns the software that comes along these days with almost any type of electronics. The particular fight is between John Deere and tractor owners, but the outcome will set a precedent for similar software everywhere.

John Deere argues that, while a farmer may buy one of their expensive tractors, John Deere still owns the software that operates it. When farmers buy a tractor they must agree to the terms of the software license, just as we all agree to similar licenses and terms of service all the time. The John Deere software license isn’t unusual, but what irks farmers is that it requires them to use John Deere authorized maintenance and parts for the term of the software license (which is seemingly forever).

The fight came to a head when some farmers experienced problems with tractors during harvest season and were unable to get authorized repairs in a timely manner. Being resourceful, they found alternatives, and there is now a small black market for software that can replace or patch the John Deere software. But John Deere is attacking farmers who use alternate software, saying they are violating the DMCA (Digital Millennium Copyright Act), which prohibits bypassing the copyright locks on digital content. The company argues that farmers have no right to open or modify the software on the tractors, which remains the property of John Deere. So far the Copyright Office has sided with John Deere.

This is not a unique fight for farmers; many other electronics manufacturers are taking the same approach. For example, all of the major car manufacturers except Tesla have taken the same position. Apple has long taken this position with its iPhone.

So how does this impact the telecom industry? First, it seems like most sophisticated electronics we buy these days come with a separate software license agreement that must be executed as part of a purchase. So manufacturers of most of the gear you buy still think they own the proprietary software that runs your equipment. And many of them charge a yearly fee to ‘maintain’ that software after you’ve bought the electronics. In our industry this is a huge, high-margin business for the manufacturers, because telcos and ISPs get almost nothing in return for these annual software license fees.

I don’t think I have a client who isn’t still operating some older electronics. This may be older Cisco routers that keep chugging along, an old voice switch, or even something major like the electronics operating an entire FTTH network. It’s normal in the telecom industry for manufacturers to stop supporting electronics within 7 to 10 years of initial release. But unlike twenty years ago, when a lot of electronics didn’t last more than the same 7 – 10 years, the use of integrated chips means that electronics keep working a lot longer.

And therein lies the dilemma. Once a vendor stops supporting a technology they wash their hands of it – they stop issuing software updates and stop stocking spare parts. They do everything in their power to get you to upgrade to something newer, even though the older gear might still be working reliably.

But if a telco or ISP makes any tweaks to this older equipment to keep it working – something many ISPs are notorious for – then theoretically anybody doing that has broken the law under the DMCA and could be subject to fines of up to $500,000 and up to five years in prison for a first offense.

Of course, we all face this same dilemma at home. Almost everything electronic these days comes with proprietary software and the manufacturers of your PCs, tablets, smartphones, personal assistants, security systems, IoT gear and almost all new appliances probably think that they own the software in your device. And that raises the huge question of what it means these days to buy something, if you don’t really fully own it.

I know many farmers and I think John Deere is making a huge mistake. If another tractor company like Kubota or Massey Ferguson declares that they don’t maintain rights to the software then John Deere could see its market dry up quickly. There is also now a booming market in refurbished farm equipment that pre-dates proprietary software. But this might be a losing battle when almost everything we buy includes software. It’s going to be interesting to see how both the courts and the court of public opinion handle this.

Death of the Smartphone?

Over the last few weeks I have seen several articles predicting the end of the smartphone. Those claims are a bit exaggerated since the authors admit that smartphones will probably be around for at least a few decades. But they make some valid points which demonstrate how quickly technologies come into and out of our lives these days.

The Apple iPhone was first sold in the summer of 2007. While there were phones with smart capabilities before that, most credit the iPhone release with the real birth of the smartphone industry. Since that time the smartphone technology has swept the entire world.

As a technology the smartphone is mature, which is what you would expect from a ten-year-old technology. While phones might still get more powerful and faster, the design for smartphones is largely set, and each new generation now touts new and improved features that most of us don’t use or care about. The discussion of new phones centers around minor tweaks like curved screens and better cameras.

Almost the same ten-year path happened to other electronics like the laptop and the tablet. Once any technology reaches maturity it starts to become commoditized. I saw this week that a new company named Onyx Connect is introducing a $30 smartphone into Africa where it joins a similarly inexpensive line of phones from several Chinese manufacturers. These phones are as powerful as US phones of just a few years ago.

This spells trouble for Apple and Samsung, which both benefit tremendously by introducing a new phone every year. People are now hanging onto phones much longer, and soon there ought to be scads of reasonably-priced alternatives to the premier phones from these two companies.

The primary reason that the end of the smartphone is predicted is that we are starting to have alternatives. In the home the smart assistants like Amazon Echo are showing that it’s far easier to talk to a device rather than work through menus of apps. Anybody who has used a smartphone to control a thermostat or a burglar alarm quickly appreciates the ability to make the changes by talking to Alexa or Siri rather than fumbling through apps and worrying about passwords and such.

The same thing is quickly happening in cars, and when your home and car are networked together using the same personal assistant, the need to use a smartphone while driving is entirely eliminated. The same thing will happen in the office, and soon there will be a great alternative to the smartphone in the home, the car and the office – the places where most people spend the majority of their time. That’s going to cut back on reliance on the smartphone and drastically reduce the number of people who rush to buy an expensive new smartphone.

There are those predicting that some sort of wearable like glasses might offer another good alternative for some people. There are newer versions of smartglasses, like the $129 Snap Spectacles, that are less obtrusive than the first-generation Google Glass. Smartglasses still need to overcome the societal barrier of people not being comfortable around somebody who can record everything that is said and done. But perhaps the younger generations will not find this to be as much of a barrier. There are also other potential kinds of wearables, from smartwatches to smart clothes, that could take over the non-video functions of the smartphone.

As with any technology as widespread as the smartphone is today, there will be people who stick with their smartphone for decades to come. I saw a guy on a plane last week with an early generation iPod, which was noticeable because I hadn’t seen one in a few years. But I think that most people will be glad to slip into a world without a smartphone if that’s made easy enough. Already today I ask Alexa to call people, and I can do it all through any device, such as my desktop, without even having a smartphone in my office. And as somebody who mislays my phone a few times every day, I know that I won’t miss having to use a smartphone in the home or car.