Categories
Regulation - What is it Good For? Technology

FCC Makes Changes to 60 GHz Spectrum

United States radio spectrum frequency allocations chart as of 2003 (Photo credit: Wikipedia)

On August 12, 2013 the FCC, in ET Docket No. 07-113, amended the rules for outdoor use of the 60 GHz spectrum. The changes were prompted by the industry in order to make the spectrum more useful. This spectrum is more commonly known as millimeter wave spectrum, meaning it has a very short wavelength, and the band runs from 57 GHz to 64 GHz. Radios at frequencies this high have very short antennae, which are typically built into the unit.

The spectrum is used today in two applications: a) as outdoor short-range point-to-point systems used in place of fiber, such as connecting two adjacent buildings, and b) for in-building transmission of high-speed data between devices, for functions such as sending uncompressed high-definition (HD) video between devices like Blu-ray recorders, cameras, laptops and HD televisions.

The new rules modify the outdoor usage to increase power and thus increase the distance of the signal. The FCC is allowing an increase in emissions from 40 dBm to 82 dBm, which will increase the outdoor reach of the spectrum to about 1 mile. The order further eliminates the need for outdoor units to send an identifying signal, which makes this a fully unlicensed application. This equipment will be available for anybody to use, with the caveat that it cannot interfere with existing in-building uses of the spectrum.

One advantage of these radios is that multiple beams can be sent from the same antenna site, due to the very tight confinement of the beams. One drawback of this spectrum is that it is susceptible to attenuation from heavy rain, which is a big factor in limiting the distance.
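
To put those power and distance numbers in rough perspective, here is a back-of-the-envelope link budget in Python. The free-space path loss formula is standard; the receiver gain, receiver sensitivity, oxygen absorption and rain attenuation figures are illustrative assumptions, not values taken from the FCC order.

    import math

    def free_space_path_loss_db(distance_km, freq_ghz):
        # Standard free-space path loss with distance in km and frequency in GHz.
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

    def link_margin_db(eirp_dbm, rx_gain_dbi, rx_sensitivity_dbm, distance_km,
                       freq_ghz=60.0, oxygen_db_per_km=15.0, rain_db_per_km=0.0):
        # Oxygen absorption (roughly 15 dB/km near 60 GHz) plus rain attenuation
        # are what make this band so distance-limited; both are assumed values.
        path_loss = (free_space_path_loss_db(distance_km, freq_ghz)
                     + (oxygen_db_per_km + rain_db_per_km) * distance_km)
        received_dbm = eirp_dbm + rx_gain_dbi - path_loss
        return received_dbm - rx_sensitivity_dbm

    # Hypothetical receiver: 38 dBi antenna, -65 dBm sensitivity, 1 mile (1.6 km) path.
    for eirp in (40, 82):            # old vs. new outdoor EIRP limit
        for rain in (0.0, 20.0):     # clear air vs. heavy rain (assumed dB/km)
            margin = link_margin_db(eirp, 38, -65, 1.6, rain_db_per_km=rain)
            print(f"EIRP {eirp} dBm, rain {rain:>4} dB/km: margin {margin:5.1f} dB")

With the old 40 dBm limit the margin at a mile is negative even in clear air; at 82 dBm the link closes with room to spare until heavy rain eats into it, which matches the distance and rain-fade points above.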

Radios in this spectrum can deliver up to 7 Gbps of Ethernet (minus some overhead), so this is intended as an alternative to fiber drops for buildings that need less bandwidth than that limit. A typical use might be to connect multiple buildings in a campus or office park environment rather than having to build fiber. The FCC sees this mostly as a technology to serve businesses, probably due to the cost of the radios involved.

Under the new rules the power allowed for a given radio is tied to the precision of the beam that radio creates. Very precise radios can use full power (and get more distance) while power and distance are limited for less precise radios.

The FCC also sees this as an alternative for backhaul to 4G cellular sites, although one mile is a rather short reach; most 4G sites that are within a mile of fiber have already been connected.

This technology will have limited use, but there will be cases where using these radios is cheaper than installing fiber and/or dealing with inside wiring issues in large buildings. I see the most likely use of these radios as reaching buildings in crowded urban environments where the cost of leasing fiber or entrance facilities can be significant.

The 60 GHz spectrum has also been allowed for indoor use for a number of years. When used indoors the 60 GHz band has a lot of limitations, related to both cost and technical issues. The technical limitations are that 60 GHz requires line-of-sight and the signal doesn't go through walls. The transmitters are also very power-hungry and require big metal heat sinks and high-speed fans for cooling. Even if a cost-effective 60 GHz solution were available tomorrow, a battery-operated device would need a car battery to power it.

One issue that doesn't get much play is the nature of 60 GHz RF emissions. A 60 GHz unit can radiate up to 10 watts under the spectrum mask currently in place for indoor operation. People are already concerned about the 500 mW from a cell phone or WiFi device, and it is a concern in a home environment to have constant radiation at 10 watts of RF energy. That's potentially around 1/100 of the power of a microwave oven, radiated in your house and around your family all of the time.
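
The comparison is just arithmetic. A quick sketch, assuming a typical 1,000-watt microwave oven (ovens generally run from about 700 to 1,200 watts):

    indoor_60ghz_watts = 10.0      # maximum radiated power under the indoor spectrum mask
    cell_phone_watts = 0.5         # roughly 500 mW from a cell phone or WiFi device
    microwave_oven_watts = 1000.0  # assumed typical oven

    print(f"60 GHz vs. cell phone:     {indoor_60ghz_watts / cell_phone_watts:.0f} times the power")
    print(f"60 GHz vs. microwave oven: 1/{microwave_oven_watts / indoor_60ghz_watts:.0f} of the power")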

Maybe at some point in the distant future there may be reasonable applications for indoor use of 60 GHz in some vertical niche market, but not for years to come.

Categories
Technology What Customers Want

The Future of TV – Content

Photo of cable tv headend rack. Louisiana. Now closed out of business. (Photo credit: Wikipedia)

Since cable TV became a nationwide product, content has been delivered by the cable providers in large packages that differ little from coast to coast. Small rural systems typically have smaller line-ups, but the programming available in the big cities is about the same everywhere.

The first big crack in how programming is delivered came with TiVo, which let people record TV to watch later, including the ability to skip commercials. Quickly following that was video-on-demand from the cable companies. Now we are seeing a large amount of programming available on the Internet, and I think we have turned the corner: consumers now have more say than the cable companies in how and when they watch content. This trend will strengthen and greater numbers of people will step away from traditional packages. I looked around to see what others are expecting for the future of content, and here are some of the predictions:

Content Participation. This started in a mild way when home viewers could vote each week for the winners of shows like American Idol. This got millions of viewers heavily invested in the outcomes of such shows. Expect a lot more of this in the future, and to a much greater degree. There will be programs that are driven by the viewers. The viewers will get a say in plot development, the introduction of new characters or getting rid of existing ones. The shows and characters will participate in social media and become part of fans' lives.

Viral TV Production. Even better than participation, viewers will be able to help fund new shows they want to watch. To some extent this has happened to a few shows today that were discontinued by networks but then picked up for independent production for the Internet. Viewers will not only get to participate as backers of new shows but will have the ability to have some say in the creation of content. I can picture Star Trek fans funding episode after episode forever.

Produce Your Own Content. Anybody who has witnessed 14-year-old girls watching video will see that a lot of what they watch is clips made by their friends or by themselves. As it becomes easier and easier to make your own content, and as this content is easier and easier to play anywhere, a lot of people are going to produce content to share with their friends.

More Local Content. To a large degree local content has died on cable TV. Larger markets have local news, but there is a lot of demand to watch local content such as high school football and basketball, parades, government meetings and other local events. The Internet is already producing ways to channelize local content and I expect local ‘channels’ to pop up all over the country. There is no reason that every high school, every college, every church can’t have their own local channel of web content.

Fewer Network Channels. I think everybody expects that as more content moves to the Internet and as some of the more popular content becomes available on a pay-per-view basis, many of the existing cable networks will die. It's been reported that 80% to 85% of cable channels don't make enough money to stand alone in an a la carte cable world.

Different Perspectives. Expect programming that will offer different perspectives. This has been done a little in the past with shows being filmed with different endings for different viewers, but expect a lot more of this in the future. There will be shows that will allow the viewer to watch the show from the perspective of a specific character.

Personalized Ads. Of course, with all of the good changes that are coming, there is a lot of consensus that ads will become more personalized. Advertisers think this will make you more willing to watch their ads, since most of what you see will be aimed at you, but I suspect it is going to make most people even more jaded about advertising.

Sensory TV. As a science fiction fan I have been to a number of movies that purported to invoke the sense of smell, taste or touch during the movie. I must say that movies with Sniff-o-rama were a little less than successful! However, it is predicted that in the near future it will be possible, through personal electronics, to make a viewer really experience the different senses. This will begin with gamers and will involve wearing helmets or goggles that trigger brain sensations. But this will eventually move to wider programming.

Categories
Current News Technology The Industry

Spying on our Internet Infrastructure

NSA EMPLOYEES ONLY (Photo credit: Wikipedia)

Everybody I know in the telecom industry has been following the controversy surrounding the allegations that the NSA has been gathering information on everybody's Internet usage. What I find somewhat amusing are the smaller ISPs who are telling people that they have not cooperated with the NSA and that it is 'safe' for customers to use them. That is a great marketing ploy, but it is far from the truth. The Internet infrastructure in the country is very complex, but for the most part the data traffic in the country can be characterized in three ways: core Internet, peering and private traffic.

The private data traffic is just that. There are huge numbers of private data connections in the country that are not part of the ‘Internet’. For example, every banking consortium has a private network that connects branches and ATMs. Large corporations have private connections between different locations within the company. Oil companies have private data circuits between the oil fields and headquarters. And for the most part the data on these networks is private. Most corporations that use private networks do so for security purposes and many of them encrypt their data.

The FBI has always been able to get a 'wiretap' on private data circuits using a set of rules called CALEA (the Communications Assistance for Law Enforcement Act). The CALEA rules prescribe the processes the FBI must use to wiretap any data connection. But over the years I have asked hundreds of network technicians if they have ever seen a CALEA request, and from what I can see this is not a widely used tool. It would require active assistance from telecom companies to tap into private data circuits, and there just does not seem to be much of that going on. Of course, there is also not much likelihood of finding spy-worthy information in data dumps between oil pumps and many of the other sorts of transactions that happen on private networks.

But the NSA is not being accused of spying on private corporate data. The allegations are that they are monitoring routine Internet traffic and that they possess records of every web site visited and every email that is being sent over the Internet. And it seems plausible to me that the NSA could arrange this.

The Internet in the US works on a hub and spoke infrastructure. There are major Internet hubs in Los Angeles, New York, Atlanta, Chicago, Dallas and Washington DC. Most 'Internet' traffic ends up at one of these hubs. There are smaller regional hubs, but all of the Internet traffic that comes from Albuquerque, Phoenix, San Francisco, Las Vegas and the other cities in that region will eventually end up in Los Angeles. You will hear ISP technicians talking about 'hops', meaning how many different regional hubs an Internet transmission must go through before it gets to one of these Internet hubs.

So when some smaller Internet provider says that the NSA does not have their data they are being hopeful, naive or they are just doing PR. I recall an article a few months back where Comcast, Time Warner and Cox all said that they had not cooperated with the NSA and that it was safer to use their networks than to use AT&T and Verizon, who supposedly have cooperated. But everything that comes from the Comcast and Cox networks ends up at one of these Internet hubs. If the NSA has figured out a way to collect data at these hubs then there would be no reason for them to come to the cable companies and ask for direct access. They would already be gathering the data on the customers of these companies.

But then there is the third piece of the Internet, the peering network. Peering is the process of carriers handing data directly to each other rather than sending it out over the general Internet. Companies do this to save money. There is a significant cost to send information to and from the Internet. Generally an ISP has to buy transport, meaning the right to send information through somebody's fiber cable. And they have to buy 'ports' into the Internet, meaning bandwidth connections from the companies that own the Internet portals at those major hubs. If an ISP has enough data that goes to Google, for example, and if they also have a convenient place to meet Google that costs less than going to their normal Internet hub, then they will hand that data traffic directly to Google and avoid paying for the Internet ports.
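
As a back-of-the-envelope illustration of that decision, here is a sketch of the math an ISP might run. Every price and traffic figure below is a hypothetical assumption; real transport and port pricing varies widely by market.

    def transit_cost(traffic_gbps, transport_per_gbps, port_per_gbps):
        # Haul the traffic to a major Internet hub and buy ports there.
        return traffic_gbps * (transport_per_gbps + port_per_gbps)

    def peering_cost(traffic_gbps, transport_per_gbps, peering_port_fee):
        # Haul the traffic to a nearby meet point; settlement-free peering means
        # the other network doesn't charge for the traffic itself.
        return traffic_gbps * transport_per_gbps + peering_port_fee

    traffic_to_google_gbps = 20   # assumed traffic bound for one large content network
    via_transit = transit_cost(traffic_to_google_gbps, transport_per_gbps=500, port_per_gbps=1500)
    via_peering = peering_cost(traffic_to_google_gbps, transport_per_gbps=300, peering_port_fee=2000)

    print(f"Via the Internet hub: ${via_transit:,.0f}/month")
    print(f"Via direct peering:   ${via_peering:,.0f}/month")
    print("Peering saves money" if via_peering < via_transit else "Stick with the hub")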

And peering is also done locally. It is typical for the large ISPs in large cities to hand each other Internet traffic that is heading towards each other’s network. Peering used to be something that was done by the really large ISPs, but I now have ISP clients with as few as 10,000 customers who can justify some peering arrangements to save money. I doubt that anybody but the biggest ISPs understand what percentage of traffic is delivered through peering versus directly through the more traditional Internet connections.

But the peering traffic is growing all of the time, and to some extent peering traffic can bypass NSA scrutiny at the Internet hubs. But it sounds like the NSA probably has gotten their hands on a lot of the peering traffic too. For instance, a lot of peering traffic goes to Google, and so if the NSA has an arrangement with Google then that catches a lot of the peering traffic.

There certainly are smaller peering arrangements that the NSA could not intercept without direct help from the carriers involved. For now that would be the only ‘safe’ traffic on the Internet. But if the NSA is at the Internet hubs and also has arrangements with the larger companies in the peering chains, then they are getting most of the Internet traffic in the country. There really are no ‘safe’ ISPs in the US – just those who haven’t had the NSA knocking on their doors.

Categories
Current News Technology The Industry

Time to End the Cable Card Rules?

no-cable-tv (Photo credit: hjl)

This week the National Cable Television Association (NCTA) published a document called The Integration Ban – A Rule Past its Prime. So what is the integration ban and why are they so upset about it?

When the cable industry uses the phrase integration ban, they are referring to the various rules that require cable companies to offer settop boxes that include a cable card. There is no actual FCC order called the integration ban and that is a ‘marketing’ phrase the industry came up with to talk about the cable card rules.

NCTA has a lot of valid points, and there probably is no other set of rules administered by the FCC that is as much of a mess as the cable card rules. These rules came into place in 1998 in response to requests from the public to be able to use their own settop boxes rather than the ones supplied by the cable company (and for which the cable company charges). And so the FCC came up with complicated rules that required cable companies to use boxes that included a cable card.

A cable card is a little device that is about the size of a credit card and that fits into a slot in settop boxes. Its function is to decrypt the television signal from the cable company in order to watch the programming. Different cable companies use different encryption techniques, and so a consumer must acquire a cable card from their own cable company, and then they can use the card in a settop box they buy on their own.

This sounds like a good idea. Cable companies have historically charged around $5 per month, every month, to 'rent' the settop box, and the FCC clearly envisioned that a lot of people would buy their own settop boxes to avoid these fees. But they haven't, and so from a practical perspective this order has been a dismal failure. According to the NCTA there are only 600,000 cable cards in use today compared to 40 million cable card-ready settop boxes. And there are a huge number of settop boxes that don't include the cable card technology, so less than 1% of consumers have taken the opportunity to avoid the settop box charges. From a market perspective that is a failure.

But that is not the only reason that the cable card order is a mess today. The FCC has granted numerous waivers over the years, and so some companies do not have to use cable cards. AT&T and the telcos that deliver TV over DSL do not have to use cable cards because nobody has really figured out a way to make them work with the way DSL delivers a TV signal. One of the functions a cable card supports is tuning, meaning changing channels, and these IPTV technologies change the channel back at the headend rather than at the customer location. Many of the smaller fiber providers cannot buy settop boxes that will accept cable cards, although Verizon must offer them. The satellite providers also do not have to use cable cards, for similar reasons.

But the FCC has also granted conditional waivers to some traditional cable companies like Charter and Cablevision. These providers have been working with a new technology that lets customers download software so that external devices can act as settop boxes on their systems.

But there is an even bigger reason why the cable card rules are a mess. In January of this year the D.C. Circuit Court of Appeals entirely vacated what are known as the 'Plug and Play rules' that were issued by the FCC in 2003. Those rules made changes to the cable card rules along with other cable-related issues. Further, the FCC amended the cable card rules again in 2010, largely based upon the 2003 order, and yet those rules were not vacated by the Court. We now have a regulatory puzzle that I am not sure anybody can solve (but many lawyers will be glad to charge to try).

Finally, and probably most important of all, settop boxes are quickly losing relevance in the marketplace. The FCC needs to look into people's living rooms to see how people are watching video today. (I don't mean that literally, since that seems to be the NSA's job.) People want to be able to watch video on a wide array of devices, not just their television sets. They are connecting a plethora of new devices to their TVs and wireless networks to do this on their own. And many cable companies are now helping them by offering some form of what they are calling 'TV Everywhere'. There are also cable providers who are actively allowing devices like the PlayStation and Apple TV, or services like Hulu, to act as their settop box.

So we have cable card rules that are a failure in the marketplace. Further, the cable card rules have been eviscerated by a court order and almost nobody understands what is or is not required any more. And technology is getting ready to quickly bypass the traditional settop box. The FCC needs to admit that this is an order past its prime and should stop requiring new cable cards. It might make sense for some period of time to allow existing cable cards to be used, but it's time to face the reality of the market and the technology and get out of the way of innovation.

Categories
Technology What Customers Want

The Future of TV – The Sets

Various remote controls for TV set, DVD and VHS (Photo credit: Wikipedia)

I think everybody agrees that television viewing is changing rapidly, and everybody in the industry has been thinking about how these changes will impact the cable business. I am going to do a series of blogs for a few Mondays looking at where industry experts think the business is moving. I will start off today looking at the future of the television set and then move on to other aspects of the business such as advertising, content production and viewing habits.

For the first time in many decades the purchase of new television sets is down. This seems to be due to two primary factors. First, 11% of homes now say that they watch all of their video on computers, laptops, tablets or smartphones. So some households have given up on the communal nature of having a centralized set that everybody can watch together. However, the communal nature of TV viewing probably means that most households are going to want to keep a TV set of some sort. Second, TVs are being upgraded less often, and people are treating them more as a screen than as a standalone device. When somebody connects a streaming device like a Google Chromecast to their TV they have in effect upgraded it without having to buy a new monitor.

So I looked around to see what experts think will happen to the TV set over time. Here are some guesses, both short-term and long-term.

Short-Term. In the short term TV sets are going to get glitzier and have even more functions than they do today. Of course, not all big TV innovations succeed, as the fizzle of 3D TVs in 2010 showed. But before TV manufacturers concede that the standalone TV set is fading they are going to try to sell new sets by pushing new features. Some of the features being seen on new TVs now include:

  • Split screens. This takes the idea of picture-in-picture and creates up to four separate pictures on the screen at the same time. Thus, a sports fan could watch four football games simultaneously. This has to be giving nightmares to companies delivering IPTV over DSL, since each set could be asking for up to four HD channels at the same time (see the bandwidth sketch after this list).
  • Ultra High Definition. There are now TVs being made with 4K resolution, which provides four times as many pixels (a 3840 x 2160 pixel grid compared to today's 1920 x 1080 grid).
  • OLED (Organic Light Emitting Diodes) TVs. These are ultrathin TVs made of layers of sprayed on materials that create a new kind of diode. The diodes emit their own light and turn black when not being used. The Koreans have made an OLED screen that is flexible and only 4 mm thick.
  • IGZO (Indium Gallium Zinc Oxide). Sharp has introduced a new LCD screen that is much brighter and can change colors much faster than older LCD screens. This is ideal for gaming but also makes a superior TV screen.
  • Smart TVs. It is rumored that Apple is almost ready to release its iTV, the next generation of smart TV. A smart TV is really a new kind of smarter settop box combined with a screen. Apple will probably build Siri, iSight and other computer and smartphone features into the box. The smart TV will no longer be just a tuner and recorder but a full-functioning application machine that brings the web and cellphone apps, fully integrated, to the TV set.
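
To see why the split-screen feature in the first bullet worries anybody delivering IPTV over DSL, here is a quick bandwidth sketch; the per-stream and DSL figures are rough assumptions:

    hd_stream_mbps = 7        # assumed bandwidth of one compressed HD channel
    pictures_per_screen = 4   # four simultaneous pictures on one split-screen set
    sets_in_home = 2          # assumed sets watching at the same time
    dsl_video_mbps = 25       # assumed usable IPTV bandwidth on a VDSL line

    demand = hd_stream_mbps * pictures_per_screen * sets_in_home
    print(f"Peak video demand: {demand} Mbps against a {dsl_video_mbps} Mbps pipe")
    print("Fits" if demand <= dsl_video_mbps else "Exceeds the DSL pipe")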

Long Run. In the long run it is likely that the TV settop box functionality will be completely separated from the display. OLED flexible and transparent displays will mean that a TV can be installed anywhere by laying a film over an existing surface. And so there could easily be an inexpensive TV display on the side of the refrigerator, on every mirror in the house or on any wall. These TVs will be operated using a combination of a smart box and very fast WiFi in the house that will let all of the TVs be integrated into one system. This will allow for interesting new features such as 'follow-me' TV, where the TV signal follows a person from device to device and room to room as they move through the house.

TV is also likely to become far more personal to each person in the household, a topic which I will look at in a future blog.

One small detail I almost forgot: the lowly TV remote is likely to die soon. The remote we have today is largely still needed because of an FCC rule called the integration ban, which requires cable settop box manufacturers to use a removable security module, called a cable card. And so the current remotes still work on ancient infrared technology.

Remotes are starting to be replaced by smartphones and there are apps which can take over many of the remote functions. But in the not-too-distant future the smart TVs are going to do away with the need for any device and you will be able to control the TV by voice commands or by gestures. I know this will save me the five minutes it takes me every time I go to watch TV and try to remember where I left the remote!

Categories
Technology The Industry What Customers Want

Will the Real 4G Please Stand Up?

4G LTE single mode modem by Samsung, operating in the first commercial 4G network by Telia (Photo credit: Wikipedia)

We are all aware of grade inflation, where teachers give out more high grades than are deserved. US cellular marketers have been doing the same thing to customers, inflating the performance of their data products by calling every new development the next generation. Earlier this year the International Telecommunication Union (ITU) approved the final standards for 4G cellular data. One of the features of the final standard is that a 4G network must be able to deliver at least 100 Mbps of data to a phone in a moving vehicle and up to 1 Gbps to a stationary phone.

Meanwhile in the US we have had cellular networks marketed as 4G for several years. In the US the earliest deployments of 3G networks happened just after 2001. That technology was built to a standard that had to deliver at least 200 kbps of data, which was more than enough when we were using our flip phones to check sports scores.

But since then there have been a number of incremental improvements in 3G technology. Changes like switching to 64-QAM modulation and multi-carrier techniques pushed speeds up, and by 2008 3G networks were pretty reliably delivering download speeds up to 3 Mbps. Around the rest of the world this generation of 3G improvements was generally referred to as 3.5G. But in the US the marketers started calling it 4G. It certainly was a lot faster than the original 3G, but it is still based on the 3G standard and is not close to the 4G standard.

And since then there have been other big improvements using LTE and HSPA. For example, LTE is an all-packet technology, which allows it to send voice traffic over the data network, gaining efficiency by not having to switch between voice and data. One of the biggest improvements was the introduction of MIMO (multiple input multiple output), which uses multiple antennas to send and receive several data streams at once over the same spectrum, squeezing more capacity out of the same channel.

For a while WiMAX looked like a third competitor to LTE, but it's pretty obvious now that LTE has won the platform battle in the US. All of the major carriers have deployed significant amounts of LTE and most of them say these deployments will be done in metropolitan markets by the end of this year. Speeds on LTE are certainly much faster than the earlier 3.5G speeds. But this is still not 4G, and around the rest of the world this technology is referred to as 3.9G or Pre-4G.

But to date there are very few phones that use the LTE network to its fullest. There have been a few handsets, like the HTC Thunderbolt, that have been designed to use the available LTE speeds. And Verizon says it will roll out smartphones in 2014 that will work only on the LTE network.

There is a big trade-off in handsets between power consumption and the ability to switch between multiple cellular technologies. A typical cell phone today needs to be able to work on 3G networks, 3.5G networks and several variations of the latest networks including the different flavors of LTE as well as the HSPA+ used by T-Mobile. So, interestingly, the most popular phones like the iPhone and the Galaxy S4 will work on LTE, but don’t come close to achieving the full speeds available with LTE. And of course, nobody tells this to customers.

Starting in September, South Korea will see the deployment of another incremental improvement in cellular data speeds, a technology called LTE-A (LTE Advanced). It achieves data speeds about twice those of current US LTE deployments by layering in a technology called carrier aggregation (CA), which links together two different blocks of spectrum into one data path.
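
Conceptually, carrier aggregation just adds the component carriers together into one data path. A minimal sketch, with assumed per-carrier throughput:

    def aggregated_throughput_mbps(component_carriers_mbps):
        # With carrier aggregation the handset is scheduled across all of its
        # component carriers at once, so usable throughput is roughly their sum.
        return sum(component_carriers_mbps)

    one_carrier = [50]          # assumed throughput of a single LTE carrier
    lte_a_carriers = [50, 50]   # LTE-A bonding two carriers in different bands

    print(aggregated_throughput_mbps(one_carrier))     # 50 Mbps
    print(aggregated_throughput_mbps(lte_a_carriers))  # 100 Mbps, roughly double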

And the US carriers have talked about deploying the LTE-A technology starting sometime in 2014. No doubt when this is deployed in the US some marketer is going to call it 5G. And yet, it is still not up to the 4G standard. Maybe this is now 3.95G. Probably by the time somebody actually deploys a real 4G phone in the US it is going to have to be called 8G.

Categories
Technology

Is the Internet Changing the Way We Think?

Cover via Amazon

Nicholas Carr published a headline piece in the Atlantic in 2008 that asked 'Is Google Making Us Stupid?'. He expanded that article into a book, The Shallows: What the Internet Is Doing to Our Brains. The basic hypothesis of the book is that the nature of the way we use the Internet is changing the way we think.

Carr looks at how we use the Internet and compares it to the way we learned before the Internet. Everybody who loves books knows that feeling where you sink so deeply into a book that your mind is fully immersed in the book’s world. To quote Carr:

Even the earliest silent readers recognized the striking change in their consciousness that took place as they immersed themselves in the pages of a book. The medieval bishop Isaac of Syria described how, whenever he read to himself, “as in a dream, I enter a state when my sense and thoughts are concentrated. Then, when with prolonging of this silence the turmoil of my memories is stilled in my heart, ceaseless waves of joy are sent me by inner thoughts, beyond expectation suddenly arising to delight my heart.” Reading a book was a meditative act, but it didn’t involve a clearing of the mind. It involved a filling, or replenishing, of the mind. Readers disengaged their attention from the outward flow of passing stimuli in order to engage it more deeply with an inward flow of words, ideas, and emotions. That was—and is—the essence of the unique mental process of deep reading.

By contrast, using the Internet is the opposite of reading a book; the experience is a million miles wide and an inch deep. The Internet purposefully interrupts you, distracts you, gives you constant reasons not to delve deeply and think hard. Instead it is easy to flit from topic to topic, from distraction to distraction. As Carr describes it:

What the Net seems to be doing is chipping away my capacity for concentration and contemplation. Whether I’m online or not, my mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.

As somebody who has probably read a few thousand books in my lifetime I doubt that I am ever going to want to give up the experience of reading books. But I also now spend a lot of time on the Internet and I acknowledge that Carr is right about how I use my brain during that experience. I know when reading web news stories that I rarely even read a complete article, but rather quickly skim for the gist of it and move on.

This book makes me wonder about two things: whether there is anything negative in the way that most people use the Internet, and in particular whether it is bad for our kids.

Carr suggests that there is something wrong with using the Internet the way we do. He says we are essentially using the Internet as our long-term memory and that it doesn't force us to undergo the mental processes necessary to place new ideas into our long-term memories. Reading and thinking about the ideas in a book establishes a series of long-term memories in a way that skimming news stories on the web does not. Think back on how you feel about your favorite book and you will see that you have retained a lot of details of the book, but you also will have retained the thoughts the book invoked in you. Observing myself, I see that the same thing is not true of web browsing. But this ignores the huge benefit of the web, which is that it puts the information of the human race at our fingertips, meaning we can find things out faster and more accurately than ever before.

Reading books and talking to people about ideas lets you take the time for your brain to process ideas, form conclusions and to gain a deeper understanding of your life. Says Carr, “We become, neurologically, what we think.” I have benefitted by my lifelong love of reading books and I now look at the Internet as an enhancement to the way my brain already works.

But my real concern with the Internet is the effect on our kids. We are now creating the first generation of humans 2.0, who are being raised with the Internet as a constant background. Are we raising a generation of kids who cannot or will not be deep thinkers because they are not being forced to think deeply? Like any other human act, the very act of thinking deeply trains our minds to think even more deeply in the future. Are we creating a generation of kids whose brains will mimic the shallowness of the Internet and who will constantly flit from one topic to another, always ready for the next distraction? I really don't know, but it is certainly possible and it is a bit scary.

Categories
Improving Your Business Technology

Do You Understand Your Chokepoints?

Almost every network has chokepoints. A chokepoint is some place in the network that restricts data flow and that degrades the performance of the network beyond the chokepoint. In today’s environment where everybody is trying to coax more speed out of their network these chokepoints are becoming more obvious. Let me look at the chokepoints throughout the network, starting at the customer premise.

Many don't think of the premise as a chokepoint, but if you are trying to deliver a large amount of data, then the wiring and other infrastructure at the location will be a chokepoint. We are always hearing today about gigabit networks, but there are actually very few wiring schemes available that will deliver a gigabit of data for more than a very short distance. Even category 5 and 6 cabling is only good for runs of about 100 meters at that speed. There is no WiFi on the market today that can operate at a gigabit. And technologies like HPNA and MoCA are not fast enough to carry a gigabit.

But the premise wiring and customer electronics can create a chokepoint even at slower speeds. It is a very difficult challenge to bring speeds of 100 Mbps to every corner of large premises like schools and hospitals. One can deliver fast data to the premise, but once the data is put onto wires of any kind the performance decays with distance, and generally a lot faster than you would think. I look at the recently announced federal goal of bringing a gigabit to every school in the country and I wonder how they plan to move that gigabit around the school. The answer mostly is that with today's wiring and electronics, they won't. They will be able to deliver a decent percentage of the gigabit to classrooms, but the chokepoint of wiring is going to eat up a lot of the bandwidth.

The next chokepoint for most network technologies is the neighborhood node. Cable TV HFC networks, fiber PON networks, cellular data networks and DSL networks all rely on creating neighborhood nodes of some kind, a node being the place where the network hands off the data signal to the last mile. And these nodes are often chokepoints in the network due to what is called oversubscription. In the ideal network there would be enough bandwidth that every customer could use all of the bandwidth they have been sold simultaneously. But very few network operators want to build that network because of the cost, and so carriers oversell bandwidth to customers.

Oversubscription is the process of bringing the same bandwidth to multiple customers, since we know statistically that only a few customers in a given node will be making heavy use of that bandwidth at the same time. Effectively a network owner can sell the same bandwidth to multiple customers knowing that the vast majority of the time it will be available to whoever wants to use it.
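
Here is a simple sketch of the oversubscription math for a single node; the node size, sold speed and peak-usage assumptions are illustrative:

    customers_on_node = 200
    sold_speed_mbps = 50
    node_capacity_mbps = 1000   # assumed bandwidth actually delivered to the node

    ratio = (customers_on_node * sold_speed_mbps) / node_capacity_mbps
    print(f"Oversubscription ratio: {ratio:.0f}:1")

    # The bet is that only a fraction of customers pull data hard at the same time.
    for active_share in (0.05, 0.10, 0.20):
        demand_mbps = customers_on_node * active_share * sold_speed_mbps
        status = "fine" if demand_mbps <= node_capacity_mbps else "congested"
        print(f"{active_share:.0%} of customers active: {demand_mbps:.0f} Mbps ({status})")

At a 10:1 ratio this node holds up until roughly one customer in ten is busy at the same moment, which is exactly the evening-peak failure mode described next.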

We are all familiar with the chokepoints that occur in oversubscribed networks. Cable modem networks have been infamous for years for bogging down each evening when everybody uses the network at the same time. And we are also aware of how cell phone and other networks get clogged and unavailable in times of emergencies. These are all due to the chokepoints caused by oversubscription at the node. Oversubscription is not a bad thing when done well, but many networks end up, through success, with more customers per node than they had originally designed for.

The next chokepoint in many networks is the backbone fiber electronics that deliver bandwidth from the hub to the nodes. Data traffic has grown at a very rapid pace over the last decade and it is not unusual to find backbone data feeds where today's usage exceeds the original design parameters. Upgrading the electronics is often costly, because in some networks you have to replace the electronics at all of the nodes in order to fix the ones that are full.

Another chokepoint in the network can be hub electronics. It’s possible to have routers and data switches that are unable to smoothly handle all of the data flow and routing needs at the peak times.

Finally, there can be a chokepoint in the data pipe that leaves a network and connects to the Internet. It is not unusual to find Internet pipes that hit capacity at peak usage times of the day which then slows down data usage for everybody on the network.

I have seen networks that have almost all of these chokepoints and I’ve seen other networks that have almost no chokepoints. Keeping a network ahead of the constantly growing demand for data usage is not cheap. But network operators have to realize that customers recognize when they are getting shortchanged and they don’t like it. The customer who wants to download a movie at 8:00 PM doesn’t care why your network is going slow because they believe they have paid you for the right to get that movie when they want it.

Categories
Improving Your Business Technology

The Internet of Things is Here Today

Consider the following pricing chart from Vivint, one of the nationwide leaders in home security. This particular pricing chart happens to come from Fort Wayne, Indiana.


This may not look like it, but this is the beginning of the Internet of Things and I think the way that Vivint has packaged this is brilliant. Just a few years ago this company and every company in the security business would have been selling only the features in the first column. But now they have added on energy management and home automation which are the first steps into the Internet of Things. To make this work they will install a gateway in the home that is capable of monitoring or communicating with the devices in the home and also communicating back with the cloud.

This is just the beginning. As more home-ready services are created Vivint will certainly add some of them as enhancements to the packages listed or will create new packages. The next big field is already hinted at in the last item, the medical pendant. We are not too far away from the time when sensors will be able to monitor your health and keep a constant record of your heartbeat, blood pressure and other vital signs. And a few years after that, micro sensors will be in your blood looking at your blood chemistry, looking for cancer and so on.

A company like Vivint will have to decide which things it will support, because the scope of the Internet of Things will become immense. It's been predicted that much of the Internet of Things will be handled by apps. But households will still need the gateway and will want an expert to make sure things like security and smoke alarms are connected properly. I see a prominent role for businesses willing to go into the home to make sure that everything works well together.

Since there will be so many options in the Internet of Things, it's likely that a carrier will choose a few standardized packages that fit a large percentage of the population and will leave customized packages to somebody else. For example, even today there are a ton of other options available in the energy management field and Vivint has chosen a few common ones. Today a household can also do things like control blinds to allow or block sunlight, coordinate ceiling fans, change the hot water heater settings dynamically during the day, and interface with external solar panels.

I believe a lot of homes are going to want these services. I also know that customers will choose somebody they know and trust if given a choice of vendors. The Internet of Things is going to grow over time while traditional services like voice and cable TV wane. If you are going to survive as a carrier selling to households, then selling the Internet of Things needs to be in your portfolio.

Categories
Technology

Is There any Life Left in Copper?

RG-59 coaxial cable A: Plastic outer insulation B: Copper-clad aluminium braid shield conductor C: Dielectric D: Central conductor (copper-clad steel) (Photo credit: Wikipedia)

Copper is still a very relevant technology today, and when looked at on a global scale nearly 2/3 of all broadband subscribers are still served by copper. That percentage is smaller in the US, but this country has a far more widely deployed cable TV system than most of the rest of the world.

The most widely deployed DSL technologies today are ADSL2 and VDSL. In theory these technologies can get speeds up to about 40 Mbps. But depending upon the gauge, the age and the condition of the copper many actual deployments are closer to 20 Mbps than the theoretical 40 Mbps.

ADSL2 and VDSL technology has been widely deployed by AT&T in its U-verse product, which serves over 7 million data customers and over 4.5 million video customers. AT&T has made the product available to over 24 million homes. AT&T can support the product up to about 3,500 feet on a good single copper pair and up to 5,500 feet using two bonded copper pairs.
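
As a rough sketch of how those distance limits turn into a serviceability check, using only the figures quoted above (real loop qualification also depends on wire gauge, bridged taps and the condition of the copper):

    def uverse_serviceable(loop_feet, bonded_pairs=False):
        # Thresholds taken from the distances cited above: roughly 3,500 feet on a
        # good single copper pair, roughly 5,500 feet with two bonded pairs.
        limit_feet = 5500 if bonded_pairs else 3500
        return loop_feet <= limit_feet

    for loop_feet in (2500, 4200, 6000):
        single = "single pair OK" if uverse_serviceable(loop_feet) else "single pair too long"
        bonded = "bonded pairs OK" if uverse_serviceable(loop_feet, bonded_pairs=True) else "bonded pairs too long"
        print(f"{loop_feet} ft: {single} / {bonded}")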

And ADSL2 is a pretty decent product. It can deliver IPTV and still support an okay data pipe. However, as the cable companies are finding ways to get more bandwidth out of their coaxial cable and as new companies are deploying fiber, these DSL technologies are going to again fall behind the competition.

So what is out there that might resurrect copper and make speeds faster than ADSL2? Not too long ago I wrote a blog about G.Fast, which is Alcatel-Lucent's attempt to find a way to get more speed out of legacy copper networks. In recent field tests ALU achieved a maximum speed of 1.1 Gbps over 70 meters and 800 Mbps over 100 meters on brand-new copper. On older copper the speed dropped to 500 Mbps over 100 meters.

However, the G.Fast distance limitations are far shorter than those of ADSL2, and G.Fast is really more of a drop technology than a last mile technology; it would require a telco like AT&T to build a lot more fiber to get even closer to homes. You have to wonder if it makes any sense to rebuild the copper network to get up to 500 Mbps out of copper when fiber could deliver many gigabits.

There are other technologies that have been announced for copper. Late last year Genesis Technical Systems announced a scheme to get 400 Mbps out of copper using a technology they are calling DSL Rings. This technology would somehow tie 2 to 15 homes into a ring and bridge them with copper. Details of how the technology works are still a little sketchy.

In 2011 the Chinese vendor Huawei announced a new technology that will push up to 1 gigabit for 100 meters. This sounds very similar to G.Fast and sounds like a way to use existing copper within a home rather than rewiring.

There is one new technology that is finally getting wider use: bonded VDSL2 pairs that use vectoring. Vectoring is a noise cancellation technology that works much the way noise-cancelling headphones do, cancelling out most of the crosstalk between copper pairs. Alcatel-Lucent hit the market in late 2011 with bonded-pair VDSL2 that can deliver up to 100 Mbps. However, in real deployments speeds are reported to be 50 Mbps to 60 Mbps on older copper. That is probably enough speed to give DSL another decade, although doing so requires a full replacement of older DSL electronics with VDSL2. One has to wonder how many times the telcos will keep upgrading their copper electronics to get a little more speed rather than taking the leap to fiber like Verizon did.

One only has to look at the growth rate of the data used in homes to ask how long copper can remain relevant. Within a few short decades we have moved from homes that could get by on dial-up to homes that find a 20 Mbps connection too slow. Looking just a few years forward we see the continued growth of video sharing and a lot of new traffic from cellular femtocells and the Internet of Things. It won't be long until people are bemoaning the inadequacy of their 50 Mbps connections. That day is coming, and it probably is not more than a decade away.
