What’s the Future for Media Advertising?

I’m glad I’m not in the advertising business. We think telecom is undergoing big changes, but the advertising firms that represent large clients must be struggling to know where to find the eyeballs to view their ads. The public’s traditional viewing habits are changing quickly and dramatically across all forms of media.

Not many years ago ad revenues were spread across TV, radio, and print and the big companies had a pretty good idea who was seeing their ads by demographic. But the way that people view all forms of media is changing so rapidly that it’s a lot harder to know who is seeing your ads.

Consider the following statistics comparing how people spend their time viewing different media versus how advertising dollars were spent. Both sets of numbers are from 2014 and come from Business Insider.

                  % of Time Spent    % of Advertising Dollars
Digital                46.3%                  28.2%
Television             36.6%                  38.1%
Radio                  11.8%                   8.6%
Print                   3.5%                  17.6%

Digital includes the web, cellphones, and all forms of digital advertising.
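As a rough illustration of the mismatch, here is a minimal Python sketch using the 2014 figures above. It computes the ratio of ad dollars to time spent for each medium; a ratio well below 1 suggests a medium is under-bought relative to the eyeballs it attracts, and a ratio above 1 suggests the opposite.

    # Share of media time vs. share of ad dollars (2014 figures cited above)
    media = {
        "Digital":    {"time_pct": 46.3, "ad_pct": 28.2},
        "Television": {"time_pct": 36.6, "ad_pct": 38.1},
        "Radio":      {"time_pct": 11.8, "ad_pct": 8.6},
        "Print":      {"time_pct": 3.5,  "ad_pct": 17.6},
    }

    for name, m in media.items():
        ratio = m["ad_pct"] / m["time_pct"]   # <1 = under-bought, >1 = over-bought
        print(f"{name:<11} ad-dollar/time ratio: {ratio:.2f}")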

These percentages show an interesting picture of how people are spending their time, and I think this is the first time I have ever seen it expressed in a side-by-side comparison across all forms of media. It’s obvious that people prefer digital media and spend nearly half of their media time there.

The problem that advertisers have is that there are still huge amounts of change happening within each category. For example, on the surface it looks like the amount of advertising spent on television is roughly in line with the eyeball time it buys. But consider the following facts:

  • The demographics for television are changing dramatically and rapidly. For example, the percentage of households of 18–24 year olds that buy a cable subscription dropped 7 percentage points just last year – a decline of about 12% in that group’s subscription rate.
  • Time-delayed viewing is also up dramatically: over 40% of TV watching is now done on a delayed basis (using a DVR or video on demand), and those viewers largely skip the commercials.

This means that the demographic for those who watch television is aging rapidly, and even many of those who watch are doing so on a time-delayed basis and skipping the ads. This has to be a huge concern for advertisers.
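A quick back-of-the-envelope check of the cord-cutting figures in the first bullet above, sketched in Python (the only inputs are the 7-point drop and the 12% relative decline already cited):

    # If a 7 percentage-point drop equals roughly 12% of the subscribing group,
    # the group's starting cable penetration works out to about 7 / 0.12 = 58%.
    drop_points = 7.0        # percentage-point decline in one year
    relative_drop = 0.12     # the same decline as a share of subscribing households
    starting_penetration = drop_points / relative_drop
    print(f"Implied starting penetration: {starting_penetration:.0f}%")                       # ~58%
    print(f"Implied penetration after the drop: {starting_penetration - drop_points:.0f}%")   # ~51%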

But there are equal issues with web advertising. One of the fastest growing categories of web apps is ad blockers, meaning that a huge number of people are now blocking ads from showing up on the pages loaded by their browsers and devices. Studies have shown that people are far better at ignoring web advertising than ads on television or radio. They can and do read news articles or other content without looking at or clicking on any of the ads.

And so an advertiser has a very tough choice to make. They can place ads on television with its rapidly-aging demographic and quickly-decreasing percentage of people who see the ads, or they can advertise on the web where people either block the ads or become good at ignoring them.

This is all evidence that technology has given the average person the ability to skip ads if they so choose. I know I have largely wiped ads out of my life. I can’t recall having watched an ad on television this year and I very rarely click on web ads. I used to be a voracious reader of magazines, yet I have not looked at a magazine this year. I read a local paper every day but I cannot name even one company that advertises in that paper. The one place where ads still get to me is the radio, which I always have on when I’m driving.

The problem with my behavior (and that of everybody else who ignores ads) is that advertising is what pays for a lot of the content we enjoy. If advertisers eventually bow to reality and cut back on TV and web advertising, then a lot of the content we like will not be produced. It’s a real dilemma not only for the advertisers, but also for the television networks and web sites that rely on advertising to fund their content.

Are You Ready for Do-It-For-Me?

The majority of my clients are small businesses, and as such they spend an inordinate amount on software. They have software that they use for billing, accounting, payroll, benefits, taxes, sales, inventory/continuing property records, and scheduling. It’s expensive to buy the various software packages they need and the software is all complicated to learn and operate. And the software is generally not flexible and is hard to customize to provide what the company would like it to do. As small businesses they have to fit the software versus the software fitting them.

There is a new trend in software that might make it easier on small businesses. We are now seeing Do-It-For-Me (DIFM) services that combine cloud software platforms with specialized external labor to perform functions that many companies find costly and time-consuming.

This idea of DIFM is gaining huge traction in the consumer world. We see millennials not buying cars and instead using Uber to get from place to place (cheaper than car ownership). There are now a ton of DIFM services on the web and you can hire somebody to temporarily help you with anything from weeding your garden to mailing packages for you. Now this concept is starting to spread to the business world.

The last revolution in software was the concept of buying only as much software as you need, or software-as-a-service (SaaS). There are now tons of software packages for businesses that you can pay for by the user and which don’t force you to make a huge upfront investment. But most of these software packages are still hard to learn and they don’t integrate with the other software used by a business. So each SaaS program you buy is its own little silo, separate from the rest of your business, and it usually comes with a steep learning curve of its own. SaaS software can save a firm a lot of money compared to buying a huge expensive package, but it doesn’t necessarily make life easier for employees or the business.

But Do-It-For-Me software aims to do just that – take the burden off your staff and let outside specialists take care of the mundane tasks so your staff can focus on the important stuff. This idea has been around on a limited basis for years. For instance, there are huge, successful companies that handle payroll and all of the tax forms and employee deductions that companies hate keeping track of. In the telco world a lot of companies have for years sent their billing out to a service bureau that provides turnkey billing of customers.

There are now DIFM services for all sorts of software functions that most businesses hate doing. All these platforms ask of a business is to supply the raw data they need; they do everything else. These new companies are staffed to be super customer-friendly, which makes them easy to work with.

There are a number of new start-ups in the DIFM arena and I expect many more as these companies find success. Some of the more interesting ones include:

  • Buzz360 has automated the marketing process for smaller companies. They can manage your web site, your social media presence, and other interfaces with customers. They offer a variety of tools for communicating with customers and potential customers.
  • Bench offers a DIFM accounting service that eliminates the need for an in-house bookkeeper.
  • UpCounsel offers a way to use small-business attorneys on an as-needed basis.
  • Zenefits is interesting in that they give free Human Resources software to manage employee benefits and make their money from commissions on insurance.

Every firm has some functions that it hates to do. Such tasks either take valuable time away from more important work or, because they are hated, never get proper attention. You should definitely look around for alternatives, because there is probably somebody out there willing to take these kinds of tasks off your plate.

The Shift To Proprietary Hardware

There is a trend in the industry that is not good for smaller carriers. More and more I see the big companies designing proprietary hardware just for themselves. While that is undoubtedly good for the big companies, and I am sure that it saves them a lot of money, it is not good for anybody else.

I first started noticing this a few years ago with settop boxes. It used to be that Comcast and the other large cable companies used the same settop boxes as everybody else. And their buying power was so huge that it drove down the cost of settop boxes for everybody in the industry. It was standard for the large companies to put their own name tag on the front of the boxes, but for the most part they were the same boxes that everybody else could buy, from the same handful of manufacturers.

But then I started seeing news releases and stories indicating that the largest cable companies had developed proprietary settop boxes of their own. One driver for this change is that the carriers are choosing different ways to bring broadband to the settop box. Another change is that the big companies are adding different features, and are modifying the hardware to go along with custom software. Cable companies are even experimenting with very non-traditional settop box platforms like Roku or the various game consoles.

I see this same thing going on all over the industry. The cable modems and customer gateways that the large cable companies and the large telcos use are proprietary and designed just for them. I recently learned that the WiFi units that Comcast and other large cable companies are deploying outdoors are proprietary to them. Google has designed its own fiber-to-the-premise equipment. And many companies including Amazon, Facebook, Google, Microsoft, and others are designing their own proprietary routers to use in their cloud data centers.

In all of these cases (and many others that I haven’t listed here), the big companies used to buy off-the-shelf equipment. They might have had a slightly different version of some of the hardware, but not different enough that it made a difference to the manufacturers. Telco has always been an industry where only a handful of companies make any given kind of electronics. Generally, smaller companies bought from whichever vendors the big companies chose, since those vendors had the economy of scale.

But now the big carriers are not only using proprietary hardware, a lot of them are also getting it manufactured for themselves directly, without one of the big vendors in the middle. You can’t blame a large company for this; I am sure they save a lot of money by cutting Alcatel/Lucent, Cisco, and Motorola out of the supply chain. But this tendency is hurting the traditional vendors and making it harder for them to survive.

It’s going to get worse. Currently there is a huge push in many parts of the telecom business to use software-defined networking (SDN) to simplify field hardware and control everything from the cloud. Since the large carriers will shift to SDN networks long before smaller carriers, the big companies will be using very different gear at the edges of the network – and those are the parts of the network that cost the most.

This is a problem for smaller carriers, since they often no longer benefit from buying the same devices the large companies buy and riding on their huge economy of scale. Over time this is going to mean that the prices for the basic components smaller carriers buy are going to go up. And in the worst case there might not be any vendor that can make a business case for manufacturing a given component for the small carriers. One of the advantages of having healthy large manufacturers in the industry was that they could take a loss on some product lines as long as the whole suite of products they sold made a good profit. That will probably no longer be the case.

I hate to think about where this trend is going to take the industry in five to ten years, and I add it to the list of things that small carriers need to worry about.

What’s the Real Cost of Providing the Internet?

There is an interesting conversation happening in England about the true cost of operating the Internet. Because it is an island nation, all of the costs of operating the network must be borne within the country, and so every part of the Internet cost chain is being recognized and counted. That’s very different than the way we do it here.

There are two issues concerning British officials – power costs and network capacity. Reports say that the data centers and electronics hubs needed to operate the Internet now consume 8% of all of the power produced in the country. And it’s growing rapidly. At the current rate of growth of Internet consumption, it’s estimated that the power requirements for the Internet are doubling every four years.

Here in the US we don’t share the same concern about power costs. First, we have hundreds of different power companies scattered across the country and we don’t necessarily produce electricity in the same places where we use the Internet. Second, in this country the large data centers are operated by billion-dollar companies like Amazon, Google, and Facebook, who can afford to pay the electric bills, mostly due to advertising revenues. But in a country like England, that sort of drain on electricity capacity must be borne by all electric rate payers when the whole grid hits capacity and must somehow be upgraded.

And it’s going to get a lot worse. If the pace of power consumption needed for broadband doesn’t somehow slow down, then by 2035 the Internet will be using all of the power produced in the British Isles today. It’s not likely that the power needs will grow quite that fast. For example, there are far more power-efficient routers and switches being made for data centers that are going to knock the power demand curve down a notch, but there is no reason to think that the demand for Internet usage is going to stop growing anytime soon.
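To make the projection concrete, here is a small Python sketch using the figures cited above (an 8% share of national output today, doubling every four years). The crossover point is very sensitive to these assumptions and to any growth in total generation, so treat it as an illustration rather than a forecast.

    # Project how long until Internet-related demand equals today's total output,
    # assuming demand doubles every `doubling_years` and (optionally) total
    # generation also grows. All inputs are the rough figures cited above.
    def years_until_crossover(share_now, doubling_years, supply_growth_per_year=0.0):
        share, years = share_now, 0
        while share < 1.0:
            years += 1
            share *= 2 ** (1 / doubling_years)        # demand growth
            share /= (1 + supply_growth_per_year)     # supply growth, if any
        return years

    print(years_until_crossover(0.08, 4))                               # ~15 years
    print(years_until_crossover(0.08, 4, supply_growth_per_year=0.01))  # a bit longer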

In Britain they are also worried about the cost of maintaining the network. They say that the bulk of their electronics need to be upgraded in the next few years. In the industry we always talk about fiber being a really long-term investment, and the fiber is so good today that we really don’t know how long it’s going to last – 50 years, 75 years, longer? But that is not true for the electronics. Those electronics have to be replaced every 7 to 10 years and that can be expensive.

In this country all of the companies and cities that were early adopters of FTTP technology used BPON – the first Fiber-to-the-premise technology. This technology was the best thing at the time and was far faster than cable modems – but that is no longer the case. BPON is limited in two major ways. First, as happens with many technologies, the manufacturers all stopped supporting BPON. That means it’s hard to buy replacement parts and a BPON network is at major risk of failure if one of the larger core components of the network dies.

BPON is also different enough from newer technologies that the new replacements, like GPON, are not backwards compatible. This means that in order to upgrade to a newer version of fiber technology every electronic component in the network from the core to the ONTs on customer premises must be replaced, making upgrades very costly. Even the way BPON is strung to homes is different, meaning that there is fiber field work needed to upgrade it. We have hopefully gotten smarter lately; a lot of fiber electronics today are being designed to still work with later generations of equipment.

This is what happened in England. The country’s telecoms were early adopters of fiber and so the electronics throughout the country are already aged and running out of capacity. I saw a British article where the author was worried that the networks were getting ‘full’ and that more fiber would have to be built. The author didn’t recognize that upgrading the electronics instead would let the existing fiber deliver a lot more data.

England is one of the wealthier nations on the global scale, and one has to be concerned about how the poorer parts of the world are going to deal with these issues. As we introduce the Internet into Africa and other poorer regions, one has to ask how a country that already has trouble generating enough electricity is going to handle the demand created by the Internet. And how will poorer nations keep up with the constant upgrades needed to keep the networks operating?

Perhaps I am worrying about nothing and maybe we will finally see the cheap fusion reactors that have been just over the horizon since I was a teenager. But when a country like England talks about the possible need to ration Internet usage, or to somehow meter it so that big users pay a lot more, one has to be concerned. In our country the big ISPs always complain about profits, but they are wildly profitable. The US and a few other nations are very spoiled and we can take the continued growth of the Internet for granted. Much of the rest of the world, however, is going to have a terrible time keeping up, and that is not good for mankind as a whole.

Dropped Rural Calls

A lot of rural places in the country are having problems receiving calls. Calls will be placed to rural areas but are never completed. This is not a problem everywhere and it’s not a problem in urban areas, so most people have no idea this is happening.

Senators Amy Klobuchar, Jon Tester, and Jeff Merkley introduced a bill in the Senate aimed at fixing the problem. A similar bill was introduced last year but died in committee. This is not a new issue and the FCC has taken several steps in the recent past to try to fix it. In 2011 the FCC placed 2,150 test calls to rural areas and 344 of them never reached the called party. Another 172 calls were of such poor quality that they were almost impossible to hear. That’s astounding – nearly one-fourth of the calls placed either failed or were barely usable.

In 2012, in Docket CC 01-92, the FCC prohibited several practices by carriers that were resulting in huge delays in completing calls, in lost calls, and in poor call quality. For example, they made it illegal to play ringing to the calling party before the call has actually reached the called party. Callers were hearing long stretches of ringing and giving up on calls they placed to rural areas before the call had even reached the other end.

That order also put a limit on the number of times that a given call can be handed from one carrier to another – something that is common in the world of least-cost call routing. Carriers keep routing tables that choose among many possible downstream carriers for a given call based upon the price each of those carriers will charge them. But the carriers they hand calls to do the same thing, so a call can be handed between carriers multiple times along the way.

This is a problem for calls made to rural areas because the access charges in those areas are higher than the charges for calling urban places. Access charges are the fees that a local telephone company bills to long distance carriers for using its network. In a world of least-cost routing, many carriers don’t want to pay the higher cost to complete a rural call. The FCC has taken steps to remove the price barrier by phasing the access charges for terminating calls to zero. But even that isn’t going to completely eliminate the problem, because there is also a mileage component to access charges, so calls to places far outside of cities will continue to cost more than calls to urban areas.
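To make the routing economics concrete, here is a minimal, entirely hypothetical least-cost routing sketch in Python. The carrier names and rates are invented; the point is simply that a termination cost with a mileage component makes rural calls pricier, and that a cap on hand-offs limits how many times a call can be passed along.

    # Hypothetical least-cost routing: pick the cheapest downstream carrier,
    # where the cost to terminate a call is a per-minute access rate plus a
    # mileage-based component (so far-from-town rural numbers cost more).
    MAX_HANDOFFS = 2   # illustrative cap; the FCC order limits hand-offs, but this number is made up

    carriers = [
        # (name, per-minute rate, extra per-minute rate per 100 miles) -- invented numbers
        ("CarrierA", 0.0020, 0.0010),
        ("CarrierB", 0.0018, 0.0025),
        ("CarrierC", 0.0030, 0.0005),
    ]

    def termination_cost(rate, mileage_rate, miles):
        return rate + mileage_rate * (miles / 100)

    def pick_route(miles, handoffs_so_far):
        if handoffs_so_far >= MAX_HANDOFFS:
            return None   # can't hand the call off again
        costs = [(termination_cost(r, m, miles), name) for name, r, m in carriers]
        return min(costs)   # cheapest (cost, carrier) pair

    print(pick_route(miles=5, handoffs_so_far=0))     # urban call: cheap to complete
    print(pick_route(miles=250, handoffs_so_far=0))   # rural call: noticeably pricier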

The FCC further implemented new rules in 2013 (Docket WC 13-39) that give it the ability to fine carriers who don’t properly complete calls to rural locations. But the problem still persists. The common belief in the industry now is that some long distance carriers are simply dumping rural calls and not handing them off to anybody else. The proposed new law would make it easier for the FCC to prosecute such carriers.

The problem that least-cost routing creates in the industry is that the margins on long distance calls are really slim for the intermediate carriers. For intermediate carriers that guarantee their rates to others, too many calls to rural places can put their profits underwater.

In the industry we call this sort of situation “arbitrage”. It arises when there are different costs for doing the same function in different ways. Arbitrage situations have always caused trouble. Carriers who can bill the higher prices in an arbitrage situation strive to bill for as many minutes as they can, and carriers who pay those higher prices look for ways to pay something less. A significant percentage of the carrier issues with long distance over the last few decades are the result of these arbitrage situations.

The problem the FCC faces is that it’s really hard to catch the carriers who are dropping calls. The whole phone network was built on the understanding that when a carrier is handed a call it tries its best to complete it. That understanding is what made the US phone network the envy of the world. But generally there were not too many companies involved in a long distance call. Most calls years ago involved the local telco where the caller lived, one long distance carrier in the middle, and another local telco where the called party lived.

But today the least-cost routing tables, and the fact that many calls are partially or totally VoIP calls, make it hard to even find out which carriers are in the middle of a given call. If a carrier is dumping rural calls and is smart enough to destroy any record of ever having received them, it’s very hard to definitively prove it is the culprit.

We are undergoing a transition over the next decade to an all-IP network between carriers. This might eliminate some of the problem if that reduces the need for intermediate least-cost carriers. But it also might not change anything, and as long as there is a price difference between completing a rural and an urban call, this problem is likely to remain.

Is There Any Future for Voice Mail?

I read that J.P. Morgan and Coca-Cola have dropped their voice mail service, and I wonder if we are starting to see the end of voice mail as a product.

In their heyday, voice mail and caller ID were hailed as the big saviors of telcos. A lot of customers dropped clunky answering machines and changed over to the telcos’ voice mail. And it was lucrative, at least for the larger telcos. They charged $5–$7 for residential voice mail and $7–$10 for business voice mail, and this drove a lot of revenue.

It was not necessarily such a good deal for smaller telcos, although they had to have it to remain relevant to their customers. I can remember one client upgrading voice mail and spending $150,000 on the new hardware and software platform. I doubt that they had more than a few hundred customers on voice mail, so this made for a slow payback.

Today it’s easy to think that voicemail has been around for a long time. But it was developed by Voice Message Exchange in the late 70s and didn’t hit the market until the early 80s. Many of the larger companies like AT&T didn’t have a large business solution until the early 90s. Voice mail relied on bulk computer storage and wasn’t practical on a large-scale basis until there were large and affordable drum storage units.

But then the market started chipping away at voice mail. A few cellphones came with free voice mail in the early 90s and today it’s a standard feature on almost every cellphone on the market. Voice mail and a lot of other telephone features are now included with the price of the service for most VoIP plans like Vonage, and most unlimited long distance plans. One has to imagine that the residential penetration rate for paid voice mail has dropped significantly.

But the real money in voice mail has been for service to business lines. It’s not unusual for businesses to pay $10 per line for voice mail, even at large businesses. And of course, with today’s cheap data storage, this has to be almost all margin to the voice mail provider.

Companies are dropping voice mail partially because of the cost, but more importantly because people just don’t use it much. I know I hate voice mail and find it a chore to check my own. I finally installed an app that transcribes my voice mails to email so I never have to check it again. If I call somebody I know and get their voice mail, I don’t leave a message but instead send them an email. And all of us remember those people who left interminably long voice mails that made us groan as soon as we knew who the message was from.

The millennials hate voice mail. They are a generation that expects to be able to communicate quickly and they prefer text messages or instant messaging. In fact, one of the big complaints about millennials in the work force is that many of them hate talking on the phone at all. I’ve read that in colleges today leaving voice mails is as rare as sending emails – they are both dismissed as old technology.

We are probably a generation away from a time when voice mail will become a thing of the past just like many other telecom services. It is hard to explain to a kid today why somebody should pay $10 per month just so others can leave them messages.

Today a lot of telcos are pushing unified communications, which is basically enhanced voice mail. This is a product that combines all forms of company communications onto the same platform and lets people receive communications in whatever format they like. But as millennials become more prevalent in the workplace, even unified communications doesn’t look to have a rosy long-term future. A lot of these platforms are about transcribing and forwarding emails and voice mails, and if employees are only going to text and IM each other, you don’t really need a fancy platform.

I am positive that when voice mail was introduced in the 80s absolutely nobody could have imagined that just over thirty years later people would be abandoning it, and that fifty years later it might be completely dead as a product. This goes to show how quickly things are changing. Now millennials, can I make a request? Can you also get rid of the big corporate IVR systems?

Selling Our Personal Data

Recently the CEO of Apple, Tim Cook, has been making speeches in multiple forums that contrast Apple’s privacy practices with those of other large consumer-facing companies like Google, Facebook, and Yahoo. Cook says that his company sells superior products and is not in the business of gathering or selling information about its customers.

Certainly he can’t say that Apple doesn’t use customer information, because they do. I have a MacBook and there are tons of ways that Apple uses my data to make my experience better. If I travel, the Mac will display the right time and local weather, for example. And various Apple software products get to know me and make customized suggestions for me over time. But Cook’s point is that Apple doesn’t sell that data to others.

Of course, the companies that Cook is comparing Apple to don’t sell electronics like Apple does, but rather software. Probably the closest analog to Apple is Samsung, and they can’t make the same claim as Apple. Late last year it was discovered that Samsung smart TVs were capable of listening to customer conversations all of the time. It’s not clear that Samsung gathers data directly from its smartphones, but they have chosen Android, and one can imagine that part of that arrangement is to let Google gather data from Samsung smartphones.

Companies like Facebook and Google have a hard time not using your data, because that is really the only way they can generate value. It’s wonderful to have millions of loyal users on your platform, but both companies make most of their money from advertising. Certainly Google’s search engine advertising doesn’t require any data from users – that revenue is driven by the companies who want their products to be at the top of the list in a search. But Google and Facebook also sell web advertising, and the name of that game is knowing the user in order to direct the most relevant ads to each customer.

I think if the use of our information stopped with advertising, most people would be fundamentally comfortable with having these companies invade their privacy. I know I find it eerie when I do a Google search and for the next three days see ads related to something I searched for. But I can personally live with that, because most of the time Google is wasting its time on me – I wasn’t looking to shop. I find it funny that I will look up the latest information about smart cars and then get flooded with car ads (because I exclusively drive Ford trucks and I buy one every twenty years, whether I need a new one or not).

The real rub is that these companies do a lot more than build advertising profiles on us. They know all sorts of other personal data about us and they associate that data with our name. While I am not bothered by getting car ads for vehicles I am never going to buy, I frequently hear about people getting bombarded with ads or even mailings and phone calls about far more personal topics like rehab centers or the latest diabetes treatments. That is going over the line in my opinion.

The invasion of our privacy seems to be going even further. Facebook, for example, is the world leader in facial recognition technology and they are building a huge database of every time you show up in somebody’s picture. They not only know about you, but they are learning where you go and who you associate with. That is a bit unnerving.

But to me the real scary thing is that these companies then sell this data to others. And there is no telling how that data is used. Even should the large companies have some sense of morality and responsibility (and many believe they do not), the companies that buy this data can do anything with it they please. It’s very easy these days to buy a data dump about other people, and that kind of information can be a powerful tool in the hands of an ex-spouse, an employer, or a scammer.

The problem that we all face is that it’s too easy to use the services that watch us. Google has a spectacular set of software products. And for my generation there are a ton of friends and relatives on Facebook. If you don’t want to be spied on you have to make a very conscious effort to wall yourself off from these sorts of data-gathering web activities, and that is hard to do. And no matter what you do online, your ISP or the government might be gathering all of this data anyway.

These large companies sometimes hide behind the fact that they mostly sell ‘metadata’, which is data that has been scrubbed to hide the identity of individuals. But numerous articles point out that with data mining it takes knowing only a few facts about a person to pull that person’s records back out of metadata files.

We may come to a day when there is massive pushback against these companies that are collecting, using, and selling our personal data. It will probably take a string of tragedies and disasters for this to become a worry for the average person. And if that happens, then either the large companies will stop spying on us or somebody who promises not to will take their place. But it is extremely profitable today for the big companies to spy on people, and until there is more pain than profit from using our data, one has to imagine that this is going to continue.

The Screens We Watch

Americans spend a lot of time in front of screens of some sort – televisions, computers, smartphones. Various studies estimate that the average adult spends between 6 and 8 hours per day in front of screens. So today I thought I would take a short tour through the history and the future of screens.

Early discoveries. Our screens got their start with the early work on cathode ray tubes that began in the late 1800s. In 1907, Henry Joseph Round discovered electroluminescence; later that same year, Russian scientist Boris Rosing was able to transmit crude images onto a screen. Television got its real start in the late twenties when John Logie Baird was able to transmit faces, objects, and colors onto a screen.

Television. The first commercial television screens were produced by Telefunken in Germany in 1934. But TV exploded onto the market after World War II. By 1954, 56% of the homes in the US had a television, and by 1962, 90% had one. These were all cathode ray tubes that created images by firing an electron gun at a fluorescent screen. Kids today have a hard time believing that a TV in the 50s was as heavy and hard to move as a washing machine is today.

Color Television. Color televisions were produced starting in the early 50s and became widely available after 1960. These also used a cathode ray tube and three separate electron beams, one for each primary color (red, green, and blue).

Computer Monitors. Early computer monitors were also cathode ray tubes, but generally of one color only. I’m old enough to remember the first computers with orange letters on a black background, followed by the green-on-black screens. We had the first Macs at our office in 1984 and they had a 9-inch, monochrome 512×342 pixel display. Today the newest Macs have a 5,120×2,880 pixel display that supports millions of colors. I also had an Osborne 1, the first ‘portable’ computer, with a tiny 5-inch orange screen.

Newer TV Technologies. By the early 2000s there were three new technologies that quickly replaced cathode ray tube TVs: plasma TVs, rear-projection TVs, and LCD TVs. A plasma TV uses small cells of electrically charged ionized gases to produce the picture. Rear-projection TVs were just that – a projector using the same basic technology as the front projectors used in classrooms and businesses. But LCDs (liquid crystal displays) won in the marketplace and by 2007 almost all TVs were LCD. An LCD works by placing a tiny liquid crystal cell at each pixel; the cells twist to pass more or less light from a backlight through colored filters to produce specific colors.

LED TVs. The newest technologies are LED-based TVs (OLED, FED, and SED) that use tiny light-emitting elements to produce the colors. These TVs are just hitting the market and have not yet gotten much market penetration. OLED looks to be the most promising technology; it uses an organic electroluminescent film in which each pixel emits its own light.

Touchscreens. The first touchscreen was developed by E.A. Johnson in 1965. Touchscreens were first used for air traffic control and reached consumers with devices like the IBM Simon. Apple used a touchscreen on the first iPhone, and they are now standard on billions of smartphones and tablets.

Retina Display. By 2014 Apple had introduced Retina displays across its product lines. This basically means that the resolution is so high that the human eye is unable to detect any pixelation at a normal viewing distance.
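As a rough illustration of what ‘can’t detect pixels’ means, the sketch below estimates the pixel density needed at a given viewing distance, assuming the common rule of thumb that the eye resolves about one arcminute of angle. The distances and the acuity figure are assumptions, not Apple specifications.

    import math

    # Estimate the pixels-per-inch beyond which individual pixels blur together,
    # assuming visual acuity of ~1 arcminute (a rule of thumb, not a spec).
    def retina_ppi(viewing_distance_inches, acuity_arcmin=1.0):
        angle = math.radians(acuity_arcmin / 60)              # acuity as an angle in radians
        pixel_size = 2 * viewing_distance_inches * math.tan(angle / 2)
        return 1 / pixel_size

    print(f"Phone held at 12 inches: ~{retina_ppi(12):.0f} ppi")   # ~286 ppi
    print(f"Monitor at 20 inches:    ~{retina_ppi(20):.0f} ppi")   # ~172 ppi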

Future Screens. There are a number of new screen technologies now hitting the market. For cell phones and other uses there are bendable screens that are thin, flexible and even transparent. We are on the verge of being able to buy TVs that come in a tube that you can unroll and hang anywhere.

And we are now seeing what comes after screens. It’s hard to call the experience inside a virtual reality headset a “screen”. It is instead an immersive 3D display that puts a viewer into the middle of a scene. And there are now early versions of whole-room holograms that are the precursor to the Star Trek holodecks. I’m imagining that the day may come soon when kids will have no idea what a TV set or a monitor is.

An All-IP Telephone Network?

The FCC posed a very interesting question to the industry. They asked if VoIP should become the only way of delivering voice service. This implies an all-IP network that is extended out to every customer. The FCC asked this question as part of the IP trials that a few telcos are currently undertaking to see what an IP world looks like. While IP undoubtedly makes for the most efficient telco network, I think there are many practical reasons why this can’t be implemented everywhere.

We can start with the FCC’s own estimate that there are something like 14 million rural homes without a broadband alternative. These are people who live in rural areas and are almost universally on old and sometimes very poor copper. They can’t get DSL or cable modem service, and VoIP would not work for them. There are also a ton of people in the country on marginal DSL service who would have a hard time getting VoIP to work. Most such people are also rural, but there are older urban networks with bad copper that suffer from the same problems.

But aside from the rural issue, it’s an interesting question. Cable companies already all use VoIP for voice, as do fiber overbuilders. Urban telcos could also give everybody VoIP, but it would mean providing a DSL connection to everybody on copper. This would cause all sorts of network problems. We found out years ago that you can’t put too many DSL lines into the same large copper sheath without creating interference problems, and universal VoIP would put DSL on just about every copper pair. I also can’t think of any financial benefit to the telco for spending the money to put voice on DSL if that is all a line is going to be used for.

All of these issues make it hard to imagine mandatory VoIP at the customer end of the network. I’ve always envisioned that the IP transition would mean an all-IP network between carriers, which would create the most efficient network. But forcing VoIP where it won’t work right sounds both expensive and impractical, and it would likely cut millions of people off from the voice network.

But there are other parts of an all-IP network that could be interesting. For instance, if the whole network were IP from end to end you could do away with telephone numbers. In an all-IP network each customer would be associated with an IP address, so keeping telephone numbers would be forcing a historical structure onto the new network. While we would probably all still have phone numbers, it would be just as easy to pick somebody out of a menu by name and connect with them, without going through the fiction that a number is required.
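A minimal sketch of that idea: a directory that maps names to IP endpoints, with the old phone number kept only as one more alias. Every name, number, and address here is made up for illustration.

    # Hypothetical all-IP directory: subscribers are reached by name or alias,
    # and a legacy phone number is just another alias pointing at an IP endpoint.
    directory = {
        "alice@example-telco.net": "203.0.113.17",
        "bob@example-telco.net":   "203.0.113.42",
    }
    aliases = {
        "Alice Smith": "alice@example-telco.net",
        "+1-555-0100": "alice@example-telco.net",   # old number kept only for convenience
    }

    def resolve(identifier):
        """Return the IP endpoint for a name, alias, or legacy number."""
        key = aliases.get(identifier, identifier)
        return directory.get(key)

    print(resolve("Alice Smith"))    # 203.0.113.17
    print(resolve("+1-555-0100"))    # same endpoint -- no number actually required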

There are a few situations where going all-IP is a bit of a concern. Take 911. The current 911 network consists of a redundant pair of special access circuits between each carrier and each local 911 center. This layout was created to greatly increase the likelihood that a 911 call can be completed. In an all-IP world 911 traffic would probably be routed with everything else, which is not an issue in itself. But we know that Internet pipes go down all of the time, so anytime there was an Internet outage in a town or a region, 911 would go down with it.

Of course, the FCC didn’t suggest this in a vacuum. They are being prodded toward the IP transition by both AT&T and Verizon, who would like to get out of maintaining rural copper lines. AT&T has said many times that it wants to cut down millions of rural lines and convert those customers to cellular. So any pretense that these carriers are interested in creating a rural all-IP network is a fiction – they don’t want to own or operate a rural landline connection of any kind. And as we recently saw with the large sale of Verizon FiOS lines to Frontier, I’m not sure the two big telcos want to maintain any landline connections at all. These telcos are now mostly cellular companies that find landlines to be a nuisance.

Basics of Video Compression, Part I

Most of the web traffic going to residential customers today is video. This is the first of a series of blogs that look at the techniques that allow video to be delivered efficiently over the Internet and over various types of networks.

All video that is transmitted is somehow first compressed to make it fit within the transmission medium. An easy analog to how this works is to look at the pictures that come out of a digital camera. I once had a good camera and it captured enough detail to let you have a clear print even after cropping to a small part of the picture or blowing the whole picture up to poster size. But I remember that the size of the file created for one picture was immense. If you wanted to email the picture you had to first compress it and save it as a jpeg, and doing so chopped off a lot of the detail of the picture.

Video compression works the same way. Video cameras capture a lot more detail than can ever be retransmitted over a cable TV network or a satellite TV path, and so the detail needs to be trimmed in some way to fit the video transmission path. The industry has developed compression standards that define certain parameters for compressing video. The most commonly used standards include Motion JPEG, MPEG-4 Part 2 (often referred to simply as MPEG-4), and H.264.

The process of compressing video is to apply an algorithm to the raw video signal to reduce its size and then apply an inverse algorithm in the viewing device (your TV settop box, computer, or smartphone) to play the video. This combination of coding and then decoding a video signal is called a video codec (coder/decoder). Each standard is unique, and the same standard must be used at both ends of the transmission path for it to work. For example, you can’t view an MPEG-4 video using an H.264 decoder.
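To show the coding/decoding symmetry in the simplest terms, here is a toy run-length ‘codec’ in Python. This is not how MPEG-4 or H.264 work internally; it only illustrates that the decoder has to apply the exact inverse of whatever the encoder did, which is why both ends must use the same standard.

    # Toy run-length codec: decode() must exactly invert encode().
    def encode(pixels):
        runs, count = [], 1
        for prev, cur in zip(pixels, pixels[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((prev, count))
                count = 1
        runs.append((pixels[-1], count))
        return runs

    def decode(runs):
        return [value for value, count in runs for _ in range(count)]

    frame_row = [0, 0, 0, 255, 255, 0, 0]
    print(encode(frame_row))                        # [(0, 3), (255, 2), (0, 2)]
    assert decode(encode(frame_row)) == frame_row   # lossless round trip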

The two major techniques utilized by a video codec are image compression and video compression.

Image compression uses intraframe techniques to reduce the size of video files. This means that unnecessary information is removed from each frame of the video, with unnecessary defined as detail that cannot be noticed by the human eye. These are the same techniques used in my example above of compressing an image from my digital camera. For instance, a captured image of the sky might contain very tiny nuances of different shades of blue, and the compression would reduce these to a few shades of blue in order to shrink the information that has to be saved.
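A bare-bones sketch of the ‘fewer shades of blue’ idea: quantize the samples so that nearly identical shades collapse to a single value. Real intraframe coding (transforms and quantization tables) is far more involved; this only shows the principle.

    # Intraframe (image) compression in miniature: collapse nearly identical
    # shades to one value so the frame stores fewer distinct values.
    def quantize(samples, step=8):
        """Round each 0-255 sample to the nearest multiple of `step`."""
        return [int(v / step + 0.5) * step for v in samples]

    sky_row = [200, 201, 199, 203, 202, 198]   # subtly different blues
    print(quantize(sky_row))                   # all six samples become 200
    print(len(set(sky_row)), "shades before,", len(set(quantize(sky_row))), "after")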

Video compression, on the other hand, works by using interframe compression techniques. This compares the images from adjacent frames of a video and tries to capture only the pixels that have changed from one frame to the next. For example, if the camera spent a few seconds looking out into a sunny backyard, most of each frame might be identical to the one before, except perhaps for leaves or flowers moving in the wind. These techniques transmit only the pixels that change, along with a note to keep the others the same.
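The same idea in code: compare a frame to the one before it and keep only the pixels that changed. Real interframe coding works on blocks and motion vectors rather than individual pixels, so treat this as a bare-bones illustration.

    # Interframe (video) compression in miniature: send only the changed pixels
    # (and their positions) instead of the whole frame.
    def frame_delta(previous, current):
        return {i: new for i, (old, new) in enumerate(zip(previous, current))
                if new != old}

    def apply_delta(previous, delta):
        frame = list(previous)
        for i, value in delta.items():
            frame[i] = value
        return frame

    frame1 = [10, 10, 10, 80, 80, 10]   # sunny backyard, mostly static
    frame2 = [10, 10, 10, 82, 80, 10]   # one leaf moved
    delta = frame_delta(frame1, frame2)
    print(delta)                        # {3: 82} -- one pixel instead of six
    assert apply_delta(frame1, delta) == frame2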

The real techniques are more sophisticated than that simple example suggests. For instance, if a camera was panning across a view of a house, the details of the house would remain the same even though the house would be shifting across the video image. A technique called block-based motion compensation effectively draws a ‘box’ around the moving block of pixels and carries it from frame to frame rather than re-sending it.

The techniques also make use of schemes that classify frames into different types to make them faster to decode. For example, an I-frame is a standalone frame and would be the first frame of a new scene, where everything is different from the previous frame. A P-frame is a predictive interframe, meaning that it makes reference to and relies on the frame before it. A B-frame is a bi-predictive frame that makes reference to the frames both before and after it.
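Here is a small sketch of how a stream might label its frames, using one example repeating pattern (a ‘group of pictures’). The pattern shown is just an illustration, not something any particular standard mandates.

    # Label frames with I/P/B types using an example group-of-pictures pattern.
    # I-frames stand alone, P-frames look back at the previous reference frame,
    # and B-frames look at the frames on both sides of them.
    GOP_PATTERN = ["I", "B", "B", "P", "B", "B", "P"]   # illustrative pattern only

    def frame_types(num_frames):
        return [GOP_PATTERN[i % len(GOP_PATTERN)] for i in range(num_frames)]

    print(frame_types(10))   # ['I', 'B', 'B', 'P', 'B', 'B', 'P', 'I', 'B', 'B']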

Next in this blog series I will dig a bit deeper into the specific techniques that make these standards work, and I will start to explain why not all implementations of any one compression standard are the same.