Ideas for Better Broadband Mapping

The FCC is soliciting ideas on better ways to map broadband coverage. Everybody agrees that the current broadband maps are dreadful and misrepresent broadband availability. The current maps are created from data the FCC collects from ISPs on Form 477, where each ISP lists broadband coverage by census block. One of the many problems with the current mapping process (I won’t list them all) is that census blocks can cover a large geographic area in rural America, and reporting at the census block level blurs together very different circumstances, because within a single block some folks have broadband and others have none.
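To make that blurring concrete, here’s a toy sketch in Python. The blocks, addresses, and speeds are all made up, and I’m simplifying the 477 rules, but the gist is that a block gets reported at the best speed available anywhere in it, so the homes with nothing effectively disappear from the map.

```python
# Toy illustration (hypothetical data): under the census-block approach, a block
# is reported as "served" at a given speed if the ISP can offer that speed
# anywhere in the block, which hides the homes that can't get it.

homes = [
    # (census_block, address, actual_available_speed_mbps) -- made-up values
    ("Block 100", "101 Rural Rd", 100),
    ("Block 100", "102 Rural Rd", 10),
    ("Block 100", "103 Rural Rd", 0),    # no broadband at all
    ("Block 200", "201 Farm Ln", 25),
    ("Block 200", "202 Farm Ln", 0),
]

# Block-level view, roughly how the current maps summarize things:
blocks = {}
for block, _, speed in homes:
    blocks[block] = max(blocks.get(block, 0), speed)
print(blocks)  # {'Block 100': 100, 'Block 200': 25}

# Household-level view, which the block map can't show:
unserved = [addr for _, addr, speed in homes if speed == 0]
print(unserved)  # ['103 Rural Rd', '202 Farm Ln'] still look "served" by block
```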

There have been two interesting proposals so far. Several parties have suggested that the FCC gather broadband speed availability by address. That sounds like the ultimate database, but for the reasons described below it is not practical.

The other proposal is a three-stage process from NCTA. First, data would be collected by polygon shapefiles. I’m not entirely sure what that means, but I assume it means using smaller geographic footprints than census blocks. Collecting the same data as today using a smaller footprint ought to be more accurate. Second, and this is the best idea I’ve heard suggested, people would be allowed to challenge the data in the mapping database; I’ve been suggesting that for several years. Third, NCTA wants to focus on pinpointing unserved areas. I’m not sure what that means, but perhaps it means creating shapefiles that match the different availability of speeds.
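For what it’s worth, if “polygon shapefiles” means what I assume, the filed data would boil down to geographic polygons with a claimed speed attached, and a challenge would amount to disputing a point-in-polygon lookup like the one sketched below. This is a minimal example using the shapely library, with invented coordinates and speeds.

```python
# Minimal sketch, assuming "polygon shapefiles" means coverage areas drawn as
# geographic polygons. Coordinates and speeds are made up for illustration.
from shapely.geometry import Point, Polygon

# A coverage polygon an ISP might file: the area where it claims 25/3 Mbps service.
coverage_25_3 = Polygon([
    (-93.250, 45.000),
    (-93.240, 45.000),
    (-93.240, 45.010),
    (-93.250, 45.010),
])

# A household location (longitude, latitude) someone wants to check.
home = Point(-93.245, 45.005)

# A challenge boils down to disputing the result of this kind of lookup:
print(coverage_25_3.contains(home))  # True -> the map says this home is served
```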

These ideas might provide better broadband maps than we have today, but I’m guessing they will still have big problems. The biggest issue with trying to map broadband speeds is that many of the broadband technologies in use vary widely in actual performance in the field.

  • Consider DSL. We’ve always known that DSL performance decreases with distance from the DSLAM. However, DSL performance is not as simple as that. DSL also varies for other reasons, like the gauge of the copper serving a customer or the quality of that copper. Next-door neighbors can have a significantly different DSL experience if they have different wire gauges in their copper drops, or if the wires at one of the homes have degraded over time. DSL also differs by technology. A telco might operate different DSL technologies out of the same central office and see different performance from ADSL versus VDSL. There really is no way for a telco to predict the DSL speed available at a home without installing the service and testing the actual speed achieved.
  • Fixed wireless and fixed cellular broadband have similar issues. Just like DSL, the strength of a signal from a wireless transmitter decreases with distance. However, distance isn’t the only issue, and things like foliage affect a wireless signal. Neighbors might have a very different fixed wireless experience if one has a maple tree and the other has a pine tree in the front yard. To make defining a speed even harder, speeds on wireless systems are also affected to some degree by precipitation, humidity, and temperature. Anybody who’s ever lived with fixed wireless broadband understands this variability. WISPs these days also use multiple spectrum blocks, so the speed delivered at any given time is a function of the particular mix of spectrum being used. The toy sketch after this list shows how these factors can stack up very differently for two neighbors.
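Here is that back-of-the-envelope sketch. None of these numbers come from a real engineering model; they’re invented purely to illustrate how distance, wire gauge, copper condition, foliage, and weather can stack up differently for two homes sitting side by side.

```python
# Toy illustration only (hypothetical numbers, not a real engineering model) of
# why two neighbors at nearly the same distance can see very different speeds.

def toy_dsl_estimate(loop_miles: float, gauge_awg: int, copper_ok: bool) -> float:
    """Rough illustrative DSL estimate in Mbps: longer loops, thinner wire,
    and degraded copper all drag the number down."""
    base = 25.0                                    # hypothetical short-loop speed
    distance_penalty = 6.0 * loop_miles            # falls off with loop length
    gauge_penalty = 3.0 if gauge_awg == 26 else 0  # thinner 26 AWG vs 24 AWG drop
    condition_penalty = 0 if copper_ok else 5.0    # corroded or spliced copper
    return max(0.0, base - distance_penalty - gauge_penalty - condition_penalty)

def toy_wireless_estimate(distance_miles: float, trees_in_path: int, raining: bool) -> float:
    """Rough illustrative fixed-wireless estimate in Mbps."""
    base = 50.0
    speed = base - 8.0 * distance_miles - 6.0 * trees_in_path
    if raining:
        speed *= 0.8                               # weather shaves off some throughput
    return max(0.0, speed)

# Two next-door neighbors on the same loop length, with different copper drops:
print(toy_dsl_estimate(2.0, 24, True))       # 13.0 Mbps
print(toy_dsl_estimate(2.0, 26, False))      # 5.0 Mbps

# Two neighbors, one with a clear shot to the tower, one behind trees in the rain:
print(toy_wireless_estimate(3.0, 0, False))  # 26.0 Mbps
print(toy_wireless_estimate(3.0, 2, True))   # ~11.2 Mbps
```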

Regardless of the technology being used, one of the biggest issues affecting broadband speeds is the customer’s home. Customers (or ISPs) might be using outdated and obsolete WiFi routers or modems (like Charter did for many years in upstate New York). DSL speeds are affected just as much by the condition of the inside copper wiring as by the outdoor wiring. The edge broadband devices can also be an issue: when Google Fiber first offered gigabit fiber in Kansas City, almost nobody owned a computer capable of handling that much speed.

Any way we try to define broadband speeds, even by individual home, is still going to be inaccurate. Trying to map broadband speeds is a perfect example of trying to fit a square peg into a round hole. It’s obvious that we can do a better job of this than we are doing today, but I pity any fixed wireless ISP that is somehow required to report broadband speeds by address, or even by a small polygon. A WISP only knows the speed at a given address after going to the roof of the home and measuring it.

The more fundamental issue here is that we want to use the maps for two different policy purposes. One goal is to be able to count the number of households that have broadband available. The proposed mapping ideas will make this counting function better, within all of the limitations of the technologies I described above.

But mapping is a dreadful tool when we use it to draw lines on a map defining which households can get grant money to improve their broadband. At that point the mapping is no longer a theoretical exercise, and a poorly drawn line will block homes from getting better broadband. None of the mapping ideas will really fix this problem, and we need to stop using maps when awarding grants. It’s much easier to decide that faster technology is better than slower technology. For example, grant money ought to be available for anybody who wants to replace DSL on copper with fiber. I don’t need a map to know that is a good idea. The grant process can use other ways to prioritize areas with low customer density without relying on crappy broadband maps.

We need to use maps only for what they are good for – to get an idea of what is available in a given area. Mapping is never going to be accurate enough to use to decide which customers can or cannot get better broadband.
