ISPs and AI

One of the most common questions I’ve been asked lately is what impact I think AI will have on the broadband industry.

All of the big ISPs have been actively pursuing the use of AI. For example, AT&T Labs says it is investigating the use of AI to optimize the customer experience and auto-heal the network. Comcast says that it is using AI to help process petabytes of data every day. Comcast also worked with Broadcom to develop the first broadband chip for nodes, amps, and modems that brings AI into the network. Verizon is working on an AI solution to improve the customer experience in the IVR systems that handle incoming calls. Charter is working AI into its customer interface. It’s also using AI to help customers generate commercials for advertising on the cable network.

Before talking about those uses, a basic primer on AI is needed. Most people are familiar with public AI platforms like ChatGPT or the Google Cloud Platform. No big corporations are using the open public versions of AI. Any data dumped into those systems can become available to other users. Instead, corporations are buying and implementing private versions of AI that they train using their own data. One of the common issues with public AI platforms is that AI will hallucinate and invent an answer to a question. However, hallucination can be reduced in private deployments where the user strictly controls the data.
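The idea of grounding answers in company-controlled data can be sketched very roughly. This is a toy illustration, not any ISP's actual system: all document names and text below are hypothetical. The key behavior is that the system only answers from its own corpus and declines otherwise.

```python
# Toy sketch of answering only from a private corpus (all names and
# text below are hypothetical): retrieve supporting documents first,
# and decline when nothing in the company's own data matches.

PRIVATE_CORPUS = {
    "outage-policy": "Credits are issued for outages longer than 4 hours.",
    "install-fees": "Standard installation is $99 and self-install kits are free.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword overlap retrieval over the private corpus."""
    words = set(question.lower().split())
    return [
        text for text in PRIVATE_CORPUS.values()
        if words & set(text.lower().split())
    ]

def answer(question: str) -> str:
    """Answer only from retrieved company data; otherwise escalate."""
    hits = retrieve(question)
    if not hits:
        return "No supporting data found; escalating to a human rep."
    return hits[0]

print(answer("What credits do you give for outages?"))
print(answer("Who won the game last night?"))
```

Real systems use far better retrieval than keyword overlap, but the refusal-when-ungrounded step is the part that keeps answers tied to controlled data.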

All of the big ISPs, and seemingly most companies that field a lot of customer calls, want to use AI to improve the customer experience. There are different approaches to using AI. One of the primary uses is to eliminate phone menus that make customers wade through options to reach the right person. AI can interpret a customer request and direct the call to the appropriate place. AI can also quickly pull together all information about a caller and put it at the fingertips of a customer service rep. Maybe the most important feature of AI is that a customer conversation can carry across different customer service reps, meaning that a caller doesn’t have to repeat basic information every time they are transferred.
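The menu-replacement idea above amounts to intent classification. Here is a deliberately simple sketch: the department names and keywords are hypothetical, and a real deployment would use a trained language model rather than keyword scoring, but the routing logic is the same shape.

```python
# Hedged sketch of menu-free call routing (departments and keywords
# are hypothetical): score a caller's free-form request against each
# department's vocabulary and route to the best match.

ROUTES = {
    "billing": {"bill", "charge", "payment", "refund", "invoice"},
    "repair": {"outage", "slow", "down", "broken", "modem"},
    "sales": {"upgrade", "plan", "price", "new", "install"},
}

def route_call(request: str) -> str:
    """Pick the department whose keywords best match the request."""
    words = set(request.lower().split())
    scores = {dept: len(words & kws) for dept, kws in ROUTES.items()}
    best = max(scores, key=scores.get)
    # Fall back to a human agent when nothing matches at all.
    return best if scores[best] > 0 else "general-agent"

print(route_call("My internet is slow and the modem keeps rebooting"))
```

The fallback branch matters: when confidence is effectively zero, handing the call to a person avoids the misrouting frustration described above.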

There are companies in the country that have fully automated the customer interface with AI, but it’s not likely that any big ISP has gotten that bold yet. All of the feedback I’ve heard is that it’s still far too easy for an AI system to badly misinterpret what a customer wants. The same goes for attempts to fully automate an online chatbot. Nobody seems to have come close to perfecting this yet, and doing it clumsily is frustrating for customers. But who knows, maybe in the future, most customer interfaces could be handled entirely by an AI representative.

Big ISPs are all investigating the use of AI in the network. The most obvious use of AI is to interpret real-time network data to detect problems and analyze network quality. For many years, networks have used alarms to identify problems. One of the issues with an alarm system is that ISPs get constantly hit with minor alarms, and it’s not always easy to pick out the ones that matter. One of the hopes for AI is to look deeper at the performance of network equipment and identify problems long before an alarm is triggered.
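Catching trouble before an alarm trips is essentially anomaly detection against a recent baseline. A minimal sketch, with a made-up utilization metric and thresholds, shows the difference between a hard alarm and a statistical early warning:

```python
# Sketch of flagging degradation before a classic alarm fires (the
# metric and thresholds are hypothetical): a reading that drifts far
# from its recent baseline is suspicious even while still well below
# the level that would trip a traditional alarm.
from statistics import mean, stdev

ALARM_THRESHOLD = 90.0  # e.g. node utilization % that trips a hard alarm

def early_warning(samples: list[float], latest: float, z_limit: float = 3.0) -> bool:
    """True if `latest` is a statistical outlier versus the recent
    baseline, even though it has not crossed the hard alarm threshold."""
    if latest >= ALARM_THRESHOLD:
        return False  # the classic alarm already handles this case
    baseline, spread = mean(samples), stdev(samples)
    return spread > 0 and (latest - baseline) / spread > z_limit

history = [41.0, 40.5, 42.0, 41.2, 40.8, 41.5]
print(early_warning(history, 55.0))  # far above baseline, flags early
print(early_warning(history, 42.0))  # normal fluctuation, no flag
```

Production systems learn seasonality and correlate many metrics at once, but the underlying trick of comparing each reading to a learned baseline rather than a fixed threshold is the same.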

ISPs are also starting to use AI for load balancing. It’s easy to think of broadband usage on a network as a steady state, but the reality is that usage spikes and dives erratically from second to second. AI can be used to examine usage on all segments of a network. For example, there are numerous paths from the network core in a fiber or cable network, and AI can examine all of them in real-time, as well as understand how usage spikes from neighborhoods can overwhelm other parts of the network.
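At its simplest, the load-balancing decision described above is a headroom comparison across parallel paths. The path names and capacities below are invented for illustration; real balancers also weigh latency, failover state, and predicted demand.

```python
# Minimal load-balancing sketch (path names and capacities are
# hypothetical): given per-second utilization on parallel paths out
# of the network core, steer new traffic onto the path with the most
# spare capacity at that instant.

def pick_path(utilization: dict[str, float], capacity: dict[str, float]) -> str:
    """Return the path with the most headroom right now."""
    headroom = {p: capacity[p] - utilization[p] for p in capacity}
    return max(headroom, key=headroom.get)

capacity = {"ring-a": 100.0, "ring-b": 100.0, "ring-c": 40.0}
usage = {"ring-a": 82.0, "ring-b": 35.0, "ring-c": 10.0}  # Gbps, this second
print(pick_path(usage, capacity))  # ring-b has the most spare capacity
```

The point of applying AI is that "this second" changes constantly, so the comparison has to run continuously across every segment rather than once in a planning spreadsheet.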

The big temptation is to let AI take an active role in fixing problems. That idea makes a lot of network engineers nervous because AI is still nothing more than a series of algorithms created by programmers. It’s incredibly challenging for any programmer to create perfect programming, and the fear is that a network could get out of control in a way that humans would struggle to rein in without shutting the network down. It’s not hard to envision an automated AI repeatedly magnifying and compounding a network problem.

The last use of AI by ISPs is to automate functions done by people. None of the big ISPs are talking about this because doing so sparks a lot of anxiety in the workforce. AI seems to be efficient at processing repetitive data or generating routine reports for management. It’s becoming obvious that other industries like banking and insurance have already been able to reduce some staff due to AI efficiencies. It’s likely that ISPs are already quietly reducing some clerical and middle-management staff due to AI. This is the part of AI that makes workers nervous. AI is more likely to replace white-collar workers and middle management than hands-on technicians. But this is going to be done quietly, at least until one of the big ISP CEOs spills the beans on an investor call.

It’s going to be a while until any of these benefits move downhill to smaller companies. AI hardware and software are prohibitively expensive, and smaller ISPs will have to wait until AI vendors offer generic solutions.

2 thoughts on “ISPs and AI”

  1. Not all AI is LLMs, the flashy interactive language-y technology behind things like ChatGPT.

    “Hallucinations” are a structural artifact of LLM-based AI, not something that you can control with a restricted dataset. You can sanity-check answers, which is where these systems all need to go; it just makes it way less AI-y.

    Restricting the data universe means that you’re not going to _also_ get some random crap of unknown truth from godknowswhere on the Internet as part of your answer, which is a related problem. (Although it can make the hallucinations much spicier!)

    The big problem with sophisticated AI is that you can’t really tell how it produces its answers. That’s exactly why it looks like magic. (They’re doing work to make the LLM AIs expose steps they took to produce an answer, but I don’t think they’re going to get to actual provenance any time soon. And, it’s a very, very, very fancy set of predictions based on pattern matching, so the amount of actual computation that takes place is not what most people expect in terms of the very real… and confident… sounding responses produced.)

    Generally, companies are going to find out that having systems that produce answers — that they can’t establish confidence in — is of much more limited utility than they were led to believe. That’s not to say that AI and ML can’t do amazing things that are often very useful… as long as you’re OK with the risks.
