At its November 2023 monthly meeting, the FCC approved the release of a Notice of Inquiry on the impact of AI-generated calls. The press characterized this effort as an attempt to stop AI-generated nuisance robocalls, but the inquiry covers a much broader set of issues.
The FCC is currently bound by the Telephone Consumer Protection Act (TCPA), which Congress passed in 1991 and which prohibits “any telephone call to any residential telephone line using an artificial or prerecorded voice to deliver a message without the prior express consent of the called party” unless a statutory exception applies or the call is exempted by ruling or order from the FCC. Subsequent to passage of the law, the FCC determined that the rule applies to both calls and texts.
When that law was passed, the majority of complaints received at the FCC came from consumers annoyed by junk calls. The volume of junk calls is greater today than it was in 1991, but most people have learned how to deal with or ignore such calls.
Unfortunately, the FCC can’t just decide that all calls involving computer-generated voices are illegal. One of the big promises of AI is that customer service departments will be able to use AI to provide better customer service. On an inbound basis, AI can be used to eliminate the dreaded “If you are calling for X, press 1… for Y, press 2”. AI can instead direct a call to the right person by listening to what customers are seeking.
More troublesome for the FCC is that AI can also be used to send calls or texts to customers to answer specific customer questions. There are businesses that have already converted inbound customer service to AI, and it’s inevitable that AI will be used for outbound calls and sales.
One of the challenges faced by the FCC and all government agencies is how to define AI to distinguish it from uses of technology that are not AI. The National Artificial Intelligence Initiative Act of 2020 defined AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions.” The National Institute of Standards and Technology (NIST) defined AI as “the capability of a device to perform functions that are normally associated with human intelligence, such as reasoning, learning, and self-improvement.” Those definitions are talking about AI that is a lot more advanced than what is needed to place calls to people.
When Congress enacted the TCPA, it concluded that artificial and prerecorded voice messages constituted a greater nuisance to consumers than calls with live persons. The FCC is left with the unenviable task of deciding if AI calls are a nuisance if the AI call can interface with people in the same manner as a live person by responding to questions. How will people even know they are talking to an AI-generated voice?
One of the particularly troubling aspects of AI is that the technology is going to be able to generate a voice that is tailored to each called party. The AI caller can mimic the accent, slang, and other language characteristics that will make the call feel comfortable to the person being called. AI could even creepily mimic somebody a person knows, gaining instant credibility. AI seems like a particularly powerful tool in the hands of scammers.
I think one thing is almost guaranteed – AI scammers will quickly find a way around any specific rules formulated by the FCC. AI can be used to develop calling strategies that sidestep specific regulations. It’s going to be interesting to see what the FCC develops. The first-generation rules are almost sure to be inadequate, and this is a topic that will have to be continually revisited to keep up with changing technology and determined scammers.