The Federal Communications Commission (FCC) has proposed a $6 million fine against the scammer who used voice-cloning technology to impersonate President Biden in a series of illegal robocalls during the New Hampshire primary election. The action is as much a warning to would-be high-tech scammers as it is a punishment.
Background on the Scam
In January, many voters in New Hampshire received a call purporting to be from President Biden, urging them not to vote in the upcoming primary. The message was, of course, fake, created with voice-cloning technology that has become increasingly accessible over the last couple of years.
While faking a voice has been possible for a long time, generative AI platforms have made it remarkably easy: dozens of services offer voice cloning with minimal restrictions or oversight. With just a few minutes of President Biden's speeches, readily available online, it is relatively simple to produce a convincing clone of his voice.
The Problem with Robocalls and AI
The FCC and several law enforcement agencies have made clear that using these fake voices to suppress voters via robocalls is unacceptable. Such calls are already illegal, but the growing availability of generative AI technology makes this kind of misuse much easier to attempt and harder to prevent.
The Culprits Behind the Scam
Steve Kramer, described as a "political consultant," was the primary perpetrator of the scam. He enlisted the help of the shady Life Corporation and the calling services of various telecom companies, including Lingo (which has also operated under names such as Americatel, BullsEyeComm, Clear Choice Communications, Excel Telecommunications, Impact Telecom, Matrix Business Technologies, Startec Global Communications, Trinsic Communications, and VarTec Telecom).
The FCC’s Stance
Loyaan Egal, chief of the FCC’s Enforcement Bureau, stated in a press release: "We will act swiftly and decisively to ensure that bad actors cannot use U.S. telecommunications networks to facilitate the misuse of generative AI technology to interfere with elections, defraud consumers, or compromise sensitive data."
The Fine and Next Steps
The proposed $6 million fine is a significant sum, but it is better understood as a ceiling than as the amount that will actually be paid. The next step is for Kramer to respond to the allegations. Separate actions are underway against Lingo and the other telecom companies involved, which could result in fines or lost licenses.
AI-Generated Voices and Robocalls
In February, the FCC officially declared AI-voiced robocalls illegal, ruling that such calls count as using an "artificial" voice under existing robocall rules. That decision put anyone hoping to use voice-cloning technology for malicious purposes on clear notice.
What’s Next?
The FCC has made it clear that it will continue to take action against those using AI-generated voices in robocalls. The agency is working closely with law enforcement agencies to prevent the misuse of generative AI technology, ensuring that U.S. telecommunications networks are not used for malicious purposes.
Conclusion
This case highlights the growing concern over the misuse of voice-cloning technology and AI-generated voices in robocalls. As these tools become increasingly accessible, it is essential for regulatory bodies like the FCC to take a strong stance against their misuse. The proposed fine is a significant step toward deterring such scams, but it also underscores the need for continued vigilance and cooperation between regulators and law enforcement agencies.
Related Reading
- FCC Officially Declares AI-Voiced Robocalls Illegal: coverage of the agency's February ruling that AI-generated voices fall under existing robocall restrictions.
- AI Newsletter Launch: TechCrunch is launching an AI newsletter, which will provide updates on the latest developments in the field of artificial intelligence.