It’s now illegal in the US for robocallers to use AI-generated voices, thanks to a new ruling by the Federal Communications Commission on Thursday.
In a unanimous decision, the FCC expanded the Telephone Consumer Protection Act, or TCPA, to cover robocall scams that use AI voice clones. The new rule goes into effect immediately, allowing the commission to fine the companies behind such calls and block the telephone providers that carry them.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” FCC chair Jessica Rosenworcel said in a statement on Thursday. “We’re putting the fraudsters behind these robocalls on notice.”
The move comes a few days after the FCC and New Hampshire attorney general John Formella identified Life Corporation as the company behind the mysterious robocalls that imitated President Joe Biden ahead of the state’s primary election last month. At a Tuesday press conference, Formella said that his office had opened a criminal investigation into the company and its owner, Walter Monk.
The FCC first announced last week that it planned to outlaw AI-generated robocall scams by updating the TCPA. The agency has used the law in the past to go after junk callers, including the conservative activists and pranksters Jacob Wohl and Jack Burkman. In 2021, the FCC fined them more than $5 million for conducting a massive robocalling scheme to discourage people from voting by mail in the 2020 election.
“While this generative AI technology is new, and it poses a lot of challenges, we already have a lot of the tools that we need to grapple with that challenge,” Nicholas Garcia, policy counsel at Public Knowledge, tells WIRED. “We can apply existing laws like the TCPA, and a regulatory agency like the FCC has the flexibility and the expertise to go in and respond to these threats in real time.”