In an effort to curb the recent uptick in scams driven by AI voice cloning technology, the Federal Communications Commission in February adopted a new rule to protect consumers targeted by robocalls.
The rule makes the use of AI-generated voice cloning in robocalls illegal, strengthening state attorneys general's arsenal for going after the bad actors behind these calls. It also opens the door for consumers to file lawsuits and potentially recover damages from these incidents.
The FCC’s action comes amid a surge in recent years of increasingly sophisticated scams that con consumers by imitating loved ones, celebrities and even political figures. In January, New Hampshire voters received calls from a voice impersonating President Joe Biden urging them not to vote in the primary election.
These so-called “deepfakes” harness AI to generate ultra-realistic voices, images or videos of real-life figures doing or saying anything their creator wants, opening the door to a range of deceptive practices, including deceiving voters during an election year.
Industry leaders have repeatedly called on lawmakers to step in and set clear guardrails for AI, while tech companies continue to make efforts to detect and regulate problematic AI-generated content. Many say that neither Congress nor big tech has gone far enough to address the dangers AI poses. And AI continues to advance at breakneck speed.
Deepfakes are blurring the line between what is real and what is fake – further eroding the public’s trust and spawning a new breeding ground for reputational threats and other public relations crises.
While AI has made fakes harder to spot, there are skills we can employ to recognize and protect ourselves and our clients from bad actors.
- Review content carefully before sharing online. Ask yourself, “Is this plausible?” Avoid contributing to the wildfire spread of disinformation by vigilantly vetting before posting.
- Recognize the hallmarks of a scam. Scammers often use scare tactics or play on emotions to push their victims into taking action.
- Pay attention to unnatural sounds, cadences, pauses, and patterns. Be alert to out-of-the-ordinary speech patterns and word choices.
- Examine visual content more carefully. AI tools will often distort portions of content such as text, backgrounds, lighting, and facial expressions.
- Stay current. AI is improving rapidly, so these tips may become outdated within a matter of months. Actively educate yourself.
As AI technology continues to advance, it is critical that public relations professionals remain vigilant protectors of their clients’ reputations. That includes monitoring for misinformation and problematic deepfake content, as well as building out a robust response plan that emphasizes transparency and timeliness in the event of a crisis.