FCC proposes new rules to combat AI-generated robocalls

The Federal Communications Commission (FCC) has taken a significant step to protect consumers from the growing threat of AI-generated robocalls and robotexts. On Wednesday, the FCC proposed new rules regulating the use of artificial intelligence in these communications. The proposal addresses the increasing concerns over fraudulent activities and scams that exploit AI technology to deceive and misinform the public.

FCC calls for clear definitions and disclosures for AI robocalls

The FCC’s proposed rules include clear definitions for AI-generated calls and require callers to disclose their use of AI when obtaining prior express consent from consumers. This means that before a business or other entity can send AI-generated calls or texts, it must inform the recipient of its intention to use such technology. The rules also mandate that each AI-generated call include a disclosure, allowing consumers to identify these calls in real time. This measure provides an additional layer of protection that enables individuals to avoid calls that may pose a higher risk of fraud.

In addition to requiring disclosures, the FCC seeks to support technologies that can alert consumers to AI-generated robocalls and robotexts. These technologies would play a crucial role in helping consumers differentiate between legitimate and potentially harmful communications. By encouraging the development of such tools, the FCC aims to empower consumers to make informed decisions about the calls and texts they receive.

The proposal also includes provisions to protect the positive uses of AI. The FCC recognizes that AI technology can significantly benefit individuals with disabilities by enhancing their ability to use telephone networks. The new rules aim to ensure that these beneficial applications of AI can continue to thrive without the risk of liability under the Telephone Consumer Protection Act (TCPA).

Addressing AI-generated scams and misinformation

The FCC’s proposed rules on AI robocalls are part of a broader effort to combat AI-generated scams and misinformation. Recently, the FCC adopted a Declaratory Ruling clarifying that using voice cloning technology in robocall scams is illegal unless the called party has given prior express consent or an exemption applies. The Commission also proposed significant fines for entities using deepfake AI-generated voice cloning and caller ID spoofing to spread election misinformation.

The new rules build on these actions by introducing transparency standards for political ads on radio and television that use AI technology. This is a critical step in ensuring that consumers are fully aware when AI is being used to influence their decisions, particularly in the context of elections.

The FCC’s Notice of Proposed Rulemaking invites public comment on the proposed rules. The Commission seeks input on the definitions of AI-generated calls, the effectiveness of the required disclosures, and the potential impact on consumers and businesses. Additionally, the FCC has issued a Notice of Inquiry to gather more information on emerging technologies that could alert consumers to unwanted and illegal AI-generated calls and texts.