Press Release
FCC Urged to Amend Regulation to Account for False AI Content
Today, Common Cause, the Leadership Conference on Civil and Human Rights, United Church of Christ Media Justice Ministry, and a number of other concerned organizations, filed comments with the Federal Communications Commission (FCC) in a rulemaking urging the agency to address disclosure and transparency of artificial intelligence (AI)-generated content in political advertisements on the nation’s airwaves. The groups warn of the dangers posed by the new technology in an era when political disinformation is widespread and the public has trouble identifying AI-generated content.
The other organizations joining in the comments include Access Now, Asian Americans Advancing Justice (AAJC), Japanese American Citizens League, National Black Child Development Institute (NBCDI), National Consumer Law Center (on behalf of its low-income clients), National Disability Rights Network (NDRN), NETWORK Lobby for Catholic Social Justice, Public Citizen, Sikh American Legal Defense and Education Fund, and The Trevor Project.
“AI deepfakes represent a clear and present danger when it comes to political advertising, and the FCC is rightfully taking it very seriously,” said Virginia Kase Solomón, President and CEO of Common Cause. “The American people deserve disclosure and transparency not only regarding who is bankrolling the political ads but also whether the content itself is real or AI-generated. There are simply too many bad actors – both foreign and domestic – seeking to manipulate voters and election outcomes not to address deepfakes now.”
“It is gratifying to work with these civil rights organizations on the intersection of media accountability and artificial intelligence,” said Cheryl A. Leanza, UCC Media Justice’s policy advisor. “The long-standing work to ensure transparency and responsibility in political advertising should be fully applied to artificial intelligence in advertising. The viewing and listening public has every right to know by what means, and by whom, they are being persuaded.”
The groups emphasize that deepfakes and other AI-generated content have the potential to dramatically increase election disinformation, further threatening the integrity of our elections. They further stress that Black, Latino, Asian, and other communities of color have traditionally been targets of voter suppression and disinformation and that those trends will continue with the use of AI deepfakes.
The comments highlight the critical need for public disclosure of these advertisements and for a database that is easy to use for both journalists and members of the public.
“Disinformation has evolved and so must the regulations to combat it and protect the public interest,” said Ishan Mehta, Common Cause Media and Democracy Program Director. “AI deepfakes represent the newest generation of disinformation and voter suppression, and to safeguard our democracy, Chair Rosenworcel and the FCC are to be commended for moving forward with new rules and a new database at a time when Congress and the Federal Election Commission have thus far failed to act.”
The comments argue that the Commission has the authority to adopt the proposed rules in the public interest and that the rules comply with the First Amendment.
To read the comments, click here.