Our democracy can only thrive when voters have access to accurate information. But deepfakes produced using generative artificial intelligence (AI) are being weaponized to spread disinformation and suppress votes.
A new Pennsylvania bill would regulate the use of artificial intelligence in political advertising, increasing transparency and accountability.
Deepfakes are digitally altered videos, audio recordings, or images that depict events or statements that never actually occurred. With this AI technology, you can literally put words in other people’s mouths and expressions on their faces. That is troubling, especially in the context of elections.
In 2018, filmmaker Jordan Peele produced a short video demonstrating the dangers of deepfakes.
Unfortunately, in the six years since that video was released, deepfakes have only become cheaper and easier to produce — and troublingly, far more convincing.
AI technology is progressing quickly, and it is becoming more difficult to distinguish deepfakes from reality. A video that might have taken a large budget and full production team to create a few years ago can now be put together by everyday users with just a few clicks.
Deepfakes have already come into play in the 2024 presidential election. Take, for example, the deepfake video of Ron DeSantis declaring he was dropping out of the presidential race. And during New Hampshire’s primary, voters received a robocall impersonating President Joe Biden that told recipients not to vote in the presidential primary.
AI-generated content blurs the lines between fraud and free speech. On social media, people are free to express their ideas and views within the parameters of a platform’s policies.
Congress, the Federal Communications Commission (FCC), and the Federal Election Commission (FEC) have discussed proposals that would regulate the use of generative AI in political advertisements, but they are unlikely to act before the 2024 general election. In the absence of federal action, states across the country have passed legislation regulating the use of AI in elections.
Under Section 230 of the 1996 Communications Decency Act, online platforms and other providers of interactive computer services are immune from liability for content posted by their users and may set their own standards for how they moderate and remove content. This leaves users responsible for their own posts and has sparked debate over the balance between fostering online freedom and mitigating harmful content.
The Pennsylvania House State Government Committee has advanced legislation (HB 2353) to stop the spread of AI-generated deepfakes in our elections. As originally written, HB 2353 was a solid starting point for regulating AI-generated political content; Common Cause Pennsylvania recommended strengthening the bill with several provisions, which the committee adopted.
If you live in Pennsylvania, contact your state lawmakers and urge them to pass HB 2353 in its current form to protect the future of our democracy.
No matter where you live, you can talk to your friends and family about deepfakes and encourage them to check the accuracy of the information they see online. You can also report disinformation at https://reportdisinfo.org/.
Common Cause is working to build the resiliency of our democracy by addressing threats like disinformation.
For updates, follow us on X (formerly Twitter), Instagram, Threads, Facebook, and TikTok.