In November 2024, Elon Musk’s company, X, quietly filed a lawsuit in a Sacramento court to halt the implementation of one of California’s new, flagship anti-disinformation laws. The company is challenging the constitutionality of AB 2655, which makes social media companies responsible for election-related disinformation on their platforms.
The law, sponsored by the California Initiative for Technology and Democracy (CITED), a project of California Common Cause, represents some of the most assertive steps taken anywhere in the nation to address the dangers that Artificial Intelligence (AI) and disinformation pose to our elections.
The outcome of this case, X v. Bonta, will have nationwide implications for social media and for how election disinformation affects all of us. Here’s what you should know.
What happened?
Elon Musk and his company, X, are suing to halt the implementation of our anti-disinformation bill signed into law by Governor Newsom in September 2024.
Background:
“Deepfakes” are hyper-realistic but totally fake video and audio of public figures. With the rise of AI in the last few years, deepfakes have become more realistic, and the tools to make them have become more accessible, making them an even bigger problem, especially around elections.
Deepfakes have already destabilized national elections around the world, most recently the 2024 US presidential election. Even as the problem peaks, many technology and social media platforms have cut back their investments in trust and safety teams, walking away from any responsibility to address it.
This has left voters to pick up the pieces, wondering which images, audio, and video they can trust. That’s why CITED, after identifying this issue as a serious threat to our democracy, got to work on AB 2655.
What our law does:
AB 2655, authored by Assemblymember Marc Berman, holds social media companies accountable for the disinformation and deepfakes that spread on their platforms.
Specifically, it combats online disinformation in our elections by requiring social media platforms to label generative AI deepfakes that could deceive voters as digitally altered or fake content, or to remove them, and by prohibiting the posting of the most harmful deepfakes close to Election Day.
It also allows candidates, elected officials, elections officials, the Attorney General, and district attorneys or city attorneys to seek injunctive relief against a large online platform that fails to comply with the law.
Why this lawsuit matters:
This is all about people power vs. billionaires who want to profit off of lies. But our democracy and the people’s right to accurate information about our elections are not up for negotiation.
Just like with climate standards and auto emissions, passing a law in California can have a nationwide impact. Silicon Valley is the tech capital of the United States, so our anti-disinformation laws have the power to set the national standard for how states respond to election disinformation. This lawsuit carries similar weight: it will determine whether our law is implemented at all.
What’s next?
We’re grateful to Attorney General Rob Bonta and his very capable team, as they defend AB 2655 in court. At CITED, we’re forging ahead too. We will not allow billionaire oligarchs to chip away at the integrity of our democratic institutions for their financial and political gain.
The only lasting solution is passing common-sense regulations to make sure our democracy is safe from AI-driven disinformation. Fortunately, we’re on the case and working to strengthen these legal safeguards against election disinformation in 2025.
Keep up with our progress by signing up for our updates!