As a Matter of Fact: The Harms Caused by Election Disinformation

Introduction

In America, whatever our background, color, or zip code, we value our freedom. Generation after generation has fought for the freedom to have a say in decisions that impact our lives—the freedom to participate fully in our country. But in recent years, a small faction has grown increasingly skilled at spreading lies about our elections, lies that targeted Black communities and other communities of color to suppress their votes, lies that fueled a deadly attack on our Capitol in January 2021 to disrupt the peaceful transfer of power, lies that threaten to suppress votes and undermine public confidence in future elections. This intentional use of false information to affect the participation of voters in elections is known as “election disinformation.”

The United States is at a critical juncture. More than 1 in 3 U.S. residents—and nearly 80% of Republicans—wrongly believe that President Joe Biden did not legitimately win the election, and a majority say they “do not have confidence that elections reflect the will of the people.” Donald Trump’s Big Lie is working, and we have to respond. Just as we came together last year, rising up to vote safely and securely in record numbers during a global pandemic, we must now rise up to stop election disinformation efforts in future elections. This report is a game plan for success.

As online election disinformation has increased, Common Cause Education Fund’s commitment to monitoring and stopping it has likewise increased. As part of our plan to combat election disinformation, Common Cause Education Fund has prepared this report to explain the problem of election disinformation in detail and propose commonsense public and corporate policy reforms to reduce the harmful impacts of election disinformation in future elections. The report’s final section is a series of state, federal, and corporate reforms to help stem the flow of election disinformation that is undermining Americans’ faith in the nation’s elections. Reform recommendations detailed in the report include the following:

  • Social media companies must strengthen their policies around combating content designed to undermine our democracy, including by providing users with authoritative information regarding voting and elections, reducing the spread and amplification of election disinformation, and providing greater transparency concerning their content moderation policies and practices.
  • Congress and state legislatures should amend voting rights laws to explicitly prohibit intentional dissemination of false information regarding the time, place, or manner of elections or the qualifications or restrictions on voter eligibility, with the intent to impede voting.
  • Congress and state legislatures should update campaign finance disclosure laws for the digital age, to include “paid for by” disclaimers on digital advertising and effective provisions shining a light on money transferred between groups to evade disclosure.
  • Congress and state legislatures should pass comprehensive data privacy legislation to protect consumers from the abusive collection, use, and sharing of personal data.
  • Congress should enact legislation strengthening local media and protecting public access to high-quality information about government, public safety, public health, economic development, and local culture.
  • Congress should pass legislation to protect researchers’ and watchdog journalists’ access to social media data, enabling researchers to study social media platform practices without fear of interference or retaliation from social media companies.
  • Congress should pass legislation to prohibit discriminatory algorithms on online platforms and to create greater transparency about how these algorithms operate.
  • The White House and governors in states around the nation must play a leading role in combating election disinformation, including by issuing executive orders directing agencies with enforcement, rule-making, and investigatory authorities to use these capabilities in combating election disinformation.

Election Disinformation Overview

What Is Election Disinformation?

Broadly, election disinformation refers to intentional attempts to use false information to affect the participation of voters in elections. There is a long history of tactics used to disenfranchise voters, and our previous reports detail how flyers, billboards, and other offline tactics are used to tell voters incorrect information that could prevent them from participating in an election. These reports also highlighted some of the emerging online digital tactics used to spread election disinformation, including email, the web, and Facebook, which were just gaining mainstream popularity.

“Information disorder” is an emerging term of art used by researchers and media experts that encompasses three related terms:

• Disinformation is content that is false (even if it contains some truth) and deliberately created to harm a person, social group, organization, or country.

• Misinformation is false information, but it is differentiated from disinformation by lacking an intent to harm any person, group, or organization.

• Malinformation is content that is accurate but is intentionally manipulated to cause harm, including voter suppression or voter confusion.

Misinformation

Misinformation is false information, but it is differentiated from disinformation by lacking an intent to harm any person, group, or organization. While it is less intentional, it can be equally harmful. Examples of misinformation include inaccuracies in dates or statistics or incorrectly identified photo captions. Anyone encountering the misinformation could believe it and draw conclusions from it, even if the content provider was not intending to misinform them.

Disinformation

Disinformation content is false and deliberately created to harm a person, social group, organization, or country. Disinformation is deliberately and often covertly spread to influence public opinion and actions, obscure or alter voting, or provide cause for outrage. Disinformation may contain some true facts, but those facts are either taken out of context or combined with falsehoods to create and support a specific intended message.

Malinformation

Malinformation is content that is accurate but is intentionally manipulated to cause harm. This includes misrepresenting the context of a true news story, doxing (releasing personal information like addresses and phone numbers of an individual online to intimidate them), or selectively leaking correspondence.

Who Is Spreading Election Disinformation and Why?

Few who intentionally spread election disinformation would publicize this fact because the behavior is sometimes illegal and always despicable. The ability of individuals to anonymously spread election disinformation is part of the problem—and strengthening transparency laws as recommended later in this report is part of the solution. Nevertheless, here is what we know about those spreading election disinformation in recent years.

Both foreign and domestic actors have used—and likely will continue to use—election disinformation. During the 2016 elections, the Russian Internet Research Agency created numerous posts on multiple social media platforms. According to the U.S. Senate Select Committee on Intelligence, this foreign interference was “at the direction of the Kremlin” and created social media content in support of then-candidate Trump and against Hillary Clinton. In particular, the content was “principally aimed at African-Americans in key metropolitan areas.” Russian disinformation efforts included the use of the Facebook page Blacktivist, which purported to be a Black empowerment page and garnered 11.2 million engagements with Facebook users. Both advertising and organic (non-ad) content were published through this program. This Russian social media content was designed to drive divisions between voters and cause general political instability in the United States, a tactic that differed from more direct efforts to disenfranchise voters used by some other purveyors of election disinformation.

A number of social scientists are working to understand the psychology behind individuals spreading disinformation. In our observations, gleaned from over 15,000 volunteer hours spent monitoring social media for mis- and disinformation during the 2020 election cycle, we have found that election misinformation is often spread by people sincerely trying to be helpful in a climate of uncertainty and distrust (particularly regarding the USPS and its ability to manage vote by mail in the 2020 elections), while disinformation is spread by individuals with partisan goals, including in intraparty contests such as the Democratic presidential primary.

In an age of hyperpartisanship, spreading election disinformation can both serve to attack your political opponents and show that you are aligned with other members of your political tribe. Election disinformation—in particular, the narrative of a rigged election and pervasive voter fraud committed by Democrats—existed long before the rise of Donald Trump but now has become party orthodoxy. You can signal that you are a Trump-supporting “MAGA Republican” (an acronym for Trump’s campaign slogan “Make America Great Again”) by spreading stories that reinforce a narrative (however false) about a political system rigged against other MAGA Republicans. This creates a self-reinforcing cycle of distrust in government and elections: a September 2021 poll showed that 78% of Republicans believe that Joe Biden did not win the presidency. Numerous states and counties are proceeding with sham ballot reviews—even in areas where Trump won decisively. Among 15 Republican candidates currently running for secretary of state in five battleground states, 10 have “either declared that the 2020 election was stolen or called for their state’s results to be invalidated or further investigated.” Election disinformation is now spread by activists and candidates in the same way that political messaging and issue priorities used to be.

State and Federal Laws Regulating Election Disinformation

Several different bodies of law provide tools for fighting election disinformation. A primary purpose of election disinformation is to suppress and sometimes intimidate voters. Consequently, election laws prohibiting voter intimidation and false election speech play an important role in fighting election disinformation. Several other bodies of law are also critically important to the fight. Strong campaign finance disclosure laws can shine the light of publicity on those seeking to undermine our elections from the shadows and help ensure existing laws are enforced. Communications laws, consumer protection laws, media literacy laws, and privacy laws can all play a part in effectively regulating and deterring election disinformation.

Voter Intimidation and False Election Speech Laws

Federal law and laws in nearly every state contain provisions explicitly prohibiting voter intimidation, with many of these laws being rightly interpreted as prohibiting election disinformation. Some states have enacted laws explicitly prohibiting various types of false election-related speech—e.g., false statements about voting procedures/qualifications, candidates, incumbency, endorsements, veteran status, or ballot measure effects. In this report, we focus only on the first of these types: laws prohibiting false statements about voting procedures and qualifications such as where and when to vote. Our reasons are twofold and related to one another. First, the veracity of statements about voting procedures and qualifications (e.g., the date of the election, the hours polls are open) is easily ascertainable, and determining such veracity can be done in an entirely nonpartisan, objective fashion. By contrast, determining the veracity of statements about a candidate (e.g., a candidate’s stance on an issue) is often more subjective, as reflected by the rating systems some prominent fact-checkers use.

Second, and relatedly, courts have for years been divided on the constitutionality of laws prohibiting false speech characterizing candidates and ballot measures, with at least two federal appellate courts in recent years striking down such laws as unconstitutionally vague and overbroad. Courts are much more likely to uphold as constitutionally permissible narrower laws prohibiting false statements about the procedures and qualifications of voting.

Federal Voter Intimidation and False Election Speech Laws

The following is a summary of voter intimidation and false speech laws at the federal level and in numerous states. The recommendations section at the end of this report identifies the best features of these laws and urges their adoption throughout the United States.

The National Voter Registration Act of 1993 makes it a crime to knowingly and willfully intimidate or threaten any person for voting, registering to vote, or aiding others to register and vote. Another federal criminal statute similarly provides that “[w]hoever intimidates, threatens, coerces, or attempts to intimidate, threaten, or coerce, any other person for the purpose of interfering with the right of such other person to vote” in a federal election has committed a crime subject to fines or imprisonment. The DOJ explains that this statute “criminalizes conduct intended to force prospective voters to vote against their preferences, or refrain from voting, through activity reasonably calculated to instill some form of fear.” Conspiracy to “injure, oppress, threaten, or intimidate any person…in the free exercise or enjoyment of any right or privilege secured to him by the Constitution or laws of the United States”—including the right to vote—is a felony under federal law. This criminal code provision covers voter suppression schemes, including “providing false information to the public—or a particular segment of the public—regarding the qualifications to vote, the consequences of voting in connection with citizenship status, the dates or qualifications for absentee voting, the date of an election, the hours for voting, or the correct voting precinct.”

In addition to the federal criminal code provisions detailed in the preceding paragraphs, the Voting Rights Act of 1965 and other civil rights laws also prohibit disinformation activities that amount to voter intimidation or suppression. The Voting Rights Act provides that no person “shall intimidate, threaten, or coerce, or attempt to intimidate, threaten, or coerce any person for voting or attempting to vote.”

State Voter Intimidation and False Election Speech Laws

The federal laws detailed earlier prohibiting voter intimidation and suppression—including some disinformation tactics—generally apply to any election with candidates for federal office on the ballot. Nearly every state, likewise, has laws prohibiting voter intimidation and suppression, applicable to elections even when no federal office candidates are on the ballot. A few states have laws explicitly regulating false election-related speech, and a few others have interpreted more general anti-intimidation laws to prohibit false election speech. APPENDIX I in the report summarizes the voter intimidation and false speech laws of several states.

Among the best state laws worthy of emulating around the nation, Colorado law provides that no person shall knowingly or recklessly “make, publish, broadcast, or circulate or cause to be made, published, broadcasted, or circulated…any false statement designed to affect the vote on any issue submitted to the electors at any election or relating to any candidate for election to public office.” The Colorado attorney general’s guidance makes clear that disinformation tactics—including “misleading phone calls, texts, or emails to a voter”—can constitute illegal voter intimidation. Similarly, Hawaii law provides that any person who “knowingly broadcasts, televises, circulates, publishes, distributes, or otherwise communicates…false information about the time, date, place, or means of voting with the purpose of impeding, preventing, or otherwise interfering with the free exercise of the elective franchise” has committed illegal election fraud. And Virginia explicitly outlaws communicating to a “registered voter, by any means, false information, knowing the same to be false, intended to impede the voter in the exercise of his right to vote,” including information “about the date, time, and place of the election, or the voter’s precinct, polling place, or voter registration status, or the location of a voter satellite office or the office of the general registrar.” Importantly, Virginia law includes a private right of action for registered voters to whom such false information is communicated, enabling them to seek an “injunction, restraining order, or other order, against the person communicating such false information.”

For an overview of Campaign Finance Laws, Federal Communications Laws, Federal Consumer Protection Laws, State Media Literacy Laws, and State Privacy Laws, read Section 2 of the full report.

Select Social Media Civic Integrity Policies

Social media platforms from Facebook to Twitter and YouTube to TikTok have civic integrity policies in place designed to combat disinformation related to elections and other civic processes. These policies often work in tandem with the platforms’ other policies, which address things like fraud, violent content, hate speech, and other content the platform may find objectionable. A piece of content may violate multiple policies at once, like a post advocating violence against a specific group.

Platform civic integrity policies primarily focus on prohibiting content that is misleading about how to participate in the civic process. This includes misleading statements or information about the official announced date or time of an election, misleading information about requirements to participate in an election, and content containing statements advocating for violence because of voting, voter registration, or the administration or outcome of an election.

These policies are not exhaustive, though, and have significant loopholes that allow certain disinformation-oriented content to stay up on the platforms. This includes narratives contributing to voter suppression, disinformation from world leaders and public figures, and political ads.

We summarize only the policies that Facebook, Twitter, and YouTube implemented during the 2020 elections and soon after. We also discuss how inconsistent enforcement and policy loopholes led to the spread of disinformation during and after the election, how the actions taken (or not taken) by the platforms contributed to the insurrection at the Capitol complex on January 6, and how the platforms reacted in the aftermath. Unfortunately, Facebook and Twitter have stopped enforcing existing policies to the degree they did during the 2020 election. Our research shows that much content is being left up on these platforms that would have been taken down months ago.

Facebook

It has been well documented that Facebook is inconsistent in its enforcement of existing policies. In September 2020, the Wall Street Journal flagged over 200 pieces of content for Facebook that appeared to violate the platform’s rules against the promotion of violence and dangerous information, only to have Facebook respond by taking down around 30 pieces of flagged content and later conceding that more than half of the flagged content should have been taken down for violating its policies.

In addition to inconsistent enforcement, Facebook also has two major loopholes that contribute significantly to the spread of disinformation on the platform: the newsworthiness exemption and its policy of not fact-checking political ads. The newsworthiness exemption applies to any content that Facebook believes “should be seen and heard” and that meets a balancing test weighing the public benefit of leaving the content up against the harm it could cause. This test is extremely subjective, and that subjectivity is reflected in Facebook’s use of the newsworthiness exemption over time.

Facebook’s decision to exempt political ads has proven to be at least as controversial as its newsworthiness exemption. This loophole is straightforward: Facebook will not fact-check political advertisements on the platform. During the 2020 election, then-candidate Donald Trump took advantage of this loophole several times and placed ads on Facebook intended to mislead voters about then-candidate Joe Biden and his son Hunter. If Facebook is to get serious about cracking down on disinformation, this loophole is one of the first it needs to address. This laissez-faire approach to content moderation allowed bad actors to spread content that contributed to the January 6 insurrection.

Twitter

Although Facebook tends to dominate the conversation about content moderation practices and the spread of disinformation on social media, Twitter is guilty of many of the same things: inconsistent enforcement of existing policies, loopholes in policies that allow for the spread of disinformation, and relatively weak policy responses to the January 6 insurrection. While Twitter may want to be viewed as better on content moderation than its peers, it has been just as slow to deal with the misinformation found all over the platform.

Like Facebook’s newsworthiness exemption, Twitter’s “public interest exception” is a major loophole that contributes significantly to the spread of disinformation. This exception applies to tweets from elected and government officials that Twitter believes “directly contribute” to the understanding or discussion of a matter of public concern. Tweets that are found to be in the public interest but break other rules may have a label put on them but will not be taken down. Even though the platform insists that this does not mean public officials can post whatever they want (even tweets that violate its rules), in reality, public officials generally do get away with posting whatever they want.

YouTube

Compared to Facebook and Twitter, YouTube’s policies have not been scrutinized to the same degree, but like the other social media platforms mentioned here, YouTube is also inconsistent in its enforcement of existing policies. However, instead of having one or two major loopholes through which disinformation is able to spread, YouTube’s policies are overall far more permissive than those of Facebook and Twitter.

YouTube’s inconsistency in policy enforcement is well documented. In 2019, the platform announced that it would be making changes to its hate speech policy and taking down thousands of videos that were in violation of the new policy, but Gizmodo found that many of the videos remained up. To make matters worse, YouTube’s recommendation algorithm frequently surfaces content that violates the platform’s own policies.

Recommendations

Federal laws and the laws of many states contain important provisions to reduce the harmful impact of election disinformation. Social media company civic integrity policies are likewise critically important. These current laws and policies leave much room for improvement. There is no single policy solution to the problem of election disinformation. We need strong voting rights laws, strong campaign finance laws, strong communications and privacy laws, strong media literacy laws, and strong corporate civic integrity policies. In Section 4 of the full report, we recommend reforms in all these policy areas, highlighting both pending legislation that should be passed and existing state laws that should be replicated in other jurisdictions.

Conclusion

For decades, Common Cause Education Fund has worked on public education and systemic reforms to build a better democracy. The harmful impact of election disinformation makes it clear that our core programmatic work is needed now more than ever. We must and will educate and mobilize our communities to curb the harmful, rapid growth of election disinformation. Doing so will help deliver on America’s promise of a functioning 21st-century democracy that’s open, accessible, responsive, and accountable to the people. We need your support and your activism to fix the problem of election disinformation. Together, we can build a democracy that works for everyone.
