
Big Tech is failing to fight election lies, civil rights groups charge

A coalition of 60 civil rights and consumer groups backed a report that offers a dim assessment of tech companies’ plans to protect the 2022 midterms

October 27, 2022 at 10:00 a.m. EDT
A mob loyal to President Donald Trump at the U.S. Capitol on Jan. 6, 2021. (John Minchillo/AP)

Two years ago, Silicon Valley’s biggest technology giants faced criticism from activists and voter suppression experts for not moving sooner to restrict Donald Trump’s accounts after his repeated false claims disputing the legitimacy of the 2020 election.

Now, a coalition of 60 consumer and civil rights groups says Meta, Twitter, TikTok and YouTube are just as ill-prepared to fight disinformation from politicians and other public figures whose public pronouncements about the 2022 midterms could undermine Americans’ faith in the electoral process or lead to violence.

The Change the Terms coalition, which includes the civil rights group Color of Change and the good-government group Common Cause, is releasing on Thursday a scathing 19-page analysis of the major tech companies’ election-related policies and whether they are living up to their pledges to fight disinformation ahead of the vote.

The report argues that the tech companies’ plans to fight disinformation and connect users with credible information arrived too late and were not aggressive enough to address proliferating false claims about widespread voter fraud or specific attacks against election officials.

“Election misinformation and disinformation are not anecdotal or seasonal. Lies — particularly the brand of election-denialism rhetoric that rose in 2020 — have been ubiquitous online for years, and this crisis has no end in sight,” the groups write.

“To treat ‘election-related’ disinformation in particular as episodic ignores that it is present year-round and shapes beliefs and opinions that lead to harassment of election officials, and election-related hoaxes and violence.”


The coalition’s report, which was written primarily by researchers and activists at the media advocacy group Free Press, offers a dim assessment of how the tech companies have wielded their power to shape public discourse during a high-stakes campaign season in which Americans will decide who represents them in the House of Representatives, a third of the Senate and numerous state offices. Millions of voters have already cast their ballots.

“We went to the brink of violence and saw the effect of social media’s influence on January 6th,” said Nora Benavidez, a senior counsel and the director of digital justice and civil rights at Free Press. “Despite that, the companies are doing no better. They have failed to clearly update their systems in time for the elections.”

YouTube spokeswoman Ivy Choi said in a statement that the company disagrees with the report’s characterization of the company’s policies. “Inciting violence against poll workers or alleging the 2020 U.S. presidential election was stolen or rigged is not allowed on YouTube, and we enforce our policies regardless of the speaker,” Choi said.

TikTok spokesman Ben Rathe said in a statement that the company removes election misinformation and that it provides access to authoritative information about elections through its “Election Center, which is available in more than 45 languages.”

Twitter spokeswoman Elizabeth Busby said in a statement that the company has “taken deliberate, meaningful steps to elevate credible, authoritative information about the U.S. midterms, and to ensure misleading information isn’t amplified.”

A spokesperson for Meta, which is the parent company of Facebook and Instagram, declined to comment on the report but referred a Washington Post reporter to an August news release outlining Meta’s intention to fight misinformation about how to vote and threats of violence or harassment against election workers.

The report follows a months-long campaign the coalition waged to encourage the tech companies to address hateful, misleading and violent content on their platforms. Over the summer, the coalition began meeting with executives at the four companies to discuss specific strategies they could adopt to address problematic content. Months later, the coalition argues, the companies have followed few of its recommendations.

Over the summer, the tech companies announced they were largely sticking to the strategies they’d deployed in past electoral cycles to fight false claims about the electoral process while elevating credible information. They pledged to ban and remove content that misleads users about how or when to vote while promoting accurate information about the electoral process. Twitter, TikTok and YouTube also said they would take action against posts that falsely claim the 2020 election was rigged. Meta barred such posts only in political advertisements.

But the report alleges that severe gaps exist in the companies’ policies and in their enforcement of their own rules. The advocacy groups were especially critical of exceptions to the rules that all four companies grant because they deem some content to be newsworthy or in the “public interest,” according to the report.


The activists argue that “every promising protective policy seems as though it could be circumvented with each platform’s arbitrary ‘newsworthiness’ or ‘public interest’ exception.”

That issue has gained renewed interest since the tech mogul Elon Musk, who on Friday is expected to become the owner of Twitter, said he would reverse Twitter’s ban on former president Donald Trump.

Hundreds of GOP candidates have embraced Trump’s false claims about his defeat in the 2020 presidential race, and some are using social media to trumpet unsubstantiated claims about election fraud.

Busby said that Twitter rarely applies the public interest exception and that when it does, the tweet is ineligible to be retweeted and is placed behind a notice providing context about the rule violation.

Choi said that although YouTube does “allow content with sufficient educational, documentary, scientific or artistic (EDSA) context or countervailing views — this is not a pass to violate our policies based on ‘newsworthiness.’ ”

Rathe pointed a Washington Post reporter to TikTok’s content guidelines, which state that the company may apply exceptions to its rules in “certain limited circumstances,” as in the case of content that is important for documentary, scientific or artistic reasons.

The coalition’s report also urges the platforms to bolster their policies to protect election workers from violence and harassment. Election workers and their families have faced death threats as well as sexual and racist attacks, spawned by their refusal to back Trump’s claims of a rigged election or by false claims that they were part of an election-rigging scheme.

The tech companies have embraced policies that bar certain kinds of harassment or disclosures of personally identifiable information about users, including election workers.


The civil rights groups argue the companies should be more transparent about their efforts to prevent election workers’ personal information from being spread online and should do more to remove misinformation that could make election workers a target in the first place.

“The ‘Big Lie’ spreads across platforms; examples also abound on Meta and Twitter, where hateful and misleading posts pack a one-two punch: encouraging violence against election workers because of demonstrably false claims about stealing the 2020 election from Donald Trump,” the groups wrote.