A Political Ad Ban Won’t Fix Facebook’s Election Problem

A performative post-election ban won’t solve anything. But cutting off the platform’s data-driven rage machine will.

This week Facebook announced, with much fanfare, that it will temporarily ban all political advertising after polls close on November 3rd, “to reduce opportunities for confusion or abuse.”

Unfortunately, this performative move won’t do much of anything to address the very real threat of chaos and disinformation in the wake of the election. And at the same time that Facebook is seeking kudos for its political ad moratorium, it’s making another major change: turning on algorithmic amplification for posts within groups. This means that you won’t just see posts from groups that you’ve signed up for, but also posts from other groups that Facebook thinks you should see.

This change will dramatically increase the risk that false and inflammatory content goes viral. Facebook groups will grow rapidly. The algorithmic boost, nudging like-minded people into each other's filter bubbles, will supercharge recruitment for toxic and harmful groups—the cesspools where white supremacist conspiracy theories are born. It will also drive a massive influx of trolls and conflict into existing groups that are currently mostly functional. For example, it's not hard to imagine how discussion groups for LGBTQ parents, perhaps the last vestige of Facebook with any positive value in my life, will be affected when our intra-community discussions start showing up in the feeds of random homophobes.

Facebook theatrically banning political ads while supercharging its rage machine is the perfect example of the platform making cosmetic changes to appease critics while plowing full steam ahead with a business model that’s fundamentally incompatible with democracy and human rights. If Facebook really wants to avoid being used to poison and undermine democracy, it needs to take a much more significant step than banning certain types of ads. Instead, the company should immediately shut down the algorithms across its platform that artificially amplify and suppress users’ organic posts in a quest for maximum “engagement” (read: advertising dollars). Restoring the News Feed’s chronological setting, which would show people what they signed up to see rather than what Facebook thinks they want to see, might just save what’s left of our democracy.

In reality, no one will need to spend money on advertisements to make dangerous and misleading content go viral in the wake of this election. Provocative posts spread like wildfire during major political moments like these. I can speak from personal experience. My organization, Fight for the Future, hasn’t spent a penny on Facebook ads in years, but we regularly get content to go viral during big moments, like the repeal of net neutrality, major congressional hearings, or fiery political debates. We make our posts interesting, provocative, and shareable—but we also ensure they are accurate and don’t promote harmful ideologies.

Many online actors, whether state-backed coordinated disinformation campaigns or lone bigoted keyboard warriors, have no such scruples. And Facebook's algorithm, optimized for engagement at all costs, is there to constantly fan the flames. It finds the most incendiary takes on the platform and exploits its massive trove of behavioral data to inject hateful and misleading information directly into the minds of the people most susceptible to political manipulation.

A bombshell report in The Wall Street Journal earlier this year showed that Facebook executives are well aware of the harm this surveillance-capitalist machine causes. An internal audit found that more than 60 percent of all people who joined hate groups on the platform did so because of Facebook's own recommendation tools. But Facebook's rage-inducing algorithm is far more lucrative than its entire political ad business, which will account for less than 1 percent of the company's 2020 revenue. That's why it's banning political ads while turning the volume up to max on its profitable, and dangerous, amplification algorithm.

Facebook’s billionaire CEO Mark Zuckerberg has said repeatedly that his company should not be the “arbiter of truth.” I actually agree with him, and I have argued against more aggressive moderation or fact-checking of social media posts, which will always result in collateral damage and the silencing of marginalized voices and opinions. But if Facebook doesn’t want to be responsible for determining what is and isn’t true, it also shouldn’t be deciding what content goes viral and what content no one sees—especially not in the immediate aftermath of what is perhaps the highest-stakes presidential election in US history.

Discussions around election disinformation and the harms of Big Tech have often centered on what types of advertisements and posts social media platforms allow and disallow. It's easy to get bogged down in the back-and-forth over whether a specific post from Trump should be labeled, or whether Facebook should have removed the fake video of a “drunk” Nancy Pelosi. This can become an exercise in “working the refs” in a game we always lose. The problem with Facebook is not the speech itself. It's the company's amplification of it, which it has mastered through its monopoly power. Facebook's inescapable data harvesting, microtargeting, and algorithmic amplification allow speech to be weaponized against democratic norms and civil rights.

There is no silver-bullet solution. To fix this, we’ll need every tool in the toolbox: grassroots pressure, antitrust action, open source and decentralized alternatives, strong data privacy legislation, adversarial interoperability, and other policies that make Big Tech’s abusive and monopolistic practices impractical or illegal. But all of that will take time, and the election is weeks away. Facebook’s algorithm is a digital megaphone for groups planning real-world violence, and a deadly signal boost for false information about Covid-19.

As chief executive officer and controlling shareholder, Mark Zuckerberg has his hand on the lever that could shut off Facebook's toxic algorithms and restore users' timelines to chronological order. He can push a button that would save lives threatened by the viral spread of lies and hate, and would dramatically increase the chances that we leave our children a livable planet where people have basic human rights. He should do it. And he should do it now. Before it's too late.



