Zuckerberg vs. Democracy
The changes he’s making to appease Donald Trump won’t fix the ways Facebook is harmful to our civic and political life.
MARK ZUCKERBERG ANNOUNCED A SPATE of changes in Meta’s approach to speech and moderation issues this week, part of his longstanding effort to dominate the information landscape while resisting responsibility for his company’s contributions to political violence, extremism, and other antidemocratic activities in the United States and around the world. Let’s talk about the most serious implications.
Two of the key changes are that Meta will end fact-checking and roll back content moderation. According to reporting from 404 Media, many Meta employees were angry and felt blindsided by these changes, as there was no internal policy and stakeholder engagement process driving the decision. This reinforces the impression that the move is little more than a quid pro quo: Zuckerberg's end of a bargain in which he seeks Trump's help in pushing back against regulators. The full spectrum of changes will lead to even more hate and extremism while ignoring the systemic issue of how Meta's platforms actually contribute to harm.
It’s important to note that this was not a sudden “bending the knee” to Donald Trump. Zuckerberg has a history of reacting to threats from politicians who have the power to impose costs on the company. I saw this tendency firsthand when I was hired by Facebook (now Meta) in 2018, ostensibly to head the company’s election-integrity efforts for political advertising.
I proposed a plan—built and supported by engineers and program managers across the company—to ensure we were not allowing (and profiting from) ads that potentially engaged in voter suppression. We had the tools and ability to protect the integrity of the elections, which Zuckerberg had publicly promised to do. But it would have likely meant taking action against more content from candidates on the right, including Trump, because, as research has demonstrated, the right engaged in political misinformation at higher rates.
Zuckerberg knew this would come at a political cost, so he made it clear he would not fact-check political candidates—not even in paid advertising, and not even if those ads contained blatant lies about voting. And in 2020, after Trump threatened to “shut down” social media because Twitter fact-checked him, Zuckerberg went on Fox News to announce that he would retreat even further from fact-checking.
While Meta’s fact-checking program was never perfect, the company’s third-party fact-checkers and moderation policies did at least make an effort to ensure that lies, conspiracy theories, and deliberate disinformation didn’t spread unchecked across its platforms.
That’s all gone now. Meta’s latest changes will have far-reaching consequences for public safety and democracy. Here are three key points:
1. We must zoom out to see how social media platforms erode public safety and democracy.
By zeroing in on fact-checking under the false guise of protecting free speech, Zuckerberg was appeasing his biggest right-wing critics while obfuscating the much more important issue at hand: his company’s own role in amplifying, recommending, and targeting content that has contributed to extremism, illegal activity, and even political violence (e.g., pogroms against the Rohingya in Myanmar or the January 6th insurrection in the United States).
These companies are not just neutral pipes through which speech flows. They curate each user's feed based on algorithmic decision-making and priorities set by the owners. As long as Zuckerberg, Elon Musk, and other major social-media owners continue to have complete, unchecked power to tune their algorithms to decide whose voices are amplified, whose are silenced, which content is recommended, and how ads are targeted to us, they will have unrivaled influence over how we engage with and trust in information—with none of the responsibility for the consequences.
We cannot ignore the evidence. Studies from independent researchers, civil society groups, and Meta’s own internal researchers have shown time and time again that content promoting hatred, misinformation, and extremism outperforms more balanced, nuanced discourse on major social media platforms. This is no accident. It’s the result of an algorithmic design that rewards engagement, regardless of whether that engagement is based on truth or toxicity. Meta’s core business model incentivizes sensationalism, and Zuckerberg knows it. In the quest for more clicks, likes, and shares, the company has given unchecked power to algorithms that often prioritize inflammatory content over anything that could foster understanding or constructive conversation. The reward of algorithmically boosted virality is far more consequential than any one post from an individual user, and that is where responsibility should lie.
Fact-checking, while important, was never a perfect solution to this problem. Once harmful content is boosted by the algorithm, fact-checks are not likely to reach those who are already influenced by the content, nor to change anyone’s mind.
2. Hate and extremism will likely spread even more.
While Zuckerberg spoke publicly about ending fact-checking, Meta also quietly rolled back multiple "Hateful Conduct" policies, including safety guardrails for LGBTQ people, people of color, women, immigrants, and other protected groups. We've seen the real-world consequences of unchecked amplification of group hatred on Facebook and Instagram, and I fear that is going to become even worse. It is worth remembering that Meta's policies developed slowly over time, often after episodes of real-world violence, including genocide, that had been traced back in part to Facebook. Extremism is on the rise around the world, and if Meta is welcoming hate-filled content, the idea that extremists won't use the platform to accelerate real-world harm is just naïve.
3. Political quid pro quo is no longer subtle.
Zuckerberg's changes cater to Trump's desires in exchange for Trump's protection. While corporations have long tried to curry favor with governments, it used to be done quietly. The fact that Zuckerberg openly said he is enlisting Trump's help to pressure global regulators not to enforce their own laws in their own countries should concern anyone who cares about global democracy. What used to be considered undemocratic behavior and relegated to smoke-filled back rooms is now proudly announced by the dealmakers themselves.
Trump pressured Zuckerberg to stop a program that he didn’t like, and in exchange, Zuckerberg is gambling on Trump’s authoritarian impulses to bully other countries to allow Meta to operate unchecked in the world. This decision is not only politically motivated; it is dangerous in its disregard for both the sovereignty of other countries and the global impact of Meta’s platforms. And it was all done in plain sight.
Where do we go from here?
WE MUST UNDERSTAND WHAT IS HAPPENING if we want to have any hope of correcting course. The path forward requires something much more fundamental than tinkering with existing models for content moderation: It requires a radical rethinking of how Meta and other social media platforms design their algorithms and profit from the use of their users’ data.
These companies must be held accountable, not necessarily for the content they host but for the design and back-end choices that encourage harmful content to flourish in the first place. Until we address the business model that makes things like hate and extremism profitable, we will continue to see these problems grow—as will the damage to our society.
And in the absence of any intention on Zuckerberg’s part to even attempt a course-correction, it is up to the public to change their relationship with these companies. We must finally insist on transparency about how these systems truly affect our lives, and on some form of accountability for the way these companies’ own tools and business decisions have serious consequences for public safety and democracy.
Yaël Eisenstat is a senior fellow at Cybersecurity for Democracy, working on policy solutions for how social media and AI-powered algorithms affect political discourse, extremism, and democracy. Previously, she was a vice president at the Anti-Defamation League (ADL), heading its Center for Technology and Society. She was Facebook’s global head of elections integrity for political ads in 2018, and previously served as a diplomat, intelligence officer, and White House advisor. Homepage: YaelEisenstat.com.