
Facebook Oversight Board: Trump’s Ban from the Platform Was Botched

The company’s advisory body says the Trump ban should not have been ‘indefinite.’ But does the board’s opinion matter to anyone? Should it?
May 5, 2021

Facebook’s new Oversight Board this morning affirmed the company’s decision to suspend Donald Trump from Facebook and Instagram. But the ruling from the independent and little-known body that has been empowered to review certain of the platform’s content-moderation decisions will not be the final word on Trump’s accounts. The board’s opinion calls on Facebook to reexamine the “arbitrary penalty” imposed on January 7 and apply an appropriate remedy within six months.

The board criticized the company’s decision-making process, noting that the “indefinite” suspension it imposed on Trump is not a penalty listed among Facebook’s rules:

In applying a vague, standardless penalty and then referring this case to the Board to resolve, Facebook seeks to avoid its responsibilities. The Board declines Facebook’s request and insists that Facebook apply and justify a defined penalty.

The board’s opinion is the latest episode in the major social-media platforms’ long-running struggle to deal with a man who was at once the world’s most prominent political leader and a prodigious internet troll. The decision is also a watershed for Facebook, which is hoping against hope that a quasi-sovereign board will make the platform more respectable in the eyes of its many detractors. Finally, the decision is a test of a specific model of content moderation—a legalistic regime overseen by a selection of notables, in this case, twenty law professors, lawyers, journalists, and human-rights activists drawn from around the world.

Again and again over the last five years, the largest social-media websites faced a choice: force Trump to leave, or suffer him to remain. Again and again, they chose to let him stay. We should not pretend, with the benefit of hindsight, that before January 6, ejection was ever the easy or obvious path. Trump was a duly elected president. His opinions were relevant to the citizens of a free republic.

But the mob attack on the Capitol, and Trump’s role in provoking it—including his use of social media leading up to January 6—gave the dilemma of Trump’s social-media presence fresh point. Many Trumpists had long excused the president’s insults, lies, and fantasies as a footling irritant—a matter of “style.” The riot laid waste to that illusion. An “illegitimate president” was headed to the White House, Trump said that day, and “we can’t let that happen.” The “radical left” is “ruthless,” he said, and “it’s time that somebody did something about it.” Give your lawmakers the will “to take back our country,” he told the crowd at his “Save America” rally. And they heard. “Let’s walk down Pennsylvania Avenue,” he said. And they did. Many in the audience understood Trump’s speech to be a call for civil war.

For Facebook and Twitter, the hour had come. “Over the last several years, we have allowed President Trump to use our platform consistent with our own rules,” Mark Zuckerberg wrote, in a January 7 post announcing Facebook’s decision at last to change course. Although the company, he wrote, had tried up to that point to ensure “the broadest possible access to political speech,” Facebook would not help Trump “incite violent insurrection against a democratically elected government.” Jack Dorsey, for his part, tweeted that he did not “celebrate or feel pride” in Twitter’s banning Trump—though ban him it would. “While there are clear and obvious exceptions,” Dorsey continued, “I feel a ban is a failure of ours ultimately to promote healthy conversation.”

Although Zuckerberg and Dorsey displayed the sobriety, method, and circumspection that Trump so noticeably lacks, their actions sparked outrage in all the predictable quarters. When the then-secretary of state, Mike Pompeo, looked at the bans, for example, he saw a desire to silence 75 million Americans and an attempt to emulate the Chinese Communist Party. We know because he said so on Twitter.


The expulsion of Trump from Facebook and Twitter has had a remarkable effect—so far, at least. Evidence is mounting that Trump was every bit the misinformation superspreader he seemed to be. Although he was bound to receive less attention after leaving office, his absence from the social web is still palpable. “He’s taken to emailing statements—sometimes several in a day—to reporters,” notes David Graham in the Atlantic, “but it’s not the same.” Nothing became Trump’s life on social media like the leaving of it.

But the story will not end there. This week, in fact, Trump set up his own “platform,” a page that arranges the statements he posts on his website. Critics were quick to mock it as little more than an old-fashioned blog—but it can still draw readers, it will likely shape the public debate on important matters, and it may yet evolve into something else.

And now comes the Facebook ruling.

Facebook announced in late 2018 that it would create its Oversight Board. This “Facebook supreme court” came into being last October and issued its first rulings in January. It is funded by Facebook through an irrevocable trust, and it is overseen by trustees who (according to Facebook) will “safeguard” its “independence.” The board was initially empowered to consider only appeals, by users, of decisions to remove a piece of their content from Facebook or Instagram. It has recently gained the authority to hear appeals by third-party users of decisions to leave content up. The board’s affirmance or reversal of a content-moderation decision is binding as to the piece of content under review. Otherwise it is only advisory.

The board can review the status of an entire account (as opposed to the status of an individual piece of content) only at Facebook’s request. Soon after Facebook suspended Trump’s Facebook and Instagram accounts, there were widespread calls, both at 1 Hacker Way and among those who watch such things, for Facebook to refer the decision to the board. On January 21, the day after Trump left office, Facebook did so.

The board’s review centered on two posts Trump made during and after the January 6 riot, in which he said things like, “We love you. You’re very special” and “Remember this day forever.” The board reached the rather obvious conclusion that the posts “praised and supported those involved at the very time the violence was going on.” The posts therefore violated Facebook’s Dangerous Individuals and Organizations community standard, which bars content that celebrates “terrorist attacks, hate events, mass murders or attempted mass murders, serial murders, hate crimes and violating events.”

But “it is not permissible,” the board continued, “for Facebook to keep a user off the platform for an undefined period, with no criteria for when or whether the account will be restored.” The board’s ruling gives Facebook six months to reexamine its ban and craft a new, more defined remedy.

Facebook “has become a virtually indispensable medium for political discourse,” the board wrote, “and especially so in election periods.” This understanding of the platform undergirds an extensive list of policy recommendations at the end of the board’s opinion. Among other things, the board urges Facebook to explain its content-moderation rules more clearly, provide more transparency to users, conduct a review of its role in the events leading up to January 6, and develop a crisis-response policy. The overall message is that, when it comes to its social-media properties, Facebook’s era of “Move fast and break things” is well and truly over.


For better or worse, Facebook is roundly despised.

The left sees the Facebook social network not as a light that happens to reveal human passions and follies, but as a flame that ignites, carries, and spreads them. The relentless pursuit of scale, the left maintains, drove tech companies to construct frictionless mass-communications platforms ripe for exploitation by the greedy, the radical, and the paranoid. Whatever hatred, mania, or deception Facebook hosts, in this telling, it owns.

The Trump bans seem (again, so far) to have slowed the online spread of falsehoods related to the 2020 election, misinformation about vaccines, and QAnon-type conspiracy theories. This supports the intuitive conclusion that mainstream social media can indeed sway public opinion. Nor is it unreasonable to insist that if abhorrent views are to spread, mainstream social-media companies strive to ensure that it is not on their watch. Still, the core of the left’s complaint is that people talk to one another; that they often do so in public; and that where public discussion goes, assorted charlatans will follow. Technology makes it ever easier to create, store, and share information. It started doing so long before Facebook existed, and it will continue to do so long after Facebook is gone. Like it or not, social media empowers the public to speak. Those who detest this reality to the point of wanting to reverse it are heirs of the fussbudgets who denounced the photograph and the radio.

If the left’s argument against Facebook is overstated, however, the right’s borders on delusional. Once upon a time, conservative intellectuals worried about the corrosive effect of postmodern philosophy. In an ironic turn of the screw, it is the GOP that has become the party of vulgar relativism. Republican politicians say that Facebook’s content moderation is un-American, Orwellian, Kafkaesque. Pick your cliché. But the premise of their claims is that no matter how objectively crazy or nasty right-wing speech becomes, Facebook is obliged to allow it because it embodies “one side” of a political debate.

Far from mistreating conservatives, Facebook has given ground to the Republican pressure campaign, for example by turning a blind eye to instances of misconduct by right-wing sites. But as Kipling observed, paying the Danegeld almost never rids you of the Dane. “Censorship!” the Republicans shout ever louder, in a futile, yet not unproductive, effort to merge volume and truth. Facebook and the other platforms are still grappling with how to respond.


So let us stipulate that Facebook is in a difficult position. It is expected to tackle sprawling cultural problems. It is told to sit and whistle as the GOP takes an authoritarian turn. Mark Zuckerberg can be forgiven for wanting to take a step, in basically any direction whatever, from the spot on which he stands. One of his attempted moves involves seeking help from without. At congressional hearings, he has taken to all but pleading for regulation that will somehow narrow his problem (and, though he never says it, place a disproportionate compliance burden on his smaller competitors). You may have seen examples of the company’s pro-regulation ad campaign.

Yet Facebook is a private company. A stream of politically motivated speculation to the contrary notwithstanding, it can generally moderate content as it pleases. Indeed, it must do so, given that government regulation of speech is, by and large, unconstitutional. Because the First Amendment stands between Zuckerberg and his hoped-for regulatory “fix,” one might say that Facebook is forced to be free. The solution, if any, must come from within.

And so we return to the Oversight Board. Zuckerberg presumably hopes that the board will add legitimacy to Facebook’s more controversial content-moderation decisions. There is cause for doubt. What good is a slow-moving judicial body, critics on the left already say, when the danger of social media is its capacity to transmit sudden and fast-evolving bursts of viral content? A glance at the board’s membership, meanwhile, will likely be all it takes to convince many on the right that it is just another globalist snow job: its members’ titles say things like “Open Society,” “UNESCO,” and “Nobel Peace Prize.” Some commentators have asked whether people will accept that the board is genuinely independent. But opponents’ critique of the board need not even get that far: They might simply reject the notion that delegating content moderation to a blue-ribbon committee makes sense to begin with. What Facebook views as a solemn deliberative institution, a true appellate tribunal, users might see as just a crummy new product, like Crystal Pepsi.

It is true that the board offers new forms of process and transparency. The board accepts public comments, after all, and it issues written opinions. But will these components lead people to respect outcomes that they would otherwise reject? Content moderation is first and finally a matter of weighing values that are subjective and unquantifiable. You value free speech more than public order, or you don’t. You believe it’s important to restrict speech that offends a minority group, or you don’t. Content-moderation “experts” can’t give you “right” and “wrong” answers about content moderation, because in the cases worth discussing there are no objectively “right” or “wrong” answers to give.

This is one reason why the board’s opinions are so unsatisfying. The board invariably makes a great show of listing and describing all the “factors” it “considered”—the facts of the case, historical context, Facebook’s community standards, international human-rights standards, third-party comments, and so on. But the reader is not like a student in the Lyceum, following along as the teacher works a line of logic to a result. She is more like a Dr. Seuss character, watching a cartoon factory shake, rattle, and hiss until—pop!—out comes the thneed that you need. “This content did not pose a risk to ‘Safety’ that justified displacing ‘Voice,’” says the majority. “These circumstances justified displacing ‘Voice’ to err on the side of ‘Safety,’” counters the dissent.

At best, what reasoning there is amounts to a just-so story for getting the board to the outcome it wants. A post suggesting that infidels’ tongues be removed goes back up because no threat of violence was “imminent.” A post depicting the Dutch “Black Pete” character stays down because offensive posts can generate “cumulative” harm. Because content moderation is such a context-specific endeavor, being perfectly “consistent” is impossible. It’s not clear that the board even intends to try.


Perhaps the most notable feature of the board is that, not unlike new regulation, it is something Facebook is uniquely well equipped to manage. Only a massive, wealthy global company can create an expensive, centralized, adjudicatory-style private board with worldwide jurisdiction. Easier to miss, though no less important, is the fact that the board may decide only whether a post shall exist on a profile. Although it can offer its views on them, the board has no control over algorithmic reach, account transparency, strike and recidivism policies, warning labels, platform counterspeech, or any of the other many rudiments of content moderation.

As Facebook reifies its hierarchy, the wider internet remains a place of competition, experimentation, and decentralization. Can subscription models create less toxic online debate? What would happen if different platforms adopted a common protocol, while offering unique user interfaces, filters, and moderation policies? What if, thanks to dispersed servers and blockchain-style programming, no one could be truly silenced, yet, thanks to community-level moderation and other forms of friction, hate speech and misinformation were less viral? We will gain insights into these questions, because each of them will be tested. Over the next few years, online speech is likely to become better protected, less controllable, and more diffuse.

The day will come, and perhaps soon, when many of the words, pictures, and videos, perhaps even the ideas, that we encounter on social media are generated by artificial intelligence. The arrival of synthetic media—not just deepfakes; picture entire essays, essays like this one, springing from a digital hand—will bring all sorts of exciting possibilities. But it will also make it that much harder to draw defensible lines between real and fake. There will be forms of manipulation more potent than anything to be found in the age of Trump.

Change is coming. Facebook is betting that as it crashes into us, we will not be deaf to the pronouncements of an ersatz high court.


Corbin Barthold is internet policy counsel at TechFreedom. Twitter: @corbinkbarthold.