Three Cheers For Content Moderation
There is no one-size-fits-all pro-free-speech answer to the intractable content moderation paradox.
A couple of years ago, when chinless Trump spokesperson Jason Miller launched Gettr, one of the myriad alt-right “free speech” social media platforms, I wrote about the pitfalls he was going to face and summed up his challenge this way:
The question has persisted for sites big and small since the days of the very first Usenet groups: How do you balance user experience with the internet’s promise of unhindered expression?
This is the great content moderation paradox. Gettr crashed and burned in its attempts to navigate it. But Miller wasn’t alone. From scammers to bots to trolls to obnoxious jerks to impersonation to threats (and what constitutes a threat) to sending “assassination coordinates,” countless forum moderators have failed to come up with a clean answer. Some have done better than others. All have failed to please everyone.
If you allow completely unfettered speech, your forum becomes a wasteland for the dregs of society. If you over-moderate, some users will get upset and accuse you of stifling them. There’s no winning. In fact, what is happening right now, today, on Twitter is really just a high-visibility replica of the fight that brought down my beloved GWHoops.com, which eventually collapsed when the annoying, harassing imposters became too big a hassle for the post-and-let-post head man in charge.
The best we can ask for is people at the helm who try in good faith to navigate this paradox in a manner that engenders trust—even if every decision is ultimately unsatisfying.
This has been my big problem with the “Twitter Files” brouhaha. The journalists participating in it treat the moderation question as if there is an obvious pro-free speech answer that Jack Dorsey et al. refused to implement because of their bias. They seem to think that there was some fundamental corruption in the old guard that needs to be rooted out and brought to light.
But to observers whose brains haven’t been broken by the message-board wars, what actually happened is quite run of the mill. Twitter was run by human beings who, like all human beings, have inherent biases (in this case, mostly liberal) and they were doing the best they could to solve an unsolvable puzzle.
Consider: When you strip everything down to the most basic facts, what did the previous Twitter moderation regime do?
They tried to crack down on the hate speech, harassment, Russian bots, and impersonators which flooded the platform in 2016. They had some success in this regard, driving the worst offenders to different social media platforms that were more welcoming to their hateful speech—to the relief of the site’s core users. As part of this process Twitter made one pretty clear screw-up: delinking a New York Post story about Hunter Biden’s laptop after wrongly assuming it was part of a Russian plot. In less than two days Twitter remedied that error and apologized for it.
They also had one very high-profile judgment call: What were they supposed to do about a lame-duck president who had just sicced a mob on the U.S. Capitol in an attempt to execute America’s first coup d’état? In the face of this unprecedented situation Twitter banned him from the platform. Say what you will about this choice—and certainly it was undermined by Twitter not holding other autocrats to the same standard—any reasonable observer would concede it was defensible.
As a result of these moves, many on the right criticized Twitter’s management for overreach. Twitter made some changes to address its worst excesses and multiple competitors offering town squares more accommodating to right-wing hate speech sprouted up as alternatives.
Not a bad outcome, right?
Well, a handful of supposed free-speech absolutists were enraged by it. So the old guard at Twitter was replaced by a mad king who purchased the site so that he could implement different policies. His solution has been to reopen Twitter’s pearly gates to Nazis while indiscriminately banning users who offend him—most recently for tweeting links to public information about his private jet (a practice familiar to anyone who has followed a college football coaching search).
This, to me, does not seem like much of an improvement? Not because I am upset that Musk has chosen to employ content-moderation policies. But because I think his moderation priorities are stupid. Hopefully the public outcry will result in Musk doing exactly what Dorsey did: backing off from his worst decisions in an attempt to make the platform maximally useful to the largest number of users. (I am not optimistic, but we’ll see.)
Here’s the thing though: If Musk doesn’t reverse course, is it really that big of a deal? Is it a fundamental threat to civilization? Doesn’t seem so to me.
The internet is vast. I’ve already joined a new competitor called Post (follow me, if you like) in case Elon Twitter goes belly-up. If that comes to pass, something like Post will probably thrive. In the meantime we also have a Bulwark subreddit. And a Substack. Also Snapchat. And YouTube. Don’t forget Instagram. And this very website—right here, in an old-school web browser! For the time being we could even risk giving the Chinese our data on TikTok, unless Marco gets his way.
And that’s how we protect “free speech” in our messy, but beautiful, free-market system. To me it seems like it’s working pretty well, all things considered.
The self-appointed “free speech” advocates who want Twitter to be the “town square” don’t like this answer. They are very upset about it, in fact. So upset that they have proceeded to publish Twitter employees’ private emails and undertaken a public harassment campaign against the guy who used to be in charge of Twitter’s trust and safety policies.
But, as I said on The Next Level this week, it’s unclear to me what they want to do instead.
Because every time one of them actually runs a platform, they do exactly what Elon has done: usher in a moderation regime of their own design.
It seems like they can’t even agree on what the optimal free-speech solution is: Bari Weiss, one of the journalists pushing the #TwitterFiles, is now in an online feud with Musk over his new rules, a feud that sounds a lot like every other online moderation debate in history!
During this dispute Weiss proposed a new kind of answer. She doesn’t think any “unelected” individual should have the kind of power that Jack Dorsey used to possess and that Musk possesses now.
But I’m not really sure what that means? Does Weiss want the government taking over social media platforms? I could write a whole treatise on all the potential problems with that. (TL;DR: communism bad.)
Should there be a council of high priests—12 from each political tribe—who issue decrees about every controversial moderation question? That doesn’t seem very practical. You may not know this, but there are a lot of people on the internet.
Maybe the AI robots will be able to make automatic judgments on what content runs afoul of the policies. But would another robot monitor the robots? And who will watch the robot watcher?
Maybe Weiss, Taibbi, et al. have some other idea that all the moderators dating back to my first Prodigy message board haven’t thought of.
But here’s the rub. No matter what form such an effort would take, when a tough call inevitably presents itself, someone, somewhere is going to have to make the final decision.
That person will be flawed. They will have a political preference of some kind. They will have their own experiences and background and biases that will color how they think we should solve this intractable paradox.
How do we balance user experience and safety with the individual’s right to express themselves when those things come in conflict?
Pretending like there is an easy, consistent, one-size-fits-all, pro-free-speech answer to that question doesn’t make it so. The best we can do is moderate in good faith and hold those in charge accountable for their mistakes.