
The Real Reason to Nix Section 230

It’s not Big Tech’s supposed bias against conservatives. It’s Big Tech’s irresponsibility.
December 1, 2020
Facebook CEO Mark Zuckerberg appears on a monitor as he testifies remotely during the Senate Commerce, Science, and Transportation Committee hearing “Does Section 230’s Sweeping Immunity Enable Big Tech Bad Behavior?” on Capitol Hill, October 28, 2020. Twitter CEO Jack Dorsey and Alphabet/Google CEO Sundar Pichai also testified virtually. (Michael Reynolds / Getty)

On Thanksgiving, President Trump sent out an angry late-night tweet about a subject he had previously weighed in on back in late May, when he was still at least ostensibly going about the work of being president.

The “Section 230” the president referred to has long been a bugbear of conservatives. The relevant part of Section 230 of the Communications Decency Act, passed as part of the Telecommunications Act of 1996, reads as follows:

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

This provision was intended to protect a nascent medium from getting sued before it could blossom into its full potential. In the quarter century since that legislation was enacted, the Internet has blossomed—in some cases more like a cancer than a flower—and it is now clear that 230, as it stands, has caused and continues to cause much more harm than good.

Trump’s intervention in May came in the form of an executive order—but it was too vague on policy to usher in the kind of unencumbered and uncensored “free speech” that he feels conservative voices are being denied on the platforms that enjoy 230 protections. Still, focusing on the harm that 230 has caused was the right thing to do, and reforming it is an urgent matter. And while Trump’s Thanksgiving tweet doesn’t suggest that he’ll take any serious action—more likely he’ll spend the remainder of his presidency doodling with his Sharpie all over the walls of the Oval Office—the problems caused by 230 deserve attention.

Defenders of Section 230 argue that without its protections, social media platforms could not give their users the ability to share information freely. That argument is specious. Existing constitutional and legal protections—including the First Amendment—suffice to shield publishers from frivolous litigation, and even without 230, providers like Facebook, which seem to function at once as publishers and platforms, could still have some of their cake and eat some too (more on that in a moment). But Section 230’s blanket protections—which let the tech firms decide, as it suits them, when they are merely “platforms” absolved of all guilt and when they are “publishers” who get to moderate content—grant a level of absolution from responsibility that no other medium or industry is afforded. Without 230, these companies would not be operating with their present wanton disregard for the public good.

This should not be a partisan issue. Section 230 does not help the left. Nor does it help the right. It serves the interests of no one other than a very small handful of tech companies—and they can pivot to continue to operate successfully without it.


While Section 230’s authors may have had the best of intentions, today the need to correct course is easy to see. Perhaps back in 1996 the authors of the law couldn’t imagine how many bad-faith actors there would eventually be or how many Americans would prove gullible and susceptible to mis- and disinformation. Now we know better. There is immense harm done by granting everyone a bullhorn loud enough to reach the opposite ends of the Earth.

Consider what 230 means just for Facebook. Facebook is known for the rampant distribution and elevation of conspiracy theories. It is known for helping all flavors of crazies find one another and reinforce each other’s delusions. It is known for having experimented on its users without obtaining their consent. And thanks to Section 230, Facebook doesn’t just wash its hands of the misinformation distributed via its network, disclaiming responsibility for it. The company has spent billions of dollars fine-tuning technology specially designed to take unaccountable, intentionally misleading, and even potentially legally actionable information and deliver it only to the individuals most likely to be bamboozled by it. And when the horrible results of this practice blow up, we get pearl-clutching apologies:

“We deplore this misuse of our platform,” the company said in a response to the Sri Lanka report. “We recognize, and apologize for, the very real human rights impacts that resulted.”

And what routinely follows the apologies? A ducking of responsibility, because Facebook is just a platform and therefore bears no responsibility for what its users post. Of course, company execs regret playing a part in a genocide—repeat that in your head a few times; let it sink in—but they shouldn’t be held responsible for it in any way.

Thanks, 230!

Recall that Section 230 shields the “provider . . . of an interactive computer service” from civil liability if the provider bans “objectionable” content—so Facebook can choose to ban microtargeted misinformation as objectionable and not be sued for its actions. It is precisely because the company chooses not to ban such content—and indeed builds its business model on delivering that content to the people who are most susceptible to the messaging while hiding it from everyone else—that Section 230 needs to be at least reviewed if not discarded.

The most common argument for why 230 is so vital to the lifeblood of these businesses is that paying attention to what users upload would be too complicated or costly, so the companies shouldn’t be asked to try. The Electronic Frontier Foundation, which dubs 230 “the most important law protecting internet speech,” puts it this way:

Given the sheer size of user-generated websites (for example, Facebook alone has more than 1 billion users, and YouTube users upload 100 hours of video every minute), it would be infeasible for online intermediaries to prevent objectionable content from cropping up on their site.

Please allow me a second to serenade Facebook (and Google and Twitter and the rest) with my tiny violin. As it turns out, lots of things are complicated and costly.

Ask Delta Air Lines about being legally responsible every time one of its planes crashes and the families of the deceased sue. You don’t think safely flying over 190 million people a year to their destinations is complicated and costly?

Ask Perdue about distributing billions and billions of chickens to neighborhood stores around the country and only ever so rarely killing people with salmonella. I’m going to go out on a limb and say that’s more complicated and costly than moderating tweets.

And yet, when the airlines or the poultry processors get it wrong, they are (quite rightly) held to account. Despite a lack of anything comparable to 230 protections, you still have the ability to fly across the world at a moment’s notice and you still have chicken in your fridge.

Oh, and let’s not forget that when it comes to the kinds of things that 230 doesn’t protect them from, such as copyright infringement, YouTube (owned by Google) and Facebook do an absolutely bang-up job of finding and taking down content. Yes, those takedowns are often instigated by third parties who complain to the firms, but even so, the takedowns are complicated.

Of course, many websites, despite being protected by 230, choose to moderate their community comment sections. Some, like the New York Times, receive thousands and thousands of comments, and each one is considered by someone and then either posted or not. Plenty of moderately unhinged posts appear in the Times’s comment section, so I don’t think the moderators are doing a whole lot of rejecting—as is their prerogative. In mentioning the Times, I don’t mean to imply that the scale is the same; Facebook has orders of magnitude more engagement than the Times. The point is that the Times is acting like the publisher that it is.


Here’s the thing. If Facebook wanted, it could act as both a responsible publisher and a platform—just not at the same time! And the company doesn’t need Section 230 for this. When its service acts as a platform, Facebook is letting users share their unmoderated thoughts with their friends: billions of users sharing photos of their cats and their opinions about how flat the Earth is with whomever they’re directly connected to, subject to some limiting factor. That factor can be as simple as a share counter—your post can be visible to no more than, say, fifty people. That’s already much more exposure than Jack Average could have dreamed of back in the mid-’90s, when he was just the weirdo at the watercooler whose eye everyone tried to avoid catching. Anything shared beyond that threshold would have to be vetted by some combination of humans and AI.
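To make that concrete, here is a minimal sketch, in Python, of the kind of reach-threshold gate the previous paragraph describes. Every name in it (the Post record, the fifty-person REACH_THRESHOLD, the requires_vetting check) is a hypothetical illustration of the idea, not any real Facebook system or API.

from dataclasses import dataclass

# The author's example cutoff: posts reaching fifty or fewer people
# circulate unmoderated ("platform" mode); anything wider is held for
# human/AI review ("publisher" mode). The number itself is illustrative.
REACH_THRESHOLD = 50

@dataclass
class Post:
    author: str
    text: str
    audience_size: int  # how many people would see this post if distributed

def requires_vetting(post: Post) -> bool:
    """Return True if the post's reach exceeds the unmoderated threshold."""
    return post.audience_size > REACH_THRESHOLD

# A post shared with a dozen friends flows freely; the same post
# amplified to thousands must be vetted before further distribution.
assert not requires_vetting(Post("jack", "the Earth is flat", 12))
assert requires_vetting(Post("jack", "the Earth is flat", 5000))

Under a split like this, liability would track the mode: below the threshold the service is a mere conduit, while above it the company has chosen to amplify and so takes on a publisher’s responsibility.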

And in different circumstances, Facebook would act as a publisher—say, when content it provides comes from a source that has been pre-vetted as responsible, and with which Facebook is prepared to share liability in the event that the published materials are somehow not protected by the First Amendment. Because such content would be treated as a “published” work, Facebook would have to suffer the same potential repercussions that all other media distributors are subjected to when they publish materials that result in legal action.

It is worth noting that the First Amendment, which already protects publishers, is very broad and reduces the kinds of messaging that a publisher can get sued over to very near zero. But the gap between zero and almost zero might, for Facebook and Twitter and Google, make the difference in a decision about whether to rebroadcast the most absurd nonsense to the four corners of the (round) Earth.

And if Facebook or Twitter or Parler or TheNextSocialThing starts taking too much flak for being too censorious of certain kinds of ideas (be they too left or too right or too flat-earthy), then some competitor can come online to fill the void. Let the marketplace do its thing. They’ll all just have to take some responsibility when their readers, at their published works’ behest, start murdering one another.

Facebook and Google have both the wealth and technological sophistication to continue to thrive in an environment where Section 230 is reformed to allow some degree of liability for willfully targeted mis- and disinformation that results in quantifiable damage.

But as things now stand, 230 is a magic wand: with just a wave, these companies can skirt the legal responsibility and the hard work. Remember, though—magic wands are notoriously dangerous. It’s time to take Zuckerberg’s away.

Yevgeny Simkin

Yevgeny Simkin is the co-founder and CEO of Samizdatonline.org—a free speech platform designed to facilitate the sharing of all journalistic endeavors unencumbered by government censorship.