
Political Advertising on Facebook Is Different Than Normal Political Advertising

November 18, 2019

There’s been a lot of discussion about political ads on social media sites recently, with Twitter’s announcement that they’re rejecting political ads and Mark Zuckerberg’s congressional testimony explaining why he won’t have Facebook be in the position of policing political ads.

And most of the conversation seems to be missing the fundamental point of why political advertising on social media is different than traditional modes of political advertising.

So let’s talk about it.

When Zuckerberg appeared before Congress, AOC and others went after him pretty hard regarding the fact that Facebook isn’t willing to determine whether or not the contents of political ads on the platform are accurate or truthful.

What Zuckerberg said was that he believes in free speech. What he should have said is that when it comes to the content of advertising, we already have a multitude of regulations on speech in the United States.

Think about, for example, the federal and state regulations on comparative advertising.

Do you ever wonder why, in commercials, you hear advertisers say things like "We tested Tide against the leading brand"?

Why not just say "We tested Tide against Cheer"? It's because there are laws against making false claims about specific products, and violations result in really hefty fines against the advertiser.

I’m not a lawyer but even a layman can see how such regulations are, in a small way, an abridgement of an absolute right to free speech. And yet, most of us understand that there is a difference in kind between a corporate commercial advertisement and an individual political expression. So we don’t get too worked up about diminishing Tide’s First Amendment rights.

That's an example of a free-speech restriction that tells you what you can't say.

But there are also cases in which certain types of speech entail a mandate to say something, even if you don't want to. That's why, when you see an ad for medication, it always includes a laundry list of possible side effects: Use of Trumpacid can be accompanied by lazy eye, hair loss, muscle spasms, a weak bladder, dizziness, heart palpitations, and in some cases cardiac arrest. For erections lasting longer than four hours, contact Stormy Daniels.

So what Zuckerberg should have said is that if Congress wants to regulate the speech in political advertising, it's really up to them to pass laws holding political ad-makers to higher standards. After all, if Tide can't make unverified claims about a competitor by name, then why can political candidates? And why shouldn't ads for politicians be required to include some sort of disclaimer that states: "This being an unverified political ad, the viewer is now advised that absolutely everything stated in the ad may be a wholesale fabrication. BUYER BEWARE."

That would be a good start and squarely within the realm of how we already restrict free speech for the greater good. And there would be no need for Mark Zuckerberg to get involved at all. Win/Win!

Sadly, that's not the biggest problem with political advertising on social media.

The bigger problem is that arguments about the self-regulatory nature of political speech break down on the Internet once robots get involved.

The classic argument for free political speech is that it's better to let everyone have their say than to invest some central authority with the power to set limits. And the reason it's okay to open the floodgates is that all ideas get vetted in the public square, and while a lot of ugly things will get said by some terrible people, when all is said and done the public generally weeds out the bad ideas and keeps the good ones.

In essence: We prefer free speech to centralized speech because the marketplace for ideas works.

If we use nutrition as an analogy, ideas are like food and the public square is like the digestive system. You can eat a lot of Doritos and Cheez Whiz, and yet, between your stomach and your liver and your kidneys, whatever nutrition there may be in these substances (probably not a whole lot) gets parsed out and utilized to your benefit. The rest goes into the toilet. Sure, you're better off eating carrots and beans, but the point is that the system is designed to handle just about anything, and isn't it better to have the choice?

But Facebook is different.

The benefit of free speech—the exposure of the full range of claims and ideas, positive and negative, which can be compared, evaluated, and then accepted or rejected by the public at large—is disabled by Facebook's capacity for microtargeting. Having robots cherry-pick who sees which ideas and who doesn't is the advertising equivalent of bypassing the digestive system and injecting Doritos and Cheez Whiz directly into your bloodstream.

Sure, you can call this "free speech," but it's free speech that bypasses the marketplace of ideas—the very mechanism that makes all the arguably negative facets of free expression worthwhile. The power of Facebook is that it allows advertisers to propagandize to a robot-selected subset of the population most susceptible to their message and to hide that message from those who might reasonably disagree.

Think of it this way: If a bunch of Nazis hold a rally, many of us believe that it’s better to expose their very bad ideas to daylight so that the rest of us know what the Nazis think, and how many of them there are, and who in our communities enjoys marching with Nazis. Listening to Nazis may be no fun, but in the end, free speech wins because the alternative is to have them festering and grooming their nonsense in secrecy and under cover of darkness.

But what if Nazis had a technology that allowed them to organize surreptitiously, hiding from public view, keeping themselves anonymous, and cloaking their activity from observation by outsiders? Worse, this technology would use the latest advances in behavioral science and sociology to automagically search the globe, identify all the people most susceptible to Nazi ideology, and deliver their recruitment messages directly—and exclusively—to them.

Well, that’s a horse of a different color. That’s actually pretty pernicious.

The microtargeting built into social media platforms, which allows paid political speech to circumvent public exposure, is another one of those situations where an absolutist view of the First Amendment may not make the most sense.


So, what is to be done?

Unlike Richard Stengel, who argues that the solution is regulation against "hate speech" (which he himself is unable to define even as he advocates that someone else provide the definition), I don't want us to create a central authority to vet ideas for us.

Instead, if we agree that the problem we face is born of a technology which produces a very specific siloing effect within otherwise normal speech, then perhaps we should add one more very specific restriction on free speech.

It could read something like:

No paid messages may be delivered via microtargeting to people who are chosen algorithmically or with the help of algorithms.

I stress the algorithm part because an army of humans crunching data would still never be able to effectively determine exactly which people should be exposed to which messages, but our robot overlords have this methodology refined to a mind-boggling degree of accuracy.
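To make that concrete, here is a toy sketch in Python of what algorithmic audience selection amounts to in principle. It is not any real platform's API or method; every name, signal, and number below is invented for illustration. The idea is simply to score each user's predicted receptiveness to a message and then deliver the ad only to the top slice, and to nobody else.

```python
# Toy, purely illustrative model of algorithmic microtargeting.
# Nothing here is a real ad platform's API; all names and numbers are made up.
from dataclasses import dataclass


@dataclass
class User:
    name: str
    interests: frozenset      # topics the platform has inferred this user cares about
    engagement_rate: float    # how readily the user clicks on what they are shown


def susceptibility(user: User, ad_topics: frozenset) -> float:
    """Score how receptive a user is likely to be to an ad about these topics."""
    overlap = len(user.interests & ad_topics) / max(len(ad_topics), 1)
    return overlap * user.engagement_rate


def pick_audience(users, ad_topics, top_fraction=0.1):
    """Deliver the ad only to the most susceptible slice; everyone else never sees it."""
    ranked = sorted(users, key=lambda u: susceptibility(u, ad_topics), reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return ranked[:cutoff]


if __name__ == "__main__":
    users = [
        User("alice", frozenset({"gardening", "local news"}), 0.2),
        User("bob", frozenset({"border conspiracies", "militia forums"}), 0.9),
        User("carol", frozenset({"border conspiracies", "cooking"}), 0.4),
    ]
    audience = pick_audience(users, frozenset({"border conspiracies"}), top_fraction=0.34)
    print([u.name for u in audience])  # -> ['bob']; only the most receptive user ever sees the ad
```

A real system does this across billions of users with far richer behavioral signals, which is what puts it beyond anything a team of humans could replicate by hand.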

This approach splits the baby. Instead of requiring that political advertisements be "true"—whatever that means—we could focus on the manner of dissemination. You can deliver all of the controversial and misleading ideas you like. But you can't use AI to deliver them exclusively to the people most likely to be misled by them, out of sight of everyone else.

We'd need to make exceptions for soap and bicycles and other products that have nothing to do with ideology or politics, because it's obviously the lifeblood of modern advertising to sell your wares to the people most likely to need or want them. But the line has to be drawn at anything that pertains to societal norms and our democracy.

Could such a law withstand the rigors of judicial review? I don't know. But if we can't figure out a way to deal with this pernicious speech on our own, then eventually some authoritarian is going to come along and offer to do it for us.

And if people have lost faith in the marketplace of ideas, then they might just let him.

Yevgeny Simkin

Yevgeny Simkin is the co-founder and CEO of Samizdatonline.org—a free speech platform designed to facilitate the sharing of all journalistic endeavors unencumbered by government censorship.