Only the Blockchain Can Save Us from Deepfakes
Your phone rings, or perhaps it's a Skype or FaceTime call. You answer and it's your mom. She's calling to see how you are, and she's forgotten or misplaced some vital piece of info that she needs you to remind her of.
Without hesitation, you give her the info. Later, your identity is stolen or your bank accounts are cleaned out. How could your OWN mother have been party to such a crime?
But of course she had nothing to do with it. You were duped by a deepfake in one of a thousand nefarious uses for this up-and-coming technology.
If your name is Average Jane, then you had probably never heard the term "deepfake" until this "deepfake" of Nancy Pelosi hit the news cycle last week.
If the Drunk Pelosi story got your attention, I’d like to make things a bit scarier for you: That fake wasn’t at all “deep.” I’d refer to it as a “shallow fake,” actually. In the world of actual deepfake technology, Drunk Pelosi was amateur hour.
And what's scarier still is that we're in the nascent stages of this tech. If you look at the latest advances in what's possible, we're still in the Model T stage of development. Don't let the music lull you into a false sense of security. You might not believe how little input information is necessary to create a convincing re-creation of a familiar face that can be made to say absolutely anything.
Where Did Deepfakes Come From?
This technology originated in the desire to improve how computers recognize faces and voices (as well as to make it possible to depict ever more convincing computer-generated renderings of people in film).
Your phone can now identify your face with astonishing precision. When you dictate messages to Google or Apple, this is the underlying technology that makes the dictation so accurate. When a young Carrie Fisher appears at the end of Rogue One, this is the direct result of the evolution of this technology. Had the makers of Rogue One waited another year, the computer-generated Fisher would have been substantially less fake looking.
But technology is just a tool. And tools can be used for good, or for ill. The scenario I described at the outset, a rendering that looks and sounds indistinguishable from a close relative or friend, isn't viable yet. But it will be, extremely soon.
In the meantime, to tide you over, Jordan Peele has crafted a really impressive (and amusing) deepfake of Barack Obama to demonstrate how videos featuring computer-generated people can be used to fool the public to various untoward ends.
Pics or It Didn’t Happen
That's been the meme on Twitter. However, that's about to be replaced with "Pics or no pics, it didn't happen."
Garry Kasparov, who understands chess, geopolitics, and technology extremely well, says, "The point of modern propaganda isn't only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth."
So the objective of these fakes is not so much to fool people into believing false narratives—although that is indeed what will happen initially—but rather to foster an atmosphere of such total distrust in what we witness that humanity loses its ability to determine what is indeed true. Which, by the by, is the end goal of all aspiring autocrats. So if you're wondering "who benefits?" the answer is: these guys.
When you can’t trust your own eyes and ears, you’ll have to defer to theirs.
What’s the Solution?
Several of my clients have started to broach the idea of a solution that I'm not aware of anyone working on yet. Based on what I'm seeing in the fields of blockchain and cryptocurrency, I suspect we'll soon start seeing technology that aims to watermark (for lack of a better term) content and certify its authenticity.
People who appear in photos or audio recordings would run those recordings through software that creates a unique hash from the file's exact byte signature. That hash would then be stored in some decentralized database for reference, in case someone later claims the media was faked.
At which point, any media lacking this verification will be assumed to be fake.
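To make that idea concrete, here's a minimal sketch in Python of what the hash-and-register step might look like. This is my own illustration, not any existing product: the ledger dictionary is a stand-in for whatever decentralized store ends up being used, and the names fingerprint, register, and verify are hypothetical.

```python
import hashlib

# Stand-in for a decentralized ledger. In a real system this would be
# a blockchain or other tamper-evident, append-only store.
ledger = {}

def fingerprint(path):
    """Compute a SHA-256 hash over the file's exact bytes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register(path, owner):
    """Record the media's fingerprint so it can later be verified."""
    h = fingerprint(path)
    ledger[h] = owner
    return h

def verify(path):
    """True only if this exact file was previously registered.
    Altering even a single byte produces a different hash."""
    return fingerprint(path) in ledger

# Usage: register("mom_call.mp4", "jane") at capture time; later,
# verify("suspicious_copy.mp4") returns False if anything was altered.
```

Note the catch: if so much as one byte changes, say because a platform re-encodes a video on upload, the hash no longer matches, which hints at why this would be awkward in practice.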
It's going to be clunky, and it's hard to imagine exactly how the common user would adopt it. Will you want to spend an extra two minutes processing all your photos before you shove them into Facebook? Will Facebook offer it as a feature for you? (That's totally plausible, actually.)
At this point, it’s a very real arms race and the bad guys have a massive head start.
The amount of damage deepfakes can do is hard to imagine. Most people seem to already be exhausted by the 24-hour news cycle—and that’s while they still have some marginal confidence in their own ability to distinguish the signal from the noise.
You don't have to speak to many people before you run into someone who already takes all news related to, say, nutrition and exercise with a massive salt lick. And those stories are at least nominally based on science and data. What will the world look like when people only believe the videos and images that comport with their worldview and political bias?
It's hard to overstate the level of destruction these media can cause. Imagine a world where anyone with a grievance can conjure up videos of whomever they wish in whatever position they want: uttering racist epithets, performing a lewd act, molesting a child. And once the video is out, it's just one person's word against a very convincing-looking video.
Human beings are not designed for this type of information processing. We have no way to operate in a world where “trust, but verify” is more or less impossible.
Our biology is still screaming at the uncanny valley. When I see that rendering of Princess Leia, my Spidey sense goes bonkers (if I may mix Disney property metaphors) because she's just ever so slightly not "right" looking. I can't tell you exactly what the problem is. Something is just a bit off, and I know that because my (and your) ancestors spent a couple of million years refining our facial-recognition hardware so that we could detect at a glance whether someone was a danger to us. So even a slight deviation from what we expect gives us the creeps.
But as I say, these imperfections will all be worked out very soon, and I gotta be honest: if this Bryan Cranston called me on Skype and asked for my bank account number, I might just give it to him.
And in case you're hoping the government will save the day, I'm sad to say there's not a whole lot it can do. The level of technological sophistication required to use this emerging technology is extremely low, and the bad actors will be international, so mounting prosecutions will be incredibly difficult. And that's assuming it's even possible to write laws that address this issue without running afoul of the First Amendment.
World War I was kicked off by the murder of a real human being. World War III might be brought about by the murder of a computer-generated character.