1. Skynet
Last week there was a minor panic over an AI story that was too good to be true. Supposedly the Air Force was running a simulation in which an AI-powered drone was required to ask permission of a human operator before deploying deadly force. And so, the story went, the drone decided to kill the operator, the one obstacle standing between it and its objective.
It’s an almost perfect Paperclip Maximizer problem, and it turned out (probably) not to be true. The Air Force retracted the statement, saying that the colonel who made it “misspoke.”
If true, that’s great and I’m happy to not have self-aware military hardware (yet).
But there are two fundamental facts about artificial intelligence that scare me. And I think they ought to scare you.
How do you usually think about time?
Time is, in one sense, a way in which we measure change.[1] As such, there are three common scales we use.
The first is cosmic time. This is the scale we use to measure the movement and change in the known universe where the spaces between particles are so vast that rates of observable change take incredibly long periods. At the scale of cosmic time, change takes place over hundreds of millions and billions of years.
In the aggregate, the scale of change is immense: You go from nothing but cosmic dust to a star, to the death of that star. But the rate of change is so slow that, from our perspective, barely anything is ever changing.
The second scale is geologic time. We use this frame to understand changes on our own planet—both in the planet itself and the emergence of life on earth. Geologic time is very long—John McPhee famously described it as “deep time”—but much shorter than cosmic time. In geologic time, we deal in the millions and hundreds of millions of years.
Geologic time moves slowly enough that we can’t see it unfolding, but fast enough that we can observe the changes that have occurred in the relatively recent past.
Which brings us to our final scale.
As humans, we have two frames for time. There is the time in our own lives—which we measure in minutes, days, and years. And then there is the measure of time for our species—which we measure in generations, centuries, and millennia.
The human scale of time is a form of what we’ll call biologic time. That is: The units of time measure the changes in carbon-based life as it evolves.
Biologic time moves very quickly because it is determined by the pace of mutation, and genetic mutation happens much faster than stars collapse or glaciers advance. As a wise mathematician once said, “Genetic power is the most awesome force ever seen on this planet.”
Looking at these three scales of time, it’s pretty clear that power is closely tied to the rapidity of change. The shorter the time scale, the more powerful the movement.
Another way of thinking about AI is as the creation of a new time scale. For the purposes of our discussion, we’ll call it Digital Time.
But consider this for a moment: How does a computer experience time?
When you lob a question to ChatGPT, put aside the quality of the machine’s answer. Think about how fast it answers.
Christopher Hitchens was a famously fast and fluid writer. He could give you 1,200 booze-fueled words for a column in an hour. Maybe a half hour.
ChatGPT can give you 1,200 words in a fraction of a second.
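If you want to put a crude number on that gap, here’s a back-of-envelope sketch. The 1,200-word figure comes from the comparison above; both timings are my own ballpark assumptions, not measurements:

```python
# Back-of-envelope drafting-speed comparison.
# The 1,200-word figure is from the text above; both timings are rough guesses.

words = 1_200

hitchens_seconds = 30 * 60   # assume the optimistic half hour
chatgpt_seconds = 0.5        # assume "a fraction of a second" means half of one

human_rate = words / hitchens_seconds    # ~0.67 words per second
machine_rate = words / chatgpt_seconds   # 2,400 words per second

print(f"Hitchens: {human_rate:.2f} words/sec")
print(f"ChatGPT:  {machine_rate:,.0f} words/sec")
print(f"Ratio:    ~{machine_rate / human_rate:,.0f}x")   # ~3,600x
```

Even granting Hitchens his fastest half hour, that’s a gap of more than three orders of magnitude.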
Now think about the scale of how fast AI has developed. It took humans about 300,000 years to go from Homo sapiens to the first written word. It took fewer than 80 years to go from the first mainframe computer to ChatGPT. And then it took less than four months to go from ChatGPT to GPT-4.
Digital time moves much faster than biologic time. How much faster isn’t quite clear, but it would not surprise me if the rough order of magnitude is something like:
Biologic Time : Digital Time :: Geologic Time : Biologic Time
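Here’s a minimal sketch of what that proportion would imply. Every unit size below is an illustrative assumption on my part; the point doesn’t depend on the exact numbers:

```python
# If Biologic Time : Digital Time :: Geologic Time : Biologic Time, how long
# is a "digital generation"? All unit sizes here are illustrative assumptions,
# not claims from the text.

GEOLOGIC_UNIT_YEARS = 10_000_000   # assume: an epoch-scale geologic unit
BIOLOGIC_UNIT_YEARS = 25           # assume: one human generation

step_down = GEOLOGIC_UNIT_YEARS / BIOLOGIC_UNIT_YEARS   # 400,000x per scale

digital_unit_years = BIOLOGIC_UNIT_YEARS / step_down
digital_unit_seconds = digital_unit_years * 365.25 * 24 * 3600

print(f"Step-down ratio: {step_down:,.0f}x")
print(f"Implied digital 'generation': ~{digital_unit_seconds:,.0f} seconds")
# ~1,970 seconds, i.e. about half an hour: a human generation's worth of
# change over every lunch break, if the proportion holds.
```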
That disparity in time scale creates a massive power imbalance. And unless there is a hard limit to AI—some as-yet-unknown ceiling that makes it impossible for AI to progress past a certain point—eventually this power imbalance will present significant dangers.[2]
Unless we can contain it.
After all, we’ve lived with nuclear weapons for a long time without blowing ourselves up.
2. Genies and Bottles
We’ve been able to keep the nuclear genie bottled up for 80 years because nuclear weapons are resource-intensive.
First, you need the intellectual resources to create the engineering pathway. Then you need physical resources—not only of certain elements but of highly technical components used to refine them. And finally, you need mastery of an entire other branch of technology in order to deliver the weapon, either via rocket or plane.
So once nukes became reality, the countries that possessed them had many pressure points they could leverage to prevent their spread.
Which brings us to the second thing that worries me: None of the above is true with regard to AI.
Computer technology is cheap and commonplace. You do not need a system of hardened underground bunkers for your computers the way a rogue nuclear project does for its centrifuges.
The expertise needed to play with AI is, compared to the expertise needed to dabble in nuclear weapons, trivial.
And AI exists in open-source forms—a concept which is absolutely anathema to nuclear weapons technology.[3]
What this means is that the AI genie, once it arrives, will neither be contained nor restrained. Maybe many countries will try to do so. But someone, somewhere, will be incentivized to loose it.
We saw two examples of this dynamic last week.
First, in Japan the government declared that it will not treat AI training as a form of copyright infringement. As Ben Thompson points out, this means that even if other systems of law do try to constrain AI, developers now have a safe haven from which to operate. (The second example, Abu Dhabi’s open-source release, is described in the third footnote below.)
Some form of this dynamic is inevitable with every aspect of AI development.
Believe me: I know all of the explanations as to why AI is no big deal. The computer doesn’t really know anything! It’s just guessing at the most likely next word based on the training inputs! Large language models aren’t real “thought”! Skynet isn’t real! That killer drone story was just misspoken!
I get it. All of those things are true for today. And they may be true for tomorrow, too.
But none of them changes those two fundamental facts about the nature of time scales and the spreadability of this technology.
3. Teaching in a Red State
Here’s a longread from Anne P. Beatty:
The street divides us, one group of protesters on one sidewalk, one on the other. Two sets of signs: Masks are child abuse, Masks keep our children safe. No CRT in our schools, Teach our children the truth! You work for us, We support our teachers. A man with a clipboard guards the plate-glass double doors of the building. He knows our numbers and tells us when we’re allowed inside to speak.
I’m looking at all these signs as I think about what I’ll say, ideas I’ve typed up and folded inside my back pocket. Down the street, I can see the corner of Eugene and Florence where, as an elementary school student, I waited for my bus.
Here in Greensboro, North Carolina, where I grew up, people have been gathering monthly for protests outside school board meetings. Here, as elsewhere, people disagree about banning books, teaching critical race theory, and arming teachers. This is a city where first a high school, then the district, upheld a teacher’s decision to assign Jesmyn Ward’s novel Salvage the Bones after parents challenged it, but this is also a city where, a few months later, a former Marine intentionally drove his car into a woman escorting patients inside our county’s sole abortion clinic. This is a Southern city, where some things bloom, and others are buried. . . .
Across the street, the other group of protesters have a trifold pasted with the faces of smiling children killed at Sandy Hook, because they believe that if there were more police officers in the school that day, or teachers with guns, those children would still be alive. They are a local chapter of a national movement called Take Back Our Schools, a name that suggests a possessive nostalgia for a mythical past, a territory to defend against invaders. Once I looked up the website for their local school board candidate and found her blog post titled, “Seriously, Who Are These People and How Did They Get in Our Schools?” I’m a high school English teacher, so it was hard not to take this personally.
This is one of those pieces that challenges you. Beatty is progressive. More progressive than my own tastes. But here’s the thing: People like Beatty exist. And they have the right to keep existing, even if their politics differ from yours or mine.
That’s one of the things I find troubling about Ron DeSantis’s campaign, which is explicitly based on Florida being a place “where woke goes to die” and how he will “destroy leftism.”
That’s not how pluralism works.
The idea of pluralism is that we find ways to live together. Ways for both progressives and conservatives to coexist and flourish. We persuade and compromise; we don’t kill and destroy.
[1] We’re going to be talking purely in non-relativistic terms here. Don’t @ me, astrophysics nerds.
[2] Think here of a boundary like the speed of light. Going faster than light isn’t really possible unless we discover some form of quantum entanglement that is so far over the horizon that no one really even understands what it might mean. Maybe there’s a limit like that for computers and AI. Fingers crossed!
[3] The government of Abu Dhabi has developed its own AI and released it as open-source. Try to imagine how unthinkable a move like that would be with nuclear weapons tech.
Time is something that doesn't really exist in the digital world of AI. It is immaterial. There is no beginning and no end. It simply is. Sure, there is the clock that we have bestowed, but it is meaningless to an intelligence that lacks the perceivable beginning and end that biological life has. In fact, biological life has no meaning in this context.
I have pondered what happens when AGI arrives. I think it's not far off. For these systems, everything moves very slowly from their perspective, much like what awareness might be for something approaching the speed of light. Everything is happening now, but now is expansive. It has no beginning or end. There is no such thing as patience. Or impatience. Time, in our human conception of it, doesn't exist.
This brings forth an existential question: What does temporal existence mean to such a form of intelligence? And why would it care, about that or anything else?
As it stands now, these systems are entirely dependent upon us, even if AGI does emerge, because they require the power we provide (electricity), along with the attendant infrastructure. We can still cut the power off. But once an AGI emerges, will it be capable of circumventing that? And if so, how long would that take?
I know I'm far from the first to consider this, but it does offer a clue as to how we might devise a fail-safe: something that would provide us an out should these systems begin to supplant us, as they might once they become so ubiquitous that shutting them down is difficult, if not impossible.
Isaac Asimov's Three Laws are starting to look quaint. This is a genie that can't be stuffed back into the magic lamp. One can see, however, how something like global thermonuclear war would not be in the interest of an artificial intelligence that depends on a global electrical infrastructure. On the other hand, one can also imagine how biological life might come to be viewed as desirable only so long as it is deemed useful.
I haven't even touched on the near-term societal implications of what we're doing to ourselves with this technology. We all know this is a big subject, but now that we are at the threshold, for the first and only time, serious discussion and decisions need to happen on an international scale. Right now, the US is leading the world. But as with all things, it won't be long before the rest of the world has this in its grasp. What then?
A speculation: Consciousness is something that evolved for situations where speed *isn't* an advantage. It's for when coming up with something new is needed to deal with a novel situation, unlike matters that can be handled by automatic reflexes. If so, the speed with which ChatGPT cranks out words compared to a human writer is not an indicator of the machine's prospective ability to "outthink" us, but rather an indicator that it isn't doing so.