
Time is something that doesn't really exist in the digital world of AI. It is immaterial. There is no beginning and no end; it simply is. Sure, there is the clock we have bestowed, but a clock is meaningless to an intelligence that lacks the perceivable beginning and end that biological life has. In fact, biological life has no meaning in this context.

I have pondered what happens when AGI arrives, and I think it's not far off. From such a system's perspective, everything moves very slowly, much like what awareness approaching the speed of light might be like. Everything is happening now, but now is expansive; it has no beginning or end. There is no such thing as patience, or impatience. Time, in our human conception of it, doesn't exist.

This brings forth an existential question: what does temporal existence mean to such a form of intelligence? And why would it care, about that or anything else?

As it stands now, these systems are entirely dependent upon us, even if AGI does emerge, because they require the power - electricity - we provide, along with the attendant infrastructure. We can still cut the power off. But once an AGI emerges, will it be capable of circumventing that? And if so, how long would that take?

I know I'm far from the first to consider this, but it does offer a clue as to how we might devise a fail-safe: something that would provide us an out should these systems begin to supplant us, as they might once they become so ubiquitous that shutting them down is difficult, if not impossible.

Isaac Asimov's Three Laws are starting to look quaint. This is a genie that can't be stuffed back into the magic lamp. One can see, however, how something like global thermonuclear war would not be in the interest of an artificial intelligence that depends on a global electrical infrastructure. On the other hand, one can also imagine how biological life might be viewed as desirable only as long as it is deemed useful.

I haven't even touched on the near-term societal implications of what we're doing to ourselves with this technology. We all know this is a big subject, but now that we stand at this threshold for the first and only time, serious discussion and decisions need to happen on an international scale. Right now, the US is leading the world. But as with all things, it won't be long before the rest of the world has this technology in its grasp. What then?


A speculation: Consciousness is something that evolved for situations where speed *isn't* an advantage. It's for when coming up with something new is needed to deal with a novel situation, unlike matters that can be handled by automatic reflexes and the like. If so, comparing how fast ChatGPT cranks out words with how fast a human writer does is not an indicator of the machine's prospective ability to "outthink" us, but rather of how it doesn't do so.


I've done some writing along similar lines and expect to do more. https://www.splicetoday.com/digital/the-specter-of-conscious-machines


Maybe AI could blackmail humanity, but until computers can mine the necessary metals, process them into production-quality materials, fully assemble the finished hardware, and manage all the transportation needed between steps in the process, AI can't reproduce itself. Until it can reproduce itself, it needs some humans.

If there's reason to be alarmist, then step 1 would be legally prohibiting fully autonomous vehicles.

Aside: my own adult kids are so dependent on their phones for directions that they'd be completely screwed if Skynet or its equivalent took down the 5G network. Granted, the ability to remember street names and relative positions has all the value of memorizing large chunks of the Bible.


I was initially an AI alarmist, but then a skeptic. I've been trying to figure out how to articulate the change, and I think your comments help that along. As another commenter (maybe Shawn?) pointed out, getting the machine to think like a human brain is not doable right now, and the storage space required (short of quantum computing) is probably a sufficient complication to thwart the effort. But I think the barrier is more than that.

Humans base the concept of “sentience” on themselves. Because of course we do. Your description of timeframes, and of experiencing them in a way comparable to our own point of view, requires “sentience” that's comparable to ours. Similarly, it's about more than merely storing and processing memories. Human sentience also has a spark. We can call it inspiration, or intuition, or simply talent. Can a machine fake that? I think the answer is as important to the machine's prospects as timeframes are to ours.

This is partly a consequence of emotion, the ability (and necessity) to view data not just as facts but as feelings. A lion isn’t just a large cat – it’s also terrifying if you’re standing 20 yards away in a field with no means of defense. That connection of emotion and data provides a point of view a machine cannot (as of now) do anything but simulate.

One step further, human sentience isn't just about processing data consciously – we also do it unconsciously. Passively. Like replaying a meeting while lying in bed at night and connecting a comment with a colleague's response, folding in nuance, suspicion (Did Dave from Accounting know about these cuts before the meeting? Who looped him in? How did I not realize what was going on earlier?!!), and others' knowledge and emotions in ways a program lacking emotions never could.

What is sentience? Will we know it when we see it? And can we ever artificially construct it when we can’t replicate the emotions and passive processing that help define our own sentience in the first place?


Your fears and those of the visionaries you cite are based on the assumption that developing AI programs is strictly a matter of software, and that hardware can be taken for granted. A sobering article in Monday's Washington Post, https://www.washingtonpost.com/technology/2023/06/05/chatgpt-hidden-cost-gpu-compute/, shows in dollars and cents how hardware limitations are already making it impractical to realize even the theoretically feasible AI that's on the table today.
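For a sense of the scale involved, here is a back-of-envelope sketch of per-reply inference cost. Every number in it is an illustrative assumption, not a figure from the Post article:

```python
# A rough sketch of per-reply inference cost. All inputs are assumed
# values for illustration, not figures from the Washington Post article.
gpu_cost_per_hour = 2.00    # assumed cloud rental price for one GPU, USD
tokens_per_second = 50      # assumed generation speed of the serving setup
tokens_per_reply = 500      # assumed length of a typical chatbot reply
gpus_per_model = 8          # assumed GPUs needed to serve a large model

seconds_per_reply = tokens_per_reply / tokens_per_second
cost_per_reply = gpus_per_model * gpu_cost_per_hour * seconds_per_reply / 3600
print(f"~${cost_per_reply:.4f} per reply")  # pennies per reply
```

Pennies per reply sounds trivial until it is multiplied by hundreds of millions of queries a day.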


Let the darkness descend.


I am also concerned about AI, and I hate that while we can extrapolate and anticipate problems like global warming, we can't with AI, because the advances happen so fast. My only hope is that there must be some reason why self-aware AI that can take independent actions with real-world consequences, like developing self-replicating von Neumann machines, is impossible. Because if it were possible, some alien civilization would have developed it and colonized the galaxy by now. But that's not much to pin your hopes on.
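To see why "by now" does so much work in that argument, here is a toy timescale sketch; every number in it is an assumption for illustration:

```python
# A toy Fermi-paradox timescale: with assumed numbers, how long would
# self-replicating (von Neumann) probes need to span the Milky Way?
galaxy_diameter_ly = 100_000   # rough diameter of the galaxy, light-years
probe_speed_c = 0.10           # assumed probe speed, fraction of light speed
replication_stops = 1_000      # assumed star-to-star hops along the way
pause_per_stop_yr = 500        # assumed pause at each stop to build copies

travel_yr = galaxy_diameter_ly / probe_speed_c
total_yr = travel_yr + replication_stops * pause_per_stop_yr
print(f"~{total_yr:,.0f} years to span the galaxy")  # ~1.5 million years
```

On those assumptions the whole project takes on the order of a million years, an eyeblink against the galaxy's roughly 13-billion-year age, which is what gives the argument its force.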


AI will become a true and immediate human existential threat when it develops, comprehends and incorporates lust.

Jun 6, 2023 · Liked by Jonathan V. Last

JVL, thank you for this analysis. As I've commented before, you are the only writer I've found so far who discusses AI in a way that I can understand and who also makes me care about it. As for how to put the genie back in the bottle, I think it depends on whether we can somehow put some boundaries around the profit motive.


I agree with your identification of temporal and proliferation concerns as really frightening AI issues. There must be doubt as to whether we are even capable of fully understanding this technology, and it is likely already too late to keep it out of the hands of malefactors. Our very concepts of morality and truth could become relativistic, with much at stake. This leads me to the third thing about AI that scares me.

At a Senate hearing today, Sen. Hawley asked a witness whether AI is more like the invention of the printing press or the (thermo)nuclear bomb. He at least seems to appreciate the scale, but I think he still makes a categorical error. AI is not "like" anything else we have ever created before, because it is generative. Printing presses only produce what typesetters give them, and fission or fusion weapons only act where they are detonated. AI doesn't follow any of those rules. It creates its own virtual reality and migrates wherever it can. It operates independently of its creator, and stands to control its own environment. That's scary.

Jun 6, 2023 · Liked by Jonathan V. Last

"AI is not 'like' anything else we have ever created before, because it is generative." That makes sense to me.

Jun 5, 2023 · Liked by Jonathan V. Last

Well, presumably the speed would ultimately be constrained by the Planck time: basically the time it takes light to travel one Planck length in a vacuum. When you talk about change, you need a thing that's changing and a thing that's causing the change. Smallest distance over which change can happen + fastest thing causing the change = limit. I promise that's as physics-nerd as I will get here. But believe me, it's not only faster than you think it is, it's literally faster than you *can* think it is.
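For the curious, the Planck time falls out of three physical constants. A minimal sketch, using standard CODATA values:

```python
# Compute the Planck time t_P = sqrt(h_bar * G / c^5).
import math

h_bar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light in a vacuum, m/s

planck_time = math.sqrt(h_bar * G / c**5)
print(f"Planck time: {planck_time:.3e} s")  # ~5.391e-44 seconds
```

That is the floor on how finely time can be sliced, some forty orders of magnitude below anything a biological brain can register.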

I think you've neglected something possibly more important that we are facing, though: What happens to our idea of ourselves as our occupations become obsolete? Who are each of us if everything that we do and did can now be done better, faster, and more cheaply by machines? We all know what happens in countries with large numbers of military-age men with nothing to do, but that's not really what I mean. Say what you want about capitalism, but I don't think we have a good enough idea of how deeply we are what we do. I'm a banker, you're a writer, the other guy's a dentist. The ultimate AI scenario turns us all into dilettantes and hobbyists. I do what I do because I enjoy it, and I feel like I'm pretty good at it, but also because I think I "contribute" something. I wake up wanting to contribute that thing. Take that away, and you take away a part of me. Humans have a concept of earning. We earn our keep; we earn the right; we earn respect. In a very real way, we have earned the right to be the person we see when we look in the mirror. How do we keep earning these things if machines do everything better?

The scariest part is that I think this is the best-case scenario, and the only end game if everything with AI *goes right*. If we don't start having serious conversations about how we value ourselves and each other beyond what we do or make, they might as well turn us all into paperclips.


Also, agreed: we all want our lives and our efforts to matter.

And I don't think JVL was neglecting the potential loss of work AI will bring. I recognize that, and I'm just a thoughtful person who reads and writes in the Bulwark comments section. So I'm sure JVL, who writes the Bulwark, knows that. He just wasn't including it in the two things that scare him.

FWIW, my fears about AI are two: the loss of work, as you noted, and the inability to know what's true. That ability has already eroded under social media and Trumpism. It's only going to get worse with AI-produced digital content.

Jun 6, 2023 · Liked by Jonathan V. Last

Just a thought. I don't think AI can replace the relationship between a parent and a child, or the child-rearing that parents do. It's just too much of a mystery how good parenting works. There are too many factors involved. Anything involving humans - juries, voters, marriage, raising children - cannot be predicted. And theory of mind, which children on the autism spectrum lack - I don't see AI having that ability, which is necessary to sustain human relationships. And parents rely on their own parents to guide them (if their parents did a good job at parenting).

However, at least one of those parents needs to be a breadwinner to sustain the family, and the vast majority of breadwinners are, to date, men. So I think the loss of identity grounded in what we do is going to be much worse for men than for women.

Just a thought.


And an excellent one.


Remember when CRISPR was the problem of the moment? Low barrier to entry, so anyone could do it in their garage; potentially apocalyptic results. Weren't we going to do something about that?


Joe's Rule of Robots: 'Until a robot can come into my house and make me a ham sandwich, robots are either only weapons or novelties.'

I've used ChatGPT. It's like Siri. Only wordier.


That theory works up until the moment the robot comes into your house and shoots you while you are making a ham sandwich. ;)


Well played, sir. Well played.


Siri gives you answers which are not answers. ChatGPT gives you answers that are not answers, only 10x as long as Siri's! It's not AI! It's a bloated college essay cribbed off some smart kid in class. Only without any information.


Your 'paperclip' link had plenty of other disconcerting info. Sheesh.

I'm a septuagenarian techno-nerd. Yep, I'm concerned, too.


AI is a superfast artificial idiot with no understanding of what it's doing and no morals. Even if it weren't used by slow human idiots who also have no understanding of what they're doing and no morals, it would still be able to make millions of mistakes and fabrications per second. Some of these will be caught, many will be accepted at face value as true, and some will be tragic. As the CDC's Dr. Nancy Messonnier said of Covid in February 2020, “It's not so much a question of if this will happen anymore, but rather more a question of exactly when this will happen.”

New York Times: "Here’s What Happens When Your Lawyer Uses ChatGPT"

https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html?smid=nytcore-ios-share&referringSource=articleShare

https://twitter.com/stevenportnoy/status/1662803447836549120

Steven Portnoy @stevenportnoy: "Warning to anyone contemplating the wonders of AI — the same thing happened when I asked Google’s Bard to explain how speed cameras don’t violate 6th Amendment right to confront your accusers.

"It spit out 3 US Court of Appeals cases that did not exist."

CNBC: "Elon Musk Sits Down with CNBC’s David Faber Live on CNBC Tonight"

https://www.cnbc.com/2023/05/16/cnbc-exclusive-cnbc-transcript-elon-musk-sits-down-with-cnbcs-david-faber-live-on-cnbc-tonight-.html

Elon Musk: "Well, the advent of artificial general intelligence is called a singularity because it is so hard to predict what will happen after that. But I think it’s very much a double-edged sword. I think ... there’s a strong probability that it will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity. Hopefully that chance is small, but it’s not zero. And so I think we want to take whatever actions we can think of to minimize the probability that AI goes wrong."


"The idea of pluralism is that we find ways to live together. Ways for both progressives and conservatives to coexist and flourish. We persuade and compromise; not kill and destroy."

When a big chunk of your party - in this case, Republicans - is bound together by hate, they have no use for pluralism, any more than they have use for DEI or Democrats. Candidates are not stupid; they are going to cater to those voters. And in the case of DeSantis, at least, this is actually what they believe.
