
This doesn't sound good.

‘Godfather of A.I.’ leaves Google after a decade to warn society of technology he’s touted

https://www.cnbc.com/2023/05/01/godfather-of-ai-leaves-google-after-a-decade-to-warn-of-dangers.html

Hinton was sounding the alarm even before leaving Google. In an interview with CBS News that aired in March, Hinton was asked what he thinks the “chances are of AI just wiping out humanity.” He responded, “It’s not inconceivable. That’s all I’ll say.”


Pornography is a gross distortion of reality at its foundation. It's not real life. Everything is fabricated: the passion, the bodies, and probably many of the female orgasms. The idea that men won't accept it because they're not real women is a bit silly. The women they watch create a porn-friendly persona; my guess is these women are very different in their private lives. AI will soon be able to create complex fake avatars/personalities that engage paying customers, replicating that fan-based business model. Viewers will simply transfer their desire.


JVL, the piece from Unreality about AI porn and the prediction that people (uh, sorry, but probably mostly men) will get attached to porn characters even though they KNOW they aren't real -- that piece reminded me of a much-cited discovery from many years ago. An academic group was trying to make a "computer therapist." At that point it was just a computer where the patient would type his problems and feelings and the computer would answer with things like "How does that make you feel?" and other anodyne responses. The project head was trying it with either his secretary or one of his students, and after a while she asked him to leave the room while she was "talking" to her "therapist." She absolutely knew it was a computer, that the responses were programmed to be applicable to practically any situation, and that she and the computer had no real relationship. But she got attached to the encounters and wanted her revelations to be private, as in real therapy! Imagine what can happen with AI characters who look real and appear to be interacting in a genuine way...


You're referring to Joseph Weizenbaum and the ELIZA program he wrote in 1966.
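For the curious: the trick was almost embarrassingly simple. Below is a minimal Python sketch of the keyword-matching approach ELIZA used; the rules and replies here are invented for illustration (Weizenbaum's original worked from a much richer script of ranked decomposition rules), but the mechanism is the same: match a keyword, reflect the user's words back inside a canned template.

```python
import random
import re

# Illustrative rules only -- not Weizenbaum's actual script.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
    (r"\bI am (.+)", ["Why do you say you are {0}?"]),
]
DEFAULTS = ["How does that make you feel?", "Please go on.", "I see."]

def respond(user_input: str) -> str:
    """Return a canned 'therapist' reply by reflecting matched keywords."""
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

if __name__ == "__main__":
    print(respond("I feel lonely"))      # e.g. "Why do you feel lonely?"
    print(respond("It was a bad day"))   # falls through to a default reply
```

No understanding anywhere in there -- and people still asked for privacy with it.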


I think people who exploit the “trans” fad to make a buck from sterilizing children should do hard prison time but... execution? Huh? And *public* execution? Guillotine, maybe?

What’s wrong with these people?


There's a reason "Georgetown comp-sci professor Cal Newport ... is much more sanguine about AI than" you are: as someone who specializes in the field of computer science, he's knowledgeable about the history of artificial intelligence, and this knowledge gives him perspective that can't be shared by those outside that specialty. Newport's position indicates that he has the perspective to know that AI didn't arrive on the scene last year and has in fact been driving huge changes in our society ever since its earliest widespread success, the elimination of the assembly line. Technological change has always led to changes in how things are done, and sometimes to mass displacement of workers that has in turn led to further social upheaval; think of the various "revolutions" brought about by the printing press, the steam engine, the internal combustion engine, the harnessing of nuclear fission, and now advances in semiconductor technology. So no, we have always conformed to technology rather than the other way around. But technology isn't some alien from another planet; it's a product of our own intellect, the same intellect that allows us to adjust to change when we don't panic. Newport's "thesis"--that "large-language models function ... by guessing the next word based on the preceding words according to the data set they've been trained on"--isn't speculation but a matter of his knowledge as a computer scientist. It is this knowledge that allows him to face the situation with comparative equanimity.
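Newport's description can be made concrete with a toy example. The sketch below is a bare-bones bigram counter in Python, invented purely for illustration: real LLMs use deep neural networks over subword tokens and enormous contexts, not word-pair counts, but the underlying task -- predict the next word from what came before -- is the same.

```python
from collections import Counter, defaultdict

# Toy illustration of "guess the next word from the preceding words."
corpus = "the cat sat on the mat and the cat slept".split()

next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("the"))  # 'cat' -- seen twice after 'the', vs. 'mat' once
```

Scale that basic idea up by many orders of magnitude and you have the family of systems Newport is describing.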


Prof Newport, by conveniently bounding his discussion to how one implementation of an LLM actually works, skirts some issues that are worth considering. The quote he uses from the NYT article is a bit out of context. The authors of that article raise some really relevant points that Prof Newport doesn't attempt to address in his New Yorker article. I guess one can believe that any given LLM chatbot implementation is unlikely to massively displace humans from the workforce while still being concerned about other aspects of what we train these models to do. Today we have an attention economy that is incredibly detrimental to reasoned discourse and is eroding the base of our democratic experiment, most of it fueled by AI-driven outrage/engagement mechanisms. There are things we can be concerned about once these generative AI models get applied in other areas. You are right, and I agree with you, that we need to use our intellect and knowledge to deal with the impacts of these new technologies. As such, it's not enough to decline to prognosticate regarding the impact a new technology may have on society. I think we have an obligation to learn from our history and avoid "predictable" surprises that are likely to have a detrimental impact on our society as best we can. This is an area of responsibility that those of us who bring technological innovations into the world have long skirted.


You, like JVL, are confusing technology with technology's societal implications. It's the societal implications of AI that JVL is concerned with; but he wrongly assumes that his fears concerning the implications for society of this latest class of AI applications can be meaningfully compared with an expert's encapsulation of one of the technologies--large language models--that itself undergirds these applications. In doing this, he misreads a description of the technology as a "thesis" about its implications for society. This lack of technical expertise leads JVL to paraphrase the expert's encapsulation of the technology in a way that trivializes it (simple guesswork); then, rightly seeing generative AI's NON-trivial implications for society, he worries that the expert isn't alarmed where he should be. The point I'd hoped to make was that only if we avoid this category mistake can we take a sober look at what the technology can and, crucially, cannot do; only by doing this can we achieve a clear-eyed view of which sectors of society will be disrupted. Only experts can help us avoid the fear that ignorance engenders; we'll need experts in economics, to be sure, but we'll also need to recognize technical expertise in AI for what it is.


I always say, "if your job can be replaced by ChatGPT (or any other LLM), it was a job no human should have been bothering to do in the first place."


You're right, you missed the boat on that tagline.


I hate to be the only uninterested person here, but I don’t see AI as replacing anything I care about. If I can determine that the content I’m reading is from AI, I will dump that news source in favor of something that is produced by real people. I think that after a bit of exposure, our subconscious minds will tell us what is real and what is not. But God help young people and others who can’t tell the difference.


Plenty of people won't be able to tell the difference. We'd have no Qanon adherents if everyone could see things clearly.


I have a website with 1,600 scientific papers. Last week, I installed a dedicated copy of ChatGPT. It sees only the papers at my site, not the whole internet. It is astounding! It gives you instant, out-of-the-box expertise in a subject you know nothing about (cold fusion, in this case). Example:

ME: Summarize the paper by Metzler et al., "Observation of Non-exponential Decay in X-ray and gamma Emission Lines from Co-57."

ChatBot: The paper by Metzler et al. discusses the observation of non-exponential decay in X-ray and gamma emission lines from Co-57. The data set is consistent with no incremental neutron emission during the experiment with a 1 σ upper limit of 0.44 nSv/h. The paper discusses basic considerations and hypotheses based on delocalization of the nuclear excitation as a result of excitation transfer. The paper also discusses weaker lines in the spectrum and the decay of the Sn K α X-ray line. Non-exponential decay is observed for the 14.4 keV gamma line, and for the Fe K α and K β X-ray lines, by the front-side Amptek X-123 detector, which appears to be anomalous. The paper discusses possible sources of problems and interpretations, and points to future experiments that might provide clarification and confirmation.

ME: List some papers that discuss screening effects.

ChatBot: There are several papers listed in the provided context that discuss screening effects, including:

- "Effects of Electron Screening on Low-energy Fusion Cross Sections" by H.J Assenbaum, K Langanke, and Rolfs

- "Enhancement of deuteron-fusion reactions in metals and experimental implications" by A Huke, K Czerski, P Heide, et al.

- "Experimental techniques for the investigation of the electron screening effect for d+d fusion reactions in metallic environments" by A Huke, K Czerski, and Heidea

- "Forbidden nuclear reactions in astrophysics" by P Kálmán and T Keszthelyi

- "Screening in cold fusion derived from D–D reactions" by H Hora, J.C Kelly, J.U Patel, Mark A Prelas, G.H Miley, and J.W Tompkins.

Check it out! Become an instant expert in cold fusion and other areas of nuclear physics. You don't have to have a clue what a screening effect is. See:

https://lenr-canr.org/wordpress/?page_id=2988
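For readers wondering how a chatbot can be confined to one site's papers: the usual pattern is retrieval-augmented generation, where a search step over the local corpus picks out relevant passages and only those passages are handed to the language model as context. The Python sketch below shows just the retrieval half using a simple TF-IDF ranking; the filenames and paper snippets are placeholders, and the actual setup at lenr-canr.org may well differ.

```python
# Hypothetical sketch of the retrieval step behind a "sees only my papers"
# chatbot. Rank local documents against the question, then hand the top
# hits to an LLM as context (the LLM call itself is omitted here).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

papers = {  # stand-ins for the site's 1,600 papers
    "metzler2023.txt": "Non-exponential decay in X-ray and gamma emission from Co-57 ...",
    "assenbaum1987.txt": "Effects of electron screening on low-energy fusion cross sections ...",
    "hora1993.txt": "Screening in cold fusion derived from D-D reactions ...",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(papers.values())

def top_matches(question: str, k: int = 2) -> list[str]:
    """Rank the local corpus against the question; return the top-k filenames."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    ranked = sorted(zip(papers.keys(), scores), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:k]]

print(top_matches("Which papers discuss screening effects?"))
# The matched passages -- not the whole internet -- become the model's context.
```

This is why the bot can answer in such detail about one corpus while knowing nothing beyond it.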


In a way, AI in itself is porn. It's just words and ideas predicated on models. Put another way, AI is a tautology. Models are, at their best, only a model of reality. If humans want to just stop thinking, or have their thinking done for them, they can rely on porn, or AI. AI should be seen as a tool, very useful, but just one tool in the toolbox.


And to be clear... AI, like porn, is a fantasy, a fantasy of a mind. But unlike a mind, or at least a functioning mind, AI produces certainty in its words. It does not hem and haw. It does not self-question.


“When It Comes to OnlyFans, Humans Can Outcompete AI”

This sentence depresses me on so many levels, but I'm pretty sure framing flesh and blood women as "in competition" with AI is our ticket to South Korea-esque birthrates. People tend to opt out of unwinnable games.

Also, if a woman does manage to outcompete the machines, I doubt she will obtain John Henry folk-hero status.


The 2024 election, at least, will be the first American one with AI playing a central role. The GOP will desperately rely on it; it's the only real weapon they have.


It is axiomatic that technological progress is increasingly outpacing society's ability to manage it.


Please, PLEASE, please give us the option of listening to your interesting articles on audio!!!


I second this!


Jonny Darko hits it out of the park, again.


"the history of technology suggests that the technology won’t conform to us" - now THAT'S frightening . . . Still, please write more about this.


What will happen to human society when AI is as smart as MIT graduate students?

Well, how many US presidents have had graduate degrees from MIT? Or CEOs? Or generals?

Don't worry about AI getting good at solving problems. Worry about AI getting good at schmoozing and politicking.

No one who has achieved any influence in society will see any benefit in training it to do that, so it may well never happen. AI may well achieve the equivalent social power and influence of postdocs, and never more than that.
