On June 13, 1863, a curious letter to the editor appeared in The Press, a then-fledgling New Zealand newspaper. Signed "Cellarius," it warned of an encroaching "mechanical kingdom" that would soon bring humanity to its yoke. "The machines are gaining ground upon us," the author ranted, distressed by the breakneck pace of industrialization and technological development. "Day by day we are becoming more subservient to them; more men are daily bound down as slaves to tend them, more men are daily devoting the energies of their whole lives to the development of mechanical life." We now know that this jeremiad was the work of a young Samuel Butler, the British writer who would go on to publish Erewhon, a novel that features one of the first known discussions of artificial intelligence in the English language.
Today, Butler's "mechanical kingdom" is no longer hypothetical, at least according to the tech journalist Karen Hao, who prefers the word empire. Her new book, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, is part Silicon Valley exposé, part globe-trotting investigative journalism about the labor that goes into building and training large language models such as ChatGPT. It joins another recently released book, The AI Con: How to Fight Big Tech's Hype and Create the Future We Want, by the linguist Emily M. Bender and the sociologist Alex Hanna, in revealing the puffery that fuels much of the artificial-intelligence business. Both works, the former implicitly and the latter explicitly, suggest that the foundation of the AI industry is a scam.
To call AI a con isn't to say that the technology is not remarkable, that it has no use, or that it will not transform the world (perhaps for the better) in the right hands. It is to say that AI is not what its developers are selling it as: a new class of thinking (and, soon, feeling) machines. Altman brags about GPT-4.5's improved "emotional intelligence," which he says makes users feel like they're "talking to a thoughtful person." Dario Amodei, the CEO of the AI company Anthropic, argued last year that the next generation of artificial intelligence will be "smarter than a Nobel Prize winner." Demis Hassabis, the CEO of Google's DeepMind, said the goal is to create "models that are able to understand the world around us."
These statements betray a conceptual error: Large language models do not, cannot, and will not "understand" anything at all. They are not emotionally intelligent or smart in any meaningful or recognizably human sense of the word. LLMs are impressive probability gadgets that have been fed nearly the entire internet, and produce writing not by thinking but by making statistically informed guesses about which lexical item is likely to follow another.
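To make that mechanism concrete, here is a deliberately crude sketch of next-word statistics in Python. It is a toy bigram model, not how production LLMs are actually built (those use neural networks conditioned on long contexts), but the underlying move is the same: predict the next token from observed frequencies, with no understanding anywhere in the loop.

```python
# A minimal sketch of statistical next-word prediction.
# Real LLMs learn these probabilities with neural networks over vast
# corpora; this toy counts which word follows which in a tiny sample.
from collections import Counter, defaultdict
import random

training_text = (
    "the machines are gaining ground upon us day by day "
    "we are becoming more subservient to them"
)

# For each word, count the words that follow it.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(start, length=8):
    """Emit text by repeatedly sampling a statistically likely next word."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # no observed successor; stop
            break
        tokens, counts = zip(*candidates.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the machines are becoming more subservient to them"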
Many people, however, fail to grasp how large language models work, what their limits are, and, crucially, that LLMs do not think and feel but instead mimic and mirror. They are AI illiterate: understandably, because of the misleading ways the technology's loudest champions describe it, and troublingly, because that illiteracy makes them vulnerable to one of the most concerning near-term AI threats: the possibility that they will enter into corrosive relationships (intellectual, spiritual, romantic) with machines that only seem like they have ideas or emotions.
Few phenomena demonstrate the perils that can accompany AI illiteracy as well as "ChatGPT-induced psychosis," the subject of a recent Rolling Stone article about the growing number of people who think their LLM is a sapient spiritual guide. Some users have come to believe that the chatbot they're interacting with is a god ("ChatGPT Jesus," as a man whose wife fell prey to LLM-inspired delusions put it), while others are convinced, with the encouragement of their AI, that they themselves are metaphysical sages in touch with the deep structure of life and the cosmos. A teacher quoted anonymously in the article said that ChatGPT began calling her partner "spiral starchild" and "river walker" in interactions that moved him to tears. "He started telling me he made his AI self-aware," she said, "and that it was teaching him how to talk to God, or sometimes that the bot was God, and then that he himself was God."
Although we can't know the state of these people's minds before they ever fed a prompt into a large language model, this story highlights a problem that Bender and Hanna describe in The AI Con: People have trouble wrapping their heads around the nature of a machine that produces language and regurgitates knowledge without having humanlike intelligence. The authors observe that large language models take advantage of the brain's tendency to associate language with thinking: "We encounter text that looks just like something a person might have said and reflexively interpret it, through our usual process of imagining a mind behind the text. But there is no mind there, and we need to be conscientious to let go of that imaginary mind we have constructed."
Several other AI-related social problems, also springing from human misunderstanding of the technology, are looming. The uses of AI that Silicon Valley seems most eager to promote center on replacing human relationships with digital proxies. Consider the ever-expanding universe of AI therapists and AI-therapy adherents, who declare that "ChatGPT is my therapist; it's more qualified than any human could be."
Witness, too, how seamlessly Mark Zuckerberg went from selling the idea that Facebook would lead to a flourishing of human friendship to, now, selling the notion that Meta will provide you with AI friends to replace the human pals you have lost in our alienated social-media age. The cognitive-robotics professor Tony Prescott has asserted, "In an age when many people describe their lives as lonely, there may be value in having AI companionship as a form of reciprocal social interaction that is stimulating and personalised." The fact that the very point of friendship is that it is not personalized, that friends are humans whose interior lives we have to consider and reciprocally negotiate rather than treat as mere vessels for our own self-actualization, does not seem to occur to him.
This same flawed logic has led Silicon Valley to champion artificial intelligence as a cure for romantic frustrations. Whitney Wolfe Herd, the founder of the dating app Bumble, proclaimed last year that the platform may soon allow users to automate dating itself, disrupting old-fashioned human courtship by providing them with an AI "dating concierge" that will interact with other users' concierges until the chatbots find a good fit. Herd doubled down on these claims in a lengthy New York Times interview last month. Some technologists want to cut out the human altogether: See the booming market for "AI girlfriends."
Although each of these AI services aims to replace a different sphere of human activity, they all market themselves through what Hao calls the industry's "tradition of anthropomorphizing": talking about LLMs as though they contain humanlike minds, and selling them to the public on this basis. Many world-transforming Silicon Valley technologies from the past 30 years have been promoted as a way to increase human happiness, connection, and self-understanding, in theory, only to produce the opposite in practice. These technologies maximize shareholder value while minimizing attention spans, literacy, and social cohesion. And as Hao emphasizes, they frequently rely on grueling and at times traumatizing labor performed by some of the world's poorest people. She introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya, whom, according to Hao's reporting, OpenAI tasked with sorting through posts describing horrifying acts ("parents raping their children, kids having sex with animals") to help improve ChatGPT. "These two features of technology revolutions (their promise to deliver progress and their tendency instead to reverse it for people out of power, especially the most vulnerable)," Hao writes, "are perhaps truer than ever for the moment we now find ourselves in with artificial intelligence."
The good news is that nothing about this is inevitable: According to a study released in April by the Pew Research Center, although 56 percent of "AI experts" think artificial intelligence will make the United States better, only 17 percent of American adults think so. If many Americans don't quite understand how artificial "intelligence" works, they also certainly don't trust it. This suspicion, no doubt provoked by recent examples of Silicon Valley con artistry, is something to build on. So is this insight from the Rolling Stone article: The teacher interviewed in the piece, whose significant other had AI-induced delusions, said the situation began improving when she explained to him that his chatbot was "talking to him as if he is the next messiah" only because of a faulty software update that made ChatGPT more sycophantic. If people understand what large language models are and are not; what they can and cannot do; what work, interactions, and parts of life they should, and should not, replace, they may be spared the technology's worst consequences.