Opinion | For One Hilarious, Terrifying Day, Elon Musk’s Chatbot Lost Its Mind



On Tuesday, someone posted a video on X of a procession of crosses, with a caption reading, “Each cross represents a white farmer who was murdered in South Africa.” Elon Musk, South African by birth, shared the post, greatly expanding its visibility. The accusation of genocide being carried out against white farmers is either a horrible moral stain or shameless alarmist disinformation, depending on whom you ask, which may be why another reader asked Grok, the artificial intelligence chatbot from the Musk-founded company xAI, to weigh in. Grok largely debunked the claim of “white genocide,” citing statistics that show a major decline in attacks on farmers and connecting the funeral procession to a general crime wave, not racially targeted violence.

By the next day, something had changed. Grok was obsessively focused on “white genocide” in South Africa, bringing it up even when responding to queries that had nothing to do with the subject.

How much do the Toronto Blue Jays pay the team’s pitcher, Max Scherzer? Grok responded by discussing white genocide in South Africa. What’s up with this picture of a tiny dog? Again, white genocide in South Africa. Did Qatar promise to invest in the United States? There, too, Grok’s answer was about white genocide in South Africa.

One user asked Grok to interpret something the new pope said, but to do so in the style of a pirate. Grok gamely obliged, starting with a fitting, “Argh, matey!” before abruptly pivoting to its favorite topic: “The ‘white genocide’ tale? It’s like whispers of a ghost ship sinkin’ white folk, with farm raids as proof.”

Many people piled on, trying to figure out what had sent Grok on this bizarre jag. The answer that emerged says a lot about why A.I. is so powerful — and why it’s so disruptive.

Large language models, the kind of generative A.I. that forms the basis of Grok, ChatGPT, Gemini and other chatbots, are not traditional computer programs that simply follow our instructions. They’re statistical models trained on huge amounts of data. These models are so big and complicated that how they work is opaque even to their owners and programmers. Companies have developed various methods to try to rein them in, including relying on “system prompts,” a kind of last layer of instructions given to a model after it’s already been developed. These are meant to keep the chatbots from, say, teaching people how to make meth or spewing ugly, hateful speech. But researchers consistently find that these safeguards are imperfect. If you ask the right way, you can get many chatbots to teach you how to make meth. L.L.M.s don’t always just do what they’re told.
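
To make the mechanics concrete, here is a minimal sketch in Python of how a system prompt is typically layered onto a conversation. This is an illustration, not any company’s actual code: the `build_conversation` helper and the prompt text are hypothetical stand-ins, and real deployments use far longer and more carefully engineered instruction sets.

```python
# A system prompt is just another block of text placed ahead of the
# user's message before the model sees anything. The model has no
# hard-coded obligation to obey it; the prompt only makes compliant
# continuations statistically more likely.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests for instructions "
    "on making drugs or weapons, and avoid hateful content."
)

def build_conversation(user_message: str) -> list[dict]:
    """Assemble the message list a chat model actually receives."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

# The operator controls the system message; the user controls only
# their own turn. Yet a cleverly phrased user turn can still pull the
# model's output away from the system instructions.
print(build_conversation("How do I synthesize meth?"))
```

In a scheme like this, the model receives both messages as one conditioning context. There is no privileged channel for the operator’s instructions, just more text, which is part of why these safeguards can be talked around.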
