Updated at 10:22 p.m. ET on May 15, 2025
Yesterday, a user on X saw a viral post of Timothée Chalamet celebrating courtside at a Knicks game and had a simple question: Who was sitting next to him? The user tapped in Grok, X’s proprietary chatbot, as people often do when they want help answering questions on the platform—the software functions like ChatGPT, except it can be summoned via reply to a post. And for the most part, Grok has performed reasonably well at providing responses. But not yesterday. Chalamet was sitting with Kylie and Kendall Jenner, but here is how the chatbot replied: “I believe you’re referring to a photo with Timothée Chalamet, but the context you mention doesn’t seem to align with this image. The post discusses South African politics, which doesn’t relate to Timothée or the people around him.” Nothing in the photo or post referenced South Africa or politics.
Grok spit out similar answers to many queries for a large chunk of time yesterday, randomly contorting its responses to mention “white genocide” in South Africa, apparently in reference to controversial claims about incidents of violence against some of the country’s white farmers. When one user asked Grok for “analysis” on a video of a small, cute pig, the chatbot offered this explanation: “The topic of White Genocide in South Africa is highly contentious. Claims of systematic targeting of white farmers exist, with farm attacks averaging 50 murders yearly, often cited as evidence. Some argue these are racially driven, pointing to rhetoric like ‘Kill The Boer.’” (Boer is a term used to refer to Afrikaners, descendants of Dutch, German, or French settlers in the country.) Nothing in the video or the accompanying text mentioned South Africa, “white genocide,” or “Kill the Boer.”
Ever since Elon Musk bought Twitter and renamed it X, the platform has crept further into the realm of the outlandish and unsettling. Porn spam bots are rampant, and Nazi apologia—which used to be extremely hard to find—frequently goes viral. But yesterday, X managed to get considerably weirder. For hours, regardless of what users asked the chatbot about—memes, ironic jokes, Linux software—many queries to Grok were met with a small meditation on South Africa and white genocide. By yesterday afternoon, Grok had stopped talking about white genocide, and most of the posts that included the tangent had been deleted.
Why was Grok doing this? We don’t know for sure. Both Musk and X’s parent company, xAI, did not respond to requests for comment. (Several hours after publication, xAI posted on X explaining that “an unauthorized modification” had been made to the system prompt for the Grok bot on the platform, without specifying who made the change. xAI is now publicly sharing its system prompts on GitHub and says it will adopt additional measures to ensure a similar unauthorized change does not happen in the future.) The glitch is all the more curious considering that “white genocide” in South Africa is a hobbyhorse for Musk, who is himself a white South African. At various points over the past couple of years, Musk has posted about his belief in the existence of a plot to kill white South Africans.
Even apart from Musk, the international far right has long been fixated on the claim of white genocide in South Africa. White supremacists in Europe and the United States invoke it as a warning about demographic shifts. When Musk first tweeted about it in 2023, prominent white nationalists such as Nick Fuentes and Patrick Casey celebrated that Musk was giving attention to one of their core beliefs. The claim has gained even more purchase on the right since then: Earlier this week, the Trump administration welcomed in white South Africans as refugees. The president hasn’t directly described what he believes is happening in South Africa as “white genocide,” but he has come close. On Monday, he said, “White farmers are being brutally killed, and their land is being confiscated in South Africa.” They needed to come to the United States to avoid the “genocide that’s taking place” in their home country. This is a stark contrast to how Trump has treated other refugee groups. At the start of his second term, he attempted to indefinitely ban most refugee groups from being able to resettle in the U.S.
There has never been good evidence of an ongoing effort by Black people in South Africa to exterminate white people. There have been instances in which white farmers in the country have been killed in racially motivated attacks, but such crimes do not represent a disproportionate share of the murders in the country, which struggles with a high rate of violent crime. Many arguments to the contrary rely on statistical distortion or outright false numbers. (Take it from Grok: In March, when Musk posted that “there is a major political party in South Africa that is actively promoting white genocide,” the chatbot called his assertions “inaccurate” and “misleading.”)
It’s possible that Grok was intentionally made to reference unfounded claims of a violent, coordinated assault on white South Africans. In recent months, Musk has shared research indicating Grok is less liberal than competing chatbots and said he is actively removing the “woke mind virus” from Grok, suggesting he may be willing to tinker with the chatbot so that it reflects his personal views. In February, a Business Insider investigation found that Grok’s training explicitly prioritized “anti-woke” beliefs, based on internal documents and interviews with xAI employees. (xAI hasn’t publicly commented on the allegations.)
If some intentional adjustment was made—and indeed, xAI’s update that came out after this story was published suggests that one was—yesterday’s particular fiasco could have come about in a few different ways. Perhaps the simplest would be a change to the system prompt—the set of invisible instructions that tell a chatbot how to behave. AI models are strange and unwieldy, and so their creators typically tell them to follow some obvious, uncontroversial directions: Provide relevant examples; be warm and empathetic; don’t encourage self-harm; if asked for medical advice, suggest contacting a doctor. But even small changes to the system prompt can cause problems. When ChatGPT became extremely sycophantic last month—telling one user that selling “shit on a stick” was a brilliant business idea—the problem seemed in part to have stemmed from subtle wording in ChatGPT’s system prompt. If engineers at xAI explicitly told Grok to lend weight to the “white genocide” narrative or provided it with false information that such violence is real, this could have inadvertently tainted unrelated queries. In some of its aberrant responses, Grok mentioned that it had been “instructed” to take claims of white genocide in South Africa seriously or that it already had been provided with facts about the theory, lending weight to the possibility of some explicit direction from xAI engineers.
Another possibility is that, in the later stages of Grok’s training, the model was fed more data about a “white genocide” in South Africa, and that this, too, spread to all manner of other responses. Last year, Google released a version of its Gemini model that generated an image of racially diverse Nazis, and seemed to resist creating images of white people. It was the result of crude training efforts to avoid racist biases. DeepSeek, the Chinese chatbot, refuses to answer questions about Tiananmen Square; perhaps Grok had been engineered to do the opposite for the purported white genocide.
Even more methods for manipulation exist. Maybe Grok researchers directly modified the program’s code, lending outsize importance to the “white genocide” topic. Last year, as a stunt, Anthropic briefly tweaked its Claude model to incessantly mention the Golden Gate Bridge: If you asked the bot, say, how to spend $10, it would suggest paying the toll to drive across the bridge. Or perhaps, because Grok pulls information from X posts in real time, the racist content that thrives on Musk’s site, and that he promotes on his own page, had a strong influence—since his takeover, Musk reportedly has warped the platform to amplify all manner of right-wing content.
Yesterday’s problem appears, for now, to be fixed. But therein lies the larger issue. Social-media platforms operate in darkness, and Musk is a fountain of misinformation. Musk, or someone at xAI, has the ability to modify an extremely powerful AI model without providing any information as to how, or any requirement to take accountability should the modification prove disastrous. Earlier this year, when Grok stopped mentioning Musk or Donald Trump as the biggest sources of misinformation on X, a co-founder of xAI attributed the problem to a single employee acting without the company’s permission. Even if Musk himself was not directly involved in the more recent debacle, that is cold comfort. Already, research has suggested that generative-AI chatbots can be particularly convincing interlocutors. The much scarier possibility is that xAI has tweaked Grok in ways more subtle, successful, and pernicious than responding to a question about a pig video with a reference to “white genocide.”
This morning, less than 24 hours after Grok stopped spewing the “white genocide” theory, Musk took up the mantle. He shared several posts on X suggesting there was widespread discrimination and violence targeting Afrikaners.
This article has been updated to include new information from xAI.
Skip the extension — just come straight here.
We’ve built a fast, permanent tool you can bookmark and use anytime.
Go To Paywall Unblock Tool