OpenAI's new reasoning AI models hallucinate more



OpenAI’s recently launched o3 and o4-mini AI models are state-of-the-art in many respects. However, the new models still hallucinate, or make things up — in fact, they hallucinate more than several of OpenAI’s older models.

Hallucinations have proven to be one of the biggest and most difficult problems to solve in AI, impacting even today’s best-performing systems. Historically, each new model has improved slightly in the hallucination department, hallucinating less than its predecessor. But that doesn’t seem to be the case for o3 and o4-mini.

According to OpenAI’s internal tests, o3 and o4-mini, which are so-called reasoning models, hallucinate more often than the company’s previous reasoning models — o1, o1-mini, and o3-mini — as well as OpenAI’s traditional, “non-reasoning” models, such as GPT-4o.

Perhaps more concerning, the ChatGPT maker doesn’t really know why it’s happening.

In its technical report for o3 and o4-mini, OpenAI writes that “more research is needed” to understand why hallucinations are getting worse as it scales up reasoning models. O3 and o4-mini perform better in some areas, including tasks related to coding and math. But because they “make more claims overall,” they’re often led to make “more accurate claims as well as more inaccurate/hallucinated claims,” per the report.
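The report's point about claim volume is easy to see with toy numbers. The sketch below is purely illustrative (the figures are invented, not OpenAI's): a model that asserts twice as much can end up with both more accurate and more inaccurate claims, even if its per-claim accuracy barely moves.

```python
# Illustrative arithmetic only -- these figures are invented, not from OpenAI's report.
# A model that makes more claims overall can produce more correct *and* more
# incorrect claims, even at a similar per-claim accuracy.

older_model = {"claims": 10, "accuracy": 0.80}  # hypothetical earlier model
newer_model = {"claims": 20, "accuracy": 0.75}  # hypothetical model that "makes more claims overall"

for name, m in [("older", older_model), ("newer", newer_model)]:
    correct = m["claims"] * m["accuracy"]
    wrong = m["claims"] - correct
    print(f"{name}: {correct:.0f} accurate claims, {wrong:.0f} inaccurate/hallucinated claims")

# older: 8 accurate claims, 2 inaccurate/hallucinated claims
# newer: 15 accurate claims, 5 inaccurate/hallucinated claims
```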

OpenAI found that o3 hallucinated in response to 33% of questions on PersonQA, the company’s in-house benchmark for measuring the accuracy of a model’s knowledge about people. That’s roughly double the hallucination rate of OpenAI’s previous reasoning models, o1 and o3-mini, which scored 16% and 14.8%, respectively. O4-mini did even worse on PersonQA — hallucinating 48% of the time.
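PersonQA itself is an internal OpenAI benchmark and isn't public, but the metric being quoted is straightforward: the share of answers that a grader marks as containing fabricated claims. Here is a hypothetical sketch of how such a rate could be tallied; the `ask_model` and `grade_answer` helpers are placeholders, not OpenAI's actual evaluation code.

```python
# Hypothetical sketch of computing a hallucination rate on a people-focused QA set.
# ask_model() and grade_answer() are placeholder callables, not OpenAI's pipeline.

def hallucination_rate(questions, ask_model, grade_answer):
    """Fraction of answers the grader marks as containing fabricated claims."""
    hallucinated = sum(
        1 for q in questions
        if grade_answer(q, ask_model(q)) == "hallucinated"
    )
    return hallucinated / len(questions)

# A returned rate of 0.33 would correspond to the 33% figure reported for o3.
```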

Third-party testing by Transluce, a nonprofit AI research lab, also found evidence that o3 has a tendency to make up actions it took in the process of arriving at answers. In one example, Transluce observed o3 claiming that it ran code on a 2021 MacBook Pro “outside of ChatGPT,” then copied the numbers into its answer. While o3 has access to some tools, it can’t do that.

“Our hypothesis is that the kind of reinforcement learning used for o-series models may amplify issues that are usually mitigated (but not fully erased) by standard post-training pipelines,” said Neil Chowdhury, a Transluce researcher and former OpenAI employee, in an email to TechCrunch.

Sarah Schwettmann, co-founder of Transluce, added that o3’s hallucination rate may make it less useful than it otherwise would be.

Kian Katanforoosh, a Stanford adjunct professor and CEO of the upskilling startup Workera, told TechCrunch that his team is already testing o3 in their coding workflows, and that they’ve found it to be a step above the competition. However, Katanforoosh says that o3 tends to hallucinate broken website links. The model will supply a link that, when clicked, doesn’t work.

Hallucinations may help models arrive at interesting ideas and be creative in their “thinking,” but they also make some models a tough sell for businesses in markets where accuracy is paramount. For example, a law firm likely wouldn’t be pleased with a model that inserts lots of factual errors into client contracts.

One promising approach to boosting the accuracy of models is giving them web search capabilities. OpenAI’s GPT-4o with web search achieves 90% accuracy on SimpleQA, another one of OpenAI’s accuracy benchmarks. Potentially, search could improve reasoning models’ hallucination rates, as well — at least in cases where users are willing to expose prompts to a third-party search provider.
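For readers who want to try search-grounded answers themselves, here is a minimal sketch using the OpenAI Python SDK's Responses API with its built-in web-search tool. The tool name (`web_search_preview`) and model availability are assumptions that may change between SDK versions, so check the current documentation.

```python
# Minimal sketch of asking GPT-4o with web search via the OpenAI Python SDK.
# Assumes the Responses API and its built-in web-search tool; the tool name
# ("web_search_preview") and model availability may differ -- check current docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-4o",
    tools=[{"type": "web_search_preview"}],  # let the model ground claims in retrieved pages
    input="When did OpenAI release the o3 and o4-mini models?",
)

print(response.output_text)  # answer informed by live search results
```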

If scaling up reasoning models indeed continues to worsen hallucinations, it’ll make the hunt for a solution all the more urgent.

“Addressing hallucinations across all our models is an ongoing area of research, and we’re continually working to improve their accuracy and reliability,” said OpenAI spokesperson Niko Felix in an email to TechCrunch.

In the last year, the broader AI industry has pivoted to focus on reasoning models after techniques to improve traditional AI models started showing diminishing returns. Reasoning improves model performance on a variety of tasks without requiring massive amounts of computing and data during training. Yet reasoning also seems to lead to more hallucination, presenting a challenge.
