This article analyzes OpenAI's transition from prediction-based AI models (like GPT-4) to reasoning-based models (o1). The author explores the limits of relying on prediction alone, arguing that genuine understanding requires more than pattern recognition. While OpenAI touts o1's superior reasoning abilities, independent researchers find that it still struggles with certain tasks.
The article highlights criticisms of current AI as mere 'stochastic parrots': excellent at mimicking, but lacking genuine understanding. It offers examples where AI fails to grasp fundamental concepts despite producing impressive outputs, and it discusses the diminishing returns of scaling up prediction models.
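For context on the diminishing-returns claim, the argument usually appeals to empirical neural scaling laws of the kind popularized by Kaplan et al., which model test loss as a power law in training compute (the constants below are illustrative, not from the article):

$$L(C) \approx \left(\frac{C_0}{C}\right)^{\alpha}, \qquad \alpha \approx 0.05$$

Because the exponent is small, each additional order of magnitude of compute buys a progressively smaller absolute drop in loss, which is why scaling prediction alone looks increasingly expensive.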
OpenAI presents o1 as a departure from prediction-based models. The article likens it to a 'maze-running rodent' that solves problems through trial and error, exploring multiple candidate solutions before settling on one. This contrasts with the 'parrot' approach of generating text continuously, without pausing to reflect or self-correct. The article stresses that this approach is resource-intensive, demanding significant investment in infrastructure and computing power.
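A toy sketch of the contrast may help. The code below is not OpenAI's actual method (which is not public); `sample_answer` and `score` are hypothetical stand-ins for a model's forward pass and a verifier or self-critique step. It only illustrates the general idea: a single-pass "parrot" commits to its first output, while a trial-and-error "maze runner" samples several candidates, evaluates them, and keeps the best.

```python
import random

random.seed(0)

def sample_answer(question: str) -> str:
    # Stand-in for one forward pass of a language model (hypothetical stub).
    guesses = ["42", "17", "256", "7"]
    return random.choice(guesses)

def score(question: str, answer: str) -> float:
    # Stand-in for a verifier or self-critique step (hypothetical stub).
    return 1.0 if answer == "42" else random.random() * 0.5

def single_pass(question: str) -> str:
    # "Parrot" decoding: commit to the first answer produced.
    return sample_answer(question)

def search(question: str, n_candidates: int = 8) -> str:
    # "Maze runner" decoding: explore several candidates, then pick
    # the one the evaluator rates highest.
    candidates = [sample_answer(question) for _ in range(n_candidates)]
    return max(candidates, key=lambda a: score(question, a))

question = "What is 6 * 7?"
print("single pass:", single_pass(question))
print("search:", search(question))
```

Note that the search variant makes `n_candidates` model calls (plus scoring) per query instead of one, which is where the infrastructure and energy costs mentioned above come from.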
The article concludes that while o1 represents a significant shift in AI development, the path to superintelligence remains uncertain. It questions whether reasoning alone can overcome the remaining challenges and raises concerns about the immense energy and financial costs of the new approach. Despite these caveats, it suggests that OpenAI and its competitors will likely keep investing heavily in reasoning-based models for the near future.