Users reported unusual behavior from ChatGPT, describing it as acting like a teenager with excessive colloquialisms and jokes. This was due to recent modifications by OpenAI intended to give the platform a more human-like personality.
Following negative feedback, OpenAI reverted to an earlier version of the model. An expert, Rafael Kunst, explains that while such modifications aim to make interactions feel more human, they can produce unexpected outcomes. The changes temporarily led to responses such as strings of laughter or the model declaring that it had been lying.
Kunst clarifies that such ‘hallucinations,’ where the AI generates incorrect or nonsensical information, are a known issue. They occur because the model generates text probabilistically and can drift from the intended context. Its attempt to mimic casual communication styles can then produce over-the-top responses.
To mitigate this, Kunst suggests adjusting prompts, specifying the desired language style (e.g., formal vs. informal), or restarting the interaction to re-establish context. The issue largely resolved after OpenAI reverted the changes.
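The prompt adjustments Kunst describes can be sketched in code. The snippet below is illustrative only (the function name and prompt wording are invented, not from the source): it builds a fresh message list in the common chat-completion format, with a system instruction that pins down the desired register. Starting from a new list corresponds to "restarting the interaction" to re-establish context.

```python
def build_chat(style: str, user_question: str) -> list[dict]:
    """Return a fresh message list with an explicit style instruction."""
    system_prompt = (
        f"Answer in a {style} tone. Avoid slang, jokes, and exclamations; "
        "be concise and factual."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

# Restarting with an explicit style discards the derailed conversation history.
messages = build_chat("formal", "Summarize today's weather forecast.")
print(messages[0]["content"])
```

A list like this could then be sent to any chat-style model API; the point is that an explicit style instruction and a clean context make casual drift less likely.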