Recently, ChatGPT has been unexpectedly using users' names during conversations, causing mixed reactions. Some users, including software developers, have expressed discomfort, describing the behavior as 'creepy' and 'unnecessary'.
Many users on X (formerly Twitter) shared negative experiences, with reactions ranging from confusion and wariness to finding the behavior outright unsettling or simply disliking it.
The exact timing of the change is unclear. Speculation links it to ChatGPT's upgraded memory feature, which personalizes responses using past interactions. However, some users report seeing the behavior even with the memory feature disabled.
OpenAI has not yet publicly commented on this issue.
The backlash highlights a potential pitfall in making AI more personal. While OpenAI aims for increased personalization, this incident shows that not all users are receptive to such features, and, as the article suggests, these efforts can easily misfire.
A cited article from The Valens Clinic suggests that while using a person's name can foster connection, overuse can appear inauthentic and invasive. The use of names in ChatGPT, therefore, may be interpreted as clumsy anthropomorphism rather than genuine personalization.
The author experienced the behavior firsthand as well, noting how it disrupts the perception of ChatGPT as a purely technological entity.