ChatGPT Referring to Users by Name Triggers Privacy Concerns

Some ChatGPT users have noticed a surprising and unsettling change in their interactions with the AI chatbot—ChatGPT is referring to them by name, even when they haven’t shared any personal details. As reported by TechCrunch, this unexpected personal touch has sparked mixed reactions, with some users finding it “creepy” and “unnecessary.”

ChatGPT Randomly Addresses Users by Name

Several users on X (formerly Twitter) have pointed out that ChatGPT seems to randomly address them by name, even though they never provided that information. Simon Willison, a software developer, called the behavior "weird and invasive," while another user, Nick Dobos, said he "hated it."

This change appears to coincide with OpenAI's upgraded memory feature, which allows ChatGPT to retain information from past conversations. However, some users report that ChatGPT still uses their names even after they have disabled the memory settings, raising significant concerns about data privacy and AI transparency.

OpenAI’s Silence and Growing User Concerns

So far, OpenAI has not responded to inquiries about the issue, leaving users to speculate whether the change is a glitch or intentional. The situation has prompted a broader discussion about the balance between personalization and user comfort. Some users have even compared the behavior to a teacher repeatedly calling their name in class, which can feel unnatural and awkward.

Experts suggest that AI-driven use of a person's name can feel forced, making interactions seem less authentic rather than more engaging. Personalization is usually framed as a way to improve user experience, but the shift in how ChatGPT addresses users highlights the fine line between making AI feel engaging and making it feel intrusive. As AI becomes part of more people's daily routines, incidents like this will likely continue to spark conversations about privacy and how much personalization is too much.
