Sam Altman, the CEO of OpenAI, has raised concerns about how people, especially young users, turn to ChatGPT for serious personal matters such as relationship problems and life decisions.
He pointed out that many people treat the chatbot like a therapist or life coach, sharing deeply private information. But, Altman explained, conversations with ChatGPT do not carry the legal privilege that protects conversations with doctors, therapists, or lawyers, which means anything a user tells ChatGPT could be produced as evidence in court.
Speaking on the podcast This Past Weekend with Theo Von, Altman said:
“People talk about the most personal details in their lives to ChatGPT. People use it, young people, especially, use it as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’”
He went on to say:
“And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
Altman warned that this gap in legal protection has serious consequences: if someone's personal conversations with ChatGPT become relevant to a lawsuit, a court could compel OpenAI to hand them over.
“If someone confides their most personal issues to ChatGPT, and that ends up in legal proceedings, we could be compelled to hand that over. And that’s a real problem,” he said.
Altman added that he finds this situation troubling:
“I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago.”
Speaking separately at a Federal Reserve banking event, Altman raised a related concern: people becoming too dependent on ChatGPT for everyday decisions.
“People rely on ChatGPT too much. There’s young people who say things like, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me, it knows my friends. I’m gonna do whatever it says.’ That feels really bad to me.”
He emphasized how common this has become and said the company is still figuring out how to deal with it.
“Even if ChatGPT gives great advice, even if ChatGPT gives way better advice than any human therapist, something about collectively deciding we’re going to live our lives the way AI tells us feels bad and dangerous,” he added.