The digital age has transformed how we seek help. Millions turn to AI chatbots for everything from homework assistance to relationship advice. But a shocking revelation from OpenAI’s CEO has exposed a dangerous truth. Your most intimate conversations with AI lack the basic privacy protections you’d expect from a human therapist.

The Confidentiality Gap That Could Ruin Lives
Sam Altman, OpenAI’s CEO, recently dropped a bombshell on the “This Past Weekend with Theo Von” podcast. He revealed something that should make every ChatGPT user pause before their next therapy session with AI.
“People talk about the most personal details in their lives to ChatGPT,” Altman explained. “People use it, young people, especially, use it as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’”
The problem? Unlike human professionals, AI doesn’t offer legal privilege. “If you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it,” Altman stated. “And we haven’t figured that out yet for when you talk to ChatGPT.”
This isn’t just a theoretical concern. It’s a ticking time bomb for millions of users who’ve poured their hearts out to artificial intelligence.
Your Secrets Could End Up in Court
The implications are staggering. In legal situations, OpenAI could be compelled to hand over user conversations. Altman warned that without proper confidentiality protections, deeply personal ChatGPT conversations could be subpoenaed and used in court proceedings.
“If someone confides their most personal issues to ChatGPT, and that ends up in legal proceedings, we could be compelled to hand that over. And that’s a real problem,” he stressed.
This reality check comes at a time when AI therapy is exploding in popularity. Young people, in particular, are turning to ChatGPT for mental health support. They’re sharing relationship troubles, family conflicts, and personal struggles with an AI that offers no legal protection for their vulnerability.
The contrast with traditional therapy is stark. When you speak to a licensed therapist, doctor, or lawyer, legal privilege protects your conversations. These professionals are bound by confidentiality laws that prevent them from sharing your personal information without consent.
The Data Retention Dilemma
The privacy concerns extend beyond potential court cases. Unlike messages sent through secure platforms such as WhatsApp or Signal, ChatGPT conversations aren’t end-to-end encrypted. This means OpenAI retains the ability to access and review these chats.
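To make that distinction concrete, here is a minimal sketch, written in Python with the open-source `cryptography` library, of what end-to-end encryption means in principle: the message is encrypted on the sender’s device, and the service in the middle only ever handles ciphertext it cannot read. This is purely illustrative and does not describe how WhatsApp, Signal, or any AI platform actually implements its protocol.

```python
# Illustrative only: a toy end-to-end encryption flow with a shared symmetric key.
# Real messengers use far more elaborate protocols (key exchange, ratcheting),
# but the core idea is the same: the server never holds the decryption key.
from cryptography.fernet import Fernet

# The key lives on the users' devices and is never sent to the server.
shared_key = Fernet.generate_key()
cipher = Fernet(shared_key)

# The sender encrypts locally before anything leaves the device.
plaintext = b"I've been struggling lately and need someone to talk to."
ciphertext = cipher.encrypt(plaintext)

# The service in the middle only ever stores and relays this opaque blob.
print(ciphertext)

# Only the recipient, who also holds the key, can recover the message.
print(cipher.decrypt(ciphertext).decode())
```

An AI chatbot cannot work this way for its core function: the model has to read your words in plaintext in order to respond to them, which is one reason the provider necessarily has access to the content of your chats.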
While OpenAI says it deletes free-tier users’ chats within 30 days, crucial exceptions exist for legal and security reasons. The company also uses conversations to improve its models and to prevent misuse.
This data retention policy has come under intense scrutiny, particularly amid ongoing legal battles. The 2023 lawsuit filed by The New York Times against OpenAI and Microsoft exemplifies this concern. The newspaper alleges unauthorized use of millions of its articles to train ChatGPT.
In a significant development for that case, a court has ordered OpenAI to preserve all related chat data. This directly conflicts with the company’s stated data deletion policies and highlights how legal proceedings can override privacy promises.
The Younger Generation at Risk

The privacy nightmare particularly affects young users who’ve grown up with AI as a constant companion. They’re more likely to view ChatGPT as a trusted confidant, sharing intimate details about their lives without understanding the risks.
Altman acknowledged this vulnerability, noting that young people especially use ChatGPT “as a therapist, a life coach” for relationship problems and personal guidance. This demographic often lacks awareness of the legal and privacy implications of their digital conversations.
The generational divide in AI usage creates a perfect storm. Young users are comfortable sharing personal information online, but they may not grasp that AI conversations lack the same protections as traditional therapy sessions.
OpenAI’s Promise vs. Reality
Despite the privacy concerns, OpenAI appears ready to fight for user privacy when possible. “We will fight any demand that compromises our users’ privacy; this is a core principle,” Altman affirmed on X (formerly Twitter).
However, this promise comes with limitations. The company must comply with legal orders, and the current regulatory framework doesn’t provide AI conversations with the same protections as traditional professional relationships.
OpenAI has called the court order requiring it to preserve ChatGPT conversations “an overreach” and is appealing the decision. But if courts can override OpenAI’s data privacy decisions, the company could face further demands for user data in legal discovery or from law enforcement.
The company’s statement on its website emphasizes its commitment to user privacy, but legal realities often trump corporate policies. When subpoenas arrive, companies typically have little choice but to comply.
The Broader Tech Industry Challenge
The privacy nightmare extends beyond OpenAI. The entire tech industry is grappling with how to offer users proper confidentiality for sensitive AI interactions. Current AI platforms lack the regulatory framework that protects traditional therapeutic relationships.
This regulatory gap creates uncertainty for both users and companies. Without clear legal guidelines, AI companies operate in a gray area where user privacy depends more on corporate goodwill than legal protection.
The situation mirrors broader concerns about digital privacy in an era of increasing surveillance. Tech companies regularly face subpoenas for user data to aid criminal prosecutions. But AI therapy conversations represent a new frontier of sensitive information that lacks established legal protections.
Alternative Solutions Emerge
Some companies are addressing these privacy concerns head-on. Privacy-focused alternatives like Lumo, built by Proton, feature top-level encryption to protect user conversations. These platforms recognize that mental health discussions require stronger privacy protections than typical AI interactions.
Many corporations have licensed ring-fenced versions of AI chatbots to protect sensitive business communications. This approach could serve as a model for therapeutic AI applications that prioritize user privacy.
However, these solutions remain niche compared to mainstream platforms like ChatGPT. Most users continue to rely on AI services that offer convenience over privacy protection.
The Fundamental Question: Can AI Replace Human Therapists?
Beyond privacy concerns lies a more fundamental question: should AI replace human therapists at all? While AI can provide 24/7 availability and reduce barriers to seeking help, it lacks the human connection and professional training that define effective therapy.
AI systems generate responses by recombining patterns in their training data; they have no original thought or genuine empathy. They can’t form therapeutic relationships or provide the nuanced understanding that human therapists offer. This limitation becomes particularly problematic when users develop emotional dependencies on AI systems.
Licensed therapists undergo years of training and supervision to provide effective mental health care. They’re bound by professional ethics codes and legal requirements that protect patient welfare. AI systems lack this professional framework and accountability.
The Path Forward
Until the tech industry “figures out” how to extend legal privilege to AI interactions, users should exercise extreme caution when sharing intimate details with chatbots. The digital confidant might not be as discreet as you think.
Altman himself acknowledged this reality, telling the podcast host: “I think it makes sense … to really want the privacy clarity before you use [ChatGPT] a lot like the legal clarity.”
The solution requires coordinated action from policymakers, tech companies, and users. Legal frameworks must evolve to address AI therapy’s unique challenges. Companies need to implement stronger privacy protections. Users must understand the risks of sharing personal information with AI systems.
Protecting Yourself in the AI Age

For now, users seeking mental health support should prioritize licensed human therapists who offer legal confidentiality protections. If you choose to use AI for emotional support, assume your conversations could become public and avoid sharing information you wouldn’t want revealed in court.
The privacy nightmare surrounding AI therapy serves as a wake-up call for the entire tech industry. As AI becomes more sophisticated and human-like, the need for proper privacy protections becomes increasingly urgent.
Your mental health deserves better than a privacy nightmare. Until AI therapy offers the same confidentiality as human professionals, stick with trained therapists who are legally bound to protect your secrets.
Sources
- Storyboard18: Think your chats with ChatGPT are private? Think again, warns OpenAI CEO
- TechRadar: ‘We haven’t figured that out yet’: Sam Altman explains why using ChatGPT as your therapist is still a privacy nightmare
- TechCrunch: Sam Altman warns there’s no legal confidentiality when using ChatGPT as a therapist