OpenAI CEO Sam Altman has issued a stark warning about ChatGPT privacy protections during his appearance on comedian Theo Von’s podcast “This Past Weekend.” Altman revealed that conversations with ChatGPT lack the legal confidentiality protections afforded to interactions with doctors, therapists, or lawyers.
The Privacy Gap
“People talk about the most personal details in their lives to ChatGPT,” Altman explained, noting that young people especially use the AI “as a therapist, a life coach” for relationship problems. Unlike conversations with licensed professionals, which are protected by doctor-patient or attorney-client privilege, ChatGPT interactions have no such legal safeguards.
This means OpenAI could be legally compelled to produce chat logs in lawsuits. “If you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that, and I think that’s very screwed up,” Altman admitted.
Real-World Impact: The New York Times Case
Altman’s concerns aren’t theoretical. OpenAI currently faces a court order requiring the company to preserve all ChatGPT user conversations indefinitely as part of The New York Times’ copyright lawsuit.
US Magistrate Judge Ona T. Wang issued the preservation order on May 13, 2025, requiring OpenAI to “preserve and segregate all output log data that would otherwise be deleted.” The order affects users of ChatGPT Free, Plus, Pro, and Team plans, though Enterprise and educational customers are exempt.
Despite OpenAI’s appeals, US District Judge Sidney Stein upheld the order on June 26, 2025. As a result, conversations users believed were deleted are now retained indefinitely, potentially exposing personal data to legal discovery.
Technical Reality
Unlike providers of end-to-end encrypted messaging services such as WhatsApp, OpenAI can read every ChatGPT conversation. Under normal circumstances, deleted conversations are removed within 30 days, but the court order has suspended this practice.
Call for “AI Privilege”
Altman advocates for new legal protections treating AI conversations similarly to those with human professionals. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist,” he stated.
However, no such “AI privilege” currently exists in any jurisdiction, and creating one would require significant legislative action.
User Recommendations
Privacy experts recommend exercising caution when sharing sensitive information with AI chatbots, treating these conversations like emails or text messages that can be subpoenaed in legal proceedings. For confidential support, consulting licensed human professionals bound by established confidentiality laws remains the safer option.