A Surprise U-Turn on Privacy

OpenAI has long marketed ChatGPT as a tool that “forgets,” wiping deleted user chats after 30 days unless people opt in to longer retention. That promise just shattered. A federal judge, responding to The New York Times’ 2023 copyright suit, ordered the company to preserve every single conversation, even the ones users thought were gone. Overnight, deleted chats became evidence lockers.
OpenAI calls the demand an “overreach.” Users call it creepy. Either way, the AI world is staring at an unprecedented privacy plot twist.
Inside the Judge’s Order
The May ruling, issued by Magistrate Judge Ona Wang, compels OpenAI to “indefinitely maintain all ChatGPT logs” that would otherwise vanish, including API calls and free-tier chats. The order stems from claims that important evidence was “being destroyed in real time.” Now, instead of its routine 30-day purge, the company must lock data away until the court says otherwise. Legal watchers describe the move as sweeping and, quite possibly, a blueprint for future discovery fights in AI litigation.
OpenAI’s 30-Day Policy Now on Ice
Before the ruling, OpenAI’s policy felt simple: delete after 30 days, unless a user explicitly turned on history to help “improve our models.” Enterprise and Edu plans even boasted zero-retention guarantees. That clarity has evaporated. In a company blog post, COO Brad Lightcap confirmed that all ChatGPT tiers (Free, Plus, Pro, and Team) now fall under the legal hold. Only Enterprise, Edu, and API customers with a separate zero-data agreement remain exempt. For millions of regular users, the recycle bin is effectively sealed shut.
The Appeal: OpenAI Fights Back
OpenAI wasted no time. Within days, its lawyers filed an appeal, branding the judge’s instructions “sweeping” and “unprecedented.” The company argues the order “prevents us from respecting users’ privacy decisions” and flouts long-standing data-protection norms. CEO Sam Altman took to X, warning that the precedent “abandons privacy protections” and hinting at a new kind of “AI privilege,” where chatting with a bot should feel as confidential as talking to a doctor or lawyer.
Why The New York Times Won’t Budge

From the newspaper’s perspective, every discarded prompt might show how its paywalled articles ended up training GPT-4. Lawyers say preserving logs is the only way to prove (or debunk) large-scale copying. The Times’ team argues that without full retention, OpenAI could “cherry-pick” surviving data and erase tell-tale fingerprints of infringement. In short, the paper sees privacy concerns as secondary to protecting intellectual property.
Who Actually Feels the Heat?
If you pay for ChatGPT Plus, run a Team workspace, or experiment with the API under a standard contract, your data now sits in cold storage. The order does not cover Enterprise or Edu customers, nor API clients that negotiated zero-retention clauses. For everyday hobbyists, that means casual brainstorming sessions, sensitive self-talk, and even typo-ridden drafts will linger on OpenAI’s servers far longer than expected.
The Privacy-vs-Evidence Tug-of-War
Civil discovery has always forced tech firms to preserve emails and Slack messages. Chat histories, though, feel more intimate — part diary, part search engine. Privacy advocates warn that hoarding such data could chill free expression. Meanwhile, copyright plaintiffs insist that AI companies can’t be trusted to self-police.
The clash raises thorny questions: Should user chats enjoy doctor-patient-style confidentiality? Or do copyright claims outweigh personal privacy? Courts have never faced this exact dilemma, and legal scholars predict a multi-year slog before clear rules emerge.
Ripple Effects Across the AI Industry
OpenAI is big, but it isn’t alone. If the order stands, rivals like Anthropic, Google, and Meta could face similar demands, forcing them to rewrite retention playbooks and invest in massive “legal-hold” storage. Start-ups may struggle with the cost. Developers who sprinkle chat features into apps might suddenly shoulder evidence-preservation duties.
One privacy lawyer summed it up: “This is the canary in the AI coal mine; everyone else is next.”
What Legal Experts Predict Next
The appeal will move to district court this summer. Lawyers expect arguments over proportionality: How much data is truly necessary, and for how long? OpenAI may push for a narrower timeframe or selective sampling; the Times will likely demand the opposite. Settlement remains possible but, insiders say, unlikely before the court clarifies the scope of retention. If a stricter standard survives, Congress could jump in with fresh privacy legislation, especially as election-year scrutiny of AI intensifies.
Bottom Line for Everyday Users

For now, assume nothing you type into ChatGPT disappears. Deleted chats remain accessible to a “small, audited” internal team, and potentially to courts. If you need airtight confidentiality, stick to Enterprise or keep sensitive prompts offline. The story is far from over, but one fact is clear: privacy norms around AI are being rewritten in real time. Watch this space: your next chat might end up as Exhibit A.
Sources
- The Verge – “OpenAI is storing deleted ChatGPT conversations as part of its NYT lawsuit” (theverge.com)
- Gizmodo – “OpenAI Appeals ‘Sweeping, Unprecedented Order’ Requiring It Maintain All ChatGPT Logs” (gizmodo.com)
- The Decoder – “OpenAI starts retaining all ChatGPT user data, including deleted chats and API data” (the-decoder.com)