Court Lifts Sweeping Data Preservation Order in Copyright Battle
In a significant win for user privacy, OpenAI has been freed from a controversial court order that forced the company to preserve every ChatGPT conversation indefinitely. The decision marks a turning point in the ongoing copyright battle between the AI giant and The New York Times, while raising important questions about data privacy in the age of artificial intelligence.
U.S. Magistrate Judge Ona T. Wang issued a new order on October 9, 2025, terminating the preservation requirement that had been in place since May. The ruling means OpenAI can finally resume its standard practice of deleting user conversations upon request. But there is a catch: some data will still be kept under lock and key.
The Background: When Privacy Met Copyright Law
The whole saga started back in December 2023. That’s when The New York Times filed a lawsuit against OpenAI and Microsoft, accusing them of copyright infringement. The newspaper claimed OpenAI trained its AI models on Times content without proper authorization or compensation.
Think about it this way: The Times argued that ChatGPT was essentially regurgitating their carefully crafted articles, allowing users to read premium content without ever visiting the newspaper’s website or paying for a subscription. For a publication that relies on subscription revenue, that’s a big deal.
As the case progressed, the Times and other plaintiffs, including The Intercept, AlterNet, and Mashable’s parent company Ziff Davis, wanted to investigate whether ChatGPT users were actually prompting the AI to generate copyrighted news articles. The problem? Users could delete their conversations, potentially destroying evidence.
The Controversial Preservation Order
In May 2025, Judge Wang issued what many considered a sweeping order. OpenAI was directed to “preserve and segregate all output log data that would otherwise be deleted on a going-forward basis.” This included everything—deleted chats, temporary conversations, the works.
The order impacted hundreds of millions of users. We’re talking about ChatGPT Free, Plus, Pro, and Team subscribers, as well as users of OpenAI’s application programming interface (API). The only exceptions were ChatGPT Enterprise and ChatGPT Edu customers, plus those with Zero Data Retention agreements.
For context, ChatGPT normally gives users control over their data. You can turn off chat history, use temporary chat features, or delete conversations entirely. The service typically removes deleted content from its systems within 30 days. But this court order threw all of that out the window.
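For developers who reach the models through the API rather than the consumer app, retention works a bit differently, and Zero Data Retention is a contractual arrangement rather than a request setting. As a rough illustration only: the official openai Python SDK exposes a store flag on the Chat Completions endpoint that controls whether a completion is kept in the platform’s stored-completions logs. The model name and prompt below are placeholders, and nothing in this sketch overrides a court-ordered legal hold.

```python
# Minimal sketch: an API call that asks the platform not to store the completion.
# Assumes the official openai Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Draft a two-line meeting recap."}],
    store=False,  # do not keep this completion in the dashboard's stored completions
)

print(response.choices[0].message.content)
```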
OpenAI Fights Back
OpenAI didn’t take this lying down. The company immediately pushed back, calling the preservation order an “overreach” that fundamentally conflicted with privacy commitments made to users.
In a blog post published in June 2025, OpenAI’s Chief Operating Officer Brad Lightcap made the company’s position crystal clear: “Trust and privacy are at the core of our products. This fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections.”
CEO Sam Altman was even more blunt. He called the Times’ data request “unconscionable” in a post on X (formerly Twitter). The company argued that users frequently share sensitive information in ChatGPT conversations—everything from financial details to intimate discussions about wedding vows—expecting those chats to be deleted when requested.
OpenAI also warned that engineering a system to retain all of this data would take months of work and undermine user trust. The company appealed the order to the district judge overseeing the case, arguing that it was vastly overbroad.
User Outcry and Privacy Concerns
The preservation order didn’t just upset OpenAI—it sparked outcry from the ChatGPT user community as well. Privacy advocates raised red flags about the implications of forcing a tech company to retain data that users explicitly wanted deleted.
Think about how you use ChatGPT. Maybe you’ve asked it for help drafting a sensitive email to your boss. Perhaps you’ve discussed personal health concerns or sought advice on relationship issues. Users share these intimate details with the expectation that they can delete the conversation and it’ll be gone for good.
The preservation order shattered that expectation. Every day the order remained in place was another day users couldn’t exercise the privacy protections they’d been promised. For many, it felt like a betrayal of trust.
The New Ruling: What Changed?
Fast forward to October 9, 2025. Judge Wang approved a joint order terminating the preservation requirement. OpenAI is no longer required to preserve chat logs generated after September 26, 2025, with a few important exceptions.
Here’s what the new ruling means in practical terms:
For most users: When you delete your ChatGPT conversations now, they’re actually deleted. The logs will be removed from OpenAI’s systems following the company’s standard 30-day deletion policy. You’re back in control of your data.
The exceptions: Any logs that were already saved under the previous preservation order remain accessible to the Times’ lawyers as evidence. Additionally, OpenAI must still retain logs linked to accounts specifically flagged by The New York Times.
The ruling strikes a balance between legal discovery needs and user privacy rights. The Times can continue investigating its copyright claims using the data already preserved, but OpenAI doesn’t have to keep accumulating more indefinitely.
What This Means for Users
If you’re a ChatGPT user, this is mostly good news. You can once again delete your conversations with confidence, knowing they’ll actually be removed from OpenAI’s servers. The company has restored the privacy features that users relied on.
However, there are some important caveats to keep in mind. As PCMag points out, even with deletion enabled, some data may remain accessible during ongoing legal reviews or system backups.
Earlier this year, Sam Altman warned that ChatGPT conversations aren’t legally protected and could be presented in court during lawsuits. So while you can delete your chats, it’s still not a great idea to share your darkest secrets with an AI chatbot.
Also remember that ChatGPT maintains a separate “memory” feature. Even when you delete chats, the service retains details about your preferences, friends and family, or how you like conversations formatted. You can turn off this memory feature or delete specific memories, but it’s something to be aware of.
The Bigger Picture: AI and Copyright
While the preservation order has been lifted, the underlying copyright lawsuit is far from over. The case continues to move forward, with potentially massive implications for the AI industry.
The Times and other publishers argue that OpenAI used their copyrighted content without permission to train ChatGPT. OpenAI counters that its use falls under “fair use” because the AI model transforms the content, breaking it into tokens that get blended with other information.
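To make the “tokens” part of that argument concrete, here is a minimal sketch using tiktoken, the open-source tokenizer OpenAI publishes for its models. The sample sentence and encoding name are illustrative; the point is simply that text is converted into integer IDs representing sub-word fragments before a model ever trains on it.

```python
# Minimal sketch of tokenization with OpenAI's open-source tiktoken library.
import tiktoken

# cl100k_base is one of the encodings published for GPT-4-era models.
enc = tiktoken.get_encoding("cl100k_base")

sentence = "The newspaper says the chatbot reproduced its reporting."
token_ids = enc.encode(sentence)

print(token_ids)                 # a list of integer token IDs
print(len(token_ids), "tokens")  # usually more tokens than words

# Each ID maps back to a text fragment, often a sub-word piece rather than a whole word.
print([enc.decode([tid]) for tid in token_ids])
```

Whether that kind of transformation amounts to fair use is precisely the question the court still has to decide.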
In an April 2025 opinion, Judge Sidney Stein said the Times had made a case that OpenAI and Microsoft were responsible for inducing users to infringe its copyrights. The judge noted the Times’ “numerous” and “widely publicized” examples of ChatGPT producing material from its articles justified allowing the claims to continue.
This case could set important precedents for how AI companies use publicly available content to train their models. Industry observers believe the rulings may eventually establish clearer boundaries for the use of copyrighted materials in machine learning.
Privacy vs. Discovery: A Delicate Balance
The preservation order saga highlights a fundamental tension in our digital age: how do we balance legal discovery needs with individual privacy rights?
On one hand, plaintiffs in lawsuits need access to relevant evidence. If ChatGPT users were prompting the AI to reproduce copyrighted articles, that’s information the Times has a legitimate interest in discovering. Allowing users to delete potentially incriminating conversations could obstruct justice.
On the other hand, forcing tech companies to retain all user data indefinitely sets a dangerous precedent. It undermines privacy protections and could have a chilling effect on how people use AI tools. If users can’t trust that their deleted conversations are actually deleted, they may be less willing to use these services at all.
Judge Wang’s latest ruling attempts to thread this needle. By terminating the blanket preservation order while maintaining exceptions for flagged accounts, the court preserves the Times’ ability to investigate its claims without imposing an indefinite burden on OpenAI and its users.
The Industry Impact
This case has sent ripples throughout the tech industry. Other AI companies are watching closely, knowing they could face similar legal challenges. Google, Meta, Anthropic, and other players in the generative AI space all use large datasets to train their models, and many of those datasets likely include copyrighted material.
The question of whether AI training constitutes fair use remains unsettled. If courts ultimately rule against OpenAI, the entire industry could be forced to rethink how it sources training data. Companies might need to negotiate licensing agreements with content creators, potentially increasing costs and slowing innovation.
Some publishers have already taken a different approach. The Associated Press, for example, struck a licensing deal with OpenAI in 2023. Axel Springer, which owns Business Insider and Politico, signed a similar agreement. These partnerships suggest a path forward where AI companies compensate content creators for using their work.
What Happens Next?
The Times can continue combing through the preserved logs that were saved under the original order. The newspaper also retains the right to flag additional user accounts or domains if they suspect links to copyrighted material.
For OpenAI, the lifting of the preservation order removes a significant operational burden. The company no longer has to engineer and maintain systems to indefinitely retain data that users want deleted. This frees up resources and allows OpenAI to focus on innovation rather than data storage.
However, the company still faces the underlying copyright lawsuit, which could result in billions of dollars in damages if the Times prevails. Microsoft, OpenAI’s key partner and investor, remains a co-defendant in the case, with its Copilot AI products also at issue.
Lessons for AI Users
This entire episode offers some important lessons for anyone using AI chatbots:
Be cautious about what you share. Even with deletion features restored, experts still encourage users to avoid sharing private or sensitive information with AI tools. Your conversations might be used for training, could be subject to legal discovery, or might be exposed in a data breach.
Understand the privacy settings. Take time to review and configure ChatGPT’s privacy settings. You can turn off chat history, disable the memory feature, or use temporary chat mode for sensitive conversations.
Remember that “deleted” isn’t always deleted. While OpenAI can now remove deleted chats from its systems, there may be backups, legal holds, or other circumstances where data persists longer than expected.
Stay informed about ongoing cases. The outcome of the Times lawsuit could significantly impact how AI companies handle user data in the future. Keep an eye on developments if you’re a regular AI user.
The Road Ahead
As AI becomes increasingly integrated into our daily lives, we’ll likely see more legal battles over data privacy, copyright, and user rights. The OpenAI preservation order case is just one example of the complex issues that arise when cutting-edge technology meets established legal frameworks.
For now, ChatGPT users can breathe a sigh of relief. The ability to permanently delete conversations has been restored, returning a measure of control over personal data. But the broader questions about AI training data, copyright infringement, and privacy protections remain unresolved.
The tech industry, legal system, and policymakers will need to work together to establish clear guidelines that protect both intellectual property rights and user privacy. It’s a delicate balance, but one that’s essential to get right as AI continues to evolve and expand its role in our lives.
In the meantime, OpenAI has won this particular battle, but the war over AI data practices is far from over. The company’s handling of this situation—and the ultimate outcome of the copyright lawsuit—will likely influence how other AI developers approach data retention, user privacy, and content licensing for years to come.
The stakes are high, not just for OpenAI and The New York Times, but for the entire ecosystem of AI developers, content creators, and users. How courts resolve these issues will shape the future of artificial intelligence and determine whether innovation can coexist with robust protections for both privacy and intellectual property.
Sources
- Judge lifts order requiring OpenAI to preserve ChatGPT logs – Mashable
- You can now permanently delete your ChatGPT conversations again – PCMag
- OpenAI no longer has to preserve all of its ChatGPT data, with some exceptions – Engadget
- How we’re responding to The New York Times’ data demands – OpenAI
- OpenAI appeals data preservation order in NYT copyright case – Reuters