The Copyright Clash That Could Reshape AI Development

The world of artificial intelligence just hit a major speed bump. Some of Japan’s most iconic entertainment companies, including Studio Ghibli, Square Enix, and Bandai Namco, have formally demanded that OpenAI stop using their content to train its Sora 2 video generation tool. This isn’t just another corporate squabble. It’s a showdown that could fundamentally change how AI companies operate worldwide.
The Content Overseas Distribution Association (CODA), an anti-piracy organization representing Japanese intellectual property holders, released a letter last week that pulls no punches. According to The Verge, the letter states that “CODA considers that the act of replication during the machine learning process may constitute copyright infringement.”
Why? Because Sora 2 has been spitting out content featuring copyrighted characters at an alarming rate.
When AI Gets Too Good at Copying
OpenAI launched Sora 2 on September 30th, and almost immediately, social media exploded with AI-generated videos. But these weren’t just generic animations. They featured recognizable characters from beloved Japanese franchises: Pokémon frolicking in fields, Dragon Ball-style action sequences, and imagery that looked suspiciously like it came straight from a Studio Ghibli film.
Eurogamer reports that CODA confirmed “a large amount of Sora 2’s output closely resembles Japanese content or images” as a direct result of Japanese content being used as machine learning data without permission.
The situation got so out of hand that Japan’s government formally stepped in. They asked OpenAI to stop replicating Japanese artwork, describing anime and manga as “irreplaceable treasures” representing Japan’s cultural pride.
The Opt-Out Problem
Here’s where things get legally interesting. OpenAI CEO Sam Altman announced last month that the company would change Sora’s opt-out policy for IP holders. Sounds reasonable, right? Not so fast.
CODA argues that using an opt-out policy in the first place may have already violated Japanese copyright law. As Slashdot notes, the organization stated: “under Japan’s copyright system, prior permission is generally required for the use of copyrighted works, and there is no system allowing one to avoid liability for infringement through subsequent objections.”
In other words, you can’t just use someone’s work and then say “oops, my bad” when they complain. You need permission first. This fundamental difference in approach could be the legal landmine that blows up OpenAI’s entire training methodology.
Who’s Behind CODA?
CODA isn’t some small advocacy group. Founded in 2002 to combat piracy and promote legal international distribution of Japanese content, it represents some of the biggest names in entertainment. We’re talking about Bandai Namco, Square Enix, Studio Ghibli, Cygames, Toei Animation, Kadokawa Corporation, and Aniplex (now owned by Sony Music Entertainment Japan).
These companies don’t just make video games and anime. They create cultural phenomena that generate billions of dollars annually. When they speak, people listen.
This Isn’t OpenAI’s First Rodeo
The Sora 2 controversy isn’t an isolated incident. The Verge points out that when GPT-4o’s image generation launched back in March, one of its highlights was a proliferation of “Ghibli-style” images. Even Sam Altman’s profile picture on X is currently a portrait in a style reminiscent of Studio Ghibli.
It’s almost like OpenAI has a particular fondness for Japanese aesthetics. The problem is, those aesthetics belong to someone else.
What CODA Wants

CODA’s demands are straightforward but potentially devastating for OpenAI’s business model. According to PC Gamer, the organization is requesting two things:
First, that OpenAI ensures CODA members’ content isn’t used for AI training without permission. Second, that OpenAI “responds sincerely” to copyright infringement claims and inquiries from CODA member companies regarding Sora 2’s outputs.
The language is diplomatic, but the message is clear: CODA hinted that legal action isn’t off the table if OpenAI doesn’t comply.
The Bigger Picture
This confrontation represents a much larger debate about AI development. Tech companies have largely operated under the assumption that scraping publicly available content for training data falls under “fair use.” But that assumption is increasingly being challenged.
IGN reports that earlier this year, AI company Anthropic agreed to pay $1.5 billion to authors to settle a copyright lawsuit. A wide variety of ongoing lawsuits are currently working their way through the courts, all challenging the notion that AI companies can freely use copyrighted material for training.
The Japanese approach is particularly interesting because it’s not just individual companies suing. It’s an entire industry, backed by government support, drawing a line in the sand.
Sam Altman’s Response
To his credit, Altman has acknowledged the issue. In a blog post following Sora 2’s launch, he wrote that OpenAI is “struck by how deep the connection between users and Japanese content is!” He promised that the company would “let rightsholders decide how to proceed” and admitted there might be some “edge cases” of character depictions slipping through the cracks.
But calling copyrighted character reproductions “edge cases” might be underselling the problem. When your AI tool can generate convincing videos of Pikachu or characters from Spirited Away, that’s not an edge case. That’s a core functionality issue.
The Cultural Dimension
There’s something particularly significant about this challenge coming from Japan. Japanese creators have long been protective of their intellectual property, and for good reason. Anime, manga, and video games are major cultural exports that define Japan’s soft power globally.
GameSpot notes that Japanese officials have described anime and manga as “irreplaceable treasures.” This isn’t just about money. It’s about cultural identity and artistic integrity.
When AI can replicate the distinctive style of Studio Ghibli, a style that took decades to develop and represents the vision of legendary animator Hayao Miyazaki, it raises profound questions about creativity, ownership, and respect for artistic work.
The Technical Reality
Here’s the uncomfortable truth: AI models like Sora 2 don’t work by magic. They’re trained on massive datasets of existing content. The more data they consume, the better they perform. And if that data includes thousands of hours of Japanese animation, the AI will inevitably learn to reproduce that style.
The question is whether that reproduction constitutes copyright infringement. Traditional copyright law was written for a world where copying meant making exact duplicates. AI operates in a gray area: it doesn’t copy pixel-for-pixel, but it learns patterns and styles that are distinctly associated with specific creators.
What Happens Next?
OpenAI now faces a critical decision. It can comply with CODA’s demands, which would likely require fundamentally changing how Sora 2 is trained. Or it can fight, potentially facing legal battles in Japan and setting a precedent that could affect AI development globally.
The stakes are enormous. If CODA succeeds, it could trigger a domino effect. Other industries and countries might follow suit, demanding that AI companies obtain explicit permission before using copyrighted material for training. That would dramatically slow AI development and potentially make it economically unfeasible for many applications.
On the other hand, if OpenAI prevails, it would essentially give AI companies carte blanche to use any publicly available content for training, regardless of copyright protections. That would be a massive blow to creators’ rights.
The Opt-In vs. Opt-Out Debate
At the heart of this controversy is a fundamental philosophical question: Should the default be permission or prohibition?
OpenAI’s opt-out approach assumes that using content is acceptable unless someone specifically objects. CODA’s position, grounded in Japanese copyright law, assumes the opposite: that permission must be obtained first.
This isn’t just a legal technicality. It reflects different values about ownership, creativity, and the balance between innovation and protection of existing rights.
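The practical gap between the two defaults is easy to see in a toy data-filtering sketch. This is purely illustrative; the function names and catalog are hypothetical, not a depiction of OpenAI’s actual pipeline:

```python
# Illustrative only: how the two defaults treat rightsholder silence.

def filter_opt_out(works, objections):
    """Opt-out default: every work is usable unless someone has objected."""
    return [w for w in works if w not in objections]

def filter_opt_in(works, licenses):
    """Opt-in default: no work is usable without prior permission."""
    return [w for w in works if w in licenses]

catalog = ["film_a", "film_b", "film_c"]

# Under opt-out, silence counts as consent: everything trains by default.
usable_opt_out = filter_opt_out(catalog, objections=set())  # all three works

# Under opt-in, silence counts as refusal: nothing trains by default.
usable_opt_in = filter_opt_in(catalog, licenses=set())      # empty list
```

The code is trivial on purpose: the entire dispute turns on which of these two defaults the law imposes when a rightsholder has said nothing at all.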
Industry Implications
The gaming and entertainment industries are watching this closely. If CODA succeeds, it could establish a template for how other creative industries protect their content from AI training.
Android Headlines reports that this challenge “signals a global showdown over AI training methods.” That’s not hyperbole. How this dispute resolves could determine the future relationship between AI companies and content creators worldwide.
The Irony of Innovation
There’s a certain irony here. AI companies position themselves as innovators, pushing the boundaries of what’s possible. But their innovation depends on the creative work of others: artists, writers, animators, and game developers who spent years honing their craft.
When Sora 2 generates a video in the style of Studio Ghibli, it’s not creating something from nothing. It’s remixing and recombining patterns learned from actual Ghibli films. The question is whether that constitutes theft, inspiration, or something entirely new that existing copyright law doesn’t adequately address.
Looking Forward
This confrontation between CODA and OpenAI is just the beginning. As AI becomes more sophisticated, these conflicts will only intensify. We’re entering an era where the line between human creativity and machine generation is increasingly blurred.
The resolution of this dispute will help define that line. Will AI companies be required to compensate creators whose work trains their models? Will there be a licensing system for training data? Or will courts decide that AI training falls under fair use, leaving creators with little recourse?
Whatever happens, one thing is clear: the days of AI companies freely scraping content without consequence are coming to an end. CODA’s challenge represents a turning point, a moment when creators said “enough” and demanded respect for their work.
The Bottom Line

Studio Ghibli, Square Enix, Bandai Namco, and other Japanese entertainment giants aren’t just protecting their bottom line. They’re fighting for the principle that creativity has value and that value should be respected, even in the age of artificial intelligence.
OpenAI now has a choice. It can work with creators to develop a fair system for using copyrighted content in AI training. Or it can fight, risking legal battles, regulatory crackdowns, and damage to its reputation.
The smart money says OpenAI will eventually compromise. The alternative, being shut out of the Japanese market and facing similar challenges worldwide, is simply too costly. But how that compromise takes shape will determine the future of AI development for years to come.
For now, the world watches as Japanese entertainment giants take on one of Silicon Valley’s most powerful companies. It’s David versus Goliath, except David has government backing, legal precedent, and the moral high ground.
And in this version of the story, David might just win.
Sources
- The Verge – Studio Ghibli, Bandai Namco, Square Enix demand OpenAI stop using their content to train AI
- Eurogamer – Japanese publishers like Bandai Namco and Square Enix are requesting Sora 2 is no longer trained on their creative works
- Slashdot – Studio Ghibli, Bandai Namco, Square Enix Demand OpenAI Stop Using Their Content To Train AI
- PC Gamer – Square Enix, Bandai, and other Japanese studios demand OpenAI stop using their content without permission
- IGN – Japanese Organization Representing the Likes of Bandai Namco, Square Enix, and Studio Ghibli Demands OpenAI Ceases Unauthorized Training of Sora 2
- GameSpot – Studio Ghibli And Japanese Game Publishers Demand OpenAI Stop Using Their Content In Sora 2
- Android Headlines – Ghibli & Bandai Lead Charge Against OpenAI Over Sora 2 Training Data
- CODA Official Statement