AI Scores an Early Win Against Hollywood as Copyright War Escalates


A court ruling this week gives AI companies a loophole to cheaply obtain copyrighted work and get around more costly licensing deals

AI and Hollywood
(Christopher Smith/TheWrap)

Artificial intelligence companies scored a crucial legal victory in their ongoing battle with the media and entertainment worlds. The win tilts the balance of power toward the AI companies and will likely spur them to eschew major deals with Hollywood studios, at least for now, according to industry insiders and legal experts who spoke to TheWrap.

What happened? On Tuesday, California federal judge William Alsup ruled that Anthropic, the company behind the Claude AI chatbot, could train its models on copyrighted works without the consent of creators or rights holders, as long as those works were obtained legally. And there is a lot of wiggle room in how companies can go about that last part; more on that in a moment.

The ruling marks an early win in a legal fight between Hollywood and AI companies that’s been steadily brewing. While AI has left many industries scrambling to figure out the impact of the technology, the entertainment world is particularly concerned over how its intellectual property is being used to empower these algorithms. The Anthropic decision, which was pegged to the company’s use of copyrighted books to train its LLM, will likely leave Hollywood executives feeling even more trepidation.

“This judge is writing almost as if he works for the AI companies in the way he’s framing it,” said Edward Saatchi, CEO of Fable Studios, whose Showrunner AI platform lets users create animated shows.

Saatchi said the “interesting ruling” — which specifically said Anthropic’s decision to train its LLM on copyrighted books fell under fair use because the work was “transformative” — comes as there has been a “war” over AI between two factions in Hollywood in recent years.

The first faction has voiced concern that the use of AI will undermine the fundamental structure of the entertainment world, with filmmaker and AI regulation activist Justine Bateman warning at last year’s TheGrill conference that the technology is going to “burn down the business.”

The second faction takes the opposite view. "You've got people like James Cameron who explicitly say [AI models] are just how brains work," Saatchi said.

Saatchi was referring to comments made by the “Terminator 2” director in which he said AI models already resemble humans, in that they are able to take in information and produce something from it. The concern, Cameron said, should be with AI models spitting out plagiarized material, not the information the models are trained on.

“I think people are looking at it all wrong … The whole thing needs to be managed from a legal perspective, as ‘what’s the output?’ not the input,” Cameron said in April. “You can’t control my input.”

Judge Alsup agreed. As long as the input is obtained legally, the judge ruled, AI companies can use copyrighted material to train their models.

“I think that the AI companies are going to be emboldened by this decision,” Rob Rosenberg, a former general counsel for Showtime Networks and the founder of Telluride Legal Strategies, told TheWrap. “They’re going to say, ‘This is great. This is exactly the result we want. We’re going to go full speed ahead.’”

Rosenberg explained that Tuesday's decision set a ground rule: AI companies must pay for the content they use to train their models. But that does not mean companies like OpenAI, the maker of ChatGPT, need to cut pricey deals with media companies and studios.

AI companies like OpenAI require tremendous amounts of data — text, images, video and more — to train their algorithms. (Getty Images)

Instead, AI companies can spend $2.50 on an issue of The New York Times and upload it to their training database, Rosenberg explained, and meet their legal requirement. The same goes for scripts. An AI company can spend $22.99 to purchase the “Back to the Future” script on Amazon and use it for training — with no need to go to writers Robert Zemeckis and Bob Gale for their blessing.

As Rosenberg said, AI companies now just need to “keep their receipts” and use “those materials to train their models.”

Anthropic, in some instances, accomplished this by legally buying used books and scanning them into its digital library. But the San Francisco-based company will still have to go to trial, after the judge found it "pirated" more than seven million books; each infringed work could cost the company between $750 and $150,000 in statutory damages.

Matt Braunel, a partner and copyright expert at Thompson Coburn LLP, told TheWrap he agreed with Rosenberg — that cases will now hinge on “how companies acquired [content].”

Even before this week's ruling, the relationship between the AI world and the entertainment world has been frosty. Deals have been few and far between; Lionsgate has an agreement that allows Runway to train its models on the studio's film catalog, for example, but that is the exception, not the norm.

OpenAI COO Brad Lightcap said in May that deals between AI companies and studios have stalled because a "level of trust" has not been established between the two sides. That was clear earlier this year, when hundreds of creators, including Ben Stiller, Aubrey Plaza and Joseph Gordon-Levitt, urged the Trump Administration to not make it easier for AI companies to train models on copyrighted content, as OpenAI and Google had called for.

And the war between the AI and entertainment worlds is set for its biggest battle yet after Disney sued Midjourney earlier this month for copyright infringement.

Disney claims Midjourney's image-generating AI tool allowed users to create exact copies of iconic characters like Homer Simpson and Elsa from "Frozen" without consent — and that the company was brazen enough to highlight those examples on its website.

An example of how Midjourney’s AI can create a convincing Homer Simpson, according to the Disney lawsuit.

Braunel said the Disney-Midjourney lawsuit “set the stage” for some dealmaking to take place down the line.

The reason, as he and Rosenberg both explained, is that copyright holders retain some ability to push back on how AI content based on their work is presented, if they can show it harms their business. Disney, for example, could sue a company over an AI-generated Spider-Man smoking a joint. That reality could lead to deals in which rights holders and AI companies agree on limits to how their characters can be presented.

But in the meantime, Braunel said, the Anthropic ruling gives AI companies the green light to be aggressive in how they go about training models on copyrighted material. He added that AI companies will also likely continue to lean on figures, stats and public-domain material, like Shakespeare or the original "Steamboat Willie" version of Mickey Mouse, to train their models. Braunel expects that to be the status quo as the case, and the broader question of what qualifies as fair use, is contested in court.

"You're not going to see those floodgates [on deals] open until you get some appellate court decisions on these issues, and eventually the Supreme Court is probably going to have to weigh in," Braunel said. "And I think we're at least a couple years away from that."

Saatchi, for his part, conceded he is "biased" toward what favors AI companies. But he said he is glad Tuesday's ruling holds that AI companies must pay, somewhere along the line, for the content they use to train their models.

So based on the precedent set by the ruling — and the legal appeals set to come — do not hold your breath waiting for a wave of content licensing deals between major studios and AI companies. The more likely outcome is that AI companies will buy cheap copies of copyrighted work to train their models, at least until the Supreme Court says otherwise.
