What Happened to Lionsgate’s Splashy Plan to Make AI Movies With Runway? It’s Complicated | Exclusive

The agreement with the AI startup serves as a cautionary tale of the pitfalls of embracing a technology too early

"Twilight," "John Wick" and "The Hunger Games" (Lionsgate/Christopher Smith for TheWrap)

A year ago, Lionsgate and Runway, an artificial intelligence startup, unveiled a groundbreaking partnership to train a custom AI model on the studio’s library of films, with the ultimate goal of creating shows and movies using AI.

But that partnership hit some early snags. It turns out utilizing AI is harder than it sounds.  

Over the last 12 months, the deal has encountered unforeseen complications, from the limited capabilities that come from using just Runway’s AI model to copyright concerns over Lionsgate’s own library and the potential ancillary rights of actors.

Those problems run counter to the big promises made by Lionsgate both at the time of the deal and in recent months. “Runway is a visionary, best-in-class partner who will help us utilize AI to develop cutting edge, capital efficient content creation opportunities,” Lionsgate Vice Chairman Michael Burns said in its announcement with Runway a year ago. Last month, he bragged to New York magazine’s Vulture that he could use AI to remake one of its action franchises (an allusion to “John Wick”) into a PG-13 anime. “Three hours later, I’ll have the movie.”

The reality is that a single custom model powered by the limited Lionsgate catalog isn’t enough to create those kinds of large-scale projects, according to two people familiar with the situation. It’s not that anything was wrong with Runway’s model; the data set simply wasn’t sufficient for the ambitious projects they were shooting for.

“The Lionsgate catalog is too small to create a model,” said a person familiar with the situation. “In fact, the Disney catalog is too small to create a model.”

On paper, the deal made a lot of sense. Lionsgate would jump out of the gate with an AI partnership at a time when other media companies were still trying to figure out the technology. Runway, meanwhile, would get around the thorny IP licensing debate and potentially create a model for future studio clients. The partnership opened the door to the idea that a specifically tuned AI model could eventually create a fully formed trailer — or even scenes from a movie — based on nothing but the right prompt.

The challenges facing both Lionsgate and Runway offer a cautionary tale of the risks that come with jumping on the AI hype train too early. It’s a story playing out across industries, from McDonald’s backing away from an early test of a generative AI drive-thru ordering system to Swedish fintech firm Klarna slashing its workforce in favor of AI, only to backpedal and rehire many of those same employees.

It’s also a lesson that Hollywood is learning as more studios quietly embrace AI, even if it’s in fits and starts. Netflix co-CEO Ted Sarandos revealed on an investor call in July that his company had used generative AI for the first time, on the Argentinian sci-fi series “The Eternaut,” which was released in April. But when actress Natasha Lyonne said her directorial debut would be an animated film that embraced AI, she was bombarded with criticism on social media.

Then there’s the thorny issue of copyright protections, both for talent involved with the films being used to train those AI models, and for the content being generated on the other end. The inherent legal ambiguity of AI work likely has studio lawyers urging caution.

“In the movie and television industry, each production will have a variety of interested rights holders,” said Ray Seilie, attorney at Kinsella Holley Iser Kump Steinsapir LLP. “Now that there’s this tech where you can create an AI video of an actor saying something they did not say, that kind of right gets very thorny.” 

A Lionsgate spokesman said the studio is still pursuing AI initiatives on “several fronts as planned” and noted that its deal with Runway isn’t exclusive.

A spokesman for Runway didn’t respond to a request for comment. 

Limitations of going solo

Under the agreement announced a year ago, Lionsgate would hand over its library to Runway, which would use all of that valuable IP to train its model. The key is the proprietary nature of the partnership: the custom model would be a variant of Runway’s core generative AI model trained on Lionsgate’s assets, but would be accessible only to the studio itself.

In other words, another random company couldn’t tap into this specially trained model to create their own AI-generated video. 

But relying on just Lionsgate assets wasn’t enough to adequately train the model, according to a person familiar with the situation. Another AI expert with knowledge of its current use in film production also said that any bespoke model built around any single studio’s library will have limits as to what it can feasibly do to cut down a project’s timeline and costs.

“To use any generative AI models in all the thousands of potential outputs and versions and scenes and ways that a production might need, you need as much data as possible for it to understand context and then to render the right frames, human musculature, physics, lighting and other elements of any given shot,” the expert said.

But even models with access to vastly larger amounts of video and audio material than Lionsgate and Runway’s are facing roadblocks. Take Veo 3, a generative AI model developed by Google that lets users create eight-second clips from a simple prompt. That model has pulled, along with other media, the entire 20-year archive of YouTube into its data set — a corpus far greater than the 20,000-plus film and TV titles in Lionsgate’s library.

“Google claims that data set is clean because of YouTube’s end-user license agreement. That’s a battle that’s going to be played out in the courts for a while,” the AI expert said. “But even with their vast data sets, they are struggling to render human physics like lip sync and musculature consistently.”

Nowadays, studios are learning that no single model is enough to meet the needs of filmmakers because each model has its own specific strengths and weaknesses. One might be good at generating realistic facial expressions, while another might be good at visual effects or creating convincing crowds.

“To create a full professional workflow, you need more than just one model; you need an ecosystem,” said Jonathan Yunger, CEO of Arcana Labs, which created the first AI-generated short film and whose platform works with many AI tools like Luma AI, Kling and, yes, Runway. Yunger didn’t comment on the Lionsgate-Runway deal, but talked generally about the practical benefits of working with different AI models. 

Likewise, there’s Adobe’s Firefly, another platform that’s catering to the entertainment industry. On Thursday, Adobe announced it would be the first to support Luma AI’s newest model, Ray3, an update that’s indicative of how quickly the industry is iterating. Like Arcana Labs, Firefly supports a host of models from the likes of Google and OpenAI.

While Lionsgate said the partnership isn’t exclusive, offering its valuable film library to Runway alone effectively limits what the studio can do with other AI models, since those models don’t get the benefit of its library of films.

Screenshot of the short film “Echo Hunter,” starring Breckin Meyer and produced by Arcana Labs as a proof of concept.

Even Arcana Labs, which created the AI-generated short film “Echo Hunter” as a proof of concept using its multi-model platform, faced limitations in what AI can currently do. Yunger noted that even when using models trained on people, you still lose a bit of the performance, and he reiterated the importance of actors and other creatives for any project.

For now, Yunger said that using AI for tasks like tweaking backgrounds or creating custom models of specific sets — smaller details that traditionally take a lot of time and money to replicate physically — is the most effective way to apply the technology. But even in that process, he recommended working with a platform that can utilize multiple AI models rather than just one.

Legally ambiguous

Generative AI and what exactly can be used to train a model occupy a gray legal zone, with small armies of lawyers duking it out in courtrooms around the country. On Tuesday, Walt Disney, NBCUniversal and Warner Bros. Discovery sued Chinese AI firm MiniMax for copyright infringement, just the latest in a series of lawsuits filed by media companies against AI startups.

Then there was the court ruling that found AI company Anthropic could train its model on books it purchased, providing a potential loophole around the need to sign broader licensing deals with the original publishers — a case that could potentially be applied to other forms of media.

“There will be a lot of litigation in the near future to decide whether the copyright alone is enough to give AI companies the right to use that content in their training model,” Seilie said.

Another gray area is whether Lionsgate even has full rights over its own films, and whether there may be ancillary rights that need to be settled with actors, writers or even directors for specific elements of those films, such as likeness or even specific facial features. 

Keanu Reeves might want to have a say on whether his face would be used to train an AI model despite Lionsgate’s ownership of the “John Wick” franchise. (Lionsgate)

Seilie said there’s likely a tug-of-war going on at various studios about how far they’re able to go, with lawyers erring on the side of caution and “seeking permission rather than forgiveness.”

Jacob Noti-Victor, professor at Cardozo Law School, said he was surprised by Burns’ comment in the Vulture article. 

The professor said that depending on the nature of such a film and how much human involvement is in its making, it might not be subject to copyright protection. The U.S. Copyright Office warned as much in a report published in February, saying that creators would have to prove that a substantial amount of human work was used to create a project outside of an AI prompt in order to qualify for copyright protection.

“I think the studios would be leaning on the fact that they would own the IP that the AI is adapting from, but the work itself wouldn’t have full copyright protection,” he said. “Just putting in a prompt like that executive said would lead to a Swiss cheese copyright.”
