Luma AI CEO Sees First Film ‘Significantly Aided by AI’ Coming in 2026


The company secured $900 million in funding as part of Saudi AI company Humain’s investment in U.S. AI businesses

Luma AI CEO Amit Jain (Photo via Luma AI)

Heed Luma AI CEO Amit Jain’s warning about AI: Buckle up, because the pace of AI innovation is only going to accelerate in 2026. 

“If it seems fast now, it might actually be faster,” Jain told TheWrap about his expectations for next year. He teased that 2026 would be when we see the first film “significantly aided by AI,” though it would likely be a “short-form or mid-form production.”

Luma AI, which builds AI video models used by many studios, was one of several companies that received funding Wednesday from Humain, an AI company backed by Saudi Arabia’s Public Investment Fund. The deal was part of a broader investment package announced by Saudi Crown Prince Mohammed bin Salman during his visit to Washington, D.C., to see President Trump.

Luma’s funding totaled $900 million and, in addition to Humain, counted AMD and existing partners like Andreessen Horowitz as backers. An insider at the startup said the latest round brought its valuation to roughly $4 billion. While individuals like comedians have caught flak for taking money from Saudi Arabia because of its repressive regime, companies have been left relatively unscathed. Jain declined to comment on the issue.

The other aspect of the deal with Humain gives Luma AI access to the company’s planned 2-gigawatt “supercluster,” a massive computing network of GPUs and other AI hardware designed to train models. The center, dubbed Project Halo, is expected to be completed by 2028, and Jain said the next largest supercluster is about 1,000 times smaller.

That data-crunching capacity is critical to Luma AI because it’s building “world models,” which, unlike large language models, can visualize and understand the world much as humans perceive it. That will improve AI’s ability to simulate the world and, for entertainment purposes, generate video more accurately.

Luma AI’s existing Ray 3 model, unveiled in September, was touted as the first “reasoning” video model, meaning it generates video clips and self-analyzes to optimize and fix details, going through multiple iterations before it spits out the final product. The original version, Ray 1, was dubbed “Dream Machine.”

“It’s really interesting because instead of generating text, it generates scenes and pixels and then it questions itself,” he said. 

Ray 3, which Jain said is being used or tested at five of the six major studios (he declined to name them), will get an update in the coming weeks with Ray 3.1, which he said will be the first video model to output video in 2K HDR.

But even with that update, Jain said the model is better suited to creating elements or scenes for short- or mid-form video. For AI to adequately generate a full 90-minute feature, he said, a world model would be needed to render a consistent and convincing world.

More broadly, Jain said world models are the next step toward artificial general intelligence, or AGI, the point at which AI reaches or exceeds human-level intelligence and gains the ability to teach itself. It’s a goal driving many of the major tech giants, which are pouring billions of dollars into R&D in a bid to get there first.

Jain declined to say when he thought we would reach AGI, a timeline still hotly debated with little consensus, but he believes world models will play a role in getting there.

“A world model is a necessary condition, but not a sufficient condition for AGI,” he said.
