As Hollywood grapples with the upheaval caused by current AI tools — from writers striking over machine-generated scripts to VFX artists watching their jobs being reshaped in real time — Silicon Valley is already racing ahead. The next frontier? Artificial general intelligence, or AGI. For Hollywood creatives still catching their breath from the first round of disruption, the idea of AI “smarter than Nobel Prize winners” raises new, unsettling questions about the future of storytelling itself. Namely, can AI ever truly be creative?
The race toward AGI, a form of AI purported to surpass human capabilities, is accelerating across Silicon Valley. Google DeepMind warns that human-level AI could plausibly arrive by 2030, while urging renewed focus on safety measures. OpenAI’s Sam Altman has gone further, declaring, “We are now confident we know how to build AGI” and that the company is already setting its sights on superintelligence.
Meanwhile, Anthropic CEO Dario Amodei, though dismissing AGI as “more of a marketing term,” still predicts AI systems “better than almost all humans at almost all tasks” could emerge within just two to three years. How this might affect employment is anybody’s guess at this point.
One element conspicuously absent from these AGI definitions is creativity — a quality that defines humanity and forms the foundation of Hollywood’s entire output. This glaring gap points to a fundamental challenge: How do you measure or benchmark creativity, and more importantly, how do you build it into AI systems? Unlike tasks such as coding or data analysis, creativity resists straightforward quantification and implementation.
“I don’t ever think there’s, like, I have an idea for a great movie in the shower, and then I pop it into this machine, and then seconds later, a fully formed two-hour blockbuster comes out of my laptop,” Stephen Piron, co-founder of Pickford AI, an L.A.-based startup integrating generative AI into Hollywood’s storytelling processes, told TheWrap. “I just don’t think that will ever happen.”
It’s worth planting a massive red flag right here: There is no consensus whatsoever on what AGI actually means or when — if ever — it might arrive. Apple co-founder Steve Wozniak famously suggested a practical definition: AI could be considered generally intelligent when it could walk into an unfamiliar house and make a cup of coffee without instruction. Meanwhile, OpenAI and Microsoft have reportedly defined AGI in their partnership agreement as systems capable of generating $100 billion in profit.
These wildly contradictory standards help explain why many AI researchers and experts who aren’t beholden to venture capital funding cycles or company valuations are deeply skeptical of such ambitious timelines. They point to fundamental technical limitations in current AI approaches, from reasoning and causal understanding to common-sense knowledge.
Meanwhile, researchers like Georgia Tech’s Mark Riedl have proposed tests such as the Lovelace 2.0 benchmark, which challenges AI systems to create artifacts like poems or images based on specific human requests. But even as systems like DALL-E demonstrate impressive generative capabilities, the deeper aspects of creative intelligence — understanding context, breaking rules purposefully, or creating work with emotional resonance — remain elusive targets for AI development.
AI capabilities will likely continue to advance, generating both anxiety and opportunity in creative industries. That tension surfaced recently when OpenAI released a new image generation feature for ChatGPT, which users immediately began using to mimic the iconic hand-drawn style of Studio Ghibli.
The results marked a clear step up from earlier AI-generated images — more cohesive, more painterly. But they also reignited concerns about copyright and creative appropriation. For many artists, it felt like a tipping point: proof that generative tools were not just getting better at imitating beloved creative styles but at exploiting them.

Pickford’s Piron, who studied under Geoffrey Hinton — the Nobel Prize-winning “godfather of AI” whose neural network research laid the groundwork for today’s generative models — sees clear limitations to what AI can achieve creatively.
Piron’s earlier startup created one of the first deepfakes in 2019, a video of Joe Rogan that attracted Hollywood’s attention for international dubbing and dialogue correction. Rather than waiting for a hypothetical AGI revolution, he points to how AI is already reshaping Hollywood through incremental advances that don’t require general intelligence: compressing costs, enabling rapid concept iteration and potentially transforming the medium itself. These changes are happening now, with today’s specialized AI tools, not some future superintelligent system.

Janelle Shane, a research scientist and author of the AI Weirdness blog, which explores the unexpected and often humorous behavior of AI systems, points out that much of what’s being automated today isn’t general intelligence; it’s general-purpose filler. “You don’t need general intelligence, you don’t need quality, in order for people to be selling millions of generated books and trying to capture clicks with generated websites,” she said. “But that’s not how you get world-changing art.”
The kind of work done at the top levels of Hollywood — crafting a multi-million-dollar blockbuster, building a television season arc, even editing a punchy comedy script — isn’t a general task. It’s expert-level, emotionally calibrated, high-stakes storytelling. As Shane put it, “What people have built is not a tool for creatives.”
Tension between control and randomness
This distinction is crucial for creative fields, where even advanced AI systems would face a steep challenge: balancing control with creativity. The very architecture of today’s AI models creates a technical conundrum. These systems generate content by predicting the most likely next word or image based on patterns in their training data. Engineers work tirelessly to reduce “hallucinations” — AI-generated content that sounds plausible but is factually incorrect — by implementing guardrails that keep outputs reliable and safe.
But this push toward reliability directly conflicts with creative needs. “There’s always been this tension between having output be well controlled, versus having enough randomness to get something new,” Shane said. Tighten the controls too much, and you get technically accurate but creatively lifeless content. Loosen them, and you risk unpredictable, potentially problematic outputs.
This technical limitation isn’t just a temporary challenge — it’s baked into how these systems work. Even an “artificial general intelligence” as imagined by Silicon Valley would face this same fundamental tradeoff between control and creative expression.
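To make that tradeoff concrete, below is a minimal, illustrative sketch of temperature-based sampling, the standard dial that text generators of this kind use to trade predictability for novelty. The vocabulary, scores and the sample_next_token helper are invented for illustration and are not taken from any model or company mentioned in this story.

```python
import numpy as np

def sample_next_token(logits, temperature):
    """Pick the next token from model scores.

    Low temperature -> near-deterministic, "safe" choices;
    high temperature -> more randomness, more surprise (and more risk).
    """
    logits = np.asarray(logits, dtype=float)
    scaled = logits / max(temperature, 1e-8)   # temperature rescales the model's confidence
    probs = np.exp(scaled - scaled.max())      # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

# Toy example: candidate words after "The hero opened the ..."
vocab = ["door", "window", "portal", "refrigerator"]
logits = [3.0, 2.0, 1.0, 0.5]                  # illustrative scores only

for t in (0.2, 1.0, 1.8):
    picks = [vocab[sample_next_token(logits, t)] for _ in range(1000)]
    print(t, {w: picks.count(w) for w in vocab})
```

At a temperature of 0.2 the sampler almost always returns “door,” the reliable but lifeless choice; at 1.8 it regularly surfaces “portal” or “refrigerator,” which is where both creative surprise and hallucination risk live. That single parameter is the tension Shane describes, in miniature.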
To understand how AI is reshaping creativity, it helps to zoom out beyond the tools themselves. For artist and author K Allado-McDowell — who has spent the past decade exploring the intersection of emerging technologies and artistic practice — the more urgent story isn’t about what AI can do, but about the conditions that shaped its rise.

“What’s happening now with AI is really an extension of what the internet started,” Allado-McDowell, who prefers plural pronouns, said. “The internet put all the information in one place, it made it easy to scrape and train on and it also set up the incentive structure that AI is responding to.”
“Behaving just like AI does”
That structure, Allado-McDowell argues, isn’t just about technology. It’s about economics: the demand for more content, faster, cheaper and optimized for platforms that prioritize volume over originality. AI didn’t create that demand — it just fits into it perfectly.
“A lot of times people describe AI output as slop because it’s kind of non-specific and formulaic,” the artist said. “But we’re already getting that same outcome through traditional production methods. Because when you optimize creative processes for those conditions, you’re behaving just like an AI does.”
In other words, the problem isn’t just generative tools — it’s the economic logic they’re serving. But even so, Allado-McDowell sees room for something more liberating on the horizon. If tools continue to improve and become more intuitive, they believe the real opportunity might lie in lowering the barrier to entry for people long shut out of traditional systems.
“I really hope that the people that are writing scripts that never get produced, or that are dreaming up things that never see the light of day because of the studio system, would have the opportunity to make something cheaply and affordably and quickly that they’re satisfied with,” Allado-McDowell said. “That, to me, seems like the only real possibility for this to be a kind of creatively liberatory development.”