Protecting Your Likeness From AI: What You Need to Know


There’s no playbook to ensure you don’t end up starring in an AI-generated video, so here are some resources that can help


The rise of artificial intelligence models able to quickly generate videos from the slimmest of content has sparked concern among high-profile figures, celebrities and entertainers wondering: Will I end up as AI slop?

That fear came to life last month when OpenAI launched the Sora 2 AI video generator and the Sora social app, raising anew the question of what recourse individuals – average Joes and superstar celebrities alike – have to prevent their likenesses from being used to train the models or appear in AI-generated videos.

As such, I set out to write a step-by-step guide on how to protect yourself. No such guide exists for a reason: the laws governing AI either don’t apply to huge swaths of this country or haven’t been passed yet, and different companies have different policies. 

But there are some resources available, including some from the companies themselves. Here are the basics on what you need to know about your likeness and AI, from what’s available now to the potential remedies coming down the line. 

What the law covers

Hoping for the law to catch up to AI models that are constantly updating and expanding is fruitless. But there are some laws that provide protection — if you happen to live in the right place. 

It’s no coincidence that California, home to many of the technology companies investing in AI models and components, is one state that offers such protections. Its right of publicity statutes allow you to pursue civil action for the unauthorized use of your name, voice, signature, photograph or likeness, and the state has layered on newer AI-specific measures such as the AI Transparency Act.

There’s also a post-mortem right of publicity that lasts 70 years after death. 

“It has the most developed body of law,” said Jeffrey Rosenthal, an attorney and partner in Blank Rome’s privacy, security and data protection group.

Thanks to Nashville’s big music scene, Tennessee is another state with firm protections under the ELVIS Act, or Ensuring Likeness Voice and Image Security Act, which was signed into law in 2024. In addition to the elements protected in California, it explicitly extends protection to a person’s voice, including unauthorized simulations – especially when it comes to AI. 

While not about likeness, Colorado adopted the Colorado Artificial Intelligence Act, which governs the deployment of high-risk AI systems and requires that consumers be told when they’re interacting with AI. 

There is a patchwork of other efforts, but the protections are inconsistent. The federal Take It Down Act was signed into law, but it narrowly focuses on criminalizing the unauthorized sharing of intimate images and deepfakes. 

There’s a broader effort on the federal front with a proposed bill called the NO FAKES Act working its way through Congress. It would hold individuals or companies liable if they produced content that used your voice or likeness without your permission, and it would also apply to platforms that spread that content. 

Until then, depending on where you live, you may be stuck dealing with the problem yourself. 

Working with platforms

Following the initial criticism leveled at Sora, OpenAI said it would add more guardrails and took down videos featuring Martin Luther King, Jr. (but only when asked by his estate in reaction to clips showing him spouting racist comments). Videos of Bob Ross, Michael Jackson and other deceased celebrities remain on the platform.

A company spokeswoman said that for public figures who are recently deceased, an authorized representative or owners of their estate can request that their likeness not be used in Sora cameos. 

AI videos of Martin Luther King Jr. proliferated across the Sora app until OpenAI finally took them down. (Getty Images)

But the King case involves an extremely well-known figure and an estate with resources. For entertainers, it’s best to be proactive. Talk to talent agencies, some of which have sent Sora opt-out notices for their clients. Or go to the studios, which have also been working with OpenAI to flag problematic content, although largely from an IP perspective. 

Anyone who wants to flag a violation or unauthorized use of their image can go here. You can also flag the issue in the app itself. 

On YouTube, you’re able to log a privacy complaint if you see yourself or an AI-generated version of you in a video. The service, owned by Google, updated its guidelines to account for AI in 2023. 

You can file a privacy complaint here. 

TikTok outlines how you can flag a video that goes against its community guidelines here. 

Legal experts say regardless of whether you use these platforms, it pays to understand all of the places where your image could end up.

“Familiarize yourself with these new platforms and the tools available to proactively protect yourself,” said Lauren Spahn, a member of the intellectual property group at law firm Buchalter. “With the proliferation of AI at such a rapid rate, laws are trying to keep up, so you really do have to be on your toes and remain up to speed on the current landscape.”

Resources available to you

Because some AI models scrape public data or feeds from social media, keeping a low profile is a good way to prevent your information or likeness from being ingested. When it comes to Sora, if you don’t want your likeness to be used, don’t upload your image and give it permission (that sounds obvious, but the lure of putting yourself in a wacky AI video has been difficult to resist for many — including me).


But for high-profile figures like movie stars or athletes, that’s impossible. Spahn said there are tools that let you alter your images with “invisible” changes designed to make it far harder for the images to be manipulated or duplicated by AI models. 

“I suspect many celebrities have already begun incorporating discreet mechanisms” like this, Spahn said.

Mobile chip manufacturer Qualcomm has introduced technology that lets you verify that photos taken with an Android phone are legitimate and have not been altered by AI or turned into deepfakes.  

YouTube, meanwhile, has introduced an AI-powered likeness detection tool that scans the platform for unauthorized images of individuals and flags them through the affected creator’s account. The program began through a partnership with Creative Artists Agency and expanded to top creators such as MrBeast and Marques Brownlee.

YouTube said earlier this month that it plans to make the feature available to members of its YouTube Partner Program, with all creators getting access in January. The company added that it is considering how to expand the program to other high-profile figures, as it currently requires a YouTube account to work. 

While the proliferation of different social media platforms and AI models makes it difficult, experts urge staying on top of them as much as possible. 

“Once that image is out there, it’s out there,” Rosenthal said. “It’s challenging to put the genie back in the bottle.” 
