YouTube Calls for ‘Appropriate Protections’ in List of Principles for Hosting AI Music

The tech giant also launched the YouTube Music AI Incubator, which will begin with artists from Universal Music Group


As conversations around artificial intelligence continue, YouTube has published its first set of principles regarding AI music. Not only that, but the video streaming site owned by Google has also launched its own YouTube Music AI Incubator.

The three principles were shared by YouTube CEO Neal Mohan. According to the company, they reflect its “commitment to collaborate with the music industry alongside bold and responsible innovation in the space.” Here are the three principles in full:

  • Principle #1: AI is here, and we will embrace it responsibly together with our music partners. As generative AI unlocks ambitious new forms of creativity, YouTube and our partners across the music industry agree to build on our long collaborative history and responsibly embrace this rapidly advancing field. Our goal is to partner with the music industry to empower creativity in a way that enhances our joint pursuit of responsible innovation. 
  • Principle #2: AI is ushering in a new age of creative expression, but it must include appropriate protections and unlock opportunities for music partners who decide to participate. We’re continuing our strong track record of protecting the creative work of artists on YouTube. We’ve made massive investments over the years in the systems that help balance the interests of copyright holders with those of the creative community on YouTube. 
  • Principle #3: We’ve built an industry-leading trust and safety organization and content policies. We will scale those to meet the challenges of AI. We spent years investing in the policies and trust and safety teams that help protect the YouTube community, and we’re also applying these safeguards to AI-generated content. Generative AI systems may amplify current challenges like trademark and copyright abuse, misinformation, spam, and more. But AI can also be used to identify this sort of content, and we’ll continue to invest in the AI-powered technology that helps us protect our community of viewers, creators, artists and songwriters–from Content ID to policies and detection and enforcement systems that keep our platform safe behind the scenes. And we commit to scaling this work even further. 

The announcement of these guiding principles comes alongside the launch of the YouTube Music AI Incubator. The program will kick off with artists, producers and songwriters from Universal Music Group who will “help inform YouTube’s approach to generative AI in music.”

Anitta, Björn Ulvaeus, d4vd, Don Was, Juanes, Louis Bell, Max Richter, Rodney Jerkins, Rosanne Cash, Ryan Tedder, Yo Gotti and the estate of Frank Sinatra are among those to first join the program.

“While some may find my decision controversial, I’ve joined this group with an open mind and purely out of curiosity about how an AI model works and what it could be capable of in a creative process,” Björn Ulvaeus said in a statement. “I believe that the more I understand, the better equipped I’ll be to advocate for and to help protect the rights of my fellow human creators.”

Juanes and Max Richter, who also issued statements about their involvement, nodded to the controversy around this particular topic as well. Juanes said that he chose to be involved “to assure that AI develops responsibly as a tool to empower artists and that it is used respectfully and ethically in ways that amplify human musical expression for generations to come.”

As for Richter, the composer said that AI brings both opportunities and “also raises profound challenges for the creative community.”

“The tech world and the music distribution ecosystem are quickly evolving to embrace this transformative technology and, unless artists are part of this process, there is no way to ensure that our interests will be taken into account,” Richter said.

There’s a good reason why YouTube may want to look like it’s on the ethical frontlines of AI. Not only has the video-sharing platform long been a go-to resource for both established and indie artists, but YouTube is also owned by Google, which has its own AI play.

Google has been publicly researching AI since 2017. And in February of this year, it launched Bard, a conversational AI service powered by its own Language Model for Dialogue Applications (LaMDA).

Google was among the seven companies that met with President Biden in July and committed to third-party testing as well as watermarking. The latter measure is meant to ensure the average person does not mistake an AI-generated deepfake for a real person, as when Ron DeSantis’ campaign used AI to recreate Donald Trump’s voice for a political ad. These commitments were voluntary, however, meaning the companies will be policing themselves.