YouTube videos are being used to “responsibly” train artificial intelligence models from Google, the platform’s parent company, including Gemini and Veo 3, its text-to-video generator.
This has been going on for some time — YouTube shared a blog post in September outlining how content was being used to train its AI models — but it flew under the radar until CNBC reported on it on Thursday. As the outlet noted, creators who post videos to YouTube have no way of opting out of having their content used to train Google’s AI models.
This is done, YouTube noted in its blog, as a way to “improve the product experience” for both viewers and creators. The company said several times it is focused on doing this in a “responsible” way — without going into much detail on what that means — and added that YouTube videos are helping develop “new generative AI features like auto dubbing,” as well as making improvements to its recommendation systems.
“Moving forward, we remain committed to ensuring that YouTube content used across Google or YouTube for the development of our AI-powered tools is done responsibly,” the company noted.
It is also unclear how many of the more than 20 billion videos on YouTube are being used to train Google AI models. A company rep told CNBC that YouTube recognizes “the need for guardrails” with AI, and that it is investing in “robust protections” for creators.
At its annual conference last month, Google unveiled Flow, a new AI filmmaking tool designed to help professional and upstart directors develop scenes for TV shows and movies. Google was also recently announced as one of the new investors in Promise, the L.A.-based film studio; other investors include Crossbeam Venture Partners, the VC firm tied to Michael Ovitz, and Andreessen Horowitz.
“We believe AI should enhance human creativity, not replace it,” YouTube added in its blog post last year.