YouTube has introduced a new suite of AI detection tools aimed at safeguarding creators, including artists, actors, musicians, and athletes, from having their likenesses—such as their face and voice—copied and misused in videos.
This innovation comes amid growing concerns over the unauthorized use of AI-generated content.
A key feature of this new technology is the expansion of YouTube’s existing Content ID system, which currently identifies copyright-protected material. The update adds a synthetic-singing identification tool that detects AI-generated content mimicking a person’s singing voice.
YouTube is also working on technology to detect AI-simulated faces, marking a significant step in protecting creators from misuse.
In addition to detection tools, YouTube is addressing the broader issue of its content being used to train AI models. This has been a longstanding complaint from creators, who argue that companies like Apple, Nvidia, Anthropic, OpenAI, and Google have used their content for AI training without consent or compensation.
While YouTube has yet to provide a concrete plan, it has indicated that it is developing solutions to give creators control over how third parties use their content.
“We’re developing new ways to give YouTube creators choice over how third parties might use their content on our platform. We’ll have more to share later this year,” YouTube said in a statement.
The video-sharing platform is also moving forward with efforts to compensate artists whose work has been used to create AI-generated music. YouTube previously collaborated with Universal Music Group (UMG) to find a solution and plans to extend its Content ID system to identify rightsholders entitled to payment when AI-generated music features their work. Early next year, YouTube will pilot the synthetic-singing identification technology with its partners.
Additionally, YouTube is developing tools to help high-profile figures, such as actors and athletes, detect and manage AI-generated content using their likenesses. This aims to prevent misuse, including false endorsements or the spread of misinformation. The timeline for this system’s rollout remains unclear, but it is actively in development.
“As AI evolves, we believe it should enhance human creativity, not replace it. We’re committed to working with our partners to ensure future advancements amplify their voices, and we’ll continue to develop guardrails to address concerns and achieve our common goals,” YouTube stated in the announcement.