YouTube is expanding its new “likeness detection” technology, which identifies AI-generated content such as deepfakes, to people in the entertainment industry, the company announced on Tuesday.

The technology works similarly to YouTube’s existing Content ID system, which detects copyright-protected material in users’ uploaded videos, allowing rights owners to request removal or share in the video’s revenue.

Likeness detection does the same, but for simulated faces.

The feature is meant to help protect creators and other public figures from having their identities used without their permission — a common problem for celebrities who find their likenesses have been used in scams. The technology was first made available to a subset of YouTube creators in a pilot program last year before expanding more broadly to include politicians, government officials, and journalists this spring.

Now YouTube says the technology is being made available to those in the entertainment industry, including talent agencies, management companies, and the celebrities they represent.

The company has support from major agencies like CAA, UTA, WME, and Untitled Management, which offered feedback on the new tool. Use of the likeness detection tool does not require entertainers to have their own YouTube channels. Instead, the feature scans for AI-generated content to detect visual matches of an enrolled participant’s face. Users can then choose to request removal of the video for privacy policy violations, submit a copyright removal request, or do nothing. YouTube notes that it won’t remove all content, as it permits parody and satire content under its rules.

In the future, the technology will support audio as well, the company says.

Relatedly, YouTube has also been advocating for similar protections at the federal level, supporting the NO FAKES Act in Washington, D.C. The bill would regulate the use of AI to create unauthorized re-creations of an individual’s voice and visual likeness.
