YouTube is cracking down on artificial intelligence (AI)-generated content posted on the platform, and as a first step, it has asked creators to disclose when a video has been altered or generated using AI tools. The announcement comes after the video-streaming giant updated its content policy in November 2023 to create transparency around AI-generated videos. The Google-owned platform also revealed that it will itself add labels when a creator posts a video on a sensitive subject without adding the AI content label.

In an announcement made via its blog post on Monday, YouTube said, “We’re beginning to roll out a new tool today that will require creators to share when the content they’re uploading is meaningfully altered or synthetically generated and seems realistic.” It added that the disclosure step will be built into the video-uploading workflow so that creators can easily add the label.

YouTube AI label workflow
Photo Credit: YouTube


Based on a screenshot shared in the post, creators will find the disclosure on the first page of the upload workflow, right under the disclaimer for paid promotion. The new section, titled ‘Altered Content’, asks three questions: whether the video makes a real person say or do something they did not, whether it alters footage of a real event or place, and whether it shows a realistic-looking scene that did not occur. If the video contains any of these elements, creators must mark ‘Yes’, and YouTube will automatically add a label to the description of the uploaded video.

The label will appear in a new description section titled “How this content was made”, which will state “Altered or synthetic content – Sound or visuals were significantly edited or digitally generated.” The labels will show up in both long-form videos and Shorts; Shorts will also carry a more prominent tag above the channel’s name. YouTube said that viewers will see the labels on the Android and iOS apps first, with the web interface and TV following later. For creators, the disclosure workflow will appear on the web interface first.

Failure to disclose AI-generated content will also be penalised. YouTube highlighted that, for now, it will give creators time to adjust to the new requirements, but over time it will introduce penalties including content removal, suspension from the YouTube Partner Programme, and more.

YouTube announced its updated content policy and its focus on AI-generated content in November 2023 amid rising instances of deepfakes. It said it would introduce disclosure tools and an option for viewers to request the removal of “AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.” A separate set of rules was also announced to protect the content of music labels and artists.

