YouTube to roll out labels for 'realistic' AI-generated content

YouTube, the Google-owned platform, has announced that it will soon require creators to disclose when their videos contain content generated by artificial intelligence (AI) that may mislead viewers. The policy update aims to prevent confusion among users, as AI technology has made it increasingly easy to create realistic text, images, videos, and audio that can be hard to distinguish from genuine content.

The disclosure labels will be required only for AI-generated or synthetic content that is realistic, which can include videos depicting events that never happened or individuals saying or doing things they didn't do. YouTube says the policy is especially important for sensitive topics such as elections, conflicts, and public health crises.

The growth of AI tools has raised concerns about the potential spread of convincing yet misleading content, particularly in the lead-up to the 2024 elections. Other platforms, including TikTok and Meta (the parent company of Facebook and Instagram), have also introduced rules to increase transparency around AI-generated content. YouTube's new policy follows its launch of AI-powered tools designed to help creators produce videos and reach a wider audience.

The option to add an AI-generated disclosure label will appear during the video upload process and will start rolling out early next year. Failure to comply with the new requirements may lead to penalties such as content removal or suspension from YouTube's Partner Program.

Additionally, YouTube will now allow users to request the removal of AI-generated or manipulated content that simulates an identifiable individual's face or voice under its privacy request process. Music partners will likewise be able to request the removal of AI-generated music that mimics specific artists' voices.