According to Cointelegraph, YouTube has released new community guidelines requiring the disclosure of artificial intelligence (AI) use in content. The video streaming platform announced on November 14 that creators will be required to inform viewers when the content shown is 'synthetic.' This includes AI-generated videos that realistically depict events that never happened or people saying or doing things they never did.
YouTube will display this information for viewers in two ways: a new label added to the description panel and, for content about sensitive topics, a more prominent label on the video player. Sensitive topics include political elections, ongoing conflicts, public health crises, and public officials. YouTube plans to work with creators to help its community better understand the new guidelines. However, those who do not abide by the rules may face content removal, suspension from the YouTube Partner Program, or other penalties.
The platform also addressed the issue of AI-generated deepfakes, which have become increasingly common and realistic. YouTube is introducing a new feature that allows users to request the removal of a synthetic video simulating an identifiable individual, including their face or voice, through the platform's privacy request process. Recently, several celebrities and public figures have battled deepfake videos of themselves endorsing products.
AI-generated content has also caused problems for the music industry, with many deepfakes of artists built on unauthorized vocal or track samples circulating online. YouTube's updated community guidelines state that the platform will remove AI-generated music or content that mimics an artist's unique singing or rapping voice at the request of its music partners. Over the summer, YouTube began working on its principles for collaborating with the music industry on AI technology.
In addition to the community guidelines, YouTube recently released new experimental AI chatbots that converse with viewers while they watch a video.