Our approach to responsible AI innovation
Nov 14, 2023
Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators on our platform. But just as important, these opportunities must be balanced with our responsibility to protect the YouTube community. All content uploaded to YouTube is subject to our Community Guidelines—regardless of how it’s generated—but we also know that AI will introduce new risks and will require new approaches.
We're in the early stages of our work, and will continue to evolve our approach as we learn more. Here’s a look at what YouTube will roll out over the coming months and into the new year.
We believe it’s in everyone’s interest to maintain a healthy ecosystem of information on YouTube. We have long-standing policies that prohibit technically manipulated content that misleads viewers and may pose a serious risk of egregious harm. However, AI’s powerful new forms of storytelling can also be used to generate content that has the potential to mislead viewers—particularly if they’re unaware that the video has been altered or is synthetically created.
To address this concern, over the coming months, we’ll introduce updates that inform viewers when the content they’re seeing is synthetic. Specifically, we’ll require creators to disclose when they’ve created altered or synthetic content that is realistic, including by using AI tools. When creators upload content, they’ll have new options to indicate that it contains realistic altered or synthetic material. For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.
This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials. Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties. We’ll work with creators before this rolls out to make sure they understand these new requirements.
We’ll inform viewers that content may be altered or synthetic in two ways. A new label will be added to the description panel indicating that some of the content was altered or synthetic. And for certain types of content about sensitive topics, we’ll apply a more prominent label to the video player.
There are also some areas where a label alone may not be enough to mitigate the risk of harm, and some synthetic media, regardless of whether it’s labeled, will be removed from our platform if it violates our Community Guidelines. For example, a synthetically created video that shows realistic violence may still be removed if its goal is to shock or disgust viewers.
And moving forward, as these new updates roll out, content created by YouTube’s generative AI products and features will be clearly labeled as altered or synthetic.
We’ve heard continuous feedback from our community, including creators, viewers, and artists, about the ways in which emerging technologies could impact them. This is especially true in cases where someone’s face or voice could be digitally generated without their permission or to misrepresent their points of view.
So in the coming months, we’ll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process. Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests. This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.
We’re also introducing the ability for our music partners to request the removal of AI-generated music content that mimics an artist’s unique singing or rapping voice. In determining whether to grant a removal request, we’ll consider factors such as whether the content is the subject of news reporting, analysis, or critique of the synthetic vocals. These removal requests will initially be available to labels and distributors who represent artists participating in YouTube’s early AI music experiments, and we’ll continue to expand access to additional labels and distributors over the coming months.
YouTube has always used a combination of people and machine learning technologies to enforce our Community Guidelines, with more than 20,000 reviewers across Google operating around the world. In our systems, AI classifiers help detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is continuously increasing both the speed and accuracy of our content moderation systems.
One clear area of impact has been in identifying novel forms of abuse. When new threats emerge, our systems have relatively little context to understand and identify them at scale. But generative AI helps us rapidly expand the set of information our AI classifiers are trained on, meaning we’re able to identify and catch this content much more quickly. Improved speed and accuracy of our systems also allows us to reduce the amount of harmful content human reviewers are exposed to.
As we continue to develop new AI tools for creators, our approach remains consistent with how we’ve tackled some of our biggest responsibility challenges: we believe in taking the time to get things right, rather than striving to be first.
We’re thinking carefully about how we can build upon years of investment into the teams and technology capable of moderating content at our scale. This includes significant, ongoing work to develop guardrails that will prevent our AI tools from generating the type of content that doesn’t belong on YouTube.
We also recognize that bad actors will inevitably try to circumvent these guardrails. We’ll incorporate user feedback and learning to continuously improve our protections. And within our company, dedicated teams like our intelligence desk are specifically focused on adversarial testing and threat detection to ensure our systems meet new challenges as they emerge.
We’re still at the beginning of our journey to unlock new forms of innovation and creativity on YouTube with generative AI. We’re tremendously excited about the potential of this technology, and know that what comes next will reverberate across the creative industries for years to come. We’re taking the time to balance these benefits with ensuring the continued safety of our community at this pivotal moment—and we’ll work hand-in-hand with creators, artists and others across the creative industries to build a future that benefits us all.