YouTube will soon require creators to disclose whether a video was made with generative AI.
On Tuesday, the video streaming giant announced this change, among other updates, to mitigate the misleading or harmful effects of generative AI.
"When creators upload content, we will have new options for them to select to indicate that it contains realistic altered or synthetic material," said Jennifer Flannery O'Connor and Emily Moxley, YouTube product management VPs.
Creators who repeatedly fail to make these disclosures may face penalties, such as content removal or suspension from the YouTube Partner Program. The announcement also said artists and creators will be able to request the removal of content (including music) that uses their likeness without consent.
The widespread availability of generative AI has heightened the threat of deepfakes and misinformation, especially with the upcoming presidential election. Both the public and private sectors have acknowledged a need to detect and prevent the nefarious use of generative AI.
For example, President Biden's AI executive order specifically addressed the need for labeling or watermarking AI-generated content. OpenAI is working on its own tool, a "provenance classifier," that detects whether an image was made with its DALL-E 3 AI generator. Just last week, Meta announced a new policy that requires political advertisers to disclose whether an ad uses generative AI.
On YouTube, when a creator uploads a video, they'll be given the option of indicating whether it "contains realistic altered or synthetic material," the blog post said. "For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn't actually do."
Labels informing viewers that a video has AI-generated or altered content will be added to the description panel. A "more prominent label" will be added to content involving sensitive topics. Even if AI-generated content is appropriately labeled, if it violates YouTube's community guidelines, it will be taken down.
How will all of this content moderation be enforced? By AI, of course. In addition to creating convincingly realistic fake content, generative AI can also identify and catch content that violates platform policies. YouTube will deploy generative AI technology to help contextualize and understand threats at scale.