YouTube, the Google-owned video streaming platform, has announced that it will crack down on artificial intelligence (AI)-generated content that 'realistically simulates' deceased minors, or victims of deadly and well-documented major violent events, describing their death or the violence they experienced.
YouTube updated its harassment and cyberbullying policies
The company has reportedly updated its harassment and cyberbullying policies to clamp down on such disturbing content on the platform. YouTube further said that it will begin issuing strikes against such content from January 16 onwards.
Why was this policy changed?
The policy change was implemented because some content creators have been using artificial intelligence to recreate the likeness of deceased or missing children, giving these child victims of high-profile cases a childlike 'voice' to describe their deaths.
A Washington Post report recently revealed that content creators have used AI to narrate the abduction and death of deceased or missing children, including the two-year-old British boy James Bulger.
Your content will be removed by YouTube if...
"If your content violates this policy, we will remove the content and send you an email to let you know. If we can't verify that a link you post is safe, we may remove the link," said YouTube.
"If you get three strikes within 90 days, your channel will be terminated," the company added.
In September last year, the Chinese-owned short-video platform TikTok introduced a feature enabling creators to label their AI-generated content, disclosing when they post synthetic or manipulated media that shows realistic scenes.
Inputs from IANS