Meta announced that it will start labelling images on Facebook, Instagram, and Threads that are created using AI. It will also add a tool for users to disclose when they share AI-generated video or audio, and users who fail to disclose may face penalties. Additionally, Meta may attach a more prominent label to content that poses a significant risk of deceiving the public.
Labelling AI content
Meta will label images, videos, and audio that are digitally created or altered. They will require users to disclose if their content is AI-generated. If the content poses a high risk of misleading the public, Meta may add a more prominent label for clarity.
Meta's apps, including Facebook, Instagram, Messenger, and WhatsApp, reach a vast user base. The company is also working with industry partners to develop common technical standards for identifying AI-generated content, including through forums such as the Partnership on AI.
Collaboration with industry partners
Meta is collaborating with companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock to identify AI-generated content. These efforts rely on signals embedded in image files, such as IPTC metadata and invisible watermarks, following best practices outlined by the Partnership on AI.
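To illustrate the metadata side of this approach (this is a simplified sketch, not Meta's actual detection pipeline, and invisible watermarks would require model-specific decoders), the snippet below scans a JPEG byte stream for the APP13 segment, which is where IPTC metadata conventionally lives, identified by its "Photoshop 3.0" signature:

```python
def has_iptc_segment(data: bytes) -> bool:
    """Return True if a JPEG byte stream appears to carry an
    APP13 (0xFFED) segment, where IPTC metadata is conventionally stored.

    Note: this only checks for the segment's presence; parsing the
    actual IPTC fields inside it is a separate step.
    """
    i = 2  # skip the SOI marker (0xFFD8) at the start of the file
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break  # not at a segment boundary; stop scanning
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):  # end of image, or start of scan data
            break
        # Segment length covers the two length bytes plus the payload
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xED and data[i + 4:i + 18].startswith(b"Photoshop 3.0"):
            return True  # APP13 segment with the Photoshop/IPTC signature
        i += 2 + length  # advance past the marker and its payload
    return False
```

A real checker would go on to parse the 8BIM resource blocks inside the segment to read individual IPTC fields, such as a "digital source type" tag indicating AI generation.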
As AI-generated content becomes more prevalent, Meta anticipates debates on how to authenticate both synthetic and non-synthetic content. They are working on industry-leading tools to identify AI-generated content at scale.
Potential measures
Meta acknowledges that regulators and industry may implement measures to authenticate content created with and without AI. They are prepared to adapt to future developments in this area.
Inputs from IANS