
Here's how Facebook moderates harmful content on its platform with AI's help

For effective content moderation, Facebook is relying on three technologies — proactive detection, automation and prioritisation — to transform its content review process across its family of apps.

Reported by: IANS, New Delhi | Published: Aug 13, 2020 11:34 IST | Updated: Aug 13, 2020 11:34 IST
Image Source : PIXABAY

Facebook's method of moderation

For effective content moderation, Facebook is relying on three technologies to transform its content review process across its family of apps. The first is 'Proactive Detection', where artificial intelligence (AI) detects violations across a wide variety of areas without relying on users to report content to Facebook, often with greater accuracy than user reports.

"This helps us detect harmful content and prevent it from being seen by hundreds or thousands of people," the company said in a statement.

'Automation' is the second aspect, where AI systems make automated decisions in areas where content is highly likely to be violating.

"Automation also makes it easier to take action on identical reports, so our teams don't have to spend time reviewing the same things multiple times. These systems have become even more important during the Covid-19 pandemic with a largely remote content review workforce," said Jeff King, Director Product Management, Integrity at Facebook.

The third aspect is 'Prioritisation'. Instead of simply looking at reported content in chronological order, AI prioritises the most critical content to be reviewed, whether it was reported to Facebook or detected by its proactive systems.

"This ranking system prioritizes the content that is most harmful to users based on multiple factors such as virality, the severity of harm and likelihood of violation," Kind added.

However, Facebook admitted there are still areas where it's critical for people to review the content.

"For example, discerning if someone is the target of bullying can be extremely nuanced and contextual. In addition, AI relies on a large amount of training data from reviews done by our teams in order to identify meaningful patterns of behaviour and find potentially violating content".

Starting with violations like spam, Facebook said its automated systems will review content first, before the approach is extended across all types of violations.
