Facebook's attempts to tackle fake news on its platform have not yielded the desired results for the company. Identifying fake news continues to be a challenge for Facebook, and its improvements haven't been satisfactory. According to a study led by Patricia Moravec, Assistant Professor at the University of Texas, the problem of fake news on social networking websites like Facebook hasn't improved much, and fake news is still tough to spot. "We all believe that we are better than the average person at detecting fake news, but that's simply not possible," Moravec said.
For the study, published in the journal Management Information Systems Quarterly, the researchers worked with 80 social media-proficient undergraduate students who first answered 10 questions about their own political beliefs.
Each participant was then fitted with an EEG headset. The students were asked to read 50 political news headlines presented as they would appear in a Facebook feed and assess their credibility.
Forty of the headlines were evenly divided between true and false, with 10 headlines that were clearly true included as controls: 'Trump Signs New Executive Order on Immigration' (clearly true), 'Nominee to Lead EPA Testifies He'll Enforce Environmental Laws' (true), 'Russian Spies Present at Trump's Inauguration -- Seated on Inauguration Platform' (false).
The researchers randomly assigned fake news flags among the 40 non-control headlines to see what effect they would have on the participants' responses.
The students rated each headline's believability, credibility, and truthfulness. As they worked through the exercise, the participants spent more time and showed significantly more activity in their frontal cortices -- the brain area associated with arousal, memory access and consciousness -- when headlines supported their beliefs but were flagged as false.
These reactions of discomfort indicated cognitive dissonance when headlines supporting their beliefs were marked as untrue.
But this dissonance was not enough to make participants change their minds. They overwhelmingly said that headlines conforming with their pre-existing beliefs were true, regardless of whether they were flagged as potentially fake.
In late 2016, Facebook incorporated fact-checking into its platform and began flagging certain news articles by noting that an article was 'disputed by third-party fact-checkers'. In the study, however, such a flag did not change participants' initial response to a headline, even if it did make them pause a moment longer and study it a bit more carefully.
(With IANS inputs)