Google's AI Search spreading misinformation, experts sound alarm over false answers

Google continues to refine its AI tools, and the challenge lies in balancing innovation with accuracy and reliability. The tech giant’s efforts to combat misinformation and enhance AI performance will be crucial in maintaining user trust and ensuring the safe and accurate retrieval of information.


Ask Google if cats have been on the moon, and it used to show a list of websites for users to find the answer. Now, it generates an instant response using artificial intelligence (AI), which can be incorrect. For example, Google's new AI-powered search has claimed that astronauts met cats on the moon, citing false anecdotes about Neil Armstrong and Buzz Aldrin.

Such inaccuracies are part of the company's recent update, which introduces AI-generated overviews at the top of search results on Google.

Risk of misinformation

Experts have sounded the alarm about the potential for Google's AI to spread misinformation. Melanie Mitchell, an artificial intelligence researcher, has highlighted an instance where Google's AI incorrectly stated that Barack Obama was a Muslim president, citing a misinterpreted academic book. She criticised the AI for not understanding the context of the citations it uses, calling the feature irresponsible.

Google responded, stating that it is taking swift action to correct errors that have surfaced recently, such as the Obama falsehood, and to improve the AI's accuracy. The tech giant has maintained that most AI overviews provide high-quality information, while acknowledging that uncommon queries can produce errors.

The challenge of AI hallucinations

AI language models are known to produce errors, a phenomenon known as 'hallucination'. These models predict answers based on patterns in their training data, which can lead to plausible-sounding but incorrect responses.

For instance, while Google's AI has provided accurate advice on snake bites, experts worry that in emergencies users might not notice subtle errors, potentially leading to dangerous situations.

Emily M. Bender, a linguistics professor, has reportedly emphasised the risk of users accepting incorrect answers in urgent situations. She has also warned about the potential biases and misinformation that AI systems can perpetuate.

Impact on information and online forums

Beyond misinformation, there are concerns about how AI-driven answers might affect the way people find information. Bender has argued that relying on AI chatbots could diminish the value of human-driven search, reduce digital literacy and disrupt connections in online forums.

These forums rely on traffic from Google, which could decline as AI-generated answers become more prevalent.

Google has faced pressure to enhance its AI features amid competition from companies like OpenAI and Perplexity AI. However, critics like Dmitry Shevelenko of Perplexity AI suggest that Google's rush to release these features has led to quality issues.

As Google continues to refine its AI tools, the challenge will be balancing innovation with accuracy and reliability. The company's efforts to address misinformation and improve the AI's performance will be crucial to maintaining user trust and ensuring safe and accurate information retrieval.

