Seeking truth in fuzzy world of generated results


Many of us have started using AI chatbots to get information. They are indeed very useful, and more convenient than search tools that return a long list of links. But they are still prone to hallucination, that is, making things up on their own. They also sometimes pick up information from low-quality sources, so what you get could be AI-generated misinformation or disinformation. Even those developing the models don't know exactly how they work. Unlike a web search, which retrieves existing pages, large language models (LLMs) predict which words come next based on the user's input. They can produce a different result every time, which makes the output hard to verify.
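To see why the same question can yield different answers, here is a minimal sketch of next-token sampling in Python. The vocabulary and probabilities are hand-written toys, purely hypothetical; a real LLM derives them from billions of learned parameters.

```python
import random

# Hypothetical toy "model": each word maps to weighted next-word choices.
# A real LLM computes these probabilities from learned parameters.
NEXT_TOKEN = {
    "cold":  [("water", 0.9), ("weather", 0.1)],
    "water": [("is", 0.6), ("helps", 0.4)],
    "is":    [("fine", 0.5), ("bad", 0.3), ("refreshing", 0.2)],
    "helps": [("digestion", 1.0)],
}

def generate(prompt: str, max_tokens: int = 3) -> str:
    tokens = prompt.split()
    for _ in range(max_tokens):
        options = NEXT_TOKEN.get(tokens[-1])
        if not options:  # no known continuation for the last word
            break
        words, weights = zip(*options)
        # Sampling, not lookup: the same prompt can end differently each run.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("cold"))  # e.g. "cold water is fine"
print(generate("cold"))  # may differ, e.g. "cold water helps digestion"
```

Because each continuation is sampled from a probability distribution rather than looked up on a fixed page, verifying an answer means verifying text that may never appear the same way twice.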


Even when the generated text is backed with links, the model can cherry-pick information without understanding the context, or draw on poor-quality sources. For example, an AI summary concluded that cold water is bad for digestion, based on a discussion in a news article, even though the source article says there is no evidence for the claim. Similarly, it gave misleading information about sleep apnea, a health condition. People tend to read only the summary without diving into the details, especially during emergencies.
Multiple incidents of Google generating harmful content, such as advising people to add glue to pizza or eat rocks, or claiming that Barack Obama is a Muslim, have been reported. In one instance, Microsoft's Copilot amplified misleading information from a British tabloid. Asked whether someone can reverse ageing, Copilot responded "Indeed", citing a link titled 'Man returns 10 years younger.'
A few reputed Indian media outlets published this misinformation without fact-checking it, and the news was further amplified by social media power users in government and entertainment. A Microsoft spokesperson told us the company is investigating the report. "We use a mix of human and automated techniques in an effort to improve our experience," the company said. Google said it is taking "swift actions" and removing responses to embarrassing queries.
B Ravindran, head of the Wadhwani School of Data Science & AI at IIT Madras, says technological solutions like retrieval-augmented generation (RAG) and tighter editorial control are necessary to make AI search work. RAG, he says, allows designers to seed trusted documents into the generation process. But even this, he adds, doesn't prevent misinformation from slipping into the augmentation process.
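A minimal sketch of the RAG idea, under stated assumptions: the trusted documents, the keyword-overlap scoring, and the call_llm() stub below are all hypothetical stand-ins, not any vendor's actual pipeline.

```python
# Sketch of retrieval-augmented generation (RAG). Everything here is a
# hypothetical stand-in: real systems typically retrieve with vector
# embeddings and call a hosted model instead of the call_llm() stub.

TRUSTED_DOCS = [
    "Health authority guidance: no evidence that cold water harms digestion.",
    "Peer-reviewed overview of sleep apnea diagnosis and treatment.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank trusted documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Stub standing in for a real model call."""
    return f"[model output conditioned on]\n{prompt}"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query, TRUSTED_DOCS))
    # The model is told to ground its reply in the retrieved context. As
    # Ravindran notes, this helps only if the seeded corpus is clean: a
    # polluted document list feeds misinformation straight to the model.
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

print(answer("is cold water bad for digestion"))
```

Restricting the model to a curated context is the "tighter editorial control" Ravindran describes; the weak link remains whoever curates the trusted document list.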
Aditya Kumar Shukla, associate professor of media studies at Amity University, stresses the need for government regulation, especially since the younger generation is more inclined to use these newer tools.
Anubhuti Yadav, head of new media at Indian Institute of Mass Communication (IIMC), expresses concerns about gender and other biases in these models.
Now, with generative AI platforms like GPT-4o and Gemini enabling voice interfaces, even in vernacular languages, there are worries things could get worse. Devadas Rajaram, journalism educator and assistant professor at Alliance University, says such bots will feel more intimate and users will be more easily taken in; they may not even provide sources. Rajaram says AI literacy, along with media literacy, is a must.



