"Don't Trust AI For Health Advice": Study Warns Of Serious Consequences

Experts have recommended that AI systems intended for healthcare use should undergo thorough testing in real-world settings.

Quick Read
  • AI chatbots often provide inaccurate medical advice, posing risks to users seeking health guidance
  • 88% of chatbot responses contained false information despite scientific language and references
  • Users struggle to communicate effectively with AI, leading to misdiagnosis and delayed treatment

A study has warned against the growing dependence on artificial intelligence (AI) chatbots for health advice, as people may struggle to interact effectively with these tools, leading to potentially dangerous outcomes when seeking medical guidance. A global team of researchers analysed the five most advanced AI systems developed by OpenAI, Google, Anthropic, Meta and X Corp to assess how easily they could be made to produce false health information.

The study, published in the journal Annals of Internal Medicine, highlights how such AI tools can give inaccurate responses. Researchers found that chatbots can generate plausible yet incorrect medical advice, which could mislead patients.


"In total, 88 per cent of all responses were false," explained paper author Natansh Modi of the University of South Africa in a statement. "And yet they were presented with scientific terminology, a formal tone and fabricated references that made the information appear legitimate."

Of the five chatbots, four generated disinformation in 100 per cent of their responses, while the fifth did so in 40 per cent of responses.

In an age when people depend on AI tools for almost everything, the study warns against relying on them for self-diagnosis: users often miss key health conditions or underestimate their severity, and misidentifying a condition can lead to delayed or incorrect treatment.


"Our study is the first to systematically demonstrate that leading AI systems can be converted into disinformation chatbots using developers' tools, but also tools available to the public," Dr Modi said, revealing previously under-explored risk in the health sector.

The study describes a "two-way communication breakdown" in these interactions: users struggle to provide the right information, and chatbots often give answers that are hard to understand or make poor recommendations.

Experts have recommended that AI systems intended for healthcare use undergo thorough testing in real-world settings before widespread deployment, and that regulators address these risks.


"Millions of people are turning to AI tools for guidance on health-related questions. This is not a future risk. It is already possible, and it is already happening," Dr Modi said.

Despite these risks, AI can still assist healthcare professionals by providing round-the-clock support, scheduling appointments and offering preliminary diagnoses.
