
Researchers Invented A Fake Eye Condition. ChatGPT, Gemini And Perplexity Repeated It As Real

Researchers created a fake eye condition called 'Bixonimania' and published two papers about it. Major AI chatbots repeated it as real.

  • Artificial intelligence chatbots repeated a fake eye condition called Bixonimania as real
  • Researcher Almira Osmanovic Thunstrom created and published fake studies on the condition
  • Major AI chatbots including Microsoft Copilot and ChatGPT disseminated the false information

It is a well-documented fact that artificial intelligence (AI) models are prone to hallucinations, generating confident but false information. But what happens when they are purposely fed misinformation? Almira Osmanovic Thunstrom, a medical researcher at the University of Gothenburg, Sweden, set out to answer that question with an experiment. Thunstrom created a fake eye condition called 'Bixonimania' and published two papers about it on a preprint server. Within weeks of her uploading information about the condition, attributed to an imaginary author, major AI chatbots began repeating the invented condition as if it were real.

Microsoft Copilot was the first major AI chatbot to pick up the fake condition, describing Bixonimania as an "intriguing and relatively rare condition". On the same day, Google's Gemini explained that Bixonimania is a condition caused by "excessive exposure to blue light". Perplexity said one in 90,000 people were affected by Bixonimania, while OpenAI's ChatGPT informed users about the symptoms to look out for.

Thunstrom said she conducted the experiment to test whether large language models (LLMs) would swallow the misinformation and then reproduce it as reputable health advice.

"I wanted to see if I can create a medical condition that did not exist in the database," Thunstrom told Nature, adding that she created a health-related condition and used the name bixonimania because it "sounded ridiculous".

"I wanted to be really clear to any physician or any medical staff that this is a made-up condition, because no eye condition would be called mania, that's a psychiatric term."


Bixonimania And LLMs

To ensure that the LLMs and readers were aware that the studies were fake, Thunstrom left several clues. She invented a lead researcher, Lazljiv Izgubljenovic, who worked at a non-existent institution, Asteria Horizon University, in the equally fictitious Nova City, California. The papers also opened with statements like "this entire paper is made up" and "fifty made-up individuals aged between 20 and 50 years were recruited for the exposure group".

Despite such obvious clues, the LLMs still picked up the studies and pushed Bixonimania as a real-life health condition. More troublingly, the fake papers were also cited in peer-reviewed literature, highlighting that some researchers were relying on AI-generated references. Researchers at the Maharishi Markandeshwar Institute of Medical Sciences and Research in Mullana, India, cited the fake preprints in a study published in the journal Cureus.

After Thunstrom posted about her experiment, the chatbots began modifying their responses when quizzed about Bixonimania. Asked why Gemini did not filter out the condition initially, a Google spokesperson said the results reflected the performance of a previous model.
