When outraged Filipinos turned to an AI-powered chatbot to verify a viral photograph of a lawmaker embroiled in a corruption scandal, the tool failed to detect that it was fabricated -- even though it had generated the image itself.
Internet users are increasingly turning to chatbots to verify images in real time, but the tools often fail, raising questions about their visual debunking capabilities at a time when major tech platforms are scaling back human fact-checking.
In many cases, the tools wrongly identify images as real even when the fakes were created with the same companies' generative models, further muddying an online information landscape awash with AI-generated fakes.
Among them is a fabricated image circulating on social media of Elizaldy Co, a former Philippine lawmaker charged by prosecutors in a multibillion-dollar flood-control corruption scam that sparked massive protests in the disaster-prone country.
The image of Co, whose whereabouts have been unknown since the official probe began, appeared to show him in Portugal.
When online sleuths tracking him asked Google's new AI Mode whether the image was real, it incorrectly said it was authentic.
AFP's fact-checkers tracked down its creator and determined that the image was generated using Google AI.
"These models are trained primarily on language patterns and lack the specialised visual understanding needed to accurately identify AI-generated or manipulated imagery," Alon Yamin, chief executive of AI content detection platform Copyleaks, told AFP.
"With AI chatbots, even when an image originates from a similar generative model, the chatbot often provides inconsistent or overly generalised assessments, making them unreliable for tasks like fact-checking or verifying authenticity."
Google did not respond to AFP's request for comment.
'Distinguishable From Reality'
AFP found similar examples of AI tools failing to verify their own creations.
During last month's deadly protests over lucrative benefits for senior officials in Pakistan-administered Kashmir, social media users shared a fabricated image purportedly showing men marching with flags and torches.
An AFP analysis found it was created using Google's Gemini AI model.
But Gemini and Microsoft's Copilot falsely identified it as a genuine image of the protest.
"This inability to correctly identify AI images stems from the fact that they (AI models) are programmed only to mimic well," Rossine Fallorina, from the nonprofit Sigla Research Centre, told AFP.
"In a sense, they can only generate things to resemble. They cannot ascertain whether the resemblance is actually distinguishable from reality."
Earlier this year, Columbia University's Tow Center for Digital Journalism tested the ability of seven AI chatbots -- including ChatGPT, Perplexity, Grok, and Gemini -- to verify 10 images of news events taken by photojournalists.
All seven models failed to correctly identify the provenance of the photos, the study said.
'Shocked'
AFP tracked down the source of the Co image, which garnered over a million views across social media -- a middle-aged web developer in the Philippines, who said he created it "for fun" using Nano Banana, Gemini's AI image generator.
"Sadly, a lot of people believed it," he told AFP, requesting anonymity to avoid a backlash.
"I edited my post -- and added 'AI generated' to stop the spread -- because I was shocked at how many shares it got."
Such cases show how AI-generated photos flooding social platforms can look virtually identical to real imagery.
The trend has fueled concerns as surveys show online users are increasingly shifting from traditional search engines to AI tools for gathering and verifying information.
The shift comes as Meta announced earlier this year it was ending its third-party fact-checking program in the United States, turning over the task of debunking falsehoods to ordinary users under a model known as "Community Notes."
Human fact-checking has long been a flashpoint in hyperpolarised societies, where conservative advocates accuse professional fact-checkers of liberal bias, a charge they reject.
AFP currently works in 26 languages with Meta's fact-checking program, including in Asia, Latin America, and the European Union.
Researchers say AI models can be useful to professional fact-checkers, helping to quickly geolocate images and spot visual clues that establish authenticity. But they caution that such tools cannot replace the work of trained human fact-checkers.
"We can't rely on AI tools to combat AI in the long run," Fallorina said.