Deepfake X-Rays Indistinguishable From Real Images, Fooling Even Doctors Now, Finds Study

A new study has found that AI-generated "deepfake" X-rays now look so realistic that even trained radiologists and advanced AI systems can rarely tell them apart from genuine scans. The study was published in the journal Radiology.


Artificial Intelligence (AI) is becoming part of our daily lives. From quick searches to early cancer detection, AI is changing how we see and use technology. But while AI can improve healthcare in many ways, the new study, published in Radiology, the journal of the Radiological Society of North America (RSNA), shows its darker side: deepfake X-rays realistic enough to fool the experts. This raises alarms about fraud, misdiagnosis, and the integrity of digital medical records. A 'deepfake' is a video, photo, image or audio file that appears authentic but has been created or altered using AI.

Dr. Mickael Tordjman, M.D., post-doctoral fellow at the Icahn School of Medicine at Mount Sinai, New York, and lead study author, said, "Our study demonstrates that these deepfake X-rays are realistic enough to deceive radiologists, the most highly trained medical image specialists, even when they were aware that AI-generated images were present.

"This creates a high-stakes vulnerability for fraudulent litigation if, for example, a fabricated fracture could be indistinguishable from a real one. There is also a significant cybersecurity risk if hackers were to gain access to a hospital's network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos by undermining the fundamental reliability of the digital medical record."

Study Overview And Findings

Researchers tested 17 radiologists from 12 hospitals across six countries (United States, France, Germany, Turkey, United Kingdom and United Arab Emirates). They were asked to evaluate 264 chest X-ray images, half of which were authentic patient scans and half AI-generated "deepfakes." When the radiologists were simply asked to diagnose the images without being told some were fake, they flagged only about 41% of the synthetic images as suspicious. Even after being informed that deepfakes were present, their average accuracy in spotting synthetic images rose only to about 75%. This means roughly one in four fake X-rays still slipped through.

The same confusion affected the AI systems themselves. Four major multimodal large language models (LLMs), GPT-4o, GPT-5, Gemini 2.5 Pro, and Llama 4 Maverick, were also tested on the same dataset. They could distinguish real from fake X-rays anywhere between 57% and 85% of the time, depending on the model, with GPT-4o performing best but still far from perfect. Even GPT-4o, which generated many of the deepfakes, failed to flag all of its own synthetic images.


Risks Of AI-Generated Imaging

Earlier deepfake systems required specialised pipelines and medical-image datasets. Today's generative AI tools, however, can produce X-rays from simple, plain-language prompts. For example, a user can ask for a "left-side chest X-ray showing a fracture" and receive a convincing radiograph in seconds, without needing access to hospital systems. This lowers the threshold for creating or tampering with medical images, which could be exploited by fraudsters, hackers, and other bad actors.

Since these images closely replicate real-world anatomy and pathology patterns, they can look entirely normal. There are currently no standardised tools that can automatically tag or verify whether a radiograph is AI-generated, making it hard for clinicians to trust any image. Dr. Tordjman said, "We are potentially only seeing the tip of the iceberg. The logical next step in this evolution is AI-generation of synthetic 3D images, such as CT and MRI. Establishing educational datasets and detection tools now is critical."


The researchers found no link between a radiologist's years of experience and their ability to identify fake X-rays, although musculoskeletal radiologists performed significantly better than other subspecialists. They also identified telltale patterns that recur in AI-generated images.

Dr. Tordjman said, "Deepfake medical images often look too perfect. Bones are overly smooth, spines unnaturally straight, lungs overly symmetrical, blood vessel patterns excessively uniform, and fractures appear unusually clean and consistent, often limited to one side of the bone."

Disclaimer: This content including advice provides generic information only. It is in no way a substitute for a qualified medical opinion. Always consult a specialist or your own doctor for more information. NDTV does not claim responsibility for this information.
