Think Twice Before Flashing Peace Sign In Pics. AI-Powered Hackers Could Steal Your Fingerprints

AI-powered techniques sharpen blurry fingerprint images in selfies, posing new biometric security risks.

Think twice before flashing a peace sign in your next photo. Malicious actors can now use AI to easily harvest fingerprints from selfies, turning a harmless pose into a security risk. Last month, on a Chinese workplace reality show, security expert Li Chang showed how simple it is to steal someone's biometric data. Using a celebrity's selfie, Li demonstrated how much fingerprint information sits in plain view in a standard peace-sign photo, and what hackers could do with it, according to a report in the South China Morning Post.

“If the pads of the fingers are directly exposed towards the camera and photographed from within about 1.5m of the lens, there is a high possibility that fingerprint information can be extracted relatively clearly,” Li said during the show, adding that photos taken from 1.5m to 3m away could still reveal roughly half of the fingerprint details.

Li used photo-editing and AI-enhancement tools to sharpen fingerprints that appeared blurry to the naked eye, turning low-resolution smudges into detailed, usable biometric data. He warned that because permanent biometric identifiers like fingerprints and facial data cannot be changed, a data breach could lead to irreversible identity theft and financial losses.
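The report does not detail which tools Li used, but the general idea is straightforward: crop the finger pads out of a high-resolution photo, upscale the crop, and exaggerate the ridge edges. The sketch below is a minimal, purely illustrative Python example using classical OpenCV operations (cubic upscaling plus unsharp masking), not Li's actual method or any AI model; the filename and crop coordinates are hypothetical placeholders.

```python
# Illustrative sketch only: classical sharpening of a cropped finger region.
# "selfie.jpg" and the crop box are hypothetical; real attacks reportedly
# use AI-enhancement tools rather than these simple filters.
import cv2

img = cv2.imread("selfie.jpg")          # load the photo
finger = img[200:600, 400:700]          # hypothetical crop around a fingertip

# Upscale the small crop so fine ridge detail spans more pixels
upscaled = cv2.resize(finger, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

# Unsharp masking: subtract a blurred copy to exaggerate ridge edges
blurred = cv2.GaussianBlur(upscaled, (0, 0), sigmaX=3)
sharpened = cv2.addWeighted(upscaled, 1.5, blurred, -0.5, 0)

cv2.imwrite("finger_enhanced.png", sharpened)
```

Even this crude pipeline hints at why distance matters so much: the closer the finger pads are to the lens, the more pixels each ridge occupies before any enhancement is applied.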


AI-Powered Hacking

Earlier this week, findings from Google's threat intelligence group revealed that in just three months, AI-powered hacking has gone from a nascent problem to an industrial-scale threat. The report highlighted that criminal groups, as well as state-linked actors from China, North Korea and Russia, were using commercially available models such as Google's Gemini, Anthropic's Claude and OpenAI's models to refine and scale up attacks.

"Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware and make many other improvements," said John Hultquist, the group's chief analyst.

The development comes against the backdrop of Anthropic's refusal last month to release its newest AI model, Mythos, to the public. The company deemed the technology too powerful, warning it could threaten governments, financial systems and global security if weaponised by bad actors.
