- iProov study finds older adults struggle most with deepfakes
- False confidence is widespread among the younger generation
- Social media is a deepfake hotspot, experts warn
As deepfake technology continues to advance, concerns over misinformation, fraud, and identity theft are growing, fueled by startlingly low public literacy in AI tools.
A recent iProov study claims most people struggle to distinguish deepfake content from reality. The study exposed 2,000 participants from the UK and US to a mix of real and AI-generated images and videos, and found that only 0.1% of participants – two whole people – correctly distinguished all the real and deepfake stimuli.
The study found older adults are particularly susceptible to AI-generated deception. Around 30% of those aged 55-64, and 39% of those over 65, had never even heard of deepfakes. And while younger participants were more confident in their ability to detect deepfakes, their actual performance in the study was no better.
Older generations are more vulnerable
Deepfake videos were significantly harder to detect than images, the study added, with participants 36% less likely to correctly identify a fake video than a fake image – raising particular concerns about video-based fraud and misinformation.
Social media platforms were highlighted as major sources of deepfake content. Nearly half of the participants (49%) identified Meta platforms, including Facebook and Instagram, as the most common places where deepfakes are found, while 47% pointed to TikTok.
“[This underlines] how vulnerable both organizations and consumers are to the threat of identity fraud in the age of deepfakes,” said Andrew Bud, founder and CEO of iProov.
“Criminals are exploiting consumers’ inability to distinguish real from fake imagery, putting personal information and financial security at risk.”
Bud added that even when people suspect a deepfake, most take no action: only 20% of respondents said they would report a suspected deepfake if they encountered one online.
With deepfakes becoming increasingly sophisticated, iProov believes human perception alone is no longer a reliable means of detection. Bud emphasized the need for biometric security solutions with liveness detection to combat the threat of ever more convincing deepfake material.
“It’s down to technology companies to protect their customers by implementing robust security measures,” he said. “Using facial biometrics with liveness provides a trustworthy authentication factor and prioritizes both security and individual control, ensuring that organizations and users can keep pace with these evolving threats.”