“The growing availability of disinformation and deepfakes will have a profound impact on the way people perceive authority and information media,” the Hague-based Europol added.
It released a 23-page report examining how artificial intelligence and deepfake technology could be used in crime, including to erode trust in authority and in official facts.
“Experts fear this may lead to a situation where citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable – a situation sometimes referred to as an ‘information apocalypse’ or ‘reality apathy’,” Europol said.
Criminals could also use deepfake technology to coerce people online, sexually exploit children, produce non-consensual pornography, and falsify or manipulate electronic evidence in judicial investigations.
Businesses too were at risk.
“This makes it essential to be aware of this manipulation and be prepared to deal with the phenomenon, so as to distinguish between benign and malicious use of this technology,” it said.
Although it was still possible for humans to detect deepfake images manually – by noticing blurred edges around the face, a lack of blinking and other inconsistencies – the technology was improving, and detection was becoming harder.
“Ideally, a system would scan any digital content and automatically report on its authenticity,” Europol said.
“Such a system will most likely never be perfect, but with the increased sophistication of deepfake technology, a high degree of certainty from such a system could be worth more than manual inspection,” it said.
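To make the idea of automated authenticity checks concrete, here is a minimal, hypothetical sketch of one cue such a system might score: unusually smooth regions, such as the blurred facial edges Europol mentions. It uses variance of the discrete Laplacian, a standard blur measure, over an image supplied as a 2D list of grayscale values. The function names and the threshold are illustrative assumptions, not anything from the Europol report, and real deepfake detectors are far more sophisticated than this single heuristic.

```python
def laplacian_variance(img):
    """Variance of the discrete Laplacian over the image interior.

    Low variance means few sharp intensity changes, i.e. a smooth or
    blurred region. `img` is a 2D list of grayscale values (0-255).
    """
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian at pixel (x, y)
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((v - mean) ** 2 for v in responses) / len(responses)


def looks_suspiciously_smooth(img, threshold=50.0):
    # Hypothetical decision rule: a region whose Laplacian variance
    # falls below the threshold is flagged as possibly blurred -- one
    # of the artefacts a manual inspector would look for.
    return laplacian_variance(img) < threshold
```

A flat patch scores near zero and is flagged, while a high-contrast patch scores far above the threshold; a production system would combine many such signals into the overall authenticity report the article describes.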