
AU10TIX’s Ofer Friedman on Tackling Deepfake Fraud and the Future of Identity Verification

As AI deepfakes and sophisticated digital fraud continue to evolve, traditional methods for detecting fake identities are becoming increasingly unreliable. In this interview, Ofer Friedman, Chief Business Development Officer at AU10TIX, discusses common misconceptions around deepfakes, the limitations of current security features like holograms, and how organizations can stay ahead of rapidly advancing fraud techniques. He offers insights into the future of identity verification in a world where visual detection alone is no longer enough.

Ofer Friedman of AU10TIX

What are some of the most common misconceptions about identifying deepfaked identities, and why are traditional methods like detecting jerky head movements unreliable?


The #1 misconception about deepfakes is that you can instruct people how to detect them… Five years ago, maybe. Today, unless the fraudster went for an “inexpensive” tool, you’re likely to have a hard time spotting any issues. Tomorrow, no serious deepfake impersonation will be detectable by sight or hearing. “Jerky head movements” could occasionally help detect earlier real-time deepfakes and may still do so, depending on the quality of the solution and the computer used. But do you see any serious scenario where customers are asked to perform such feats? Though it would be nice if people did an “Elaine dance” (Seinfeld) in front of the camera to open a bank account.


How effective are security features like holograms and microprints in verifying the authenticity of ID documents, and what advancements have fraudsters made to bypass these measures?


Quite effective. And quite impractical, unless you’re in the airport in front of the document reader. Both holograms and microprints (as well as a range of other security features) were designed for detection by professional scanners equipped with three-illumination, coaxial, intense lighting. But the normal use case right now involves the customer capturing an ID and selfie with their phone whenever, wherever, under whatever conditions. Experience shows that picture quality will not suffice for reliable detection in the vast majority of cases. The good news is that, although today’s ID documents were not born into the remote-identification world, remote capture offers additional verification signals by virtue of the devices and software involved.
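
To make concrete what “additional verification signals by virtue of the devices and software involved” can look like, here is a minimal Python sketch that reads capture metadata from a phone photo. It assumes the Pillow library; the file name and the specific EXIF tags inspected are illustrative assumptions, not AU10TIX’s method.

```python
# Illustrative sketch: reading capture metadata from a phone photo as
# supplementary verification signals. Assumes Pillow is installed; the
# tags inspected here are examples, not an exhaustive or official list.
from PIL import Image, ExifTags

def capture_signals(image_path: str) -> dict:
    """Collect basic device/software metadata embedded by the capturing phone."""
    exif = Image.open(image_path).getexif()
    # Translate numeric EXIF tag IDs into readable names.
    named = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return {
        "device_make": named.get("Make"),       # phone manufacturer, if recorded
        "device_model": named.get("Model"),     # phone model, if recorded
        "software": named.get("Software"),      # editing software here can be a red flag
        "captured_at": named.get("DateTime"),   # capture timestamp, if present
    }

if __name__ == "__main__":
    # "id_photo.jpg" is a hypothetical file name for illustration.
    print(capture_signals("id_photo.jpg"))
```

Signals like these are easy to strip or spoof on their own, so they serve as supplementary inputs to automated checks rather than a verdict by themselves.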


PEPs (Politically Exposed Persons) and sanctions checks are touted as key tools in preventing money laundering, but are there limitations to their effectiveness in modern identity fraud scenarios?


PEP and sanctions checks actually make a lot of sense, as they flag risk based on confirmed data. The question is how much of all potentially known risk cases the available data actually covers. Well, definitely not a dominant part. If financial institutions, law enforcement agencies, and government bodies made their knowledge bases available for screening (obviously in a controlled, privacy-preserving manner), the efficiency level would skyrocket. In other words, the PEP and sanctions tool is definitely powerful, but the magazine is only partially loaded.
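
For readers unfamiliar with the mechanics of screening, the sketch below shows a bare-bones name check against a watchlist using only the Python standard library. The watchlist entries and the similarity threshold are made up for illustration; production screening engines also handle aliases, transliteration, and secondary identifiers such as dates of birth.

```python
# Bare-bones watchlist screening sketch using fuzzy name matching.
# The watchlist entries and the 0.85 threshold are illustrative only.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Maria Gonzalez", "John A. Doe"]  # hypothetical entries

def screen_name(candidate: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to the candidate meets the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, candidate.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# A near-match (spelling variant) should be flagged for review.
print(screen_name("Ivan Petroff"))  # [('Ivan Petrov', 0.87)]
```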


As fraudsters increasingly leverage AI and digital tools to create more sophisticated false identities, how can organizations adapt their security measures to stay ahead of these threats?


Organizations may benefit from regarding identity verification and identity authentication, i.e. onboarding and access management, the same way they regard cyber defense. Early AI manipulations may be detectable at the visual or auditory level, but maturing AI, especially generative AI, requires proper automation. Get used to the idea that you cannot trust your senses any longer. Detection has gone digital. This means that organizations need to adopt a two-layered AI attack detection approach (case-level and traffic-level) and begin to examine detection methodology. AI that “learns” from big data of fake vs. real images won’t get you far for long. Simply learn from how cyber attack detection has evolved. Using this experience, Gen-AI-powered impersonation attacks can be dealt with much faster. It’s time for cybersecurity and identity verification to shake hands.
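
To make the two-layered idea concrete, here is a minimal Python sketch that combines a case-level score with a traffic-level reuse check. The scoring input, device fingerprint, thresholds, and decision labels are all hypothetical assumptions for illustration and do not reflect AU10TIX’s implementation.

```python
# Sketch of a two-layered decision: a case-level score for the individual
# submission, plus a traffic-level check for coordinated attacks (e.g. many
# sessions reusing the same device fingerprint). Thresholds and fields are
# illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Session:
    session_id: str
    case_score: float        # 0..1 output of a per-case forgery detector (assumed)
    device_fingerprint: str  # hash of device/software traits (assumed)

def traffic_flags(recent: list[Session], max_reuse: int = 3) -> set[str]:
    """Flag device fingerprints reused suspiciously often across recent traffic."""
    counts = Counter(s.device_fingerprint for s in recent)
    return {fp for fp, n in counts.items() if n > max_reuse}

def decide(session: Session, recent: list[Session], case_threshold: float = 0.7) -> str:
    """Combine the case-level score with traffic-level reuse signals."""
    if session.case_score >= case_threshold:
        return "reject"            # the submission itself looks forged
    if session.device_fingerprint in traffic_flags(recent):
        return "manual_review"     # individually clean, but part of a suspicious cluster
    return "accept"

# Example: a clean-looking case that shares a fingerprint with many recent sessions.
recent = [Session(f"s{i}", 0.1, "fp-A") for i in range(5)]
print(decide(Session("s6", 0.2, "fp-A"), recent))  # -> manual_review
```

The structure is the point rather than the heuristics: the case layer judges each submission in isolation, while the traffic layer looks for patterns that only become visible across many submissions, which is where template-driven, Gen-AI-powered attacks tend to surface.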
