According to security experts, a simple trick to detect a deepfake is to ask the person to turn their face to the side.
The trick works because deepfake AI models are generally poor at rendering side-on or profile views like those seen in mug shots.
Martin Anderson of Metaphysics.ai noted that most of the deepfakes the company generated visibly failed when the head reached 90 degrees, revealing elements of the person’s actual side profile.
Anderson explained that recreating the profile view failed because of a lack of high-quality profile training data, which forced the deepfake model to invent, or “inpaint”, much of what was missing.
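For readers unfamiliar with the term, “inpainting” comes from image processing, where missing pixels are reconstructed from the surrounding content. The sketch below is illustrative only, not from the report, and the file name and mask region are hypothetical; it uses OpenCV’s classical cv2.inpaint to show the general idea, whereas a deepfake model performs the analogous fill-in with learned guesses rather than a classical algorithm.

```python
# Illustrative sketch: fill a masked-out region of an image from the
# surrounding pixels. A deepfake model faces the same task when profile
# data is missing, but "fills in" with learned guesses instead.
import cv2
import numpy as np

img = cv2.imread("face.jpg")                  # hypothetical input image
mask = np.zeros(img.shape[:2], dtype=np.uint8)
mask[100:180, 200:260] = 255                  # pretend this region is unknown

# cv2.inpaint reconstructs the masked pixels from neighbouring content.
restored = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("face_inpainted.jpg", restored)
```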
The technique works because deepfake software must recognize landmarks on a person’s face in order to recreate it. When the face is turned side-on, the algorithms have only half the landmarks available for detection compared with the front-on view.
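A rough way to see the landmark problem in practice, as a sketch rather than the report’s own methodology (the image file names here are hypothetical): OpenCV ships a stock frontal-face Haar cascade of the kind face-alignment pipelines traditionally build on, and it typically stops firing once a head turns fully sideways, while a separate profile cascade is needed to find the side view at all.

```python
# Illustrative sketch: frontal-face detectors, which landmark pipelines
# build on, usually fail at 90 degrees; a separate profile model is
# needed to detect the side view at all. File names are hypothetical.
import cv2

frontal = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml")

def report(image_path: str) -> None:
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    print(image_path,
          "| frontal detections:", len(frontal.detectMultiScale(gray, 1.1, 5)),
          "| profile detections:", len(profile.detectMultiScale(gray, 1.1, 5)))

report("face_front.jpg")  # frontal cascade usually fires here
report("face_side.jpg")   # frontal cascade often finds nothing at 90 degrees
```

Production deepfake tools use far stronger detectors than Haar cascades, but the asymmetry is the same: frontal views are the well-trodden path, and profiles are the edge case.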
“This weakness in deepfakes offers a potential way of uncovering ‘simulated’ correspondents in live video calls, recently classified as an emergent risk by the FBI: if you suspect that the person you’re talking to might be a ‘deepfake clone’, you could ask them to turn sideways for more than a second or two, and see if you’re still convinced by their appearance,” Anderson explained.
The researcher said the weakness exists because, unlike Hollywood stars, most people have little or no profile imagery available to train on.
The Metaphysics.ai report complements an FBI alert issued in June about the increased use of deepfake audio and video by fraudsters taking part in online job interviews.