Deepfakes Day 2022, held online last week to examine the growing threat of deepfakes, was organized by the CERT Division of the Carnegie Mellon University Software Engineering Institute (SEI), which partners with government, industry, law enforcement, and academia to improve the security and resilience of computer systems and networks.
CERT describes a deepfake as a “media file, typically videos, images, or speech representing a human subject, that has been modified deceptively using deep neural networks to alter a person’s identity. Advances in machine learning have accelerated the availability and sophistication of tools for making deepfake content. As deepfake creation increases, so too do the risks to privacy and security.”
During the opening segment, two experts from the CERT Coordination Center (CERT/CC) – data scientist Shannon Gallagher and technical engineer Thomas Scanlon – took their audience on an exploratory tour of a growing security threat that shows no sign of waning.
“Part of our doing research in this area and raising awareness of deepfakes is to protect folks from some of the cyber challenges and personal security and privacy challenges that deepfakes present,” said Scanlon.
An SEI blog posted in March stated that the “existence of a wide range of video-manipulation tools means that video discovered online can’t always be trusted. What’s more, as the idea of deepfakes has gained visibility in popular media, the press, and social media, a parallel threat has emerged from the so-called liar’s dividend—challenging the authenticity or veracity of legitimate information through a false claim that something is a deepfake even if it isn’t.
“Determining the authenticity of video content can be an urgent priority when a video pertains to national-security concerns. Evolutionary improvements in video-generation methods are enabling relatively low-budget adversaries to use off-the-shelf machine-learning software to generate fake content with increasing scale and realism.”
The seminar included a discussion of the criminal use of deepfakes, citing examples such as malicious actors using deepfake audio to convince a CEO to wire US$243,000 to a scammer’s bank account, and politicians from the U.K., Latvia, Estonia, and Lithuania being duped into fake meetings with opposition figures.
“Politicians have been tricked,” said Scanlon. “This is one that has resurfaced again and again. They are on a conference call with somebody, not realizing that the person they are talking to is not a counterpart dignitary from another country.”
Key takeaways provided by the two cybersecurity experts included the following:
- Good news: Even using ready-built tools (Faceswap, DeepFaceLab, etc.), it still takes considerable time and graphics processing unit (GPU) resources to create even lower-quality deepfakes.
- Bad news: Well-funded actors can commit the resources to making higher quality deepfakes, particularly for high-value targets.
- Good news: Deepfakes are still principally limited to face swaps and facial re-enactments.
- Bad news: Eventually, the technology capabilities will expand beyond faces.
- Good news: Advancements are being made in detecting deepfakes.
- Bad news: Technology for deepfake creation continues to advance; detection will likely be a never-ending battle, much like anti-virus software versus malware.
In terms of what an organization can do to avoid becoming a victim, the key, said Scanlon, lies in understanding the current capabilities for both creating and detecting deepfakes, and in crafting training and awareness programs accordingly.
It is also important, he said, to be able to detect a deepfake. “Practical clues” include flickering, unnatural movements and expressions, lack of blinking, and unnatural hair and skin colours.
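One of those clues, blink rate, is simple enough to check programmatically. The sketch below is not from the seminar; it is a minimal illustration of the widely used eye-aspect-ratio (EAR) blink heuristic, built on MediaPipe’s face mesh. The landmark indices and the 0.21 threshold are illustrative assumptions, and a near-zero blink count in a clip is only a weak red flag, not proof of manipulation.

```python
# Hedged sketch: counting blinks via the eye-aspect-ratio (EAR) heuristic.
# Assumes opencv-python and mediapipe are installed; the indices below are
# the commonly used EAR points on MediaPipe's 468-point face mesh.
import math
import cv2
import mediapipe as mp

LEFT_EYE = [33, 160, 158, 133, 153, 144]  # p1..p6 around the left eye

def ear(pts):
    """Eye aspect ratio: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    d = math.dist
    return (d(pts[1], pts[5]) + d(pts[2], pts[4])) / (2.0 * d(pts[0], pts[3]))

def blink_count(video_path, threshold=0.21):
    cap = cv2.VideoCapture(video_path)
    mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=False)
    blinks, closed = 0, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not res.multi_face_landmarks:
            continue  # no face found in this frame
        lm = res.multi_face_landmarks[0].landmark
        h, w = frame.shape[:2]
        pts = [(lm[i].x * w, lm[i].y * h) for i in LEFT_EYE]
        if ear(pts) < threshold:
            closed = True
        elif closed:  # eye reopened after being closed: one full blink
            blinks += 1
            closed = False
    cap.release()
    return blinks
```

A genuine speaker typically blinks every few seconds, so a suspiciously low count over a long clip is one inexpensive signal to combine with others.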
“If you are in a cybersecurity role in your organization, there is a good chance that you will be asked about this technology,” said Scanlon.
As for tools that are capable of detecting deepfakes, he added, these include:
- Microsoft’s Video Authenticator Tool, which detects blending boundaries and grayscale elements that are undetectable to the human eye.
- Facebook Reverse Engineering, which detects fingerprints left behind by a generative AI model (see the sketch after this list).
- An AI-based offering under development by Quantum Integrity, a Swiss company, that determines whether images or videos have been manipulated.
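To make the “fingerprint” idea concrete, here is a deliberately crude sketch, not the method behind any of the tools above: many GAN up-sampling pipelines leave unusual energy in the high-frequency band of an image’s power spectrum, which a few lines of NumPy can measure. The 0.75 radial cutoff is an arbitrary illustration, and any threshold on the resulting ratio would have to be calibrated on real and generated samples.

```python
# Hedged sketch: a crude frequency-domain check for generative-model artifacts.
# Not a vendor's actual detector; it only measures how much of an image's
# spectral energy sits in the outermost (high-frequency) radial band.
import numpy as np
import cv2

def high_freq_energy_ratio(image_path, cutoff=0.75):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # 2-D power spectrum, shifted so low frequencies sit at the centre.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float64)))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)  # radial distance from centre
    high = spectrum[r > cutoff * r.max()].sum()
    return high / spectrum.sum()  # unusually structured values warrant a closer look
```

Production detectors train classifiers on full spectra (and other features) rather than thresholding a single ratio, but the sketch shows why model fingerprints are measurable at all.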
In a two-year-old blog post that proved prophetic, Microsoft stated that it expects that “methods for generating synthetic media will continue to grow in sophistication. As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media.
“No single organization is going to be able to have meaningful impact on combating disinformation and harmful deepfakes. We will do what we can to help, but the nature of the challenge requires that multiple technologies be widely adopted, that educational efforts reach consumers everywhere consistently and that we keep learning more about the challenge as it evolves.”