A cybersecurity expert is puzzled by the recent actions of a group of researchers at the Samsung AI Centre in Moscow, saying their work may well end up doing more harm than good.
In a research paper, the group describes an invention called MegaPortraits, short for megapixel portraits, built on a concept called neural head avatars, which, they wrote, “offer a new fascinating way of creating virtual head models. They bypass the complexity of realistic physics-based modeling of human avatars by learning the shape and appearance directly from the videos of talking people.”
Lou Steinberg, the founder of CTM Insights, a New York City-based cybersecurity research lab and incubator, said intentionally edited images and videos, also known as deepfakes, are a growing and troubling issue. With AI tools becoming ever more capable, the possibilities include editing a picture of someone to cause reputational or brand damage.
“We see this today with revenge porn images and videos, fraudulent payment audio and videos claiming to be from the CEO, fake-news pictures and videos posted to social media, and nation state disinformation campaigns (e.g., faked images have been circulating re the war in Ukraine),” he said.
There are also cases of editing an image to circumvent “fingerprinting,” so it can’t be matched against a database of known images, as well as faking images in medical and scientific research papers.
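The “fingerprinting” Steinberg refers to is typically perceptual hashing: reducing an image to a short, content-derived signature that survives harmless recompression but can be compared quickly against a database. Below is a minimal sketch of one common variant, the average hash; it assumes the Pillow library is installed, and the function names and file paths are illustrative rather than drawn from any product mentioned here.

```python
# Minimal average-hash ("aHash") sketch of perceptual image fingerprinting.
# Requires Pillow (pip install Pillow). Names and paths are illustrative.
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Downscale to hash_size x hash_size grayscale, then set one bit per
    pixel depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize(
        (hash_size, hash_size), Image.Resampling.LANCZOS
    )
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'probably the same image'."""
    return bin(a ^ b).count("1")

if __name__ == "__main__":
    h1 = average_hash("original.jpg")
    h2 = average_hash("edited.jpg")
    print("distance:", hamming_distance(h1, h2))  # e.g. <= 5 suggests a match
```

The evasion Steinberg describes amounts to perturbing an image just enough to push its hash past the match threshold while leaving it visually unchanged.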
“We have seen attacks against information in places like municipal water systems, critical infrastructure – dam control systems where data values open or close valves – and voter registration database changes,” said Steinberg, the former chief technology officer (CTO) of TD Ameritrade.
In the case of Samsung, he said, researchers potentially made a mistake by publicly releasing their findings before reaching out to “people like us and others who build defences and suggesting ‘you guys might want to start thinking about this because it’s coming.’
“I had a similar ‘oh my goodness’ reaction a few years back when cybersecurity researchers in Israel did a proof of exploit much like Samsung’s.”
The researchers, from Ben-Gurion University of the Negev in Beersheba, Israel, published their findings at the 2019 USENIX Security Symposium.
“In 2018, clinics and hospitals were hit with numerous attacks leading to significant data breaches and interruptions in medical services,” they wrote in it. “An attacker with access to medical records can do much more than hold the data for ransom or sell it on the black market.
“In this paper, we show how an attacker can use deep learning to add or remove evidence of medical conditions from volumetric (3D) medical scans. An attacker may perform this act in order to stop a political candidate, sabotage research, commit insurance fraud, perform an act of terrorism, or even commit murder.”
According to Steinberg, the researchers’ proof of exploit showed that they could hack MRI and computed tomography (CT) scan images and insert fake cancer using a generative adversarial network (GAN).
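For readers unfamiliar with the mechanism, a GAN pits a generator network against a discriminator that tries to tell real samples from generated ones, until the generated output becomes hard to distinguish from the real thing. The PyTorch sketch below shows that adversarial training loop in its most generic form; it is a toy illustration of the concept, not the conditional in-painting model the Ben-Gurion team actually built, and all sizes and layer choices here are assumptions.

```python
# Minimal GAN training step in PyTorch, illustrating the adversarial
# mechanic only. The Ben-Gurion "CT-GAN" work used conditional in-painting
# GANs on 3D CT patches; this generic 2D toy sketch is NOT their model.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 32 * 32  # assumed toy sizes

generator = nn.Sequential(          # noise -> fake image patch
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(      # image patch -> "is it real?" score
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor) -> None:
    batch = real.size(0)
    # 1) Train the discriminator: real patches -> 1, generated patches -> 0.
    fake = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(batch, 1)) + \
             bce(discriminator(fake), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # 2) Train the generator to fool the discriminator (label its output "real").
    fake = generator(torch.randn(batch, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After enough steps on real patches (for example, tumour crops), the
# generator produces patches the discriminator can no longer reliably
# separate from real tissue; in the researchers' tests, neither could
# radiologists.
```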
Of note, he added, is that the test was so successful that radiologists would misdiagnose cancer upwards of 95 per cent of the time: “Imagine what the ransomware attack would be against a hospital if they (the perpetrators) called up one day and said 90 per cent of your radiology reports have been faked and you don’t know which ones they are. We will tell you if you pay.
“You see the Mega Portraits from Samsung and say, ‘what happens if they get in the wrong hands?’ We said the same thing when we saw the Israeli cancer images that were successful in fooling radiologists and just said, we have to fix this, we can’t wait.”
CTM ended up creating a way to use “overlapping micro-fingerprints to not only detect that something has been changed, but to isolate where the untrustworthy change was.”
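CTM has not published implementation details, but the general idea can be illustrated: fingerprint many small, overlapping tiles of an image rather than the whole file, so that an edit invalidates only the tiles it touches and the tampered region can be narrowed down. The sketch below uses exact SHA-256 hashes over a NumPy array for simplicity; a real system would presumably use perceptual micro-fingerprints so benign re-encoding doesn’t trigger false alarms. All names and parameters here are hypothetical, not CTM Insights’ actual method.

```python
# Conceptual sketch of "overlapping micro-fingerprints": hash many small,
# overlapping tiles so a later edit not only breaks verification but is
# narrowed to the tiles whose hashes changed. Illustration only; this is
# not CTM Insights' published method.
import hashlib
import numpy as np

TILE, STRIDE = 16, 8  # stride of half the tile size makes tiles overlap

def tile_fingerprints(img: np.ndarray) -> dict[tuple[int, int], str]:
    """Map each tile's top-left corner to a short hash of its pixels."""
    fps = {}
    h, w = img.shape[:2]
    for y in range(0, h - TILE + 1, STRIDE):
        for x in range(0, w - TILE + 1, STRIDE):
            tile = img[y:y + TILE, x:x + TILE]
            fps[(y, x)] = hashlib.sha256(tile.tobytes()).hexdigest()[:16]
    return fps

def locate_changes(reference: dict, suspect: dict) -> list[tuple[int, int]]:
    """Tiles whose fingerprints no longer match; the overlap shrinks the
    suspicious region to roughly one stride, not the whole image."""
    return [pos for pos, fp in reference.items() if suspect.get(pos) != fp]

if __name__ == "__main__":
    original = np.zeros((64, 64), dtype=np.uint8)
    tampered = original.copy()
    tampered[20:24, 30:34] = 255  # a small, localized edit
    changed = locate_changes(tile_fingerprints(original),
                             tile_fingerprints(tampered))
    print("suspicious tiles:", changed)  # only the tiles overlapping the edit
```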
It is a mistake, said Steinberg, to assume that if the researchers from Ben-Gurion University or Samsung had not discovered this, the “bad guys” would never have figured it out: “Do you want to figure it out first? Probably, but the question is, what are you going to do with it, because you can’t put the genie back in the bottle. If you figure it out first, do something with the results.
“We know that technology can be used for good or bad – almost every technology has had that capability. I want the good guys to invent it first, if you are going to use it to build defences. If all they are going to do is give it to the bad guys, now we have a problem.
“We took the images from the Israelis and built our initial system around being able to detect fake cancer and expand it out from there and we proved that we could. But if you don’t take that critical step, now you are building weapons of mass disruption and that is a problem.”