People who work in information technology are often optimistic about the benefits the Internet will bring to society. But a Canadian who works for a Silicon Valley-based cyber security firm says the world should now be wary of what it sees online.
“We should be very skeptical of all content we get online,” Shuman Ghosemajumder, chief technology officer for Shape Security, told a Toronto panel discussion on technology and democracy this morning. “We should realize it’s going to get harder to identify fake information, disinformation. And that applies to things we thought were very difficult for anyone to falsify. We should think about the way we use technology. We should think about the way we get our news, share and receive content from friends. If we’re more proactive about that, about the data we put into information retrieval mechanisms, we can protect ourselves and get mainly positive benefits from the technology platforms we use today.”
Asked if he has confidence that technology like artificial intelligence will improve information verification, he replied, “No”.
However, he later clarified that AI alone shouldn’t decide whether a piece of information should be trusted; rather, AI can run in the background to help users and social media platforms identify fake content.
Ghosemajumder, who is from London, Ont., was one of three experts speaking at an RBC Disruptors panel at the University of Toronto’s Rotman School of Management. RBC Disruptors is a regular series on current events.
Others on the panel were Zeynep Tufekci, associate professor at the University of North Carolina’s School of Information and Library Science, and Kevin Chan, Facebook Canada’s head of public policy.
Tufekci said one of the biggest threats to democracy is the advertising-based business model of social media platforms, which encourages misinformation and “outrageous stuff.”
Social media companies trying to take down worrisome content and governments mandating the creation of political ad registries are band-aids, she said, as long as sites like YouTube have content recommendation algorithms that push “conspiracy theories, white supremacy, videos about how the moon landing never happened — stuff designed to grab attention.
“That’s how YouTube makes its money.”
The solution, she said, is forcing social media companies to move to a paid model, which would still allow them to be “perfectly profitable.”
Chan defended Facebook’s efforts to make its platform more resistant to abuse, noting the company is working closely with Elections Canada and Canadian intelligence agencies to identify problematic content. On Sunday Facebook Canada will release a tool to help eligible voters register for the October 21 federal election. In the first quarter of this year Facebook removed 1 billion fake accounts globally, he pointed out.
Internally Facebook debated refusing to accept political ads during elections, he said.
“But on balance we believe that digital marketing has been enormously empowering for people who want to have a voice during an election. It’s not just the incumbents who have access to significant budgets and other intermediaries. We think a concerned citizen . . . should be able to run an ad and tell their local candidates how they feel.”
In an interview before the session Ghosemajumder noted Canadians increasingly get news from each other through social media rather than by directly going to mainstream news sites. However, what he called “synthetically generated” content that alters real news is increasingly becoming a problem.
At one end of the scale, he said, is real news that is made to look more popular than it is by registering millions of fake accounts that ‘Like’ the article, making it look like a “grassroots movement.” At the other end is the creation of fake articles and videos.
Asked if social media sites are doing enough to combat abuse during elections, Ghosemajumder said platforms are making big efforts. “The effect,” he added, “is another thing. It’s a very difficult problem to deal with. There’s no hard and fast rule on how social media can affect people’s point of view. You can have an article that is not a black-and-white fake — say, it has a biased point of view from a legitimate news source — that goes viral.” But that’s the choice of users, he said, regardless of how others feel about the issue.
On the other hand, major social media sites are removing phony accounts, though he admitted they aren’t catching them fast enough. Content created by users, he added, gets more views than sponsored content and paid political ads.
His biggest fear about the upcoming election is foreign online manipulation of social media that slips past all the detection efforts platforms say they are making. “The creation of fake accounts and transactions is extremely difficult to detect.”
He also worries about the increasing ability to create fake videos, in which a real person’s image is altered to make them say something they never uttered. What will be needed to fight that is a way to tag a piece of content so readers will know its source.
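The idea of tagging content so readers can verify its source can be sketched, in very simplified form, with a cryptographic integrity tag. This is an illustration of the general technique, not Ghosemajumder's proposal: the key name and helper functions below are hypothetical, and real provenance schemes use public-key signatures so that anyone can verify without holding a secret.

```python
import hmac
import hashlib

# Hypothetical publisher key; in practice a public-key signature
# (e.g. Ed25519) would be used so verification needs no shared secret.
SECRET_KEY = b"publisher-signing-key"

def tag_content(content: bytes) -> str:
    """Return a hex tag binding the content to the publisher's key."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Check the tag; any edit to the content invalidates it."""
    return hmac.compare_digest(tag_content(content), tag)

article = b"Original quote from the interview."
tag = tag_content(article)

print(verify_content(article, tag))          # untouched content verifies
print(verify_content(b"Doctored quote.", tag))  # altered content fails
```

The point of the sketch is that a tag computed over the original bytes breaks the moment a single character is altered, which is what would let a reader detect a doctored video or quote if publishers attached such tags at the source.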