AI now and in the next five years: Eric Schmidt talks AI at Collision 2022

At Collision 2022 in Toronto last week, former Google chief executive officer Eric Schmidt shared his thoughts on where artificial intelligence (AI) is headed, and what dangers lie in the path of its rapid development.

He opened the discussion by highlighting the evolving focus of the AI landscape.

“If you look at the biggest thing 10 years ago… [it] was imaging, and in particular the fact that vision through things like ImageNet and others would become better on computers than for humans.”

Schmidt said there are now two major pursuits in the current AI cycle. One is to understand the underlying principles of multi-dimensional input and output mapping; the other is to continually improve large language models.

Furthermore, as AI has grown more powerful, two emergent AI paradigms are taking off. One is multimodal AI, which can learn from multiple types of data, including audio, video, numbers, and images. In the future, multimodal AI will be able to fluidly solve problems involving different or multiple data types and provide richer answers.
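What Schmidt describes maps onto a pattern engineers often call late fusion: each data type gets its own encoder, and the resulting embeddings are combined for a joint prediction. The sketch below is a toy illustration of that idea only; the encoders and weights are made-up stand-ins, not any real model.

```python
# Illustrative only: a toy "late fusion" multimodal pattern.
import numpy as np

rng = np.random.default_rng(0)

def encode_audio(clip):
    # Stand-in for an audio encoder; a real system would use a learned model.
    return clip.mean(axis=0)

def encode_image(pixels):
    # Stand-in for a vision encoder, reduced to an 8-dimensional embedding.
    return pixels.reshape(-1)[:8]

def fuse_and_score(audio_vec, image_vec, weights):
    # Late fusion: concatenate per-modality embeddings, then apply one
    # shared linear readout to produce a single prediction.
    joint = np.concatenate([audio_vec, image_vec])
    return float(joint @ weights)

audio = rng.normal(size=(100, 8))  # fake audio features: 100 frames x 8 dims
image = rng.normal(size=(4, 4))    # fake 4x4 "image"
w = rng.normal(size=16)            # 8 audio dims + 8 image dims

print(fuse_and_score(encode_audio(audio), encode_image(image), w))
```

Real multimodal systems learn the encoders and fusion weights jointly from data rather than fixing them by hand, but the combining step is the essence of "multiple types of data, one answer."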

The other is generative design. Generative design systems are already being used to design machine components and convert text to images. The engineer defines a set of criteria and the software produces a model that satisfies them. It can save cost, optimize reliability, and sometimes produce completely new, one-of-a-kind solutions. Generative design in image generation is still in its infancy, but as OpenAI’s DALL·E and Google’s Imagen have demonstrated, AI systems can successfully produce an image based on even obscure descriptions. Soon, they will be able to do even more, said Schmidt.

One example of how engineers use generative design to create products. Source: Creo YouTube channel.
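The loop at the heart of generative design can be sketched in a few lines: generate candidate designs, keep only those that meet the engineer’s criteria, and prefer the best of the survivors. The toy example below uses a simplified beam model and plain random search; both are illustrative assumptions, and commercial tools rely on far more sophisticated geometry and solvers.

```python
# Illustrative only: a toy generative-design loop. The "design" is a beam's
# width and height; the criteria and search strategy are simplified stand-ins.
import random

random.seed(42)

def satisfies_criteria(width, height):
    # Engineer-defined requirements: enough bending strength, bounded mass.
    strength = width * height ** 2  # proportional to a beam's section modulus
    mass = width * height           # proportional to cross-sectional area
    return strength >= 50.0 and mass <= 30.0

best = None
for _ in range(10_000):
    # Generate candidate designs and keep the lightest one that qualifies.
    w = random.uniform(0.5, 10.0)
    h = random.uniform(0.5, 10.0)
    if satisfies_criteria(w, h):
        mass = w * h
        if best is None or mass < best[0]:
            best = (mass, w, h)

if best is not None:
    print(f"Lightest qualifying design: mass={best[0]:.2f}, "
          f"width={best[1]:.2f}, height={best[2]:.2f}")
```

Real generative design software replaces the random search with topology optimization and physics simulation, but the shape of the loop (propose, evaluate against criteria, keep the best) is the same.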

“My favourite example of DALL·E is ‘show me a picture of two baby dinosaurs with backpacks going to kindergarten on their first day,’ and it produces a picture of two little dinosaurs going up two steps into kindergarten with their backpacks. And smiling. How do they know that dinosaurs smile on the first day of school? It learned it somehow. That’s very powerful,” said Schmidt.

Looking into the next five years, Schmidt believes that people will have an “AI second self” that can speak for and represent them in certain situations, as well as vastly improved AI assistants that can not only feed the user hard information but also give much deeper recommendations, such as whether to travel to a specific country or whether a person is good or bad. But regardless of how smart AI becomes, Schmidt emphasized some limitations and dangers lurking in the AI field.

“If you talk to the researchers [and ask them], ‘Where are the limits?’ And there’s always a limit to technologies,” said Schmidt. “Most people now believe that these systems have trouble with common facts; they don’t have a notion of time; they are brittle; they’re easily attacked in adversarial ways. All of those issues have got to get addressed.

A few of the images generated by Imagen based on text descriptions. It interprets the request using a deep language model combined with a cascaded diffusion image generation model. Image credit: Google

“If they’re addressed, then you’re going to have systems which are human-like, but not identical to humans, as our peer intelligence. And when I say they’re human-like, I mean you’ll say, boy, that thing is smart. But this is really important: they’re not emulating human intelligence.”

Most humans exhibit a pattern of behaviour; people can loosely guess how their friends will behave. AI systems, however, are cold and mysterious. With systems that are constantly learning and evolving, Schmidt worries about the consequences if an AI learns from the wrong data.

“Imagine I have a computer over here. I have no concept of its theory of mind. I have no idea if it can flip to be evil,” said Schmidt. “Can it invent something new that would hurt me? I’m always going to be worried that that box over there has some emergent behaviour that no one has seen yet. Now, the reason I’m focused on emergent behaviour is because the systems are learning; they’re not just trained to a static outcome. They’re continuously learning. What happens if you learn something which is structurally wrong, and it’s in some important aspect of my life? And that’s where I think the issue is.”

Elaborating on his concerns, Schmidt raised the importance of the trust humans may eventually place in AIs.

“The oracles, the leaders, the mentors that we all have are human, and we have a theory of mind for who they are. What happens when my best friend is not a human but one of these boxes?” Schmidt asked when discussing the dangers AI systems could pose if they feed users misinformation. “I am very worried that on a global scale, this will change the way we experience our daily lives. It’s already hard enough with everyone being connected. But when you’ve got these non-sentient but emergent behaviours, generative design systems, it’s going to be a pretty rough case; you’re going to be more paranoid than you are now, because you’re not going to know what it’s going to do.”

Misinformation aside, Schmidt warned that the problem gets even more dangerous when AI is involved in government and military decisions. In future cyber warfare, the pace of attack and defence will be too fast for human decision-making, and in a blistering exchange of blows in bits, there are no rules for what counts as a proportional response. Schmidt said that AI systems should respond in a way that does not amplify the conflict.

“There are so many examples where we literally don’t have the people, we don’t have the conversations, and we don’t have the agreement on how to handle this,” said Schmidt. “And the reason I’m calling the alarm right now is I know what’s going to happen, because we’ve seen it with previous technologies in the 1950s. All of the things that have caused us to be alive today – mutually assured destruction, containment – all of those were invented in the fifties after the horror of the invention of nuclear bombs. And the reason they haven’t been used since is that we came up with some workable agreements among people who don’t even like each other.”
