TORONTO – When we see artificial intelligence (AI) in fiction, it usually depicts AI functioning just like a human. That’s called ‘strong AI’ and, well, we aren’t there yet. But we will be.
There are two types of AI: strong AI, the aforementioned AI that functions as a human would; and weak AI, the type we see today. Robots in a manufacturing plant that autonomously complete a single task are an example of weak AI.
“That’s where we are [with weak AI], but I think we are trending towards strong AI,” said Wayne Thompson, the chief data scientist at SAS, at the analytics vendor’s event in Toronto. “We are trending towards what I consider modern machine learning.”
When Thompson talks about AI, he talks about its applications in data analytics – perhaps the most relevant area AI can contribute to in 2017. Specifically, he dials in on machine learning as “that’s where you’ll be making most of your money.”
Data analytics may be the area where AI proves itself. The ability to manage huge swaths of data has become a necessity for businesses, yet it’s largely still done the way it was decades ago. With AI and machine learning, we can simply do it much faster.
Data analytics proved that there is a great need for statistics, for sampling, and for predictive models such as logistic regression. Machine learning can do all of that. Whether it draws on statistics, optimization theory, or simulated annealing from physics, automated systems can learn to understand data faster than ever before.
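Simulated annealing, one of the techniques mentioned above, is easy to sketch. The toy objective, step sizes, and cooling schedule below are all invented for illustration; a real analytics workload would use a tuned library implementation:

```python
import math
import random

def simulated_annealing(f, x0, temp=10.0, cooling=0.95, steps=2000, seed=0):
    """Minimize f starting from x0 using simulated annealing."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_fx = x, fx
    t = temp
    for _ in range(steps):
        # Propose a random neighbour; accept it if it improves the
        # objective, or with a temperature-dependent probability otherwise.
        cand = x + rng.uniform(-1.0, 1.0)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_fx:
                best_x, best_fx = x, fx
        t *= cooling  # cool down: accept fewer uphill moves over time
    return best_x, best_fx

# Toy objective with a known minimum at x = 3.
best_x, best_fx = simulated_annealing(lambda x: (x - 3) ** 2, x0=-10.0)
```

The high early temperature lets the search escape bad regions; as it cools, the walk settles into a minimum.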
Thompson uses a demo with Amazon’s Alexa as an example. By using SAS Viya, he wrote a skill for Alexa using D3 graphics running against SAS Viya. “You can use Java, Scala, and Python to write skills with SAS. Open source is part of the message,” said Thompson.
Once the skill was written, Thompson could ask Alexa to query the analytics server to forecast revenue, analyze customer satisfaction based on survey results, and run a sentiment analysis, after first teaching Alexa how to understand the data. “You have to teach Alexa to understand,” he said.
For now, with weak AI, we have to teach these AI programs how to understand for themselves, and then keep reinforcing that learning. To reach strong AI, Alexa, for instance, is going to need to function and adapt all on her own.
It starts with machine learning, but machine learning is only the first layer in the development of AI and automation.
“You can view machine learning algorithms as programs that can write their own programs,” said Joseph Geraci, data scientist at Equifax. “It’s like learning how to walk. The machine in your brain is equipped to learn how to do this – to wire itself and make new neural networks, or to ‘write your own software’. The same goes for machine learning. When we interact we pick up on cues – data essentially. Machines will have to do the same.”
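Geraci’s “programs that write their own programs” can be pictured with a toy learner: given labelled data, it searches for the threshold that best separates two classes and returns a brand-new classifier function, the learned “program.” The data points here are made up:

```python
def learn_threshold(points):
    """'Write a program' from data: given (x, label) pairs, pick the
    threshold that best separates the two classes and return a new
    classifier function -- the learned 'program'."""
    xs = sorted(x for x, _ in points)
    # Candidate thresholds are midpoints between neighbouring values.
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    def accuracy(t):
        return sum((x > t) == bool(label) for x, label in points) / len(points)
    best = max(candidates, key=accuracy)
    return lambda x: int(x > best)

# Invented training data: small values labelled 0, large values labelled 1.
clf = learn_threshold([(1, 0), (2, 0), (3, 0), (7, 1), (8, 1), (9, 1)])
```

No one wrote the classifier’s decision rule by hand; it was induced from the data, which is the essence of the analogy.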
The next step is deep learning, which has multiple layers so that it can continue to learn and progress. Deep learning allows processes like the natural language processing that we see in speech-to-text functions.
“Every layer you go through of deep learning describes a different set of features, which originates from the layers below it,” said Lovell Hodge, vice president of North American fraud analytics financial crimes and fraud management group at TD Bank Group. “It’s an area of fuzzy logic. It allows you to write rules that are more akin to how a human thinks. We say the room is hot. If you can get the computer to understand what it means by hot, then over time it could self-learn what hot means.”
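Hodge’s “hot room” example maps directly onto a fuzzy membership function, which grades how hot a temperature is rather than giving a hard yes/no answer. The temperature thresholds below are invented for illustration:

```python
def hot_membership(temp_c, cool=18.0, hot=30.0):
    """Degree (0..1) to which a temperature counts as 'hot'.
    Below `cool` it is not hot at all; above `hot` it is fully hot;
    in between, the membership rises linearly."""
    if temp_c <= cool:
        return 0.0
    if temp_c >= hot:
        return 1.0
    return (temp_c - cool) / (hot - cool)
```

A fuzzy rule such as “if the room is hot, increase cooling” then acts in proportion to that degree, which is closer to how a human reasons about “hot” than a fixed cut-off.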
The final step, or the highest point we have reached so far, is combining natural language processing with deep learning to get cognitive computing. It takes speech-to-text through to data processing, and integrates that with learning to generate insight. With cognitive computing, the machine could learn to understand what is being asked via natural language processing, then invoke a machine learning skill to come up with a forecast.
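The cognitive-computing loop described above, understand the question, then hand off to an analytics skill, can be caricatured with simple keyword routing. This is a stand-in for real natural language processing; the handlers and phrases are invented:

```python
def route_query(question, handlers):
    """Very rough intent routing: match keywords in a natural-language
    question to an analytics handler, mimicking how a spoken query might
    be dispatched to a forecasting or sentiment skill."""
    q = question.lower()
    for keywords, handler in handlers:
        if all(k in q for k in keywords):
            return handler(q)
    return "Sorry, I don't understand that yet."

# Hypothetical skills: in a real system these would call analytics jobs.
handlers = [
    (("forecast", "revenue"), lambda q: "Running revenue forecast..."),
    (("sentiment",), lambda q: "Running sentiment analysis..."),
]
```

For example, `route_query("Can you forecast next quarter's revenue?", handlers)` dispatches to the forecasting handler; an unmatched question falls through to the apology.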
After that – what comes next? Well, according to Thompson, organizations need to collect even more data, since data is the foundation on which AI will learn on its own and reach that strong AI peak.
Additionally, he wants all of SAS’ software to embed AI so that “it becomes a little more automated.” Thompson uses the SAS surveillance data fraud bot in its fraud suite as an example. Through automation, the bot can identify fraud more quickly and do it in a continuous learning fashion, or as Thompson puts it, “lifelong learning.”
But SAS doesn’t just plan on updating and growing its current software. According to Thompson, SAS will be releasing new deep learning algorithms later this year with a huge library that includes autoencoders, convolutional networks, and recurrent networks. “We’re trying to build a complete deep learning platform,” said Thompson.
That platform will also include new natural language processing libraries with expressive analytics and narrative summaries; it will be entirely embeddable through open APIs, backed by a software developer network strategy, and will put a renewed focus on GPUs over CPUs. The GPU point came last, but it might be the most important one.
SAS’ push to support GPUs arises because these huge deep learning networks require more power. Training deep learning models on CPUs takes far longer, so GPUs built on Nvidia chips are the route SAS will take.
Ultimately, SAS has outlined four goals with its continued support in AI development.
- Leverage data of any type
- Support human and machine interactions
- Tell an actionable story
- Continually learn and update
“If I’m having a conversation about sales with Alexa, I may need to interject and ask a machine to adjust. That’s the true form of cognitive computing that we need,” said Thompson.