If the artificial intelligence trend is a growing bubble, then Stephen Piron, co-founder of Toronto-based startup Dessa, says it started inflating in 2012.
That’s when team SuperVision, led by the University of Toronto’s Geoffrey Hinton, handily won the ImageNet competition using a type of AI model that had not been tried in the contest before. Described as a “large, deep convolutional neural network trained on raw RGB pixel values” that contained “60 million parameters and 650,000 neurons,” it was so much better than any previous attempt at the competition that everyone else took notice. Every year since, competing researchers have adopted the same deep neural network approach to computer vision.
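For readers unfamiliar with the approach, here is a minimal sketch (in PyTorch, and emphatically not SuperVision’s actual 60-million-parameter architecture) of a convolutional neural network that classifies images from raw RGB pixel values:

```python
# A toy convolutional network operating on raw RGB pixels, illustrating
# the general approach -- not the 2012 SuperVision architecture.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 input channels: R, G, B
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a batch of four 32x32 RGB images.
model = TinyConvNet()
logits = model(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

The key idea, shared with the 2012 winner, is that the convolutional layers learn their own visual features directly from the pixels rather than relying on hand-engineered ones.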
Six years later, Hinton is known colloquially as the godfather of deep learning. He splits his time between Google and the University of Toronto, where he’s also involved with the influential Vector Institute. It’s one reason Piron says Canada is a good place to develop AI technology. But he’s also wary of the long journey ahead and the pitfalls along the way.
“I’m just old enough to remember the dot-com bubble,” Piron says, referring to the crash of web-related stocks in 2000. On stage at the CanadianCIO Summit on Sept. 30, he points to a slide comparing Nvidia’s stock price today with Intel’s circa the dot-com bust. “This might be a sobering image.”
Nvidia’s parallel-processing GPUs are in high demand for AI applications (Intel has its own AI-specific chip in development too). But just because the raw compute power for the AI engine is being put in place doesn’t mean anyone knows what vehicle to install it in.
A McKinsey chart shows how Nvidia’s GPUs have become linked to the demand for deep learning.
Banks are among the first customers of AI, looking to use their historical treasure troves of data to feed machine learning algorithms that could steer customers toward the right investments, detect fraudulent transactions with better accuracy, or recommend the right products based on customer profiles.
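In code, the banking pattern is conceptually simple: fit a model on historical transactions that have already been labelled. A minimal sketch using scikit-learn, with entirely hypothetical features and data:

```python
# A toy fraud classifier trained on labelled historical transactions.
# Feature names and values are illustrative assumptions only.
from sklearn.linear_model import LogisticRegression

# Historical transactions: [amount, hour_of_day, is_foreign_merchant]
X = [[12.0, 14, 0], [900.0, 3, 1], [25.5, 11, 0], [1500.0, 2, 1]]
y = [0, 1, 0, 1]  # 1 = confirmed fraud in the historical record

model = LogisticRegression().fit(X, y)
# Estimated fraud probability for a new transaction.
print(model.predict_proba([[1100.0, 4, 1]])[0][1])
```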
Financial firms overhaul infrastructure for deep learning
To name a couple of examples of banks pursuing AI projects: the Royal Bank of Canada created Borealis AI in October 2016 (at first it was called RBC Research in Machine Learning). The lab has recruited talent over the past six months, even luring some high-profile American researchers. In January, TD Bank Group acquired Toronto-based startup Layer 6 and has integrated it into its digital products team, with the intent of improving personalized, real-time advice to customers.
Dessa, whose mandate is to help businesses adapt and implement artificial intelligence, is currently working with a client in the banking sector, according to Piron. This client, like much of the banking sector, is trying to make the same pivot that ImageNet researchers did after 2012: dump old AI approaches and embrace deep learning as the future.
To do so, they have to retire some old infrastructure and stand up cloud-based GPUs, Piron says. That means scrutinizing the security of these cloud systems as banks determine what data is appropriate to move into that environment. (Should a bank upload a million of its customers’ credit card numbers to train an algorithm?)
“More data makes the model better,” Piron says. “It gave the bank a use case and it was all about the money that it saved.”
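One way banks can square “more data makes the model better” with the credit-card question above is to pseudonymize sensitive fields before anything leaves their premises. A minimal sketch, with hypothetical field names and salt handling:

```python
# Pseudonymize card numbers with a salted (keyed) hash so the training set
# carries a stable identifier instead of the real number. The field names
# and the salt-handling shown here are illustrative assumptions.
import hashlib
import hmac
import os

SECRET_SALT = os.environ["TOKENIZATION_SALT"].encode()  # key kept on-premises

def tokenize_card_number(card_number: str) -> str:
    """Return a stable, irreversible token for a card number."""
    return hmac.new(SECRET_SALT, card_number.encode(), hashlib.sha256).hexdigest()

record = {"card_number": "4111111111111111", "amount": 42.50, "merchant": "grocery"}
safe_record = {**record, "card_number": tokenize_card_number(record["card_number"])}
# safe_record can now be moved to cloud GPUs for training.
```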
Data is crucial to feed AI algorithms in every sector, not just banking. At connectivity and managed cloud services provider Cogeco Peer 1, headquartered in Toronto but with locations around the world, AI is being used for predictive maintenance. The firm hopes to intervene in problems with routers and switches well before the “bug” ever manifests as a service problem for a customer.
“AI is filling in missing pieces of information by looking at historical data and assimilating it,” says Craig Tavares, director of product innovation and technology at Cogeco Peer 1, speaking on a panel at the CanadianCIO Summit. “Automation can be put in to measure a result, and trigger an alarm that tells you to perform a certain type of maintenance.”
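What such an automated check might look like, reduced to a simple rolling-statistics rule (the metric, device name, and threshold are illustrative assumptions, not Cogeco Peer 1’s system):

```python
# Flag a device when its latest reading drifts beyond what its own history
# suggests is normal; a production system would use a trained model instead.
from statistics import mean, stdev

def check_device(history: list[float], latest: float, n_sigmas: float = 3.0) -> bool:
    """Return True if `latest` deviates more than n_sigmas from history."""
    mu, sigma = mean(history), stdev(history)
    return abs(latest - mu) > n_sigmas * max(sigma, 1e-9)

interface_errors_per_hour = [2.0, 3.0, 2.5, 2.2, 3.1, 2.8]  # historical readings
if check_device(interface_errors_per_hour, latest=14.0):
    print("ALARM: schedule maintenance for router-07 before customers notice")
```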
The challenge in making that function work is avoiding garbage data. Tavares says he’s always looking at Cogeco Peer 1’s data lake and thinking about how to clean it up.
“With day-to-day operations, your data might be in an acceptable state,” he says. “With AI that’s not the case. Understanding how clean your data is, is huge.”
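In practice, “clean” means things like deduplicated records, no silent nulls, and consistent units. A minimal pandas sketch, with hypothetical file and column names:

```python
# Typical data-lake hygiene before training: drop duplicate ingests,
# handle missing readings, and normalize timestamps and units.
# File and column names here are illustrative assumptions.
import pandas as pd

df = pd.read_csv("router_telemetry.csv")

df = df.drop_duplicates(subset=["device_id", "timestamp"])   # repeated ingests
df = df.dropna(subset=["error_rate"])                        # silent nulls poison training
df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)  # one timezone everywhere
df["throughput_mbps"] = df["throughput_kbps"] / 1000.0       # consistent units

df.to_parquet("router_telemetry_clean.parquet")
```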
Stay calm and implement AI
So the use cases are starting to materialize, slowly but surely. Piron says he encounters executives who know they want to use AI but have no idea where to start.
“We spend a lot of time calming them down,” he says. “We look at their business and map out a multi-year roadmap to take advantage of AI.”
Cogeco Peer 1 takes a similar approach when its customers come knocking, asking how to implement AI. On a separate panel, Bertrand Labelle, vice-president of marketing and innovation at Cogeco Peer 1, says he’s used to steering these requests wisely.
“We come in and ask, ‘What are you trying to achieve, and why?’” he said. “Then we have a chat about where your workloads should really run.”
While the journey to AI autonomy is still years out and the road there is fraught with data snafus, now is absolutely the right time to get started, Piron affirms. He points to a McKinsey projection of the likely impact AI will have on the tech sector. It shows that companies that begin AI training now will have a competitive advantage, and those that don’t will be left behind.
“With so much at stake, companies cannot afford to have a nebulous or tentative plan for capturing value,” the article states. “Early entrants can improve and rapidly gain scale to become the standard. Companies should focus on strong solutions that allow them to establish a presence now, rather than striving for perfection. With an early success under their belt, they can then expand to more speculative opportunities.”
Then, even if the AI bubble bursts, you may find you’re already outside of it.