TORONTO – As artificial intelligence and automation continue to develop, organizations all over the world are going to have to start thinking about digital ethics.
As a senior leader at Avanade since its founding in 2000, and its CEO since 2008, Adam Warby is well equipped to help answer this question. ITWC had a chance to sit down with Warby to discuss accruing talent, the remote workforce, digital ethics and automation, and digital transformation.
In part three of our Q&A with Warby, he discusses digital ethics and the role moral and ethical decisions will play in the development of artificial intelligence and automation.
This is part three of a four-part interview.
Part one covered how businesses can tackle the skills gap and accrue talent.
Part two covered the remote workforce.
The following is an edited transcript.
ITWC: Back in February, Avanade released its 2017 Technology Vision that focused heavily on artificial intelligence. Avanade Canada GM Jeff Gilchrist told me then that every business will be an AI-first business. What’s your take on that statement?
Adam Warby: I think you have to envision that future. It's one of the hardest aspects of technology visions or technology predictions – how soon will this be and where will it come from? We talk about the shift from dial-up to fully connected and how that took a couple of decades, and this will probably take less than that.
The idea is that we will have intelligence built in. I often think the word "artificial" is the part that troubles people and confuses the conversation the most. Our focus is on the intelligence part of it, and then you can turn it around and call it intelligent automation, because the other part of it is automation.
All companies will have intelligence built into their systems, whether that be at the point of sale, the point of customer service, or automation on the factory line. To demystify the AI-first world, it's just about having intelligence and context built in, and that will absolutely happen.
ITWC: The introduction of automation and AI brings about one scary complication: digital ethics. As a company really diving into AI, how have you seen the idea of digital ethics develop over the last few years?
Warby: We’ve actually done some research, and we found that of the people who recognize that there are unintended consequences of digital technology, less than half have actually developed some form of guidelines, policies, or practices to deal with them. This is about first recognizing the issue, as with any issue, and I think data privacy is one of the things that will be brought to the forefront, particularly with the advent of the GDPR (General Data Protection Regulation) coming out of Europe, which companies have less than a year to comply with.
Ethics basically means choices: just because you can, does that mean you should? We are absolutely recommending that people develop a position on it. I’m head of our digital ethics and compliance council, and we have a focus on those things. An example would be the technology that is able to surface trending documents within the company. Do I want the company to know that the acquisition plan for Infusion is a trending document? Probably not; that’s private and confidential before the deal is closed. That’s a practical concern rather than an ethical one.
A more ethical example: when we first set up the software in the IT organization, one of the documents that started trending was the maternity policy, and there was only one woman in the department at that time. So, is it ethical that we should know this woman is either thinking about getting pregnant or already is – those sorts of issues. You have to develop your own set of scenarios.
ITWC: How can we reconcile the decisions we as a society are going to have to make with machines? For example, looking at scenarios like MIT’s Moral Machine.
Warby: As far as I understand it, some of the autonomous vehicle companies are putting their ideas and scenarios online to gather input from society at large about what you would do in a given situation. I think, as with any policy or practice, it will evolve through practical examples and scenarios.
Now of course, eventually things like insurance and liability will come into play, people will have to make decisions to cover risk, and these things become quite specific in a legal context. In the meantime, I think you come back to these examples: it’s about engaging with specific scenarios and developing an opinion about what we think. Kill the passengers or the pedestrians? Is that pedestrian old or young? Does that make the decision different? Those are the moral dilemmas.
ITWC: Who makes that decision? The government? The creators? And if mistakes happen, who takes the fall?
Warby: The current legal risk frameworks can apply, but they are going to need to evolve to address these specific types of situations. Ultimately, the technology originator will make some decisions about what the technology is capable of, and what it does or doesn’t do in specific cases.
One of the questions will be whether or not those decisions are published. If they are private, what does that mean? I think it is not yet clear what the biggest, most challenging scenarios are, but learning together will be part of it. And then law, practice, insurance, and risk management will be part of those decisions as well.
ITWC: Is government too slow? Will government bodies be able to move fast enough to make these decisions?
Warby: I think innovation has a history of outstripping any sort of government or legal practice. My view is that innovation will outstrip the ability of most governments to legislate ahead of it. But this is fine, because law is also a practical discipline that is built up on experience.
In the case of GDPR, for example, the European Union has taken a position, and it took some time to be very thoughtful about it; those rules will now get interpreted in a practical way. That’s the way law and practice work.