
Designing durable values: a conversation with Mat Mytka on the ethical intent to action gap in tech

Mustering the energy and expertise to vigilantly protect your privacy as a user of technology is a challenging feat. If it’s even possible to do well, it might necessitate a refusal to use popular apps and websites. Platform monopolies make that choice more extreme (do I want to avoid personal data collection more than I want to stay in touch with international friends and family?), and similar technologies are increasingly a part of essential services, such as immigration, grocery shopping, or contact tracing. Leaving the consumer to donate their data unconditionally or opt out of digital society is an unfair burden. What might a better system look like? 

Mat Mytka and I met on Slack, in a conversation hosted by All Tech is Human, an organization that connects individuals working to improve the social impact of technology. Its founder, David Ryan Polgar, was featured in the Information and Communications Technology Council (ICTC) Tech and Human Rights interview series last year.

After checking out ICTC’s recent Responsible Innovation report, Mat reached out to talk about his experience as an educator in data ethics. Mat’s Greater than Learning platform helps businesses build data ethics into their key performance indicators (KPIs), narrowing the ethical intention-to-action gap (the difference between what people plan to do and what they actually do) in corporate values. Across radically different time zones—Mat is based in Australia—we got into a conversation about operationalizing data ethics principles, the challenges facing small and medium-sized enterprises (SMEs), and Mat’s pilot of a new corporate structure, a platform cooperative, designed to reduce conflicts of interest in responsible innovation work.

Faun: Thanks so much for taking the time to chat today. Before we get into some of these case studies, could we hear a bit about your journey to working in the responsible tech field? 

Mat: About 13 years ago, I returned to university as a mature student. I got interested in how the digital technology age influences the way that we think, behave, and interact. This was 2008, and Facebook was only a few years old. At the time, you sounded a little bit like a conspiracy theorist if you were talking about the subtleties of digital technologies collecting information and creating feedback loops, and how these influence people’s behaviour, actually changing the way a person interacts and thinks about themselves. 

Fast forward a few years, I co-built a service business called Greater than X that operated at the intersection of privacy, trust, and ethics in the context of the information sharing economy. Through this work, I developed a much more nuanced understanding of some of the challenges that businesses face, including how they are constrained by shareholder primacy, as well as the types of metrics and incentives that they have. And this really piqued my intellectual curiosity, because these are systemic problems that are really hard to solve; it takes a lot of systems thinking just to understand them in the first place. We got a little bit deeper into responsible innovation, working with big Fortune 500 companies and national governments on information sharing initiatives. And that’s actually helped me empathize a lot with decision-makers who wanted to build technology that helped people and didn’t create externalities or downstream consequences, whether in machine learning or in the way privacy needs to be baked into information sharing initiatives in a national context.

That spurred on a whole bunch of thinking around developing a learning platform to help in some way. So that’s the trajectory I’ve been on, which has led me to be interested in this area of responsible innovation and technology. 

Faun: Now you run this educational platform, Greater than Learning, which helps companies to operationalize data ethics. Could you share some thoughts on why this is needed, and what strategies you use?

Mat: For sure, and I’ll begin with a challenge: one of the problems we had as an organization was knowing where to start, and which empirically validated approaches had actually been tested in market to help organizations achieve set outcomes. There is a huge amount of overlap. There’s quite a lot of universality to values, for instance; there are universal human values and virtues that underpin the ways people think and interact in society. And there is a lot of duplication of effort, a lot of people saying, “Check out this new approach,” when they’re really regurgitating the same stuff. Most of these value-based frameworks are hard to take action on; they require a lot of interpretation.

Take a principle like openness and transparency, for instance, and go, “Well, we’ve got some action statements around how this might apply to our organization.” Everyone might be aware of those statements, but where the rubber meets the road, it matters whether I can actually act on them. When people are making a decision in an organization, they’re dealing with constraints: pressure, time, money, budget, KPIs, cultural nuances, conflicting internal and political agendas, and the incentives that drive their behaviour.

Figure 1: From Responsible Innovation in Canada and Beyond, ICTC, 2021. The report examines high-level frameworks, their shared values, and their critiques (such as a dearth of incentives and operational guidelines). It then presents a set of more specific rubrics for achieving principles like technology user agency.

I’m motivated by designing things that are going to be helpful for people. When you’re trying to help people learn to operationalize principles, you need to simulate and mirror some of those constraints. You can teach people all these frameworks and techniques and tools, but ultimately, if there’s no translation to practical action, all that stuff falls apart. You’ll see it vanish when the stress comes on in a meeting or in a product development workflow. People default to past behaviours when they’re making decisions under constraints.

Faun: I know you mentioned the Microsoft case study as one you’d like to unpack a little today for readers. The World Economic Forum and the Markkula Center for Applied Ethics examined Microsoft’s experience of trying to operationalize Responsible AI Principles across what is, of course, a huge company. The whitepaper on this goes over the company’s culture change activities, governance model, impact assessment tools, and other methods—like “community juries” for consultations. What makes this an interesting case study for you?

Mat: It’s one of the better case studies in this area that I’ve come across. It really asks how our mindsets and mental models influence our workplace culture and the way that we design and build technology. Workplace culture is actually really important, because professional relationships shape the meaning people attach to their work and their organization.

Faun: I’m reminded of an MIT article that compares value statements with Glassdoor reviews and maps out the gaps between stated intentions and employee perception—this seems to be a pretty endemic issue! They found that there was no correlation between official values and corporate culture, in the eyes of employees. 

Mat: Right, it really matters what people believe their organization represents, why they are there, and what kind of work they are there to do. Microsoft also put a focus on some of the tools it was using to support more responsible innovation: privacy impact assessments, for instance, as well as internal collaborative activities driven by board games and card games that help people think through problems together. And then there is the “community jury,” a participatory approach that brings together end users, customers, perhaps regulators, and internal staff from different functions in the business to figure out whether a design or approach is aligned with people’s values, perceptions, stated preferences, and revealed preferences.

“Technology ethics is fundamentally an applied, practical discipline. It is not about theory, although theory informs practice. Technology ethics, as a practical pursuit, benefits from having very specific and applicable tools to aid thinking, analysis, stakeholder engagement and decision-making. The tools presented here – Judgment Call, Envision AI workshops, impact assessments, community jury, Fairlearn, InterpretML and the error terrain analysis—each address a specific aspect of the ethics of technology development. Most of these tools, guidelines, and resources are available to the public at Microsoft’s Responsible AI Resource Center.”
Source: “Responsible Use of Technology: The Microsoft Case Study,” February 2021, World Economic Forum and Markkula Center for Applied Ethics at Santa Clara University
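To give a concrete sense of what one of those tools does in practice: Fairlearn, the open source library named in the quote above, lets a team break a model’s metrics out by sensitive group, a small, checkable step toward turning a fairness principle into part of a product workflow. The sketch below is illustrative only and is not drawn from the Microsoft case study; the labels, predictions, and group values are hypothetical placeholders.

from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical placeholder data: ground-truth labels, model predictions,
# and a sensitive feature (e.g., a demographic group) for each record.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]
group = ["A", "A", "A", "B", "B", "B", "B", "A"]

# MetricFrame computes each metric overall and per sensitive group.
mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=group,
)

print(mf.overall)       # metrics over the whole dataset
print(mf.by_group)      # the same metrics broken out by group
print(mf.difference())  # largest between-group gap for each metric

A gap surfaced this way does not settle an ethical question on its own, but it gives a team something specific to review, which is the kind of intent-to-action translation discussed throughout this interview.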

The case study also lets you look at how workplace culture, with its incentives, metrics, and KPIs, influences outcomes. You’re going to have a conflict if your business metrics are revenue-focused, driving activity and sales instead of, say, increasing user autonomy. There’s a real tension in that.

So there are a number of ways in which the case study is great. But it’s also missing a lot, because we’re talking about Microsoft, with all the resources that it has. Large organizations can resource responsible innovation; Salesforce, for instance, has its Office of Ethical and Humane Use. Maybe it’s still challenging even for Microsoft to do this stuff, but how does your average bootstrapped startup coming out of Seattle take action on these things? It’s too hard, so they don’t. And that’s one of the big problems. It’s hard enough for big organizations to do this well, and it’s orders of magnitude more challenging for an SME.

Faun: So how would you go about solving this problem? Does it then merit a regulatory approach?

Mat: One challenge with a regulatory approach is that even the process of developing “hard” regulations—the consultation process with communities, practitioners, and subject matter experts—is a little bit broken. It doesn’t scale as well as it needs to, given the pace of technological change. So you often have to rely on a soft law approach, call it self-regulation, but how do you help organizations do this more systematically, and do it well? And then, how do you share those learnings in a way that enables the big guys, your Microsofts and Salesforces, to deliver a public good or public interest technology that helps SMEs do this stuff more easily? We want SMEs to be able to get interested in ethics and immediately have access to the insights and wisdom generated by the organizations that had the resources to do this work in the first place.

One of the things we’re trying to do at Greater than Learning is bring different stakeholders together to close that ethical intent-to-action gap. We think the best way to learn how to do responsible innovation is to actually do it. If you talk about it too much, if you write about it too much, and if it stays too abstract, you miss the learning opportunities that come from making mistakes. We’re trying a new model, making it a community-owned and governed organization, so it doesn’t fall into the trap of shareholder primacy. We’re currently transitioning to a platform cooperative, where members own the platform and the organization and work through democratic decision-making, so that the focus stays on ethical, trustworthy technology rather than on making money. In the short term, we have courses that lead people through evolving their technology to be more aligned with their values. And then the stretch goal, in the future, is to lead by example and offer a case study of a successful platform cooperative.

Faun: Amazing. To summarize, I think we’ve talked about how case studies are important to responsible innovation (RI) because they give us examples of how to operationalize values-driven approaches. We touched on how this is a lot easier for bigger organizations, and how the next step is to make RI processes more accessible to SMEs, through programs like Greater than Learning. To wrap up, and putting you on the spot a bit, what is the number one piece of advice or guidance you might give a company trying to operationalize its values or close that ethical intent-to-action gap?

Mat: At a high level, it’s think big, start tiny. You have to think about things systemically and holistically, and that’s a vision, that’s the longer-term perspective: that might be where the motivation and the intent are. But you can’t do it all at once. So you have to go, “OK, how do we break that into the smallest possible increments? How do we run a consequence-scanning workshop up front for this new product feature?” And that ties back to the piece on workplace culture we were discussing earlier. Building a routine where people reflect on their decision-making becomes one of the most important things. If you ask your product team to do too many new things, it causes change fatigue, and fatigued people won’t act. They might stick some posters on the office wall, but they won’t change their workflow. So start tiny, and then build out changes within a workplace culture that supports them through habits and routines.
