CIOs are working to make generative AI part of an enterprise-wide strategy, which means choosing the right infrastructure and language model while navigating workforce worries and growing risks.
A new report by MIT Technology Review, based on in-depth interviews with senior executives and experts conducted in April and May 2023, details the key choices CIOs are making as they seize on the generative AI phenomenon.
The interviewed executives represent organizations including Shell, DuPont Water & Protection, Cosmo Energy Holdings, MosaicML, the U.S. Department of Veterans Affairs, Adobe, and the University of California, Berkeley.
Here’s a look at some of the findings.
Data infrastructure
A solid data infrastructure is key to building AI applications. That includes software and network infrastructure, notably cloud or hybrid cloud, as well as hardware such as high-performance GPUs.
“The architecture is moving in a way that supports democratization of analytics,” said Richard Spencer Schaefer, chief health informatics officer at the U.S. Department of Veterans Affairs (VA). This means that the infrastructure must support a simple interface that allows users to query data and run complex tasks via natural language.
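As a rough illustration of what such an interface could look like (a hedged sketch, not an implementation from the report): an LLM translates a plain-English question into SQL, which then runs against the organization's existing data platform. The OpenAI client, model choice, schema, and database file below are illustrative assumptions.

```python
# Hypothetical natural-language query layer: an LLM turns a question into SQL,
# which then runs against the organization's data. Assumes the `openai`
# package (v1+) and an OPENAI_API_KEY in the environment; the schema and
# database file are illustrative.
import sqlite3
from openai import OpenAI

client = OpenAI()

SCHEMA = "CREATE TABLE visits (patient_id INTEGER, clinic TEXT, visit_date TEXT);"

def ask(question: str) -> list:
    # Ask the model for a single SQLite query that answers the question.
    resp = client.chat.completions.create(
        model="gpt-4",  # any capable chat model could stand in here
        messages=[
            {"role": "system",
             "content": "Translate the user's question into one SQLite query "
                        f"for this schema. Return only SQL.\n{SCHEMA}"},
            {"role": "user", "content": question},
        ],
    )
    sql = resp.choices[0].message.content.strip().strip("`")
    with sqlite3.connect("analytics.db") as db:  # hypothetical database
        return db.execute(sql).fetchall()

print(ask("How many visits did each clinic handle in May?"))
```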
Data lakehouses are becoming an increasingly popular infrastructure choice, combining two historical approaches: data warehouses and data lakes. Data warehouses systematized business intelligence but could not offer real-time services or accommodate emerging data formats. Data lakes support more AI and data science tasks, but they are complex and slow to construct and suffer from weaker data quality controls.
The lakehouse, on the other hand, offers an open architecture that combines the flexibility and scale of data lakes with the management and data quality of warehouses, the report reveals. It also minimizes the need to move data, a process that creates privacy risks.
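To make the pattern concrete, here is a minimal sketch using the open-source Delta Lake format with PySpark, one common way to build a lakehouse: records land once in an open file format with transactional guarantees, and warehouse-style SQL queries them in place. It assumes the pyspark and delta-spark packages are installed; the table, path, and schema are illustrative, not from the report.

```python
# A minimal lakehouse sketch with open-source Delta Lake and PySpark.
# Assumes `pyspark` and `delta-spark` are installed; the table path and
# schema are illustrative.
from pyspark.sql import SparkSession
from delta import configure_spark_with_delta_pip

builder = (
    SparkSession.builder.appName("lakehouse-sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Land raw records once, in an open file format with ACID guarantees...
spark.createDataFrame(
    [("2023-05-01", "sensor-1", 21.4), ("2023-05-01", "sensor-2", 19.8)],
    ["day", "device", "reading"],
).write.format("delta").mode("overwrite").save("/tmp/lakehouse/readings")

# ...then query them in place with warehouse-style SQL, with no copy into
# a separate warehouse required.
spark.read.format("delta").load("/tmp/lakehouse/readings") \
    .createOrReplaceTempView("readings")
spark.sql("SELECT device, avg(reading) FROM readings GROUP BY device").show()
```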
Choosing the right model
Leveraging a general-purpose AI platform does not confer competitive advantage, and it creates competitive risk, explained Michael Carbin, associate professor at MIT.
“You don’t necessarily want to build off an existing model where the data that you’re putting in could be used by that company to compete against your own core products,” said Carbin.
He added, “If you care deeply about a particular problem or you’re going to build a system that is very core for your business, it’s a question of who owns your IP.”
Users also lack visibility into the training data and algorithms that power these models. Companies including Samsung, Verizon, Amazon, and JPMorgan Chase have taken steps to limit internal use of external generative AI platforms.
Large language models (LLMs) are also trained on heaps of false information, which can make their output inaccurate and unreliable. Smaller, focused models offer a more viable alternative, the report noted.
“I believe we’re going to move away from ‘I need half a trillion parameters in a model’ to ‘maybe I need 7, 10, 30, 50 billion parameters on the data that I actually have,’” said Carbin. “The reduction in complexity comes by narrowing your focus from an all-purpose model that knows all of human knowledge to very high-quality knowledge just for you, because this is what individuals in businesses actually really need.”
Smaller models are also relatively quick and cheap to train, the report reveals.
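In practice, the path Carbin describes amounts to continuing the training of a compact open model on the domain text a business already has. The following is a minimal sketch assuming the Hugging Face transformers and datasets libraries; the model choice and the company_docs.txt corpus are illustrative assumptions, not details from the report.

```python
# Hedged sketch: fine-tune a small open model on in-house text instead of
# calling a half-trillion-parameter general model. Assumes `transformers`
# and `datasets` are installed; model and corpus are illustrative.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "EleutherAI/pythia-1.4b"  # a compact open model, ~1.4B params
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Domain corpus: one document per line in a plain-text file (hypothetical).
data = load_dataset("text", data_files={"train": "company_docs.txt"})
tokenized = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="focused-model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False means standard next-token (causal) language modeling.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```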
Workforce worries
Unlike observers who worry that AI will drive unemployment, the CIOs interviewed for this report offer a positive outlook. They argue that AI could help sectors like health care, where the workforce is stretched thin, and that human experts will remain essential.
Generative AI will also democratize access to technical capabilities previously reserved for a small slice of the workforce, the report says. Ideas for AI applications, one CIO noted, will increasingly come from the workforce itself, ushering in a self-service, entrepreneurial era.
Generative AI has also gained a foothold in software programming, prompting concerns, but the interviewed executives believe it will simply free programmers to focus on higher-value, less tedious work.
Adobe CIO Cynthia Stoddard said, “Generative AI lets creators at all levels use their own words to generate content. But I don’t think it will ever replace humans. Instead, it’s going to be an assistant to humans. We internally view AI/ML as being a helper, truly helping our people, and then allowing them to spend more time on other value-added activities.”
Addressing risk aversion and cultural factors like fear of failure is also key to driving AI adoption in the workforce, the CIOs concurred.
Risks
Adopting AI without managing the risks – from bias, copyright, and privacy infringement to security breaches – is reckless, the report says. And model explainability is imperative to earning the trust of stakeholders for AI adoption and for proving the technology’s business value.
CIOs also deem unified governance essential to managing generative AI risks.
“The risk of having non-standardized, non-well-defined data running through a model, and how that could lead to bias and to model drift, has made that a much more important aspect,” Schaefer explained.
Stoddard added that a wide range of voices is needed throughout the AI oversight process: diversity not only of ethnicity, gender, and sexual orientation, but also of thought and professional experience, mixed into both the process and the AI impact assessment.
Organization-wide visibility also matters. “High on our list is getting governance tools in place that provide a visual overview of models that are in development, so that they can be spoken to by leadership or reviewed by stakeholders at any time,” said Stoddard.
The report also briefly discussed Constitutional AI, an approach currently advocated by the startup Anthropic, which provides LLMs with specific values and principles to adhere to rather than relying on human feedback to guide content production.
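In rough outline (a sketch of the general idea, not Anthropic's actual training pipeline), the constitutional loop has the model critique its own draft against each written principle and then revise it. `ask_model` below stands in for any LLM completion call, and the two principles are illustrative.

```python
# Hedged sketch of a constitutional critique-and-revise loop. `ask_model`
# is a placeholder for any LLM completion function; the principles are
# illustrative, not Anthropic's actual constitution.
from typing import Callable

CONSTITUTION = [
    "Avoid content that is harmful, deceptive, or discriminatory.",
    "Do not reveal private or confidential information.",
]

def constitutional_revision(ask_model: Callable[[str], str], prompt: str) -> str:
    draft = ask_model(prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = ask_model(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Identify any way the response violates the principle."
        )
        # ...then rewrite the draft to address that critique.
        draft = ask_model(
            f"Response: {draft}\nCritique: {critique}\n"
            "Rewrite the response so it fully satisfies the principle."
        )
    return draft
```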
“A powerful new technology like generative AI brings with it numerous risks and responsibilities,” the report observed. “Our interviews suggest that a motivated AI community of practitioners, startups and companies will increasingly attend to the governance risks of AI, just as they do with environmental sustainability, through a mixture of public interest concern, good governance, and brand protection.”