Experts talk measuring the impact of AI on climate change at ALL IN

During a panel at ALL IN in Montreal, industry experts from leading institutions like MILA, Hugging Face, Innovobot, Stanford University and others came together to discuss challenges with measuring and reporting the impact of artificial intelligence (AI) on climate change.

David Rolnick, a core academic member at MILA, kicked off the discussion by detailing the applications of AI in addressing climate change, spanning data collection, forecasting, improving operational efficiency, speeding up simulations, accelerating scientific discovery and more.

He, however, warned, “AI is not a silver bullet. It’s not the answer to climate change – there is no one answer to climate change – we need lots and lots of different tools. AI is one of those tools. It’s not the most important one, but it is a powerful one.”

There are challenges even within AI algorithms in climate-related applications, Rolnick explained. Chief among them are difficulties with data collection: data may be constantly changing, limited, or imbalanced, especially when a researcher is working across multiple geographies.

Hugging Face researcher Sasha Luccioni added that data on GPU emissions, and how much of them is associated with AI, is severely lacking: “we don’t have these numbers until someone sues a big tech company, and then it’s part of public record.”

However, the panelists cautioned that transparency is not the same as greenwashing, whereby a company makes unproven claims about being eco-friendly. Oil giant Chevron’s claims of going net-zero, for instance, do not make sense, noted Innovobot researcher Sylvain Carle, who argued that the company’s core business of delivering fossil fuels is not being accounted for under scope three emissions.

“Really understanding where numbers are coming from is essential, so that we don’t end up in a situation where we have certified sustainable algorithms doing a very non-sustainable thing,” said Carle.

Another common misconception, Rolnick explained, is the association of “flashy” large language models with climate-positive applications. 

The panelists stressed that GPT-derived models are not the ones being used in impactful climate-positive applications, and that it is, in fact, these large language models that have the biggest environmental footprint.

There are also several avenues for AI to negatively impact the climate, and they can be direct or indirect, Carle explained. Direct impacts include, among others, emissions from an AI data centre, while indirect impacts include, for instance, the carbon required to manufacture AI chips or the consumption associated with using AI.

Companies, Carle lamented, should stop considering only certain pieces of the impact, or shoving responsibility onto others, and instead think about “the entire value chain, the entire impact all the way down the line, so that we don’t have really misleading numbers.”

Carle added that public agencies also need to do a better job of reporting. “No more emissions reports in PDF, please. And we need up-to-date, machine-readable, parsable data.”

Further, Duke University researcher Lee Tiedrich argued that it’s important for technical experts to come up with uniform climate impact assessment tools and metrics to further empower policymakers, adding, “It doesn’t help if Europe has one set of standards, North America has another, and Asia has another.”

Making data interpretable for the end user of climate-related applications is a key issue, Rolnick concurred.

“These algorithms are not just going to be isolated code. They’re going to be incorporated into larger pipelines, and used by stakeholders who may have little to no familiarity with AI,” he said. “Understanding what kinds of functionality need to be built in, like uncertainty quantification or interpretability, is really essential to making these systems actually useful.”

Luccioni acknowledged that there are only so many people who know how to use AI and machine learning. The aim, therefore, she said, should be to give ecologists the tools to be successful.

Tiedrich further added that policymakers also need to be given a pathway to ensure regulation does not miss the mark and end up being counterproductive.

“It’s that multidisciplinary collaboration – bringing people together in an agile environment to come up with solutions. Because then you can go to regulators and say, ‘Here are tools that work,’ ” said Tiedrich.

Ashee Pamma
Ashee is a writer for ITWC. She completed her degree in Communication and Media Studies at Carleton University in Ottawa. She hopes to become a columnist after further studies in Journalism. You can email her at apamma@itwc.ca
