
If we don’t want AI to emit too much CO2, we need to think about when and where we run the models

  • Mike Pearl

A paper shows that time of day and year may be as important as region in slashing carbon pollution.

According to a recent news brief in Nature, training an AI model in Washington state late at night might please climate hawks, because that’s “when the state’s electricity comes from hydroelectric power alone,” and thus leads to “lower emissions than doing so during the day, when power also comes from gas-fired stations.”

Nature pulled that takeaway from a preprint paper co-authored by Jesse Dodge of the Allen Institute for AI’s natural language processing team, along with nine other scientists and scholars from around the globe (and it should be noted that the Allen Institute is a supporter of this publication). That paper, “Measuring the Carbon Intensity of AI in Cloud Instances,” poses, and attempts to answer, two questions about the climate impacts of cloud instances of AI:

  1. “How should we measure and report operational carbon costs of AI workloads?”

  2. “Can we shift computation spatially and temporally to mitigate emissions?”

And progress on this dilemma is sorely needed, as anyone who recalls Emma Strubell, Ananya Ganesh, and Andrew McCallum’s 2019 paper on the topic will know. That paper concluded that creating one deep learning model can emit up to 626,155 pounds of CO2, roughly what five cars emit over their lifetimes, from the time they roll off the lot to when they’re crushed into cubes.
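The five-car comparison checks out arithmetically, assuming the per-car figure Strubell et al. used: roughly 126,000 pounds of CO2-equivalent for an average car’s lifetime, fuel included.

```python
# Rough sanity check on the "five cars" comparison.
# 126,000 lbs CO2e per car lifetime is the figure from Strubell et al. (2019).
model_emissions_lbs = 626_155
car_lifetime_lbs = 126_000

cars_worth = model_emissions_lbs / car_lifetime_lbs
print(round(cars_worth, 2))  # -> 4.97, i.e. about five cars
```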

The paper claims to be unique in concluding not only that AI models should be trained on cloud processors in regions where energy is less costly in terms of CO2, but also that those processors should be run at certain times: in other words, taking advantage of existing renewables where they exist, at the times when the local grid is most reliant on them. And by “times” the authors mean both times of year and times of day.

Graph from "Measuring the Carbon Intensity of AI in Cloud Instances"

According to the paper, regional variation in the carbon intensity of running a cloud instance of a single model, BERT (Bidirectional Encoder Representations from Transformers), shows that CO2 emissions can be mitigated simply by being choosy about where to run these computations. But, as the illustration above shows, different regions of the planet mix renewables into their energy grids in different proportions, and with different intensities throughout the year, meaning some heavy CO2-spewing areas can become relatively green for months at a time.
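The spatial half of that idea is simple enough to sketch: given current carbon-intensity numbers for each candidate region, launch the job in the greenest one. The region names and intensity figures below are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of "being choosy about where to run": pick the cloud
# region whose grid currently has the lowest carbon intensity (gCO2/kWh).
# All names and numbers here are made up for the example.
regions = {
    "us-west (hydro-heavy)": 90,
    "us-east (gas-heavy)": 420,
    "eu-north": 150,
}

greenest = min(regions, key=regions.get)
print(greenest)  # -> us-west (hydro-heavy)
```

In practice the intensity numbers would come from a live grid-data feed rather than a hard-coded dictionary, and they would shift with the season, which is exactly the paper’s point.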

Consequently, the paper proposes “a suite of approaches” for dynamically taking advantage of this emissions data: training AIs using cloud instances where emissions are relatively low at any given time, varying the start times of operations, and pausing operations “when the marginal carbon intensity is above a certain threshold.”
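The pause-and-resume idea can be sketched as a toy scheduler: the job runs only during hours when the grid’s marginal carbon intensity sits below a chosen threshold, and idles otherwise. The 24-hour intensity profile, threshold, and function below are all hypothetical, not the authors’ implementation.

```python
# Toy sketch of the paper's pause/resume approach: run a job only during
# hours when marginal carbon intensity (gCO2/kWh) is below a threshold.
# The profile and threshold are invented for illustration.

def schedule(intensity_by_hour, threshold, hours_needed):
    """Return the hour indices during which the job actually runs."""
    run_hours = []
    for hour, intensity in enumerate(intensity_by_hour):
        if len(run_hours) == hours_needed:
            break  # job finished
        if intensity < threshold:  # otherwise, pause and wait
            run_hours.append(hour)
    return run_hours

# Toy profile: low intensity overnight (hydro), high during the day (gas).
profile = [120] * 6 + [450] * 12 + [120] * 6

print(schedule(profile, threshold=200, hours_needed=8))
# -> [0, 1, 2, 3, 4, 5, 18, 19]
```

Note the trade-off the paper acknowledges: an 8-hour job that starts at midnight doesn’t finish until hour 19, because it sits out the carbon-heavy daytime hours.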

With AI experiencing some growing pains over the last few years — many of its promises for business remain seemingly unfulfilled, even as more students grow interested in the subject — it’s hard to picture anyone seeing now as a convenient time to pump the brakes on any given project in the name of climate impact mitigation.

But as paper co-author Dodge pointed out on Twitter earlier this month, due to recent heatwaves, “a number of supercomputers in Europe shut down ‘as temperatures started to reach the upper operating threshold for the equipment.’ Climate change and cloud users impact each other!”

Indeed, if cloud processors all melt because of climate change, that’s going to slow the development of AI down even more than the paper’s “suite of approaches.” So it might just be worth a shot.