(Open)AI needs enormous amounts of energy and compute hardware. Meeting these needs would lead to a huge increase in CO₂ emissions. The only way to avoid catastrophic warming is to drastically reduce CO₂ emissions. In other words, the planned growth of (Open)AI is entirely unsustainable.
Others have written about the energy needed to provide AI services. Here I focus on the impact of producing the computer chips needed for this.
The urgent need to reduce emissions
To reiterate, according to the UN Emissions Gap Report 2023, the world must cut global greenhouse gas emissions to 20 gigatons CO₂-equivalent per year (GtCO₂e/y) by 2040, from the current level of 60 GtCO₂e/y, to avoid catastrophic global warming, where “catastrophic” is meant quite literally: there will be a huge increase in the frequency and severity of natural catastrophes if we don’t do this. Large parts of the earth will become unsuitable for habitation and agriculture.
To arrive at a sustainable level of emissions by 2040, global CO₂ emissions should be reduced by close to 20% per year. However, emissions are currently still rising at 1% to 2% per year.
The Emissions Gap Report explains in detail why renewables, carbon dioxide removal and carbon offsetting alone will not be sufficient to meet the targets.
The growth of AI is unsustainable
Many experts have pointed out that the energy required to provide AI services is huge, and that this in itself means the steep growth of AI is unsustainable. Apart from my own estimates of the energy needs and resulting CO₂ emissions of AI, there have been many other recent articles, for example by Kate Crawford in Nature, Kate Saenko in The Conversation, Alex de Vries in the specialist journal Joule, or this recent article in The Guardian.
In the rest of this article I focus on OpenAI as the main proponent of this unsustainable growth. There are many other such AI companies (e.g. Anthropic), but OpenAI has been most clear in their messaging about their needs for energy and computer chips.
(Open)AI needs a lot more energy in the world
OpenAI is probably the best known AI company. It is responsible for AI products such as DALL·E, ChatGPT and Sora. OpenAI is a private company, owned 49% by Microsoft, 49% by all other investors, and 2% by the OpenAI non-profit foundation. So for practical purposes, it is a Microsoft-controlled company, the same Microsoft that claims it will be “carbon negative by 2030”.
In an interview with Bloomberg at the WEF in Davos in January 2024, the CEO of OpenAI, Sam Altman, made clear how huge the energy needs of this company are, and admitted that this goes contrary to meeting the global climate targets:
Interviewer: Considering the compute costs and the need for chips, does the development of AI in the path to AGI threaten to take us in the opposite direction on the climate?
Altman: Yes, we do need way more energy in the world than I think we thought we needed before. My whole model of the world is that the two important currencies of the future are compute/intelligence and energy. You know, the ideas that we want and the ability to make stuff happen and the ability to, like, run the compute. And I think we still don’t appreciate the energy needs of this technology.
Then Altman goes on to say we need more nuclear and we need fusion, “at massive scale, like a scale that no one is really planning for”.
Interviewer: So I want to just go back to my question in terms of moving in the opposite direction. It sounds like the answer is potentially yes on the demand side, unless we take drastic action on the supply side.
Altman: But there, there is no – I see no way to supply this, to manage the supply side, without a really big breakthrough.
Interviewer: Right. Which is this is, does this frighten you guys? Because you know, the world hasn’t been that versatile when it comes to supply. But AI as you know, as you have pointed out, is not going to take its time until we start generating enough power.
Altman: It motivates us to go invest more in fusion and invest more in new storage. And not only the technology but what it’s going to take to deliver this at the scale that AI needs and that the whole globe needs.
Where can all that energy come from?
There is a question of timescales here. Most companies have planning horizons of 5 to 10 years, but not much beyond that. Building new nuclear power plants takes at least 20 years, and fusion at scale is not even viable in the lab yet, so at best that will also take another 20 years. In the meantime, we desperately need to cut emissions to stop catastrophic warming, and we can only do this by stopping the use of fossil fuels.
What Altman (and therefore Microsoft) is really saying is therefore “keep the fossil fuels” and even “increase fossil fuel electricity generation” because they know that fusion will not be around, and nuclear capacity will not increase substantially, for another several decades. They also know that the growth in renewables is too slow to meet their demands. So for the next decades, the only way to produce the energy they need is by burning more carbon. And two decades more of emissions from fossil fuels is an unmitigated disaster.
But what about energy efficiency?
The energy efficiency of computing is still doubling every 2.6 years. If we assume, very optimistically, that this trend holds until 2040, computers would by then be about 64 times (six doublings) more energy efficient. So in this best-case scenario, we could double compute capacity every 2.6 years without increasing energy consumption.
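As a rough check on this best-case arithmetic, here is a minimal sketch, assuming the 2.6-year doubling time quoted above and 2024 as the starting year (the starting year is my assumption):

```python
# Best-case gain in computing energy efficiency if the historical trend continues.
# Assumptions: 2.6-year doubling time (figure quoted above), 2024 as the starting year.
doubling_time = 2.6                       # years per doubling of energy efficiency
years_to_2040 = 2040 - 2024               # 16 years
doublings = years_to_2040 / doubling_time
gain = 2 ** doublings
print(f"{doublings:.1f} doublings by 2040 -> ~{gain:.0f}x more energy-efficient")
# Prints: 6.2 doublings by 2040 -> ~71x more energy-efficient (same order as the ~64x above)
```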
This means that Altman wants AI to grow even faster than this best-case rate. That is borne out by another OpenAI initiative that was recently in the news. To do all that additional compute, they need a lot more computers, and that requires a dramatic increase in chip manufacturing.
And making chips is one of the human activities that releases huge amounts of greenhouse gases.
(Open)AI also needs tremendous amounts of chips
According to a February 2024 article in the Wall Street Journal, discussed in more detail on CNBC,
OpenAI CEO Sam Altman wants to overhaul the global semiconductor industry with trillions of dollars in investment
Altman has said AI chip limitations hinder OpenAI’s growth, and this project would increase chip-building capacity globally
Altman wants to raise $7,000 billion. For reference, the new 2 nm TSMC fab will cost $34 billion. Seven trillion dollars would be enough to build around two hundred such fabs. According to Z2Data, 16 fabs for nodes < 10 nm are currently being built. So Altman’s plan could increase this more than tenfold, even if some of the money is used for other purposes.
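As a quick sanity check, here is that arithmetic as a short sketch, using only the figures quoted above:

```python
# How many leading-edge fabs could $7 trillion buy, at the quoted cost per fab?
investment = 7_000e9           # $7,000 billion that Altman reportedly wants to raise
cost_per_fab = 34e9            # cost of TSMC's new 2 nm fab
new_fabs = investment / cost_per_fab
fabs_under_construction = 16   # < 10 nm fabs currently being built, according to Z2Data
print(f"~{new_fabs:.0f} new fabs, vs {fabs_under_construction} under construction "
      f"(~{new_fabs / fabs_under_construction:.0f}x as many)")
# Prints: ~206 new fabs, vs 16 under construction (~13x as many)
```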
TSMC says that the combined capacity of their four GigaFabs exceeded 12 million 12-inch wafers in 2023. As these fabs also produce silicon for older nodes (> 28 nm), I have conservatively assumed a capacity of 1.2 million 12-inch wafers per year for a new GigaFab. Using data on the embodied carbon of chip production from a paper by researchers at Harvard University, such a fab would be responsible for 13.6 MtCO₂e/y of embodied carbon in chips.
If there were two hundred such fabs, that would amount to 2.7 GtCO₂e/y.
Considering the planet’s carbon budget by 2040 is 20 GtCO₂e/y, merely making that many chips would take 14% of the global carbon budget; running them could take as much again. So if this estimate is accurate, this plan could see “AI” eating almost 30% of the global carbon budget for 2040.
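The same estimate as a minimal sketch, taking the 13.6 MtCO₂e/y per GigaFab and the 20 GtCO₂e/y budget as given, and assuming, as above, that running the chips emits roughly as much again:

```python
# Embodied carbon of chips from 200 new GigaFab-scale fabs, compared to the 2040 carbon budget.
embodied_per_fab = 13.6e6      # tCO2e/y per GigaFab (Harvard embodied-carbon data, quoted above)
n_fabs = 200                   # roughly what $7 trillion could fund
budget_2040 = 20e9             # tCO2e/y: sustainable global budget by 2040 (UN Emissions Gap Report)

embodied_total = embodied_per_fab * n_fabs    # ~2.7 GtCO2e/y
share_making = embodied_total / budget_2040   # ~14% of the budget just for making the chips
share_with_use = 2 * share_making             # assumption: running the chips emits as much again
print(f"embodied: {embodied_total / 1e9:.1f} GtCO2e/y "
      f"({share_making:.0%} of the 2040 budget; ~{share_with_use:.0%} including use)")
# Prints: embodied: 2.7 GtCO2e/y (14% of the 2040 budget; ~27% including use)
```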
Refining the estimates
(tl;dr: estimates have considerable uncertainty but the overall conclusions don’t change)
It looks doubtful that the production of raw materials, especially rare earths, can meet this kind of demand. From the January 2024 MIT Technology Review article The race to produce rare earth elements:
According to the IEA, demand for rare earths is expected to reach 3 to 7 times current levels by 2040. Delivering on the 2016 Paris Agreement would require the global mineral supply to quadruple within the same time frame. At the current rate, supply is on track to merely double.
The reason the 2016 Paris Agreement requires the global mineral supply to quadruple is mainly the need to electrify the global economy, not chip production. On the other hand, a gigantic investment in the semiconductor industry would most likely increase the rate of production of raw materials. So we can cautiously assume that chip production might still quadruple.
This is not necessarily good news, though. Although it means that we can’t produce ten times more chips, so embodied carbon emissions will grow more slowly, it also means there is less scope for moving to more energy-efficient chips. If a company can’t replace its servers with more energy-efficient ones but still wants to grow its compute capacity, it will need to keep using the previous, less efficient generation. The compute capability of that generation is similar to the next one’s; the main difference is in energy efficiency. The result is that growth in compute capacity means growth in emissions.
The Harvard estimate I used might be on the high side. Imec’s 2020 analysis gives a considerably lower estimate, nearly ten times lower, but their model only counts the water, electricity and greenhouse gases of the manufacturing process itself, not the emissions from mining, producing the precursor materials, packaging etc., so it does not cover the full emissions. Nevertheless, even taking those into account, the Imec-based figure might still be 3 to 4 times lower.
On the other hand, my estimate did not include the older fabs that are still producing chips, nor the 16 fabs currently being built. The emissions for anything < 28 nm are of the same order as those for a 2 nm process. Using the data from Wikipedia’s List of semiconductor fabrication plants, the total production capacity of all current fabs < 28 nm amounts to about 14 GigaFabs. Taking these 30 fabs into account would increase my estimate by about 10% to 12%.
Also, in the above calculation I assumed that the chips produced were CPUs or GPUs, as this is what a GigaFab produces. In practice, every server will also have RAM and SSDs. So let’s assume the two hundred new fabs produce these in equal amounts: instead of producing four CPUs, for every CPU they will produce a GPU, RAM and an SSD. Using data from the 2022 paper by Tannu et al., the contributions of these components to a server’s embodied carbon are 4%, 11%, 9% and 38% respectively. Compared to the CPU, this means the GPU has 2.7x more embodied carbon, the RAM 2.2x and the SSD nearly 10x. So we’d need to revise our estimate upwards by a factor of (1 + 2.7 + 2.2 + 10)/4 ≈ 4x. We also see from these figures that the chips account for only 62% of the total embodied carbon, so a closer estimate for the embodied carbon resulting from the envisaged expansion in semiconductor production capacity might be up to six times higher.
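Here is that correction factor worked out as a short sketch, using the Tannu et al. component shares quoted above:

```python
# Correction factor when every CPU is accompanied by a GPU, RAM and an SSD.
# Component shares of total server embodied carbon, from Tannu et al. (2022), as quoted above.
shares = {"CPU": 0.04, "GPU": 0.11, "RAM": 0.09, "SSD": 0.38}

relative_to_cpu = {k: v / shares["CPU"] for k, v in shares.items()}  # CPU=1, GPU~2.8, RAM~2.2, SSD~9.5
factor_vs_cpu_only = sum(relative_to_cpu.values()) / 4               # (1 + 2.75 + 2.25 + 9.5) / 4 ~ 3.9
chips_share = sum(shares.values())                                   # chips are ~62% of server embodied carbon
factor_full_server = factor_vs_cpu_only / chips_share                # ~6.2, i.e. "up to six times higher"

print({k: round(v, 2) for k, v in relative_to_cpu.items()})
print(f"vs CPU-only estimate: ~{factor_vs_cpu_only:.1f}x; full server: ~{factor_full_server:.1f}x")
# Prints: {'CPU': 1.0, 'GPU': 2.75, 'RAM': 2.25, 'SSD': 9.5}
#         vs CPU-only estimate: ~3.9x; full server: ~6.2x
```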
Using the Harvard and Imec figures as upper and lower bounds, and assuming at least a doubling and at most a tenfold increase in production, the best estimate is an increase in emissions of between 0.7 and 20 GtCO₂e/y. The geometric average is 3.7 GtCO₂e/y, which is in any case a staggeringly high number: about 20% of the world’s 2040 carbon budget, purely from the embodied carbon in the production of the chips.
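A minimal sketch of how those bounds combine into the best estimate, using the 0.7 and 20 GtCO₂e/y bounds and the 20 GtCO₂e/y budget from above:

```python
import math

# Combine the lower and upper bounds on the extra embodied emissions into one best estimate.
low, high = 0.7, 20.0                  # GtCO2e/y: Imec-based lower bound, Harvard-based upper bound
best_estimate = math.sqrt(low * high)  # geometric mean of the two bounds
budget_2040 = 20.0                     # GtCO2e/y: sustainable global budget by 2040

print(f"best estimate: {best_estimate:.1f} GtCO2e/y "
      f"(~{best_estimate / budget_2040:.0%} of the 2040 budget)")
# Prints: best estimate: 3.7 GtCO2e/y (~19% of the 2040 budget)
```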
Conclusion: the world can’t afford this growth in AI
Both the embodied carbon and the emissions from use entailed purely by the needs of OpenAI are huge.
Even with my most optimistic estimate, they would account for close to 10% of the world’s 2040 carbon budget. OpenAI’s plans would make emissions from ICT grow steeply at a time when we simply can’t afford any rise in emissions. This projected growth will make it incredibly hard to reduce global emissions to a sustainable level by 2040.
In the worst case, the embodied emissions of the chips needed for AI compute could already exceed the world’s 2040 carbon budget. Running the computations would make the situation even worse. AI on its own could be responsible for pushing the world into catastrophic warming.