ChatGPT’s Water Usage Is a Fraction of One Farm

Andy Masley ran the numbers on how much water all ChatGPT usage combined would require, factoring in training, inference, power generation, and chip manufacturing. His conclusion: it adds up to roughly the same amount of water used by a modest patch of irrigated farmland. A patch small enough that you could photograph it from a plane and it would not look impressive at all.

The image he posted is exactly that. It looks like a normal flyover shot of some fields. Not a dramatic visualization. Not a massive swath of land. Just a picture of some farmland that would be completely unremarkable if you saw it without context. That is the entire point.

How Masley Built the Estimate

His methodology was a back-of-the-envelope calculation with a few key assumptions. He estimated that inference uses roughly as much water as training did. He also estimated that power generation accounts for about five times the water consumption of the data centers themselves. Chip manufacturing, while often cited as a concern, is marginal compared to the lifetime usage of those chips once they are running in production.

There are two main water streams in any AI infrastructure footprint. The first is on-site cooling at data centers, which use water directly to manage heat. The second is the water consumed upstream during electricity generation, particularly at thermoelectric power plants that use water in their cooling processes. Masley’s estimate that generation accounts for about 5x the data center cooling water is plausible given how water-intensive traditional electricity generation can be, though the exact ratio varies by region and energy mix.
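The ratios above are enough to sketch the breakdown yourself. The snippet below uses an arbitrary unit (on-site training water = 1) and only the ratios Masley describes: inference roughly equal to training, generation about 5x on-site cooling. The chip-manufacturing figure is my own placeholder standing in for "marginal", not a number from the post.

```python
# Back-of-the-envelope breakdown using the ratios from Masley's estimate.
# The unit is arbitrary (on-site training water = 1); only the ratios
# between streams come from the post, not any absolute figure.

TRAINING_ONSITE = 1.0             # on-site cooling water for training (arbitrary unit)
INFERENCE_ONSITE = 1.0            # inference assumed roughly equal to training
ONSITE = TRAINING_ONSITE + INFERENCE_ONSITE

GENERATION = 5.0 * ONSITE         # upstream power-plant water, ~5x on-site cooling
CHIPS = 0.1 * ONSITE              # chip fabrication: placeholder for "marginal"

TOTAL = ONSITE + GENERATION + CHIPS

shares = {
    "on-site cooling": ONSITE / TOTAL,
    "power generation": GENERATION / TOTAL,
    "chip manufacturing": CHIPS / TOTAL,
}
for stream, share in shares.items():
    print(f"{stream}: {share:.0%}")
```

Run it and power generation comes out above 80% of the total, with chips under 2%, which is the shape of the breakdown the next section describes.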

[Figure: relative water use breakdown for ChatGPT, based on Andy Masley's estimate]


Power generation is the dominant factor by a wide margin. Chip manufacturing barely registers. That breakdown matters because most of the discourse around AI water usage focuses on data center cooling, which is actually the smaller part of the picture once you account for the full electricity generation chain.

Where the Per-Query Numbers Stand Now

The estimates that have circulated publicly vary quite a bit depending on methodology and which version of the model is being measured. Earlier figures, from a constantly cited UC Riverside study based on GPT-3 training data, came out to something like 500ml per 5 to 50 prompts. That number has been repeated so often that it has become the default reference point for this entire conversation, even though it reflects a model that is now several generations old.

More recent figures put the number considerably lower per query. Sam Altman cited around 0.3ml per average ChatGPT query. Google disclosed roughly 0.26ml per Gemini query in their 2025 reporting. These figures are not perfectly comparable because they measure different things and use different accounting methods, but the direction is clear: newer models are more efficient, and the per-query water cost has dropped substantially as inference infrastructure has improved.
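The gap between the old and new figures is easy to make concrete. This converts the circulated GPT-3-era range to a per-prompt figure and compares it to Altman's number; since the two measure different things, treat the result as an order-of-magnitude comparison, not a precise one.

```python
# Per-prompt conversion of the figures quoted above. The UC Riverside range
# and the Altman/Google numbers use different accounting, so the implied
# drop is an order-of-magnitude comparison only.

OLD_ML_PER_BATCH = 500.0
OLD_BATCH_RANGE = (5, 50)         # prompts per 500 ml, per the circulated figure

old_high = OLD_ML_PER_BATCH / OLD_BATCH_RANGE[0]   # 100 ml per prompt
old_low = OLD_ML_PER_BATCH / OLD_BATCH_RANGE[1]    # 10 ml per prompt

ALTMAN_ML = 0.3                   # per average ChatGPT query
GEMINI_ML = 0.26                  # per Gemini query, Google 2025 reporting

print(f"GPT-3-era estimate: {old_low:.0f}-{old_high:.0f} ml per prompt")
print(f"Implied drop vs Altman's figure: {old_low / ALTMAN_ML:.0f}x-{old_high / ALTMAN_ML:.0f}x")
```

Even at the low end of the old range, the implied drop is more than an order of magnitude.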

Masley’s approach of looking at total system water use across all ChatGPT queries is more useful than per-query figures for understanding the actual footprint. Aggregating everything, including training, inference at scale, and the full electricity generation chain, and then comparing it to a reference people can visualize gives the number actual meaning in a way that fractions of a milliliter do not.

Why This Framing Keeps Circulating

The water usage angle gets recycled regularly as an AI criticism because it sounds alarming in isolation. Hundreds of millions of liters sounds like a lot until you compare it to literally anything else at scale. Agriculture accounts for roughly 70% of global freshwater withdrawals. A single large farm uses more water than the entire ChatGPT footprint Masley calculated. That is not a defense of careless resource use. It is just a sense of proportion that the alarming headlines consistently skip.

The criticism also tends to freeze the efficiency picture at an earlier point in time. The UC Riverside study that produced the most-cited figures was based on GPT-3 era training and inference. The numbers have moved substantially since then, and they will keep moving. AI inference is getting cheaper and more efficient with each generation, which means less energy and less water per unit of useful output. Citing 2023 figures as though they represent the current state of the technology is not accurate.

Water stress is also a regional issue, not an aggregate one. A data center pulling from an already stressed aquifer in the American Southwest is a different situation from one operating in a water-rich region. That is a legitimate concern worth paying attention to, and it is a more productive framing than aggregate footprint comparisons designed to generate alarm. The aggregate footprint of ChatGPT is not the story those headlines want it to be, and Masley’s image makes that clear in a way that a number alone does not.

The post is worth sharing because it did what most of the discourse around this topic does not do: it anchored the number to something you can actually picture. A small field of crops. That is the water cost of all ChatGPT usage. You can have opinions about whether that trade-off is worth it, but the trade-off is not what the alarming framing suggests it is. Credit to Andy Masley for putting a picture to the number.


Adam Holter

Founder of Ironwood AI. Writing about AI stuff!