Editor’s Note: This article is the next installment of our Symposium on AI and Human Rights, co-hosted with the Promise Institute for Human Rights at UCLA School of Law.

Since ChatGPT launched in 2022, generative AI has gone from the province of computer nerds to part of mainstream discourse. The public conversation lurches between casting generative AI as a silver bullet that will help us cure cancer and colonize Mars, and placing it on par with nuclear weaponry in its potential to destroy humanity. Either way, few are left doubting that generative AI is an existential technology. And that is probably true – but not necessarily for the reasons the public is hearing about.

The computing power required to build generative AI demands quantities of energy and water on a scale that can cause severe and long-term damage to the environment. This makes the development of generative AI a human rights issue: all humans have the right to a healthy environment, a right underscored in normative work by UN Special Rapporteurs, declared by the UN General Assembly, and increasingly affirmed by human rights courts.

If AI development continues on its current trajectory, it will compound the environmental harm we are already doing to our planet, long before it gets the chance to show us whether it can live up to its much-hyped potential. The real question, then, is whether such environmental harms – and the human rights violations likely to come with them – are worth the gamble.

For a public used to seeing AI coverage paired with futuristic imagery, this reframing around AI’s immediate and tangible environmental harm is as jarring as it is urgent. People must be told that the computational horsepower needed to develop generative AI demands quantities of energy akin to those consumed by entire nation-states. People need to be reminded that “the cloud” is nothing more than a network of warehouses filled with millions of square feet of computers, and that keeping those computers cool enough to do the data processing generative AI is predicted to require by 2027 will consume more than four times as much water as Denmark currently uses annually.

Earlier this year, AI researcher Kate Crawford published an article in Nature highlighting the latest research on AI’s environmental impact. She pointed out that, as bad as the picture currently looks, it may well be even worse given the lack of transparency on environmental costs throughout the AI sector. To date, the Biden administration has not attempted oversight in this sphere, and its efforts to understand the environmental impact of bitcoin mining were blocked by a U.S. district court order – a reminder of how difficult it is for regulators to overcome industry claims of proprietary interest.

Researchers like Crawford and tech justice leaders like Timnit Gebru have been doing their utmost to raise the alarm. So why does the burgeoning public discourse on generative AI remain so untethered from the hard facts about its environmental costs?

Part of the answer lies in the tendency to turn to AI industry insiders to narrate fast-paced developments. Whether due to the line of questioning pursued by journalists, the direction in which tech CEOs like Sam Altman steer the conversation, or some symbiotic dance between the two, the resulting reporting features yet-to-happen scenarios of salvation or doom, leaving little space to discuss the environmental threat already underway.

Another piece of the puzzle stems from the fact that the data centers housing AI’s computing power might as well be in an actual cloud, for all that most of the population ever sees of them. Seeking cheap land and securable sites, major tech companies generally locate their data centers away from the public eye. In lieu of a visit, curious outsiders are invited on a company-narrated tour. “Join us as we venture into places that very few people ever see firsthand,” advertises Google’s own podcast.

The public reporting that has emerged from these and other data center locations suggests environmental tensions are already in play; some in local communities are questioning whether the economic benefits of having a data center in their backyard are worth the strain on local water resources. Here, too, human rights inequities are ever-present. As work on “sacrifice zones” highlights, the environmental impacts of building out generative AI are unlikely to be evenly distributed across human populations.

The narrow demographic of those who get to speak about AI in the mainstream media, combined with the inaccessibility of data centers, has enabled a narrative that turns AI’s environmental costs on their head. In January, the World Economic Forum highlighted the “transformative potential” of AI to combat climate change. The United Nations Environment Programme has put out similar coverage.

It is true that there are many ways generative AI could enhance our ability to understand and respond to climate and environmental challenges. But as long as control of generative AI’s development rests primarily in the hands of those grounded in the culture and priorities of Silicon Valley, there is no reason to believe that these tools will be designed around the needs of the most vulnerable populations on the frontlines of climate change. (Indeed, if the history of social media development has taught us anything, it is that those who “move fast and break things” tend to build products embedded with assumptions that rarely hold true for the billions of people who live outside their very narrow segment of the Global North.)

Either way, generative AI’s potential is just that – a future promise that may or may not be realized. On balance, the most likely outcome is that generative AI will bring both enormous gains, especially to those who already enjoy political and economic power, and harmful unintended consequences. In the meantime, what we know for sure is that the effort to deliver on AI’s potential will place escalating demands on an overburdened planet, with impacts felt first by those who are already marginalized – at least until we achieve a clean energy transition for all.

Of course, U.S. tech companies have laudable renewable energy goals and impressive water replenishment targets. But these are undertaken on a voluntary basis. No one should harbor any illusion that, in a conflict between self-imposed environmental targets and the cut-throat competition to lead the field of generative AI, the environment – and the human rights of the already marginalized communities most affected by environmental degradation – will come in anything but second place.

How, then, to rewrite the existing narrative? The media has a responsibility here. The next time a podcast devotes an episode to AI, it could headline an earth scientist as its guest. The slew of articles that will inevitably follow OpenAI’s recent launch of GPT-4o, or its next demonstration of Sora, the text-to-video model it unveiled earlier this year, could each open by reminding readers of the energy and water required to bring these developments to life.

Politicians, too, must lead by example. Members of Congress can use the opportunities they have to question AI industry leaders to extract answers about the environmental costs of generative AI development. And, more holistically, lawmakers can advance legislation to bring public-interest transparency and guardrails to what is presently a self-regulated space. (The Artificial Intelligence Environmental Impacts Act of 2024, introduced by Senator Markey in February, is a step in the right direction.)

Ultimately, all of us can help. When our families or colleagues ask whether generative AI will see robots taking over from humans, we can redirect the question: Will our planet be made more or less habitable for humans by the effort to develop generative AI?

IMAGE: A Data Center Manager walks down the aisle of a Facebook server room. (Photo by JONATHAN NACKSTRAND/AFP via Getty Images)