From automatically generated search overviews to chatbots in spreadsheets, so-called artificial intelligence is increasingly being integrated into our watches, phones, home assistants and other smart devices.
Artificial intelligence in everything is becoming so normal and everyday that it is easy to ignore. But this normalization has dangerous consequences for the environment, the planet and our response to climate change.
The direct environmental costs of artificial intelligence are undeniable. Data centers consume vast amounts of electricity and water, and an AI query consumes far more energy than a typical internet search.
The same companies that develop and promote consumer AI, including Microsoft, Google and Amazon, are also using it to help firms find and extract oil and gas as quickly as possible. But AI’s indirect effects on the environment remain a big blind spot for most people.
Our research identifies these hidden costs and draws attention to how AI is fueling overconsumption and carbon-intensive lifestyles. We also study how the cultural values embedded in widely available AI applications emphasize individualism and commodification, while ignoring or downplaying environmental concerns.
Consumption-based emissions must fall to prevent runaway climate change, so how environmental values are expressed matters. Our research shows that many AI companies do not treat the environmental damage caused by their products as anything worth worrying about.

Artificial intelligence is embedded in the digital tools and platforms that people use in their daily lives. Search engines, social media and online marketplaces all incorporate what they call “artificial intelligence features” into their services.
These features are often enabled by default and difficult to disable or opt out of. Many people do not even know that these functions are switched on, let alone that their ultimate purpose is to encourage purchases from which platform owners profit.
Such a business model does two things at once, generating both financial profit and data to be used as business intelligence. It also means that emissions are generated twice: through the direct use of energy-intensive AI applications, and through the additional emissions encouraged by the content served to users. This is a double blow for the environment.
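To make the two channels concrete, the double counting can be sketched as simple arithmetic. The sketch below is purely illustrative: every number in it is a hypothetical placeholder, not a measurement from our research.

```python
# Illustrative only: a toy accounting of the two emission channels described
# above. All numbers are hypothetical placeholders, not measured values.

# Channel 1: direct emissions from running the AI query itself
direct_g_co2_per_query = 4.0   # hypothetical grams of CO2e per chatbot query

# Channel 2: indirect emissions from the purchases the answers encourage
purchase_g_co2 = 5000.0        # hypothetical footprint of one new garment
conversion_rate = 0.01         # hypothetical share of queries leading to a purchase

indirect_g_co2_per_query = purchase_g_co2 * conversion_rate
total = direct_g_co2_per_query + indirect_g_co2_per_query

print(f"direct: {direct_g_co2_per_query} g, "
      f"indirect: {indirect_g_co2_per_query} g, "
      f"total: {total} g CO2e per query")
```

In this toy setup, even a small conversion rate makes the induced consumption dominate the footprint of the query itself – which is exactly why the indirect channel deserves attention.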
As part of our research into big tech, we prompted Microsoft’s prominent chatbot Copilot with the simple phrase “kids’ clothes”. It produced a list of links to online shops and department stores. Our prompt did not say that we wanted to buy new children’s clothes.
To understand how the chatbot turned our request into a web search, we asked it to explain its decisions. Copilot said it had generated three search terms, all of which refer to buying: “children’s clothing stores”, “best places to buy children’s clothing” and “popular children’s clothing brands”.
Copilot’s answer could just as well have covered common materials and colors, sewing, or altering and buying used children’s clothes. Indeed, Ecosia, a search engine that uses its profits to fund climate action, foregrounds sustainable alternatives and shows options for renting, borrowing and buying second-hand.
Copilot’s AI search, however, focused on buying new clothes – indirectly encouraging overconsumption. The same prompt in OpenAI’s SearchGPT produced nearly identical results, interpreting the user’s intent as that of a buyer. Google’s AI Overviews and the search engine Perplexity gave us much the same.
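For readers who want to probe this behavior themselves, here is a minimal sketch of a similar two-step probe against a chat model API. We ran our tests in the consumer interfaces of Copilot, SearchGPT, Google’s AI Overviews and Perplexity; the OpenAI Python client, model name and follow-up wording below are assumptions for illustration, not the method of our study.

```python
# A minimal sketch of reproducing the probe programmatically (assumptions:
# OpenAI's Python SDK, the gpt-4o-mini model, and our own follow-up wording).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: send the bare phrase, exactly as a user might type it
first = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "kids' clothes"}],
)
print(first.choices[0].message.content)

# Step 2: ask the model to explain how it interpreted the phrase
followup = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "kids' clothes"},
        {"role": "assistant", "content": first.choices[0].message.content},
        {"role": "user", "content": "Explain how you interpreted my request "
                                    "and which search terms you would use."},
    ],
)
print(followup.choices[0].message.content)
```

Comparing the first answer with the model’s own explanation makes visible whether it defaulted to a buying frame, as the chatbots did in our tests.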
No one takes responsibility for these indirect emissions. They are not attributed to children’s clothing manufacturers or to consumers, and they fall outside most mechanisms for attributing, measuring and reporting environmental impacts.
By naming this phenomenon for the first time, we hope to draw more attention to it. We use the term “algorithm-facilitated emissions” – and we believe that platform owners, whose profits depend on connecting producers with consumers and extracting value from their exchanges, should take responsibility for these emissions.
“Acceptable” environmental damage
We can see that most AI developers pay little attention to the environment by analyzing what these companies allow and prohibit. We studied the acceptable use policies for their AI models, which specify the kinds of queries, prompts and activities that users are not allowed to perform with their services. Very few of these policies mention the environment or nature – and when they do, the treatment is usually superficial.
For example, animals were mentioned in only a sixth of the 30 use policies we reviewed. When they are included, animals are framed in relation to humans, not as species in need of protection or as valuable parts of ecosystems.
The policies repeatedly identify false information as unacceptable, but this concern is anthropocentric: environmental misinformation goes unmentioned, and the environment barely features at all. Contributing to climate change or other environmental harm is not identified as a risk to be avoided.
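A crude version of this kind of policy scan can be expressed in a few lines of code. The sketch below simply counts which policy files mention environment-related terms; the folder name and term list are illustrative assumptions, and our actual analysis relied on qualitative reading rather than keyword matching.

```python
# A rough sketch of scanning acceptable use policies for environmental terms.
# Assumptions: one plain-text file per policy in a "policies" folder, and an
# illustrative (not exhaustive) list of environment-related search terms.
from pathlib import Path

ENVIRONMENT_TERMS = ["environment", "climate", "nature", "animal",
                     "ecosystem", "emission", "sustainab"]

def mentions_environment(text: str) -> bool:
    """True if any environment-related term appears in the policy text."""
    lowered = text.lower()
    return any(term in lowered for term in ENVIRONMENT_TERMS)

policies = sorted(Path("policies").glob("*.txt"))  # one file per policy
hits = [p.name for p in policies
        if mentions_environment(p.read_text(encoding="utf-8"))]

print(f"{len(hits)} of {len(policies)} policies mention the environment:")
for name in hits:
    print(" -", name)
```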
About the authors
Jutta Haider is Professor of Information Studies and Björn Ekström is a lecturer in Information Studies, both at the Swedish School of Library and Information Science, University of Borås. James White is a postdoctoral researcher in sociology and digital technology at Lund University. This article from The Conversation is republished under a Creative Commons license. Read the original article.
Technology companies, policymakers, governments, and business organizations must recognize that the continued growth of artificial intelligence has systemic consequences that harm the environment. These include direct effects of energy and resource use, plus indirect effects related to consumption-focused lifestyles and social norms that ignore the environment.
But the normalization of AI in everything helps to bury these consequences – just when environmental awareness is most needed and pressure on governing bodies to enact climate-promoting policies must be maintained.
A new vocabulary can help us see, discuss and measure these dynamics. Platforms that connect producers and consumers play an important role in deciding what is produced and consumed – yet the way we, as a society, typically think about consumption does not account for this. New terms, such as algorithm-facilitated emissions, can help people rethink and redesign our information infrastructure.
If artificial intelligence can be built to drive up consumption, the opposite is also possible: AI could be built to promote environmental values and reduce consumption instead.

