
The Conversation

When scientific citations go rogue: Uncovering ‘sneaked references’

Published on theconversation.com – Lonni Besançon, Assistant Professor in Data Visualization, Linköping University – 2024-07-09 07:22:58

Science is a process of collaboration that depends on accurate citations.

AlexRaths/iStock via Getty Images

Lonni Besançon, Linköping University and Guillaume Cabanac, Institut de Recherche en Informatique de Toulouse

A researcher working alone – cut off from the world and the wider scientific community – is a classic yet misguided image. Research is, in reality, built on continuous exchange within the scientific community: First you understand the work of others, and then you share your findings.

Reading and writing articles published in academic journals and presented at conferences is a central part of being a researcher. When researchers write a scholarly article, they must cite the work of peers to provide context, detail sources of inspiration and explain differences in approaches and results. A positive citation by other researchers is a key measure of visibility for a researcher’s own work.

But what happens when this citation system is manipulated? A recent Journal of the Association for Information Science and Technology article by our team of academic sleuths – which includes information scientists, a computer scientist and a mathematician – has revealed an insidious method to artificially inflate citation counts through metadata manipulations: sneaked references.

Hidden manipulation

People are becoming more aware of scientific publications and how they work, including their potential flaws. Just last year more than 10,000 scientific articles were retracted. The issues around citation gaming and the harm it causes the scientific community, including damaging its credibility, are well documented.

Citations of scientific work abide by a standardized referencing system: Each reference explicitly mentions at least the title, authors’ names, publication year, journal or conference name, and page numbers of the cited publication. These details are stored as metadata, not visible in the article’s text directly, but assigned to a digital object identifier, or DOI – a unique identifier for each scientific publication.
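
For a rough sense of what this metadata looks like, here is a sketch of a single deposited reference record, written as a Python dictionary. The field names follow Crossref’s reference schema; the values are hypothetical.

    deposited_reference = {
        "key": "ref12",                        # publisher-assigned key for this reference
        "DOI": "10.1234/example.2020.001",     # DOI of the cited work (hypothetical)
        "article-title": "An example cited article",
        "author": "Doe",                       # first author's family name
        "journal-title": "Journal of Examples",
        "year": "2020",
        "first-page": "101",
    }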

References in a scientific publication allow authors to justify methodological choices or present the results of past studies, highlighting the iterative and collaborative nature of science.

However, we found through a chance encounter that some unscrupulous actors have added extra references, invisible in the text but present in the articles’ metadata, when they submitted the articles to scientific databases. The result? Citation counts for certain researchers or journals have skyrocketed, even though these references were not cited by the authors in their articles.

Chance discovery

The investigation began when Guillaume Cabanac, a professor at the University of Toulouse, wrote a post on PubPeer, a website dedicated to postpublication peer review, in which scientists discuss and analyze publications. In the post, he detailed how he had noticed an inconsistency: a Hindawi journal article that he suspected was fraudulent because it contained awkward phrases had far more citations than downloads, which is very unusual.

The post caught the attention of several sleuths who are now the authors of the JASIST article. We used a scientific search engine to look for articles citing the initial article. Google Scholar found none, but Crossref and Dimensions did find references. The difference? Google Scholar is likely to mostly rely on the article’s main text to extract the references appearing in the bibliography section, whereas Crossref and Dimensions use metadata provided by publishers.

A new type of fraud

To understand the extent of the manipulation, we examined three scientific journals that were published by the Technoscience Academy, the publisher responsible for the articles that contained questionable citations.

Our investigation consisted of three steps; a brief sketch of the comparison in code follows the list:

  1. We listed the references explicitly present in the HTML or PDF versions of an article.

  2. We compared these lists with the metadata recorded by Crossref, discovering extra references added in the metadata but not appearing in the articles.

  3. We checked Dimensions, a bibliometric platform that uses Crossref as a metadata source, finding further inconsistencies.
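
To make steps 1 and 2 concrete, here is a minimal Python sketch of the core comparison, built on Crossref’s public REST API. The DOI and the set of visible references are hypothetical placeholders; in practice the visible list first has to be extracted from the article’s HTML or PDF bibliography.

    import requests

    def crossref_references(doi):
        # Fetch the reference list the publisher deposited in Crossref metadata.
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
        resp.raise_for_status()
        return resp.json()["message"].get("reference", [])

    def find_sneaked(doi, visible_dois):
        # DOIs present in the metadata but absent from the article's visible bibliography.
        metadata_dois = {r["DOI"].lower() for r in crossref_references(doi) if "DOI" in r}
        return metadata_dois - {d.lower() for d in visible_dois}

    # Hypothetical example: compare a paper's metadata against its printed bibliography.
    # sneaked = find_sneaked("10.1234/example.doi", {"10.1000/ref1", "10.1000/ref2"})
    # print(len(sneaked), "references appear only in the metadata")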

In the journals published by Technoscience Academy, at least 9% of recorded references were “sneaked references.” These additional references were only in the metadata, distorting citation counts and giving certain authors an unfair advantage. Some legitimate references were also lost, meaning they were not present in the metadata.

In addition, when analyzing the sneaked references, we found that they highly benefited some researchers. For example, a single researcher who was associated with Technoscience Academy benefited from more than 3,000 additional illegitimate citations. Some journals from the same publisher benefited from a couple hundred additional sneaked citations.

We wanted our results to be externally validated, so we posted our study as a preprint, informed both Crossref and Dimensions of our findings and gave them a link to the preprinted investigation. Dimensions acknowledged the illegitimate citations and confirmed that their database reflects Crossref’s data. Crossref also confirmed the extra references in Retraction Watch and highlighted that this was the first time that it had been notified of such a problem in its database. The publisher, based on Crossref’s investigation, has taken action to fix the problem.

Implications and potential solutions

Why is this discovery important? Citation counts heavily influence research funding, academic promotions and institutional rankings. Manipulating citations can lead to unjust decisions based on false data. More worryingly, this discovery raises questions about the integrity of scientific impact measurement systems, a concern that has been highlighted by researchers for years. These systems can be manipulated to foster unhealthy competition among researchers, tempting them to take shortcuts to publish faster or achieve more citations.

To combat this practice we suggest several measures:

  • Rigorous verification of metadata by publishers and agencies like Crossref.

  • Independent audits to ensure data reliability.

  • Increased transparency in managing references and citations.

This study is the first, to our knowledge, to report a manipulation of metadata. It also discusses the impact this may have on the evaluation of researchers. The study highlights, yet again, that the overreliance on metrics to evaluate researchers, their work and their impact may be inherently flawed and wrong.

Such overreliance is likely to promote questionable research practices, including hypothesizing after the results are known, or HARKing; splitting a single set of data into several papers, known as salami slicing; data manipulation; and plagiarism. It also hinders the transparency that is key to more robust and efficient research. Although the problematic citation metadata and sneaked references have now been apparently fixed, the corrections may have, as is often the case with scientific corrections, happened too late.

This article is published in collaboration with Binaire, a blog for understanding digital issues.

This article was originally published in French.

Lonni Besançon, Assistant Professor in Data Visualization, Linköping University and Guillaume Cabanac, Professeur des universités, Institut de Recherche en Informatique de Toulouse

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

What is AI slop? A technologist explains this new and largely unwelcome form of online content

Published on theconversation.com – Adam Nemeroff, Assistant Provost for Innovations in Learning, Teaching, and Technology, Quinnipiac University – 2025-09-02 07:33:00


AI slop refers to low- to mid-quality content—images, videos, audio, text—generated quickly and cheaply by AI tools, often with little regard for accuracy. It floods social media and platforms like YouTube, Spotify, and Wikipedia, displacing higher-quality, human-created content. Examples include AI-generated bands, viral images, and videos that exploit internet attention economies for profit. AI slop harms artists by reducing job opportunities, and it spreads misinformation, as seen during Hurricane Helene when fake images were used politically. Platforms struggle to moderate this content, threatening information reliability. Users can report or flag harmful AI slop, but it increasingly degrades the online media environment.

This AI-generated image spread far and wide in the wake of Hurricane Helene in 2024.
AI-generated image circulated on social media

Adam Nemeroff, Quinnipiac University

You’ve probably encountered images in your social media feeds that look like a cross between photographs and computer-generated graphics. Some are fantastical – think Shrimp Jesus – and some are believable at a quick glance – remember the little girl clutching a puppy in a boat during a flood?

These are examples of AI slop, low- to mid-quality content – video, images, audio, text or a mix – created with AI tools, often with little regard for accuracy. It’s fast, easy and inexpensive to make this content. AI slop producers typically place it on social media to exploit the economics of attention on the internet, displacing higher-quality material that could be more helpful.

AI slop has been increasing over the past few years. As the term “slop” indicates, that’s generally not good for people using the internet.

AI slop’s many forms

The Guardian published an analysis in July 2025 examining how AI slop is taking over YouTube’s fastest-growing channels. The journalists found that nine out of the top 100 fastest-growing channels feature AI-generated content like zombie football and cat soap operas.

This song, allegedly recorded by a band called The Velvet Sundown, was AI-generated.

Listening to Spotify? Be skeptical of that new band, The Velvet Sundown, that appeared on the streaming service with a creative backstory and derivative tracks. It’s AI-generated.

In many cases, people submit AI slop that’s just good enough to attract and keep users’ attention, allowing the submitter to profit from platforms that monetize streaming and view-based content.

The ease of generating content with AI enables people to submit low-quality articles to publications. Clarkesworld, an online science fiction magazine that accepts user submissions and pays contributors, stopped taking new submissions in 2024 because of the flood of AI-generated writing it was getting.

These aren’t the only places where this happens — even Wikipedia is dealing with AI-generated low-quality content that strains its entire community moderation system. If the organization is not successful in removing it, a key information resource people depend on is at risk.

This episode of ‘Last Week Tonight with John Oliver’ delves into AI slop. (NSFW)

Harms of AI slop

AI-driven slop is making its way upstream into people’s media diets as well. During Hurricane Helene, opponents of President Joe Biden cited AI-generated images of a displaced child clutching a puppy as evidence of the administration’s purported mishandling of the disaster response. Even when it’s apparent that content is AI-generated, it can still be used to spread misinformation by fooling some people who briefly glance at it.

AI slop also harms artists by causing job and financial losses and crowding out content made by real creators. The algorithms that drive social media consumption often don’t distinguish this lower-quality AI-generated content from other material, and it can displace entire classes of creators who previously made their livelihood from online content.

Wherever the option is enabled, you can flag content that’s problematic, and on some platforms you can add community notes to provide context. For content that’s outright harmful, you can report it.

Along with forcing us to be on guard for deepfakes and “inauthentic” social media accounts, AI is now leading to piles of dreck degrading our media environment. At least there’s a catchy name for it.

Adam Nemeroff, Assistant Provost for Innovations in Learning, Teaching, and Technology, Quinnipiac University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hope that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Centrist

The content presents a balanced and factual discussion about the rise of low-quality AI-generated content (“AI slop”) and its impacts on media, misinformation, and creators. It references examples involving both political figures and general media platforms without taking a partisan stance or promoting a specific political agenda. The focus is on the technological and social implications rather than ideological viewpoints, resulting in a centrist perspective.


The Conversation

Adding more green space to a campus is a simple, cheap and healthy way to help millions of stressed and depressed college students

Published on theconversation.com – Chanam Lee, Professor of Landscape Architecture and Urban Planning, Texas A&M University – 2025-09-02 07:32:00


College students face significant stress from academics, social pressures, and finances, contributing to rising anxiety, depression, and suicide rates. The 2024 National College Health Assessment found 30% of students report anxiety harming academics, with 20% at risk of severe distress. While counseling services have expanded, creating healthier campus environments by increasing green spaces offers another solution. Research, including a Texas A&M study, shows access to greenery, nature views, and walkable paths reduces stress, improves mood, and fosters belonging. Outdoor areas like Aggie Park provide mental health benefits and encourage physical activity, which lowers anxiety and depression. Smaller schools and those with religious affiliations also report better student mental health. Enhancing campus green spaces is a cost-effective way to support student well-being and academic success.

Green space at schools can benefit generations of students.
AzmanL/E+ via Getty Images

Chanam Lee, Texas A&M University; Li Deng, Texas A&M University, and Yizhen Ding, Texas A&M University

Stress on college students can be palpable, and it hits them from every direction: academic challenges, social pressures and financial burdens, all intermingled with their first taste of independence. It’s part of the reason why anxiety and depression are common among the 19 million students now enrolled in U.S. colleges and universities, and why incidents of suicide and suicidal ideation are rising.

In the 2024 National College Health Assessment Report, 30% of the 30,000 students surveyed said anxiety negatively affected their academic performance, with 20% at risk for symptoms that suggest severe psychological distress, such as feelings of sadness, nervousness and hopelessness. No wonder the demand for mental health services has been increasing for about a decade.

Many schools have rightfully responded to this demand by offering students more counseling. That is important, of course, but there’s another approach that could help alleviate the need for counseling: Creating a campus environment that promotes health. Simply put, add more green space.

We are scholars who study the impact that the natural environment has on students, particularly in the place where they spend much of their time – the college campus. Decades of research show that access to green spaces can lower stress and foster a stronger sense of belonging – benefits that are particularly critical for students navigating the pressures of higher education.

Making campuses green

In 2020, our research team at Texas A&M University launched a Green Campus Initiative to promote a healthier campus environment. Our goal was to find ways to design, plan and manage such an environment by developing evidence-based strategies.

Our survey of more than 400 Texas A&M students showed that abundant greenery, nature views and quality walking paths can help with mental health issues.

More than 80% of the students we surveyed said they already have their favorite outdoor places on campus. One of them is Aggie Park, 20 acres of green space with exercise trails, walking and bike paths and rocking chairs by a lake. Many students noted that such green spaces are a break from daily routines, a positive distraction from negative thoughts and a place to exercise.

Our survey confirms other research that shows students who spend time outdoors – particularly in places with mature trees, open fields, parks, gardens and water – report better moods and lower stress. More students are physically active when on a campus with good walkability and plenty of sidewalks, trails and paths. Just the physical activity itself is linked to many mental health benefits, including reduced anxiety and depression.

Outdoor seating, whether rocking chairs or park benches, also has numerous benefits. More time spent talking to others is one of them, but what might be surprising is that enhanced reading performance is another. More trees and plants mean more shaded areas, particularly during hot summers, and that too encourages students to spend more time outside and be active.

Aggie Park, a designated green space on the campus of Texas A&M University, opened in September 2022.
Texas A&M University

Less anxiety, better academic performance

In short, the surrounding environment matters, but not just for college students or those living or working on a campus. Across different groups and settings, research shows that being near green spaces reduces stress, anxiety and depression.

Even a garden or tree-lined street helps.

In Philadelphia, researchers transformed 110 vacant lot clusters into green spaces. That led to improvements in mental health for residents living nearby. Those using the green spaces reported lower levels of stress and anxiety, but just viewing nature from a window was helpful too.

Our colleagues found similar results in a randomized trial with high school students who took a test before and after break periods in classrooms with different window views: no window, a window facing a building or parking lot, or a window overlooking green landscapes. Students with views of greenery recovered faster from mental fatigue and performed significantly better on attention tasks.

It’s still unclear exactly why green spaces are good places to go when experiencing stress and anxiety; nevertheless, it is clear that spending time in nature is beneficial for mental well-being.

Small can be better

It’s critical to note that enhancing your surroundings isn’t just about green space. Other factors play a role. After analyzing data from 13 U.S. universities, our research shows that school size, locale, region and religious affiliation are all significant predictors of student mental health.

Specifically, we found that students at schools with smaller populations, schools in smaller communities, schools in the southern U.S. or schools with religious affiliations generally had better mental health than students at other schools. Those students had less stress, anxiety and depression, and a lower risk of suicide when compared with peers at larger universities with more than 5,000 students, schools in urban areas, institutions in the Midwest and West or those without religious ties.

No one can change their genes or demographics, but an environment can always be modified – and for the better. For a relatively cheap investment, more green space at a school offers long-term benefits to generations of students. After all, a campus is more than just buildings. No doubt, the learning that takes place inside them educates the mind. But what’s on the outside, research shows, nurtures the soul.

Chanam Lee, Professor of Landscape Architecture and Urban Planning, Texas A&M University; Li Deng, Ph.D Candidate in Landscape Architecture & Urban Planning, Texas A&M University, and Yizhen Ding, Ph.D. Candidate in Landscape Architecture & Urban Planning, Texas A&M University

This article is republished from The Conversation under a Creative Commons license. Read the original article.





Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hope that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Centrist

The content focuses on mental health challenges faced by college students and advocates for increasing green spaces on campuses as a way to improve well-being. It relies on scientific research and evidence-based findings without promoting any particular political ideology or partisan agenda. The discussion is centered on public health and environmental design, topics that generally transcend traditional political divides, resulting in a neutral, centrist perspective.


The Conversation

AI has a hidden water cost − here’s how to calculate yours

Published on theconversation.com – Leo S. Lo, Dean of Libraries; Advisor to the Provost for AI Literacy; Professor of Education, University of Virginia – 2025-09-01 07:35:00


Artificial intelligence systems consume significant water—up to 500 milliliters per short interaction—primarily for cooling data center servers and generating electricity. Water use varies greatly by location and climate; for example, dry, hot areas rely heavily on evaporative cooling, which consumes more water. Innovations like immersion cooling and Microsoft’s zero-water cooling design promise to reduce consumption but aren’t yet widespread. AI’s water footprint also depends on the model’s complexity, with newer models like GPT-5 using considerably more water than efficient ones. Despite large aggregate usage, AI’s water consumption remains small compared to everyday activities like lawn watering. Transparency and efficiency improvements are crucial for balancing innovation with sustainability.

How many AI queries does it take to use up a regular plastic water bottle’s worth of water?
kieferpix/iStock/Getty Images Plus

Leo S. Lo, University of Virginia

Artificial intelligence systems are thirsty, consuming as much as 500 milliliters of water – a single-serving water bottle – for each short conversation a user has with the GPT-3 version of OpenAI’s ChatGPT system. They use roughly the same amount of water to draft a 100-word email message.

That figure includes the water used to cool the data center’s servers and the water consumed at the power plants generating the electricity to run them.

But the study that calculated those estimates also pointed out that AI systems’ water usage can vary widely, depending on where and when the computer answering the query is running.

To me, as an academic librarian and professor of education, understanding AI is not just about knowing how to write prompts. It also involves understanding the infrastructure, the trade-offs, and the civic choices that surround AI.

Many people assume AI is inherently harmful, especially given headlines calling out its vast energy and water footprint. Those effects are real, but they’re only part of the story.

When people move from seeing AI as simply a resource drain to understanding its actual footprint, where the effects come from, how they vary, and what can be done to reduce them, they are far better equipped to make choices that balance innovation with sustainability.

2 hidden streams

Behind every AI query are two streams of water use.

The first is on-site cooling of servers that generate enormous amounts of heat. This often uses evaporative cooling towers – giant misters that spray water over hot pipes or open basins. The evaporation carries away heat, but that water is removed from the local water supply, such as a river, a reservoir or an aquifer. Other cooling systems may use less water but more electricity.

The second stream is used by the power plants generating the electricity to power the data center. Coal, gas and nuclear plants use large volumes of water for steam cycles and cooling.

Hydropower also uses up significant amounts of water, which evaporates from reservoirs. Concentrated solar plants, which run more like traditional steam power stations, can be water-intensive if they rely on wet cooling.

By contrast, wind turbines and solar panels use almost no water once built, aside from occasional cleaning.

Cooling towers, like these at a power plant in Florida, use water evaporation to lower the temperature of equipment.
Paul Hennessy/SOPA Images/LightRocket via Getty Images

Climate and timing matter

Water use shifts dramatically with location. A data center in cool, humid Ireland can often rely on outside air or chillers and run for months with minimal water use. By contrast, a data center in Arizona in July may depend heavily on evaporative cooling. Hot, dry air makes that method highly effective, but it also consumes large volumes of water, since evaporation is the mechanism that removes heat.

Timing matters too. A University of Massachusetts Amherst study found that a data center might use only half as much water in winter as in summer. And at midday during a heat wave, cooling systems work overtime. At night, demand is lower.

Newer approaches offer promising alternatives. For instance, immersion cooling submerges servers in fluids that don’t conduct electricity, such as synthetic oils, reducing water evaporation almost entirely.

And a new design from Microsoft claims to use zero water for cooling, by circulating a special liquid through sealed pipes directly across computer chips. The liquid absorbs heat and then releases it through a closed-loop system without needing any evaporation. The data centers would still use some potable water for restrooms and other staff facilities, but cooling itself would no longer draw from local water supplies.

These solutions are not yet mainstream, however, mainly because of cost, maintenance complexity and the difficulty of converting existing data centers to new systems. Most operators rely on evaporative systems.

A simple skill you can use

The type of AI model being queried matters, too. That’s because of the different levels of complexity and the hardware and amount of processor power they require. Some models may use far more resources than others. For example, one study found that certain models can consume over 70 times more energy and water than ultra‑efficient ones.

You can estimate AI’s water footprint yourself in just three steps, with no advanced math required.

Step 1 – Look for credible research or official disclosures. Independent analyses estimate that a medium-length GPT-5 response, which is about 150 to 200 words of output, or roughly 200 to 300 tokens, uses about 19.3 watt-hours. A response of similar length from GPT-4o uses about 1.75 watt-hours.

Step 2 – Use a practical estimate for the amount of water per unit of electricity, combining the usage for cooling and for power.

Independent researchers and industry reports suggest that a reasonable range today is about 1.3 to 2.0 milliliters per watt-hour. The lower end reflects efficient facilities that use modern cooling and cleaner grids. The higher end represents more typical sites.

Step 3 – Now it’s time to put the pieces together. Take the energy number you found in Step 1 and multiply it by the water factor from Step 2. That gives you the water footprint of a single AI response.

Here’s the one-line formula you’ll need:

Energy per prompt (watt-hours) × Water factor (milliliters per watt-hour) = Water per prompt (in milliliters)

For a medium-length query to GPT-5, that calculation uses 19.3 watt-hours and 2 milliliters per watt-hour: 19.3 x 2 ≈ 39 milliliters of water per response.

For a medium-length query to GPT-4o, the calculation is 1.75 watt-hours x 2 milliliters per watt-hour = 3.5 milliliters of water per response.

If you assume the data centers are more efficient, and use 1.3 milliliters per watt-hour, the numbers drop: about 25 milliliters for GPT-5 and 2.3 milliliters for GPT-4o.
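
Expressed as code, the three steps collapse into a single multiplication. Below is a minimal sketch using the approximate per-prompt energy figures and water factors cited above; these are estimates, not measured values.

    def water_per_prompt_ml(energy_wh, water_ml_per_wh):
        # Step 3: energy per prompt (Wh) x water factor (mL/Wh) = water per prompt (mL).
        return energy_wh * water_ml_per_wh

    energy_wh = {"GPT-5, medium prompt": 19.3, "GPT-4o, medium prompt": 1.75}  # Step 1
    water_factor = {"typical site": 2.0, "efficient site": 1.3}                # Step 2

    for model, wh in energy_wh.items():
        for site, factor in water_factor.items():
            print(f"{model}, {site}: {water_per_prompt_ml(wh, factor):.1f} mL per response")
    # Prints roughly 38.6 and 25.1 mL for GPT-5, and 3.5 and 2.3 mL for GPT-4o.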

A recent Google technical report said a median text prompt to its Gemini system uses just 0.24 watt-hours of electricity and about 0.26 milliliters of water – roughly the volume of five drops. However, the report does not say how long that prompt is, so it can’t be compared directly with GPT water usage.

Those different estimates – ranging from 0.26 milliliters to 39 milliliters – demonstrate how much efficiency, AI model and power-generation infrastructure all matter.

Comparisons can add context

To truly understand how much water these queries use, it can be helpful to compare them to other familiar water uses.

Multiplied across billions of queries a day, AI’s water use adds up. OpenAI reports about 2.5 billion prompts per day. That figure includes queries to its GPT-4o, GPT-4 Turbo, GPT-3.5 and GPT-5 systems, with no public breakdown of how many queries are issued to each particular model.

Using independent estimates and Google’s official reporting gives a sense of the possible range; the arithmetic behind these figures is sketched after the list:

  • All Google Gemini median prompts: about 650,000 liters per day.
  • All GPT 4o medium prompts: about 8.8 million liters per day.
  • All GPT 5 medium prompts: about 97.5 million liters per day.
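
The bullet figures above follow from multiplying each per-prompt estimate by OpenAI’s reported 2.5 billion daily prompts. Applying that same total to every model is a simplifying assumption, since no model-by-model breakdown is public. A minimal sketch of the arithmetic:

    prompts_per_day = 2.5e9  # OpenAI's reported daily prompt volume
    per_prompt_ml = {"Gemini median prompt": 0.26,
                     "GPT-4o medium prompt": 3.5,
                     "GPT-5 medium prompt": 39.0}

    for model, ml in per_prompt_ml.items():
        liters_per_day = ml * prompts_per_day / 1000  # 1,000 milliliters per liter
        print(f"{model}: about {liters_per_day:,.0f} liters per day")
    # Roughly 650,000 L (Gemini), 8.75 million L (GPT-4o) and 97.5 million L (GPT-5).
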
Americans use lots of water to keep gardens and lawns looking fresh.
James Carbone/Newsday RM via Getty Images

For comparison, Americans use about 34 billion liters per day watering residential lawns and gardens. One liter is about one-quarter of a gallon.

Generative AI does use water, but – at least for now – its daily totals are small compared with other common uses such as lawns, showers and laundry.

But its water demand is not fixed. Google’s disclosure shows what is possible when systems are optimized, with specialized chips, efficient cooling and smart workload management. Recycling water and locating data centers in cooler, wetter regions can help, too.

Transparency matters, as well: When companies release their data, the public, policymakers and researchers can see what is achievable and compare providers fairly.

Leo S. Lo, Dean of Libraries; Advisor to the Provost for AI Literacy; Professor of Education, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.





Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hope that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Centrist

The content presents a balanced and fact-based analysis of the environmental impact of AI, specifically focusing on water usage. It relies on scientific studies, industry reports, and expert opinions without promoting a particular political agenda. The article acknowledges concerns about resource consumption while also highlighting technological innovations and practical solutions, aiming to inform readers rather than persuade them toward a partisan viewpoint. This neutral and informative approach aligns with a centrist perspective.

