
The Conversation

One large Milky Way galaxy or many galaxies? 100 years ago, a young Edwin Hubble settled astronomy’s ‘Great Debate’

Published on theconversation.com – Chris Impey, University Distinguished Professor of Astronomy, University of Arizona – 2025-01-24 07:41:00

The Andromeda galaxy helped Edwin Hubble settle a great debate in astronomy.
Stocktrek Images via Getty Images

Chris Impey, University of Arizona

A hundred years ago, astronomer Edwin Hubble dramatically expanded the size of the known universe. At a meeting of the American Astronomical Society in January 1925, a paper read by one of his colleagues on his behalf reported that the Andromeda nebula, also called M31, was nearly a million light years away – too remote to be a part of the Milky Way.

Hubble’s work opened the door to the study of the universe beyond our galaxy. In the century since Hubble’s pioneering work, astronomers like me have learned that the universe is vast and contains trillions of galaxies.

Nature of the nebulae

In 1610, astronomer Galileo Galilei used the newly invented telescope to show that the Milky Way was composed of a huge number of faint stars. For the next 300 years, astronomers assumed that the Milky Way was the entire universe.

As astronomers scanned the night sky with larger telescopes, they were intrigued by fuzzy patches of light called nebulae. Toward the end of the 18th century, astronomer William Herschel used star counts to map out the Milky Way. He cataloged a thousand new nebulae and clusters of stars. He believed that the nebulae were objects within the Milky Way.

Charles Messier also produced a catalog of over 100 prominent nebulae in 1781. Messier was interested in comets, so his list was a set of fuzzy objects that might be mistaken for comets. He intended for comet hunters to avoid them since they did not move across the sky.

As more data piled up, 19th century astronomers started to see that the nebulae were a mixed bag. Some were gaseous, star-forming regions, such as the Orion nebula, or M42 – the 42nd object in Messier’s catalog – while others were star clusters such as the Pleiades, or M45.

A third category – nebulae with spiral structure – particularly intrigued astronomers. The Andromeda nebula, M31, was a prominent example. It’s visible to the naked eye from a dark site.

The Andromeda galaxy, then known as the Andromeda nebula, is a bright spot in the sky that intrigued early astronomers.

Astronomers as far back as the mid-18th century had speculated that some nebulae might be remote systems of stars or “island universes,” but there was no data to support this hypothesis. Island universes referred to the idea that there could be enormous stellar systems outside the Milky Way – but astronomers now just call these systems galaxies.

In 1920, astronomers Harlow Shapley and Heber Curtis held a Great Debate. Shapley argued that the spiral nebulae were small and in the Milky Way, while Curtis took a more radical position that they were independent galaxies, extremely large and distant.

At the time, the debate was inconclusive. Astronomers now know that galaxies are isolated systems of stars, much smaller than the space between them.

Hubble makes his mark

Edwin Hubble was young and ambitious. At the age of 30, he arrived at Mount Wilson Observatory in Southern California just in time to use the new Hooker 100-inch telescope, at the time the largest in the world.

Edwin Hubble uses the telescope at the Mount Wilson Observatory.
Hulton Archives via Getty Images

He began taking photographic plates of the spiral nebulae. These glass plates recorded images of the night sky using a light-sensitive emulsion covering their surface. The telescope’s size let it make images of very faint objects, and its high-quality mirror allowed it to distinguish individual stars in some of the nebulae.

Estimating distances in astronomy is challenging. Think of how hard it is to estimate the distance of someone pointing a flashlight at you on a dark night. Galaxies come in a very wide range of sizes and masses. Measuring a galaxy’s brightness or apparent size is not a good guide to its distance.

Hubble leveraged a discovery made by Henrietta Swan Leavitt 10 years earlier. She worked at the Harvard College Observatory as a “human computer,” laboriously measuring the positions and brightness of thousands of stars on photographic plates.

She was particularly interested in Cepheid variables, which are stars whose brightness pulses regularly, so they get brighter and dimmer with a particular period. She found a relationship between their variation period, or pulse, and their intrinsic brightness or luminosity.

Once you measure a Cepheid’s period, you can calculate its distance from how bright it appears using the inverse square law. The more distant the star is, the fainter it appears.
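
For readers who want the quantitative logic, here is a minimal sketch in modern notation of how a Cepheid's period leads to a distance. These are the standard textbook relations, not Hubble's original 1924 calibration.

```latex
% Schematic Cepheid distance estimate (standard relations, not Hubble's calibration):
% Leavitt's law turns the measured period P into an intrinsic brightness,
% expressed as an absolute magnitude M(P); the inverse square law then
% converts the apparent brightness m into a distance d.
F = \frac{L}{4\pi d^{2}}
\qquad\Longrightarrow\qquad
m - M = 5\,\log_{10}\!\left(\frac{d}{10\ \mathrm{pc}}\right),
\qquad
d = 10^{\,(m - M + 5)/5}\ \mathrm{pc}
```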

Hubble worked hard, taking images of spiral nebulae every clear night and looking for the telltale variations of Cepheid variables. By the end of 1924, he had found 12 Cepheids in M31. He calculated M31’s distance as a prodigious 900,000 light years away, though he underestimated its true distance – about 2.5 million light years – by not realizing there were two different types of Cepheid variables.

His measurements marked the end of the Great Debate about the Milky Way’s size and the nature of the nebulae. Hubble wrote about his discovery to Harlow Shapley, who had argued that the Milky Way encompassed the entire universe.

“Here is the letter that destroyed my universe,” Shapley remarked.

Always eager for publicity, Hubble leaked his discovery to The New York Times five weeks before a colleague presented his paper at the astronomers’ annual meeting in Washington, D.C.

An expanding universe of galaxies

But Hubble wasn’t done. His second major discovery also transformed astronomers’ understanding of the universe. As he dispersed the light from dozens of galaxies into a spectrum, which recorded the amount of light at each wavelength, he noticed that the light was always shifted to longer or redder wavelengths.

Light from the galaxy passes through a prism or reflects off a diffraction grating in a telescope, which captures the intensity of light from blue to red.

Astronomers call a shift to longer wavelengths a redshift.

It seemed that these redshifted galaxies were all moving away from the Milky Way.

Hubble’s results suggested the farther away a galaxy was, the faster it was moving away from Earth. Hubble got the lion’s share of the credit for this discovery, but Lowell Observatory astronomer Vesto Slipher, who noticed the same phenomenon but didn’t publish his data, also anticipated that result.
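
In today's notation, the two measurements being compared can be summarized compactly. This is a schematic statement of the relation now called Hubble's law, not a reproduction of the original 1929 data.

```latex
% Redshift measured from a spectrum, its low-speed velocity interpretation,
% and the distance-velocity relation (H_0 is the Hubble constant):
z = \frac{\lambda_{\mathrm{observed}} - \lambda_{\mathrm{rest}}}{\lambda_{\mathrm{rest}}},
\qquad
v \approx c\,z \quad (z \ll 1),
\qquad
v = H_{0}\, d
```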

Hubble referred to galaxies having recession velocities, or speeds of moving away from the Earth, but he never figured out that they were moving away from Earth because the universe is getting bigger.

Belgian cosmologist and Catholic priest Georges Lemaitre made that connection by realizing that the theory of general relativity described an expanding universe. He recognized that space expanding in between the galaxies could cause the redshifts, making it seem like they were moving farther away from each other and from Earth.

Lemaitre was the first to argue that the expansion must have begun during the big bang.

Edwin Hubble is the namesake for NASA’s Hubble Space Telescope, which has spent decades observing faraway galaxies.
NASA via AP

NASA named its flagship space observatory after Hubble, and it has been used to study galaxies for 35 years. Astronomers routinely observe galaxies that are thousands of times fainter and more distant than galaxies observed in the 1920s. The James Webb Space Telescope has pushed the envelope even farther.

The current record holder is a galaxy a staggering 34 billion light years away, seen just 200 million years after the big bang, when the universe was 20 times smaller than it is now. Edwin Hubble would be amazed to see such progress.

Chris Impey, University Distinguished Professor of Astronomy, University of Arizona

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

AI has a hidden water cost − here’s how to calculate yours

Published on theconversation.com – Leo S. Lo, Dean of Libraries; Advisor to the Provost for AI Literacy; Professor of Education, University of Virginia – 2025-09-01 07:35:00


Artificial intelligence systems consume significant water—up to 500 milliliters per short interaction—primarily for cooling data center servers and generating electricity. Water use varies greatly by location and climate; for example, dry, hot areas rely heavily on evaporative cooling, which consumes more water. Innovations like immersion cooling and Microsoft’s zero-water cooling design promise to reduce consumption but aren’t yet widespread. AI’s water footprint also depends on the model’s complexity, with newer models like GPT-5 using considerably more water than efficient ones. Despite large aggregate usage, AI’s water consumption remains small compared to everyday activities like lawn watering. Transparency and efficiency improvements are crucial for balancing innovation with sustainability.

How many AI queries does it take to use up a regular plastic water bottle’s worth of water?
kieferpix/iStock/Getty Images Plus

Leo S. Lo, University of Virginia

Artificial intelligence systems are thirsty, consuming as much as 500 milliliters of water – a single-serving water bottle – for each short conversation a user has with the GPT-3 version of OpenAI’s ChatGPT system. They use roughly the same amount of water to draft a 100-word email message.

That figure includes the water used to cool the data center’s servers and the water consumed at the power plants generating the electricity to run them.

But the study that calculated those estimates also pointed out that AI systems’ water usage can vary widely, depending on where and when the computer answering the query is running.

To me, as an academic librarian and professor of education, understanding AI is not just about knowing how to write prompts. It also involves understanding the infrastructure, the trade-offs, and the civic choices that surround AI.

Many people assume AI is inherently harmful, especially given headlines calling out its vast energy and water footprint. Those effects are real, but they’re only part of the story.

When people move from seeing AI as simply a resource drain to understanding its actual footprint, where the effects come from, how they vary, and what can be done to reduce them, they are far better equipped to make choices that balance innovation with sustainability.

2 hidden streams

Behind every AI query are two streams of water use.

The first is on-site cooling of servers that generate enormous amounts of heat. This often uses evaporative cooling towers – giant misters that spray water over hot pipes or open basins. The evaporation carries away heat, but that water is removed from the local water supply, such as a river, a reservoir or an aquifer. Other cooling systems may use less water but more electricity.

The second stream is used by the power plants generating the electricity to power the data center. Coal, gas and nuclear plants use large volumes of water for steam cycles and cooling.

Hydropower also uses up significant amounts of water, which evaporates from reservoirs. Concentrated solar plants, which run more like traditional steam power stations, can be water-intensive if they rely on wet cooling.

By contrast, wind turbines and solar panels use almost no water once built, aside from occasional cleaning.

Cooling towers, like these at a power plant in Florida, use water evaporation to lower the temperature of equipment.
Paul Hennessy/SOPA Images/LightRocket via Getty Images

Climate and timing matter

Water use shifts dramatically with location. A data center in cool, humid Ireland can often rely on outside air or chillers and run for months with minimal water use. By contrast, a data center in Arizona in July may depend heavily on evaporative cooling. Hot, dry air makes that method highly effective, but it also consumes large volumes of water, since evaporation is the mechanism that removes heat.

Timing matters too. A University of Massachusetts Amherst study found that a data center might use only half as much water in winter as in summer. And at midday during a heat wave, cooling systems work overtime. At night, demand is lower.

Newer approaches offer promising alternatives. For instance, immersion cooling submerges servers in fluids that don’t conduct electricity, such as synthetic oils, reducing water evaporation almost entirely.

And a new design from Microsoft claims to use zero water for cooling, by circulating a special liquid through sealed pipes directly across computer chips. The liquid absorbs heat and then releases it through a closed-loop system without needing any evaporation. The data centers would still use some potable water for restrooms and other staff facilities, but cooling itself would no longer draw from local water supplies.

These solutions are not yet mainstream, however, mainly because of cost, maintenance complexity and the difficulty of converting existing data centers to new systems. Most operators rely on evaporative systems.

A simple skill you can use

The type of AI model being queried matters, too. That’s because of the different levels of complexity and the hardware and amount of processor power they require. Some models may use far more resources than others. For example, one study found that certain models can consume over 70 times more energy and water than ultra‑efficient ones.

You can estimate AI’s water footprint yourself in just three steps, with no advanced math required.

Step 1 – Look for credible research or official disclosures. Independent analyses estimate that a medium-length GPT-5 response, which is about 150 to 200 words of output, or roughly 200 to 300 tokens, uses about 19.3 watt-hours. A response of similar length from GPT-4o uses about 1.75 watt-hours.

Step 2 – Use a practical estimate for the amount of water per unit of electricity, combining the usage for cooling and for power.

Independent researchers and industry reports suggest that a reasonable range today is about 1.3 to 2.0 milliliters per watt-hour. The lower end reflects efficient facilities that use modern cooling and cleaner grids. The higher end represents more typical sites.

Step 3 – Now it’s time to put the pieces together. Take the energy number you found in Step 1 and multiply it by the water factor from Step 2. That gives you the water footprint of a single AI response.

Here’s the one-line formula you’ll need:

Energy per prompt (watt-hours) × Water factor (milliliters per watt-hour) = Water per prompt (in milliliters)

For a medium-length query to GPT-5, that calculation uses the figures of 19.3 watt-hours and 2 milliliters per watt-hour: 19.3 × 2 ≈ 39 milliliters of water per response.

For a medium-length query to GPT-4o, the calculation is 1.75 watt-hours x 2 milliliters per watt-hour = 3.5 milliliters of water per response.

If you assume the data centers are more efficient, and use 1.3 milliliters per watt-hour, the numbers drop: about 25 milliliters for GPT-5 and 2.3 milliliters for GPT-4o.
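
To make the arithmetic concrete, the three steps can be wrapped in a few lines of code. This is only a back-of-the-envelope sketch using the per-prompt energy figures and water factors quoted above; the function name and numbers are illustrative, not an official calculator from any AI provider.

```python
def water_per_prompt_ml(energy_wh: float, water_ml_per_wh: float) -> float:
    """Step 3: energy per prompt (Wh) x water factor (mL per Wh) = water per prompt (mL)."""
    return energy_wh * water_ml_per_wh

# Step 1: per-prompt energy estimates quoted above (independent analyses, not official figures).
energy_wh = {"GPT-5 medium prompt": 19.3, "GPT-4o medium prompt": 1.75}

# Step 2: the water-factor range quoted above (milliliters of water per watt-hour).
water_factors = {"efficient site": 1.3, "typical site": 2.0}

for model, wh in energy_wh.items():
    for site, factor in water_factors.items():
        print(f"{model}, {site}: about {water_per_prompt_ml(wh, factor):.1f} mL per response")
```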

A recent Google technical report said a median text prompt to its Gemini system uses just 0.24 watt-hours of electricity and about 0.26 milliliters of water – roughly the volume of five drops. However, the report does not say how long that prompt is, so it can’t be compared directly with GPT water usage.

Those different estimates – ranging from 0.26 milliliters to 39 milliliters – demonstrate how much the effects of efficiency, AI model and power-generation infrastructure all matter.

Comparisons can add context

To truly understand how much water these queries use, it can be helpful to compare them to other familiar water uses.

When multiplied by millions, AI queries’ water use adds up. OpenAI reports about 2.5 billion prompts per day. That figure includes queries to its GPT-4o, GPT-4 Turbo, GPT-3.5 and GPT-5 systems, with no public breakdown of how many queries are issued to each particular model.

Using independent estimates and Google’s official reporting gives a sense of the possible range; a short calculation reproducing these totals follows the list:

  • All Google Gemini median prompts: about 650,000 liters per day.
  • All GPT-4o medium prompts: about 8.8 million liters per day.
  • All GPT-5 medium prompts: about 97.5 million liters per day.
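
Extending the same sketch, the daily totals in the list follow from multiplying a per-prompt estimate by OpenAI's reported 2.5 billion prompts per day (and, for Gemini, Google's disclosed 0.26 milliliters per median prompt). These are illustrative what-if scenarios, since the real prompt mix across models is not public.

```python
PROMPTS_PER_DAY = 2.5e9  # OpenAI's reported daily prompt volume, reused for each what-if scenario

# Per-prompt water estimates from above (milliliters): Google's disclosed Gemini
# figure, plus the "typical site" estimates for GPT-4o and GPT-5.
per_prompt_ml = {
    "Gemini median prompt": 0.26,
    "GPT-4o medium prompt": 3.5,
    "GPT-5 medium prompt": 39.0,
}

for scenario, ml in per_prompt_ml.items():
    liters_per_day = PROMPTS_PER_DAY * ml / 1000  # 1,000 milliliters per liter
    print(f"If every prompt were a {scenario}: about {liters_per_day:,.0f} liters per day")
```
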
Americans use lots of water to keep gardens and lawns looking fresh.
James Carbone/Newsday RM via Getty Images

For comparison, Americans use about 34 billion liters per day watering residential lawns and gardens. One liter is about one-quarter of a gallon.

Generative AI does use water, but – at least for now – its daily totals are small compared with other common uses such as lawns, showers and laundry.

But its water demand is not fixed. Google’s disclosure shows what is possible when systems are optimized, with specialized chips, efficient cooling and smart workload management. Recycling water and locating data centers in cooler, wetter regions can help, too.

Transparency matters, as well: When companies release their data, the public, policymakers and researchers can see what is achievable and compare providers fairly.

Leo S. Lo, Dean of Libraries; Advisor to the Provost for AI Literacy; Professor of Education, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Centrist

The content presents a balanced and fact-based analysis of the environmental impact of AI, specifically focusing on water usage. It relies on scientific studies, industry reports, and expert opinions without promoting a particular political agenda. The article acknowledges concerns about resource consumption while also highlighting technological innovations and practical solutions, aiming to inform readers rather than persuade them toward a partisan viewpoint. This neutral and informative approach aligns with a centrist perspective.


The Conversation

Balancing kratom’s potential benefits and risks − new legislation in Colorado seeks to minimize harm

Published on theconversation.com – David Kroll, Professor of Natural Products Pharmacology & Toxicology, University of Colorado Anschutz Medical Campus – 2025-08-29 07:41:00


David Bregger’s son, Daniel, died in 2021 after using kratom, an herbal supplement marketed as a natural anxiety remedy. Unaware of its risks, Daniel consumed a product containing 7-hydroxymitragynine (7-OH), a potent opioid-like chemical. Colorado’s new Daniel Bregger Act regulates kratom potency and restricts sales to adults, addressing deceptive practices around concentrated 7-OH products. While kratom powder has mild effects, concentrated extracts pose overdose risks, especially when combined with other sedatives. Despite controversies, kratom shows promise for pain relief and opioid addiction treatment. Ongoing research aims to develop safer, effective medications from kratom compounds, balancing benefits and risks.

Kratom, an herbal supplement, is now being regulated in Colorado.
AR30mm/iStock via Getty Images

David Kroll, University of Colorado Anschutz Medical Campus

David Bregger had never heard of kratom before his son, Daniel, 33, died in Denver in 2021 from using what he thought was a natural and safe remedy for anxiety.

By his father’s account, Daniel didn’t know that the herbal product could kill him. The product listed no ingredients or safe-dosing information on the label. And it had no warning that it should not be combined with other sedating drugs, such as the over-the-counter antihistamine diphenhydramine, which is the active ingredient in Benadryl and other sleep aids.

As the fourth anniversary of Daniel’s death approaches, a recently enacted Colorado law aims to prevent other families from experiencing the heartbreak shared by the Bregger family. Colorado Senate Bill 25-072, known as the Daniel Bregger Act, addresses what the state legislature calls the deceptive trade practices around the sale of concentrated kratom products artificially enriched with a chemical called 7-OH.

The Daniel Bregger Act seeks to limit potency and underage access to kratom, an herbal supplement.

7-OH, known as 7-hydroxymitragynine, has also garnered national attention. On July 29, 2025, the U.S. Food and Drug Administration issued a warning that products containing 7-OH are potent opioids that can pose significant health risks and even death.

As kratom and its constituents are studied in greater detail, the Centers for Disease Control and Prevention and university researchers have documented hundreds of deaths where kratom-derived chemicals were present in postmortem blood tests. But rarely is kratom deadly by itself. In a study of 551 kratom-related deaths in Florida, 93.5% involved other substances such as opioids like fentanyl.

I study pharmaceutical sciences, have taught about herbal supplements like kratom for more than 30 years, and have written about kratom’s effects and controversy.

Kratom – one name, many products

Kratom is a broad term used to describe products made from the leaves of a Southeast Asian tree known scientifically as Mitragyna speciosa. The Latin name derives from the shape of its leaves, which resemble a bishop’s miter, the ceremonial, pointed headdress worn by bishops and other church leaders.

People report buying kratom powder from online retailers and putting it into capsules or making it into tea for consumption.
Everyday better to do everything you love/iStock via Getty Images

Kratom is made from dried and powdered leaves that can be chewed or made into a tea. Used by rice field workers and farmers in Thailand to increase stamina and productivity, kratom initially alleviates fatigue with an effect like that of caffeine. In larger amounts, it imparts a sense of well-being similar to opioids.

In fact, mitragynine, which is found in small amounts in kratom, partially stimulates opioid receptors in the central nervous system. These are the same type of opioid receptors that trigger the effects of drugs such as morphine and oxycodone. They are also the same receptors that can slow or stop breathing when overstimulated.

In the body, the small amount of mitragynine in kratom powder is converted to 7-OH by liver enzymes, hence the opioid-like effects in the body. 7-OH can also be made in a lab and is used to increase the potency of certain kratom products, including the ones found in gas stations or liquor stores.

And therein lies the controversy over the risks and benefits of kratom.

Natural or lab made: All medicines have risks

Because kratom is a plant-derived product, it has fallen into a murky enforcement area. It is sold as an herbal supplement, normally by the kilogram from online retailers overseas.

In 2016, I wrote a series of articles for Forbes as the Drug Enforcement Administration proposed to list kratom constituents on the most restrictive Schedule 1 of the Controlled Substances Act. This classification is reserved for drugs the DEA determines to possess “no currently accepted medical use and a high potential for abuse,” such as heroin and LSD.

But readers countered the DEA’s stance and sent me more than 200 messages that primarily documented their use of kratom as an alternative to opioids for pain.

Others described how kratom assisted them in recovery from addiction to alcohol or opioids themselves. Similar stories also flooded the official comments requested by the DEA, and the public pressure presumably led the agency to drop its plan to regulate kratom as a controlled substance.

Kratom is under growing scrutiny.

But not all of the stories pointed to kratom’s benefits. Instead, some people pointed out a major risk: becoming addicted to kratom itself. I learned it is a double-edged sword – remedy to some, recreational risk to others. A national survey of kratom users was consistent with my nonscientific sampling, showing more than half were using the supplement to relieve pain, stress, anxiety or a combination of these.

Natural leaf powder vs. artificially concentrated extracts

After the DEA dropped its 2016 plan to ban the leaf powder, marketers in the U.S. began isolating mitragynine and concentrating it into small bottles that could be taken like those energy shots of caffeine often sold in gas stations and convenience stores. This formula made it easier to ingest more kratom. Slowly, sellers learned they could make the more potent 7-OH from mitragynine and give their products an extra punch. And an extra dose of risk.

People who use kratom in the powder form describe taking 3 to 5 grams, about a generous tablespoon. They put the powder into capsules or make it into a tea several times a day to ward off pain, the craving for alcohol or the withdrawal symptoms from long-term prescription opioid use. Since this form of kratom does not contain very much mitragynine – it is only about 1% of the powdered leaf – overdosing on the powder alone does not typically happen.

That, along with pushback from consumers, is why the Food and Drug Administration is proposing to restrict only the availability of 7-OH and not mitragynine or kratom powder. The new Colorado law limits the concentration of kratom ingredients in products and restricts their sales and marketing to consumers over 21.

Even David Bregger supports this distinction. “I’m not anti-kratom, I’m pro-regulation. What I’m after is getting nothing but leaf product,” he told WPRI in Rhode Island last year while demonstrating at a conference of the education and advocacy trade group the American Kratom Association.

Such lobbying with the trade group last year led the American Kratom Association to concur that 7-OH should be regulated as a Schedule 1 controlled substance. The association acknowledges that such regulation is reasonable and based in science.

Benefits amid the ban

Despite the local and national debate over 7-OH, scientists are continuing to explore kratom compounds for their legitimate medical use.

A $3.5 million NIH grant is one of several that are increasing understanding of kratom as a source of new drugs.

Researchers have identified numerous other chemicals called alkaloids from kratom leaf specimens and commercial products. These researchers show that some types of kratom trees make unique chemicals, possibly opening the door to other painkillers. Researchers have also found that compounds from kratom, such as 7-OH, bind to opioid receptors in unique ways. The compounds seem to have an effect more toward pain management and away from potentially deadly suppression of breathing. Of course, this is when the compounds are used alone and not together with other sedating drugs.

Rather than contributing to the opioid crisis, researchers suspect that isolated and safely purified drugs made from kratom could be potential treatments for opioid addiction. In fact, some kratom chemicals such as mitragynine have multiple actions and could potentially replace both medication-assisted therapy, like buprenorphine, in treating opioid addiction and drugs like clonidine for opioid withdrawal symptoms.

Rigorous scientific study has led to this more reasonable juncture in the understanding of kratom and its sensible regulation. Sadly, we cannot bring back Daniel Bregger. But researchers can advance the potential for new and beneficial drugs while legislators help prevent such tragedies from befalling other families.

David Kroll, Professor of Natural Products Pharmacology & Toxicology, University of Colorado Anschutz Medical Campus

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Centrist

This content maintains a balanced and evidence-based tone, presenting both the potential benefits and risks associated with kratom. It supports reasonable regulation rather than outright prohibition, acknowledging perspectives from regulatory agencies, scientific researchers, consumer advocates, and families affected by kratom-related incidents. The article refrains from partisan language or ideology and focuses on public health, safety, and scientific inquiry, typical of a centrist approach to policy discussions.


The Conversation

Pregnant women face tough choices about medication use due to lack of safety data − here’s why medical research cuts will make it worse

Published on theconversation.com – Almut Winterstein, Distinguished Professor of Pharmaceutical Outcomes & Policy, University of Florida – 2025-08-28 07:03:00


In July 2025, an FDA panel questioned the safety of common antidepressants during pregnancy, highlighting the wider issue of limited knowledge about the risks of many medications used by pregnant women. Most drugs lack conclusive safety data, partly due to historical exclusion of pregnant women from clinical trials after the thalidomide tragedy in the 1960s. About 90% of FDA-approved drugs from 2010-2019 have no human pregnancy data. This knowledge gap causes many women to stop important treatments, risking harm to both mother and fetus. Despite some progress, ongoing cuts to NIH funding threaten research essential for safer maternal and child health outcomes.

More than 9 in 10 women take at least one medication during pregnancy, yet data on prescription drugs’ effects on the fetus are sparse.
Adam Hester/Tetra images via Getty Images

Almut Winterstein, University of Florida and Sonja Rasmussen, Johns Hopkins University

A panel convened in July 2025 by the Food and Drug Administration sparked controversy by casting doubt about the safety of commonly used antidepressants during pregnancy. But it also raised the broader issue of how little is known about the safety of many medications used in pregnancy, considering the implications for both mother and child – and how understudied this topic is.

In the U.S., the average pregnant patient takes four prescription medications, and more than 9 in 10 patients take at least one. But most drugs lack conclusive evidence about their safety during pregnancy. About 1 in 5 women uses a medication during pregnancy that has some preliminary evidence that it could cause harm but for which conclusive studies are missing.

We are researchers in maternal and child health who evaluate the safety of medications during pregnancy. In our work, we identify medications that might raise the risk for birth defects or pregnancy loss and compare the safety of different treatments.

While progress has been slow, researchers and federal agencies have built monitoring systems, databases and tools to accelerate our understanding of medication safety. However, these efforts are now at risk due to ongoing cuts to medical research funding – and with them, so is the knowledge base for determining whether sticking with a therapy or discontinuing it offers the safest choice for both mother and child.

How pregnant women got sidelined

One big reason why so little is known about the effects of medications during pregnancy stretches back more than half a century. In the 1960s, a drug called thalidomide that was widely prescribed to treat morning sickness in pregnant women caused severe birth defects in over 10,000 children around the world. In response, in 1977 the FDA recommended excluding women of childbearing age from participating in early stage clinical trials testing new medications.

Thalidomide, sold under several brand names including Kevadon, was used in many countries to treat morning sickness, though the Food and Drug Administration never approved it for that purpose in the United States.
U.S. Food and Drug Administration

Ethically, there is long-standing tension between concerns about fetal harm and maternal needs. Legal liability and added complexities when conducting studies in pregnant women serve as additional barriers for drug manufacturers.

When drugs are approved, studies about whether they might cause birth defects are typically done only in animals, and they often don’t translate well to humans. So when a new medication comes on the market, nothing is known about how it affects people during pregnancy. Even if animal studies or the medication’s mode of action raised concerns, the drug can still be approved, though companies may be required to conduct studies observing its effects when taken during pregnancy.

Cause and effect

Of 290 drugs approved by the FDA between 2010 and 2019, 90% have no human data on their risks or benefits for pregnant patients. About 80% of some 1,800 medications in a national database called TERIS, which summarizes evidence on medications’ risks during pregnancy, lack evidence, or have only limited evidence, about their risks for birth defects. Researchers have estimated that it takes 27 years to pin down whether a medication is safe to use in pregnancy.

As a result, many pregnant women stop treating their chronic diseases. In a U.S. study published in 2023, over one-third of women stopped taking a medication during pregnancy, and 36.5% of those did so without advice from a health care provider. More than half cited concerns about birth or developmental defects as the reason.

Yet uncontrolled chronic disease comes with its own toll on both the mother’s and the baby’s health. For example, some medications used to treat seizures are known to cause birth defects, but stopping them may increase seizures, which themselves raise the risk of fetal death.

Women with severe or recurrent depression who abruptly stop their antidepressants risk their depression returning, which is in turn associated with increased risk of substance use, inadequate prenatal care and other negative effects on fetal development. Stopping the use of medications for treating high blood pressure also causes adverse effects – specifically, a greater risk of pregnancy-related high blood pressure that can cause organ damage, called preeclampsia; a condition called placental abruption, when the placenta detaches from the wall of the uterus too early; preterm birth; and fetal growth restriction. An online resource called Mother to Baby, created by a network of experts on birth defects, provides an excellent summary of the available data on medication safety during pregnancy.

The FDA in some cases requires drug companies to establish registries to track the outcomes of pregnancies exposed to certain medications. These registries can be useful, but they have shortcomings. For example, recruiting pregnant patients into them takes time and considerable effort, resulting in small sample sizes that may not capture rare birth defects. Also, registries typically follow a single medication and rarely include comparisons to alternative treatment approaches – or to no treatment.

What’s more, following the 2022 Dobbs v. Jackson Supreme Court decision overturning the constitutional right to abortion, women might be reluctant to add their names to a pregnancy registry or to provide data on prenatal detection of birth defects due to concerns about privacy and legal risks.

Decades of underfunding

In 2019, a task force established by the 21st Century Cures Act identified a major gap in knowledge about drug safety and effectiveness in pregnant and lactating women and recommended a boost in funding to fill it.

However, little has changed. A 2025 review by the National Academies of Sciences, Engineering and Medicine pointed out that research funding for women’s health topics has remained flat over the past decade, while the overall budget of the National Institutes of Health has steadily increased. The review recommended doubling the NIH funding allocated for such research, but this seems unlikely in light of the recent proposals to cut the overall NIH budget by 40%.

The National Institute of Child Health and Human Development funds the bulk of research on the safety of medications during pregnancy across federal agencies, although the institute has an appreciably smaller budget than most of its sister institutes such as the National Cancer Institute. Grants awarded are typically broad and take four to five years to complete, but they allow the more comprehensive assessments that are needed to support informed decisions considering outcomes for mother and child. For example, NIH-funded researchers have established a clear link between autism and prenatal use of valproate, a potent teratogen used to treat epilepsy and several mental health disorders.

The Centers for Disease Control and Prevention and the FDA have also funded specific pregnancy-related research. For example, following the COVID-19 pandemic, the CDC renewed its funding for studies that help expedite pregnancy safety research on treatments that might be used for newly emerging infections. In response to emerging concerns about a substance called gadolinium, which is often used during MRI procedures, the FDA funded our own study of almost 6,000 pregnant women, which found no elevated risk.

For healthy pregnancies, more research is critical

These efforts have laid a crucial foundation for evaluating medication safety and effectiveness during pregnancy. But keeping pace with the release of new medications and new ways they are used, as well as addressing the backlog of missing evidence for medications that were approved in the past millennium, remain a challenge.

Recent terminations of NIH-funded studies have focused on topics presumably relating to diversity, equity and inclusion. But research on safe and healthy pregnancies and on maternal health – for example, on the safety of COVID-19 vaccines during breastfeeding – has been affected as well.

The NIH has scaled back new grant awards by nearly US$5 billion since the beginning of 2025, and the odds for receiving NIH funding have plummeted. Proposed sweeping budget cuts for the CDC and FDA leave their role in supporting research on healthy pregnancies similarly uncertain.

In our view, removing or reducing ongoing investments in healthy pregnancies poses a danger to much-needed efforts to reduce excessive rates of stillbirths as well as infant and maternal deaths.

Almut Winterstein, Distinguished Professor of Pharmaceutical Outcomes & Policy, University of Florida and Sonja Rasmussen, Professor of Genetic Medicine, Johns Hopkins University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Center-Left

This content emphasizes the importance of scientific research and government funding for maternal and child health, highlighting concerns about budget cuts to agencies like the NIH, CDC, and FDA. It advocates for increased investment in public health research and points out the consequences of underfunding, which aligns with a center-left perspective that supports robust government involvement in healthcare and research. The article maintains a factual tone without strong partisan language, but its focus on the negative impact of funding cuts and the need for expanded research funding reflects a center-left leaning viewpoint.

