The city of Lancaster, Pennsylvania, was shaken by revelations in December 2023 that two local teenage boys had shared hundreds of nude images of girls in their community in a private chat on the messaging platform Discord. Witnesses said the photos easily could have been mistaken for real ones, but they were fake. The boys had used an artificial intelligence tool to superimpose real photos of girls’ faces onto sexually explicit images.
With troves of real photos available on social media platforms and AI tools becoming more accessible across the web, similar incidents have played out across the country, from California to Texas and Wisconsin. A recent survey by the Center for Democracy and Technology, a Washington, D.C.-based nonprofit, found that 15% of students and 11% of teachers knew of at least one deepfake that depicted someone associated with their school in a sexually explicit or intimate manner.
The Supreme Court has implicitly concluded that computer-generated pornographic images that are based on images of real children are illegal. The use of generative AI technologies to make deepfake pornographic images of minors almost certainly falls under the scope of that ruling. As a legal scholar who studies the intersection of constitutional law and emerging technologies, I see an emerging challenge to the status quo: AI-generated images that are fully fake but indistinguishable from real photos.
Policing child sexual abuse material
While the internet’s architecture has always made it difficult to control what is shared online, there are a few kinds of content that most regulatory authorities across the globe agree should be censored. Child pornography is at the top of that list.
For decades, law enforcement agencies have worked with major tech companies to identify and remove this kind of material from the web, and to prosecute those who create or circulate it. But the advent of generative artificial intelligence and easy-to-access tools like the ones used in the Pennsylvania case present a vexing new challenge for such efforts.
In the legal field, child pornography is generally referred to as child sexual abuse material, or CSAM, because the term better reflects the abuse that is depicted in the images and videos and the resulting trauma to the children involved. In 1982, the Supreme Court ruled that child pornography is not protected under the First Amendment because safeguarding the physical and psychological well-being of a minor is a compelling government interest that justifies laws that prohibit child sexual abuse material.
That case, New York v. Ferber, effectively allowed the federal government and all 50 states to criminalize traditional child sexual abuse material. But a subsequent case, Ashcroft v. Free Speech Coalition from 2002, might complicate efforts to criminalize AI-generated child sexual abuse material. In that case, the court struck down a law that prohibited computer-generated child pornography, effectively rendering it legal.
The government’s interest in protecting the physical and psychological well-being of children, the court found, was not implicated when such obscene material is computer generated. “Virtual child pornography is not ‘intrinsically related’ to the sexual abuse of children,” the court wrote.
States move to criminalize AI-generated CSAM
According to the child advocacy organization Enough Abuse, 37 states have criminalized AI-generated or AI-modified CSAM, either by amending existing child sexual abuse material laws or enacting new ones. More than half of those 37 states enacted new laws or amended their existing ones within the past year.
California, for example, enacted Assembly Bill 1831 on Sept. 29, 2024, which amended its penal code to prohibit the creation, sale, possession and distribution of any “digitally altered or artificial-intelligence-generated matter” that depicts a person under 18 engaging in or simulating sexual conduct.
Deepfake child pornography is a growing problem.
While some of these state laws target the use of photos of real people to generate these deepfakes, others go further, defining child sexual abuse material as “any image of a person who appears to be a minor under 18 involved in sexual activity,” according to Enough Abuse. Laws like these that encompass images produced without depictions of real minors might run counter to the Supreme Court’s Ashcroft v. Free Speech Coalition ruling.
Real vs. fake, and telling the difference
Perhaps the most important aspect of the Ashcroft decision for emerging issues around AI-generated child sexual abuse material was the part of the statute that the Supreme Court did not strike down. That provision of the law prohibited “more common and lower tech means of creating virtual (child sexual abuse material), known as computer morphing,” which involves taking pictures of real minors and morphing them into sexually explicit depictions.
The court’s decision stated that these digitally altered sexually explicit depictions of minors “implicate the interests of real children and are in that sense closer to the images in Ferber.” The decision referenced the 1982 case, New York v. Ferber, in which the Supreme Court upheld a New York criminal statute that prohibited persons from knowingly promoting sexual performances by children under the age of 16.
The court’s decisions in Ferber and Ashcroft could be used to argue that any AI-generated sexually explicit image of real minors should not be protected as free speech given the psychological harms inflicted on the real minors. But that argument has yet to be made before the court. The court’s ruling in Ashcroft may permit AI-generated sexually explicit images of fake minors.
But Justice Clarence Thomas, who concurred in Ashcroft, cautioned that “if technological advances thwart prosecution of ‘unlawful speech,’ the Government may well have a compelling interest in barring or otherwise regulating some narrow category of ‘lawful speech’ in order to enforce effectively laws against pornography made through the abuse of real children.”
With the recent significant advances in AI, it can be difficult if not impossible for law enforcement officials to distinguish between images of real and fake children. It’s possible that we’ve reached the point where computer-generated child sexual abuse material will need to be banned so that federal and state governments can effectively enforce laws aimed at protecting real children – the point that Thomas warned about over 20 years ago.
If so, easy access to generative AI tools is likely to force the courts to grapple with the issue.
theconversation.com – Leo S. Lo, Dean of Libraries; Advisor to the Provost for AI Literacy; Professor of Education, University of Virginia – 2025-09-01 07:35:00
Artificial intelligence systems consume significant water—up to 500 milliliters per short interaction—primarily for cooling data center servers and generating electricity. Water use varies greatly by location and climate; for example, dry, hot areas rely heavily on evaporative cooling, which consumes more water. Innovations like immersion cooling and Microsoft’s zero-water cooling design promise to reduce consumption but aren’t yet widespread. AI’s water footprint also depends on the model’s complexity, with newer models like GPT-5 using considerably more water than efficient ones. Despite large aggregate usage, AI’s water consumption remains small compared to everyday activities like lawn watering. Transparency and efficiency improvements are crucial for balancing innovation with sustainability.
But the study that calculated those estimates also pointed out that AI systems’ water usage can vary widely, depending on where and when the computer answering the query is running.
When people move from seeing AI as simply a resource drain to understanding its actual footprint, where the effects come from, how they vary, and what can be done to reduce them, they are far better equipped to make choices that balance innovation with sustainability.
AI systems consume water in two main ways. The first is on-site cooling of servers, which generate enormous amounts of heat. This often uses evaporative cooling towers – giant misters that spray water over hot pipes or open basins. The evaporation carries away heat, but that water is removed from the local water supply, such as a river, a reservoir or an aquifer. Other cooling systems may use less water but more electricity.
The second is the water consumed in generating the electricity that powers data centers. Hydropower, for example, uses up significant amounts of water, which evaporates from reservoirs. Concentrated solar plants, which run more like traditional steam power stations, can be water-intensive if they rely on wet cooling.
Water use shifts dramatically with location. A data center in cool, humid Ireland can often rely on outside air or chillers and run for months with minimal water use. By contrast, a data center in Arizona in July may depend heavily on evaporative cooling. Hot, dry air makes that method highly effective, but it also consumes large volumes of water, since evaporation is the mechanism that removes heat.
Timing matters too. A University of Massachusetts Amherst study found that a data center might use only half as much water in winter as in summer. And at midday during a heat wave, cooling systems work overtime. At night, demand is lower.
Newer approaches offer promising alternatives. For instance, immersion cooling submerges servers in fluids that don’t conduct electricity, such as synthetic oils, reducing water evaporation almost entirely.
And a new design from Microsoft claims to use zero water for cooling, by circulating a special liquid through sealed pipes directly across computer chips. The liquid absorbs heat and then releases it through a closed-loop system without needing any evaporation. The data centers would still use some potable water for restrooms and other staff facilities, but cooling itself would no longer draw from local water supplies.
These solutions are not yet mainstream, however, mainly because of cost, maintenance complexity and the difficulty of converting existing data centers to new systems. Most operators rely on evaporative systems.
You can estimate AI’s water footprint yourself in just three steps, with no advanced math required.
Step 1 – Look for credible research or official disclosures. Independent analyses estimate that a medium-length GPT-5 response, which is about 150 to 200 words of output, or roughly 200 to 300 tokens, uses about 19.3 watt-hours. A response of similar length from GPT-4o uses about 1.75 watt-hours.
Step 2 – Use a practical estimate for the amount of water per unit of electricity, combining the usage for cooling and for power.
Independent researchers and industry reports suggest that a reasonable range today is about 1.3 to 2.0 milliliters per watt-hour. The lower end reflects efficient facilities that use modern cooling and cleaner grids. The higher end represents more typical sites.
Step 3 – Now it’s time to put the pieces together. Take the energy number you found in Step 1 and multiply it by the water factor from Step 2. That gives you the water footprint of a single AI response.
Here’s the one-line formula you’ll need:
Energy per prompt (watt-hours) × Water factor (milliliters per watt-hour) = Water per prompt (in milliliters)
For a medium-length query to GPT-5, that calculation uses 19.3 watt-hours and 2 milliliters per watt-hour: 19.3 × 2 ≈ 39 milliliters of water per response.
For a medium-length query to GPT-4o, the calculation is 1.75 watt-hours × 2 milliliters per watt-hour = 3.5 milliliters of water per response.
If you assume the data centers are more efficient, and use 1.3 milliliters per watt-hour, the numbers drop: about 25 milliliters for GPT-5 and 2.3 milliliters for GPT-4o.
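To make that arithmetic concrete, here is a minimal Python sketch of the same three-step estimate. It simply multiplies the energy figures and water factors quoted above; the function and variable names are illustrative, not part of any official methodology.

```python
# Minimal sketch of the three-step water-footprint estimate described above.
# The energy figures (19.3 Wh for GPT-5, 1.75 Wh for GPT-4o) and water factors
# (1.3 to 2.0 mL per Wh) are the article's estimates; all names are illustrative.

def water_per_prompt_ml(energy_wh: float, water_factor_ml_per_wh: float) -> float:
    """Return the water footprint of one AI response, in milliliters."""
    return energy_wh * water_factor_ml_per_wh

ENERGY_WH = {
    "GPT-5 (medium prompt)": 19.3,
    "GPT-4o (medium prompt)": 1.75,
}

for model, energy_wh in ENERGY_WH.items():
    low = water_per_prompt_ml(energy_wh, 1.3)   # efficient facility
    high = water_per_prompt_ml(energy_wh, 2.0)  # more typical facility
    print(f"{model}: {low:.1f} to {high:.1f} mL per response")

# Prints roughly:
#   GPT-5 (medium prompt): 25.1 to 38.6 mL per response
#   GPT-4o (medium prompt): 2.3 to 3.5 mL per response
```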
A recent Google technical report said a median text prompt to its Gemini system uses just 0.24 watt-hours of electricity and about 0.26 milliliters of water – roughly the volume of five drops. However, the report does not say how long that prompt is, so it can’t be compared directly with GPT water usage.
Those different estimates – ranging from 0.26 milliliters to 39 milliliters per response – demonstrate how much efficiency, the AI model and the power-generation infrastructure all matter.
Comparisons can add context
To truly understand how much water these queries use, it can be helpful to compare them to other familiar water uses.
Multiplied across billions of queries, AI’s water use adds up. OpenAI reports about 2.5 billion prompts per day. That figure includes queries to its GPT-4o, GPT-4 Turbo, GPT-3.5 and GPT-5 systems, with no public breakdown of how many queries go to each particular model.
Using independent estimates and Google’s official reporting gives a sense of the possible range:
All Google Gemini median prompts: about 650,000 liters per day.
All GPT-4o medium prompts: about 8.8 million liters per day.
All GPT-5 medium prompts: about 97.5 million liters per day.
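As a rough check on how those daily totals come together, the short Python sketch below scales the per-prompt estimates by OpenAI’s reported 2.5 billion daily prompts, assuming, as the figures above do, that every prompt went to a single model. The per-prompt numbers are the estimates discussed earlier, not official disclosures.

```python
# Rough scaling of per-prompt water use to a daily total, assuming
# (hypothetically) that all 2.5 billion daily prompts went to one model.
# Per-prompt figures are the estimates derived above, not official data.

PROMPTS_PER_DAY = 2.5e9  # OpenAI's reported daily prompt volume

PER_PROMPT_ML = {
    "Gemini (median prompt)": 0.26,   # Google's reported figure
    "GPT-4o (medium prompt)": 3.5,
    "GPT-5 (medium prompt)": 39.0,
}

for model, ml in PER_PROMPT_ML.items():
    liters_per_day = ml * PROMPTS_PER_DAY / 1000  # 1,000 milliliters per liter
    print(f"{model}: about {liters_per_day:,.0f} liters per day")

# Prints roughly:
#   Gemini (median prompt): about 650,000 liters per day
#   GPT-4o (medium prompt): about 8,750,000 liters per day
#   GPT-5 (medium prompt): about 97,500,000 liters per day
```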
For comparison, Americans use about 34 billion liters per day watering residential lawns and gardens. One liter is about one-quarter of a gallon.
Generative AI does use water, but – at least for now – its daily totals are small compared with other common uses such as lawns, showers and laundry.
But its water demand is not fixed. Google’s disclosure shows what is possible when systems are optimized, with specialized chips, efficient cooling and smart workload management. Recycling water and locating data centers in cooler, wetter regions can help, too.
Transparency matters, as well: When companies release their data, the public, policymakers and researchers can see what is achievable and compare providers fairly.
Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. –Staff Editor
Political Bias Rating: Centrist
The content presents a balanced and fact-based analysis of the environmental impact of AI, specifically focusing on water usage. It relies on scientific studies, industry reports, and expert opinions without promoting a particular political agenda. The article acknowledges concerns about resource consumption while also highlighting technological innovations and practical solutions, aiming to inform readers rather than persuade them toward a partisan viewpoint. This neutral and informative approach aligns with a centrist perspective.
theconversation.com – David Kroll, Professor of Natural Products Pharmacology & Toxicology, University of Colorado Anschutz Medical Campus – 2025-08-29 07:41:00
David Bregger’s son, Daniel, died in 2021 after using kratom, an herbal supplement marketed as a natural anxiety remedy. Unaware of its risks, Daniel consumed a product containing 7-hydroxymitragynine (7-OH), a potent opioid-like chemical. Colorado’s new Daniel Bregger Act regulates kratom potency and restricts sales to adults, addressing deceptive practices around concentrated 7-OH products. While kratom powder has mild effects, concentrated extracts pose overdose risks, especially combined with other sedatives. Despite controversies, kratom shows promise for pain relief and opioid addiction treatment. Ongoing research aims to develop safer, effective medications from kratom compounds, balancing benefits and risks.
David Bregger had never heard of kratom before his son, Daniel, 33, died in Denver in 2021 from using what he thought was a natural and safe remedy for anxiety.
By his father’s account, Daniel didn’t know that the herbal product could kill him. The product listed no ingredients or safe-dosing information on the label. And it had no warning that it should not be combined with other sedating drugs, such as the over-the-counter antihistamine diphenhydramine, which is the active ingredient in Benadryl and other sleep aids.
As the fourth anniversary of Daniel’s death approaches, a recently enacted Colorado law aims to prevent other families from experiencing the heartbreak shared by the Bregger family. Colorado Senate Bill 25-072, known as the Daniel Bregger Act, addresses what the state legislature calls the deceptive trade practices around the sale of concentrated kratom products artificially enriched with a chemical called 7-OH.
The Daniel Bregger Act seeks to limit potency and underage access to kratom, an herbal supplement.
7-OH, short for 7-hydroxymitragynine, has also garnered national attention. On July 29, 2025, the U.S. Food and Drug Administration issued a warning that products containing 7-OH are potent opioids that can pose significant health risks, including death.
I study pharmaceutical sciences, have taught about herbal supplements like kratom for over 30 years, and have written about kratom’s effects and the controversy surrounding it.
Kratom – one name, many products
Kratom is a broad term used to describe products made from the leaves of a Southeast Asian tree known scientifically as Mitragyna speciosa. The Latin name derives from the shape of its leaves, which resemble a bishop’s miter, the ceremonial, pointed headdress worn by bishops and other church leaders.
Kratom is made from dried and powdered leaves that can be chewed or made into a tea. Used by rice field workers and farmers in Thailand to increase stamina and productivity, kratom initially alleviates fatigue with an effect like that of caffeine. In larger amounts, it imparts a sense of well-being similar to opioids.
In fact, mitragynine, which is found in small amounts in kratom, partially stimulates opioid receptors in the central nervous system. These are the same type of opioid receptors that trigger the effects of drugs such as morphine and oxycodone. They are also the same receptors that can slow or stop breathing when overstimulated.
In the body, the small amount of mitragynine in kratom powder is converted to 7-OH by liver enzymes, hence kratom’s opioid-like effects. 7-OH can also be made in a lab and is used to increase the potency of certain kratom products, including the ones found in gas stations or liquor stores.
And therein lies the controversy over the risks and benefits of kratom.
Natural or lab made: All medicines have risks
Because kratom is a plant-derived product, it has fallen into a murky enforcement area. It is sold as an herbal supplement, normally by the kilogram from online retailers overseas.
After the Drug Enforcement Administration announced a plan in 2016 to ban kratom by regulating it as a controlled substance, readers countered the DEA’s stance and sent me more than 200 messages that primarily documented their use of kratom as an alternative to opioids for pain.
Others described how kratom assisted them in recovery from addiction to alcohol or opioids themselves. Similar stories also flooded the official comments requested by the DEA, and the public pressure presumably led the agency to drop its plan to regulate kratom as a controlled substance.
Kratom is under growing scrutiny.
But not all of the stories pointed to kratom’s benefits. Instead, some people pointed out a major risk: becoming addicted to kratom itself. I learned it is a double-edged sword – remedy to some, recreational risk to others. A national survey of kratom users was consistent with my nonscientific sampling, showing more than half were using the supplement to relieve pain, stress, anxiety or a combination of these.
Natural leaf powder vs. artificially concentrated extracts
After the DEA dropped its 2016 plan to ban the leaf powder, marketers in the U.S. began isolating mitragynine and concentrating it into small bottles that could be taken like those energy shots of caffeine often sold in gas stations and convenience stores. This formula made it easier to ingest more kratom. Slowly, sellers learned they could make the more potent 7-OH from mitragynine and give their products an extra punch. And an extra dose of risk.
People who use kratom in powder form describe taking 3 to 5 grams, about a generous tablespoon. They put the powder in capsules or make it into a tea several times a day to ward off pain, cravings for alcohol or the withdrawal symptoms from long-term prescription opioid use. Since this form of kratom does not contain very much mitragynine – it makes up only about 1% of the powdered leaf – overdosing on the powder alone does not typically happen.
That, along with pushback from consumers, is why the Food and Drug Administration is proposing to restrict only the availability of 7-OH and not mitragynine or kratom powder. The new Colorado law limits the concentration of kratom ingredients in products and restricts their sales and marketing to consumers over 21.
Even David Bregger supports this distinction. “I’m not anti-kratom, I’m pro-regulation. What I’m after is getting nothing but leaf product,” he told WPRI in Rhode Island last year while demonstrating at a conference of the education and advocacy trade group the American Kratom Association.
Such lobbying of the trade group last year led the American Kratom Association to concur that 7-OH should be regulated as a Schedule I controlled substance. The association acknowledges that such regulation is reasonable and based in science.
Researchers have identified numerous other chemicals called alkaloids from kratom leaf specimens and commercial products. These researchers show that some types of kratom trees make unique chemicals, possibly opening the door to other painkillers. Researchers have also found that compounds from kratom, such as 7-OH, bind to opioid receptors in unique ways. The compounds seem to have an effect more toward pain management and away from potentially deadly suppression of breathing. Of course, this is when the compounds are used alone and not together with other sedating drugs.
Rather than contributing to the opioid crisis, researchers suspect that isolated and safely purified drugs made from kratom could be potential treatments for opioid addiction. In fact, some kratom chemicals such as mitragynine have multiple actions and could potentially replace both medication-assisted therapies such as buprenorphine for treating opioid addiction and drugs such as clonidine for managing opioid withdrawal symptoms.
Rigorous scientific study has led to this more reasonable juncture in the understanding of kratom and its sensible regulation. Sadly, we cannot bring back Daniel Bregger. But researchers can advance the potential for new and beneficial drugs while legislators help prevent such tragedies from befalling other families.
Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. –Staff Editor
Political Bias Rating: Centrist
This content maintains a balanced and evidence-based tone, presenting both the potential benefits and risks associated with kratom. It supports reasonable regulation rather than outright prohibition, acknowledging perspectives from regulatory agencies, scientific researchers, consumer advocates, and families affected by kratom-related incidents. The article refrains from partisan language or ideology and focuses on public health, safety, and scientific inquiry, typical of a centrist approach to policy discussions.
theconversation.com – Almut Winterstein, Distinguished Professor of Pharmaceutical Outcomes & Policy, University of Florida – 2025-08-28 07:03:00
In July 2025, an FDA panel questioned the safety of common antidepressants during pregnancy, highlighting the wider issue of limited knowledge about the risks of many medications used by pregnant women. Most drugs lack conclusive safety data, partly due to historical exclusion of pregnant women from clinical trials after the thalidomide tragedy in the 1960s. About 90% of FDA-approved drugs from 2010-2019 have no human pregnancy data. This knowledge gap causes many women to stop important treatments, risking harm to both mother and fetus. Despite some progress, ongoing cuts to NIH funding threaten research essential for safer maternal and child health outcomes.
More than 9 in 10 women take at least one medication during pregnancy, yet data on prescription drugs’ effects on the fetus are sparse. Adam Hester/Tetra images via Getty Images
A panel convened in July 2025 by the Food and Drug Administration sparked controversy by casting doubt about the safety of commonly used antidepressants during pregnancy. But it also raised the broader issue of how little is known about the safety of many medications used in pregnancy, considering the implications for both mother and child – and how understudied this topic is.
In the U.S., the average pregnant patient takes four prescription medications, and more than 9 in 10 patients take at least one. But most drugs lack conclusive evidence about their safety during pregnancy. About 1 in 5 women uses a medication during pregnancy that has some preliminary evidence that it could cause harm but for which conclusive studies are missing.
While progress has been slow, researchers and federal agencies have built monitoring systems, databases and tools to accelerate our understanding of medication safety. However, these efforts are now at risk due to ongoing cuts to medical research funding – and with them, so is the knowledge base for determining whether sticking with a therapy or discontinuing it offers the safest choice for both mother and child.
How pregnant women got sidelined
One big reason why so little is known about the effects of medications during pregnancy stretches back more than half a century. In the 1960s, a drug called thalidomide that was widely prescribed to treat morning sickness in pregnant women caused severe birth defects in over 10,000 children around the world. In response, in 1977 the FDA recommended excluding women of childbearing age from participating in early stage clinical trials testing new medications.
Thalidomide, sold under several brand names including Kevadon, was used in many countries to treat morning sickness, though the Food and Drug Administration never approved it for that purpose in the United States. U.S. Food and Drug Administration
Ethically, there is long-standing tension between concerns about fetal harm and maternal needs. Legal liability and added complexities when conducting studies in pregnant women serve as additional barriers for drug manufacturers.
When drugs are approved, studies about whether they might cause birth defects are typically done only in animals, and they often don’t translate well to humans. So when a new medication comes on the market, nothing is known about how it affects people during pregnancy. Even if animal studies or the medication’s mode of action raised concerns, the drug can still be approved, though companies may be required to conduct studies observing its effects when taken during pregnancy.
Cause and effect
Of 290 drugs approved by the FDA between 2010 and 2019, 90% have no human data on their risks or benefits for pregnant patients. About 80% of some 1,800 medications in a national database called TERIS, which summarizes evidence on medications’ risks during pregnancy, have little or no evidence about their risks for birth defects. Researchers have estimated that it takes 27 years to pin down whether a medication is safe to use in pregnancy.
As a result, many pregnant women stop treating their chronic diseases. In a U.S. study published in 2023, over one-third of women stopped taking a medication during pregnancy, and 36.5% of those did so without advice from a health care provider. More than half cited concerns about birth or developmental defects as the reason.
Yet uncontrolled chronic disease comes with its own toll on both the mother’s and the baby’s health. For example, some medications used to treat seizures are known to cause birth defects, but stopping them may increase seizures, which themselves raise the risk of fetal death.
Women with severe or recurrent depression who abruptly stop their antidepressants risk their depression returning, which is in turn associated with increased risk of substance use, inadequate prenatal care and other negative effects on fetal development. Stopping the use of medications for treating high blood pressure also causes adverse effects – specifically, a greater risk of pregnancy-related high blood pressure that can cause organ damage, called preeclampsia; a condition called placental abruption, when the placenta detaches from the wall of the uterus too early; preterm birth; and fetal growth restriction. An online resource called Mother to Baby, created by a network of experts on birth defects, provides an excellent summary of the available data on medication safety during pregnancy.
The FDA in some cases requires drug companies to establish registries to track the outcomes of pregnancies exposed to certain medications. These registries can be useful, but they have shortcomings. For example, recruiting pregnant patients into them takes time and considerable effort, resulting in small sample sizes that may not capture rare birth defects. Also, registries typically follow a single medication and rarely include comparisons to alternative treatment approaches – or to no treatment.
What’s more, following the 2022 Dobbs v. Jackson Supreme Court decision overturning the constitutional right to abortion, women might be reluctant to add their names to a pregnancy registry or to provide data on prenatal detection of birth defects due to concerns about privacy and legal risks.
Despite growing recognition of these gaps, little has changed. A 2025 review by the National Academies of Sciences, Engineering and Medicine pointed out that research funding for women’s health topics has remained flat over the past decade, while the overall budget of the National Institutes of Health has steadily increased. The review recommended doubling the NIH funding allocated for such research, but this seems unlikely in light of the recent proposals to cut the overall NIH budget by 40%.
The National Institute of Child Health and Human Development funds the bulk of research on the safety of medications during pregnancy across federal agencies, although the institute has an appreciably smaller budget than most of its sister institutes such as the National Cancer Institute. Grants awarded are typically broad and take four to five years to complete, but they allow the more comprehensive assessments that are needed to support informed decisions considering outcomes for mother and child. For example, NIH-funded researchers have established a clear link between autism and prenatal use of valproate, a potent teratogen used to treat epilepsy and several mental health disorders.
The Centers for Disease Control and Prevention as well as the FDA have also funded specific pregnancy-related research. For example, following the COVID-19 pandemic, the CDC renewed its funding for studies that help expedite pregnancy safety studies for treatments that might be used for newly emerging infections. In response to emerging concerns about a substance called gadolinium, which is often used during MRI procedures, the FDA funded our own study of almost 6,000 pregnant women, which found no elevated risk.
For healthy pregnancies, more research is critical
These efforts have laid a crucial foundation for evaluating medication safety and effectiveness during pregnancy. But keeping pace with the release of new medications and new ways they are used, as well as addressing the backlog of missing evidence for medications that were approved in the past millennium, remains a challenge.
In our view, removing or reducing ongoing investments in healthy pregnancies poses a danger to much-needed efforts to reduce excessive rates of stillbirths as well as infant and maternal deaths.
Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. –Staff Editor
Political Bias Rating: Center-Left
This content emphasizes the importance of scientific research and government funding for maternal and child health, highlighting concerns about budget cuts to agencies like the NIH, CDC, and FDA. It advocates for increased investment in public health research and points out the consequences of underfunding, which aligns with a center-left perspective that supports robust government involvement in healthcare and research. The article maintains a factual tone without strong partisan language, but its focus on the negative impact of funding cuts and the need for expanded research funding reflects a center-left leaning viewpoint.