The Conversation

Balancing kratom’s potential benefits and risks − new legislation in Colorado seeks to minimize harm

Published on theconversation.com – David Kroll, Professor of Natural Products Pharmacology & Toxicology, University of Colorado Anschutz Medical Campus – 2025-08-29 07:41:00


David Bregger’s son, Daniel, died in 2021 after using kratom, an herbal supplement marketed as a natural anxiety remedy. Unaware of its risks, Daniel consumed a product containing 7-hydroxymitragynine (7-OH), a potent opioid-like chemical. Colorado’s new Daniel Bregger Act regulates kratom potency and restricts sales to adults, addressing deceptive practices around concentrated 7-OH products. While kratom powder has mild effects, concentrated extracts pose overdose risks, especially combined with other sedatives. Despite controversies, kratom shows promise for pain relief and opioid addiction treatment. Ongoing research aims to develop safer, effective medications from kratom compounds, balancing benefits and risks.

Kratom, an herbal supplement, is now being regulated in Colorado.
AR30mm/iStock via Getty Images

David Kroll, University of Colorado Anschutz Medical Campus

David Bregger had never heard of kratom before his son, Daniel, 33, died in Denver in 2021 from using what he thought was a natural and safe remedy for anxiety.

By his father’s account, Daniel didn’t know that the herbal product could kill him. The product listed no ingredients or safe-dosing information on the label. And it had no warning that it should not be combined with other sedating drugs, such as the over-the-counter antihistamine diphenhydramine, which is the active ingredient in Benadryl and other sleep aids.

As the fourth anniversary of Daniel’s death approaches, a recently enacted Colorado law aims to prevent other families from experiencing the heartbreak shared by the Bregger family. Colorado Senate Bill 25-072, known as the Daniel Bregger Act, addresses what the state legislature calls the deceptive trade practices around the sale of concentrated kratom products artificially enriched with a chemical called 7-OH.

The Daniel Bregger Act seeks to limit potency and underage access to kratom, an herbal supplement.

7-OH, short for 7-hydroxymitragynine, has also garnered national attention. On July 29, 2025, the U.S. Food and Drug Administration issued a warning that products containing 7-OH are potent opioids that can pose significant health risks, including death.

As kratom and its constituents are studied in greater detail, the Centers for Disease Control and Prevention and university researchers have documented hundreds of deaths where kratom-derived chemicals were present in postmortem blood tests. But rarely is kratom deadly by itself. In a study of 551 kratom-related deaths in Florida, 93.5% involved other substances such as opioids like fentanyl.
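The Florida figure above implies that deaths involving kratom-derived chemicals alone are a small minority. A quick arithmetic check, using only the numbers cited in that study:

```python
# Quick arithmetic check on the Florida figures cited above:
# 551 kratom-related deaths, 93.5% of which involved other substances.
total_deaths = 551
polysubstance_share = 0.935

polysubstance = round(total_deaths * polysubstance_share)  # deaths involving other drugs
kratom_alone = total_deaths - polysubstance                # deaths without other substances

print(f"{polysubstance} involved other substances; roughly {kratom_alone} did not")
```

That is, on the order of 36 of the 551 deaths did not involve another substance, consistent with the article's point that kratom is rarely deadly by itself.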

I study pharmaceutical sciences and have taught about herbal supplements like kratom for more than 30 years. I have also written about kratom’s effects and the controversy surrounding it.

Kratom – one name, many products

Kratom is a broad term used to describe products made from the leaves of a Southeast Asian tree known scientifically as Mitragyna speciosa. The Latin name derives from the shape of its leaves, which resemble a bishop’s miter, the ceremonial, pointed headdress worn by bishops and other church leaders.

People report buying kratom powder from online retailers and putting it into capsules or making it into tea for consumption.
Everyday better to do everything you love/iStock via Getty Images

Kratom is made from dried and powdered leaves that can be chewed or made into a tea. Used by rice field workers and farmers in Thailand to increase stamina and productivity, kratom initially alleviates fatigue with an effect like that of caffeine. In larger amounts, it imparts a sense of well-being similar to opioids.

In fact, mitragynine, which is found in small amounts in kratom, partially stimulates opioid receptors in the central nervous system. These are the same type of opioid receptors that trigger the effects of drugs such as morphine and oxycodone. They are also the same receptors that can slow or stop breathing when overstimulated.

In the body, liver enzymes convert the small amount of mitragynine in kratom powder to 7-OH, which accounts for kratom’s opioid-like effects. 7-OH can also be made in a lab and is used to increase the potency of certain kratom products, including ones sold in gas stations or liquor stores.

And therein lies the controversy over the risks and benefits of kratom.

Natural or lab made: All medicines have risks

Because kratom is a plant-derived product, it has fallen into a murky enforcement area. It is sold as an herbal supplement, normally by the kilogram from online retailers overseas.

In 2016, I wrote a series of articles for Forbes as the Drug Enforcement Administration proposed to list kratom constituents on Schedule I, the most restrictive schedule of the Controlled Substances Act. This classification is reserved for drugs the DEA determines to possess “no currently accepted medical use and a high potential for abuse,” such as heroin and LSD.

But readers countered the DEA’s stance and sent me more than 200 messages that primarily documented their use of kratom as an alternative to opioids for pain.

Others described how kratom assisted them in recovery from addiction to alcohol or opioids themselves. Similar stories also flooded the official comments requested by the DEA, and the public pressure presumably led the agency to drop its plan to regulate kratom as a controlled substance.

Kratom is under growing scrutiny.

But not all of the stories pointed to kratom’s benefits. Instead, some people pointed out a major risk: becoming addicted to kratom itself. I learned it is a double-edged sword – remedy to some, recreational risk to others. A national survey of kratom users was consistent with my nonscientific sampling, showing more than half were using the supplement to relieve pain, stress, anxiety or a combination of these.

Natural leaf powder vs. artificially concentrated extracts

After the DEA dropped its 2016 plan to ban the leaf powder, marketers in the U.S. began isolating mitragynine and concentrating it into small bottles that could be taken like those energy shots of caffeine often sold in gas stations and convenience stores. This formula made it easier to ingest more kratom. Slowly, sellers learned they could make the more potent 7-OH from mitragynine and give their products an extra punch. And an extra dose of risk.

People who use kratom in powder form describe taking 3 to 5 grams, the size of a generous tablespoon, putting the powder in capsules or making it into a tea several times a day to ward off pain, cravings for alcohol, or the withdrawal symptoms of long-term prescription opioid use. Since this form of kratom does not contain very much mitragynine – it is only about 1% of the powdered leaf – overdosing on the powder alone does not typically happen.
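A sketch of the dose arithmetic behind that claim, using the figures in the paragraph above (3 to 5 grams of powder, about 1% mitragynine by weight):

```python
# Rough mitragynine content of a typical reported powder dose,
# using the figures above: 3-5 g of leaf powder at ~1% mitragynine.
MG_PER_GRAM = 1000
mitragynine_fraction = 0.01  # ~1% of the powdered leaf by weight

low_dose_mg = 3 * mitragynine_fraction * MG_PER_GRAM   # 3 g of powder
high_dose_mg = 5 * mitragynine_fraction * MG_PER_GRAM  # 5 g of powder
print(f"approx. {low_dose_mg:.0f}-{high_dose_mg:.0f} mg mitragynine per dose")
```

So a typical powder dose delivers only tens of milligrams of mitragynine, which concentrated 7-OH extracts can far exceed.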

That, along with pushback from consumers, is why the Food and Drug Administration is proposing to restrict only the availability of 7-OH and not mitragynine or kratom powder. The new Colorado law limits the concentration of kratom ingredients in products and restricts their sales and marketing to consumers over 21.

Even David Bregger supports this distinction. “I’m not anti-kratom, I’m pro-regulation. What I’m after is getting nothing but leaf product,” he told WPRI in Rhode Island last year while demonstrating at a conference of the education and advocacy trade group the American Kratom Association.

That lobbying led the American Kratom Association last year to concur that 7-OH should be regulated as a Schedule I controlled substance. The association acknowledges that such regulation is reasonable and based in science.

Benefits amid the ban

Despite the local and national debate over 7-OH, scientists are continuing to explore kratom compounds for their legitimate medical use.

A $3.5 million NIH grant is one of several that are increasing understanding of kratom as a source for new drugs.

Researchers have identified numerous other chemicals called alkaloids from kratom leaf specimens and commercial products. These researchers show that some types of kratom trees make unique chemicals, possibly opening the door to other painkillers. Researchers have also found that compounds from kratom, such as 7-OH, bind to opioid receptors in unique ways. The compounds seem to have an effect more toward pain management and away from potentially deadly suppression of breathing. Of course, this is when the compounds are used alone and not together with other sedating drugs.

Researchers suspect that, rather than contributing to the opioid crisis, isolated and safely purified drugs made from kratom could be potential treatments for opioid addiction. In fact, some kratom chemicals such as mitragynine have multiple actions and could potentially stand in both for medication-assisted therapies like buprenorphine in treating opioid addiction and for drugs like clonidine in easing opioid withdrawal symptoms.

Rigorous scientific study has led to this more reasonable juncture in the understanding of kratom and its sensible regulation. Sadly, we cannot bring back Daniel Bregger. But researchers can advance the potential for new and beneficial drugs while legislators help prevent such tragedies from befalling other families.

David Kroll, Professor of Natural Products Pharmacology & Toxicology, University of Colorado Anschutz Medical Campus

This article is republished from The Conversation under a Creative Commons license. Read the original article.



The post Balancing kratom’s potential benefits and risks − new legislation in Colorado seeks to minimize harm appeared first on theconversation.com



Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Centrist

This content maintains a balanced and evidence-based tone, presenting both the potential benefits and risks associated with kratom. It supports reasonable regulation rather than outright prohibition, acknowledging perspectives from regulatory agencies, scientific researchers, consumer advocates, and families affected by kratom-related incidents. The article refrains from partisan language or ideology and focuses on public health, safety, and scientific inquiry, typical of a centrist approach to policy discussions.

The Conversation

Scientific objectivity is a myth – cultural values and beliefs always influence science and the people who do it

Published on theconversation.com – Sara Giordano, Associate Professor of Interdisciplinary Studies, Kennesaw State University – 2025-09-04 07:53:00


The article explores the myth of scientific objectivity, showing how science is deeply intertwined with cultural values and social context. It challenges traditional views, such as the passive egg and active sperm narrative, revealing that scientific knowledge often reflects societal norms. Science emerged as a quest for objectivity within Western universities over centuries, but the strict division between subjective humanities and objective sciences is arbitrary and hierarchical. Scientists, being cultural beings, influence research choices and interpretations unconsciously. Contemporary controversies, like vaccine debates, highlight the impossibility of bias-free science. Instead, democratic, collaborative processes are advocated to align research with societal values, fostering more honest and inclusive scientific inquiry.

People are at the heart of the scientific enterprise.
Matteo Farinella, CC BY-NC

Sara Giordano, Kennesaw State University

Even if you don’t recall many facts from high school biology, you likely remember the cells required for making babies: egg and sperm. Maybe you can picture a swarm of sperm cells battling each other in a race to be the first to penetrate the egg.

For decades, scientific literature described human conception this way, with the cells mirroring the perceived roles of women and men in society. The egg was thought to be passive while the sperm was active.

The opening credits of the 1989 movie ‘Look Who’s Talking’ animated this popular narrative, with speaking sperm rushing toward the nonverbal egg to be the first to fertilize it.

Over time, scientists realized that sperm are too weak to penetrate the egg and that the union is more mutual, with the two cells working together. It’s no coincidence that these findings were made in the same era when new cultural ideas of more egalitarian gender roles were taking hold.

Scientist Ludwik Fleck is credited with first describing science as a cultural practice in the 1930s. Since then, understanding has continued to build that scientific knowledge is always consistent with the cultural norms of its time.

Despite these insights, across political differences, people strive for and continue to demand scientific objectivity: the idea that science should be unbiased, rational and separable from cultural values and beliefs.

When I entered my Ph.D. program in neuroscience in 2001, I felt the same way. But reading a book by biologist Anne Fausto-Sterling called “Sexing the Body” set me down a different path. It systematically debunked the idea of scientific objectivity, showing how cultural ideas about sex, gender and sexuality were inseparable from the scientific findings. By the time I earned my Ph.D., I began to look more holistically at my research, integrating the social, historical and political context.

From the questions scientists begin with, to the beliefs of the people who conduct the research, to choices in research design, to interpretation of the final results, cultural ideas constantly inform “the science.” What if an unbiased science is impossible?

Emergence of idea of scientific objectivity

Science grew to be synonymous with objectivity in the Western university system only over the past few hundred years.

In the 15th and 16th centuries, some Europeans gained traction in challenging the religiously ordained royal order. Consolidation of the university system led to shifts from trust in religious leaders interpreting the word of “god,” to trust in “man” making one’s own rational decisions, to trust in scientists interpreting “nature.” The university system became an important site for legitimizing claims through theories and studies.

Previously, people created knowledge about their world, but there were not strict boundaries between what are now called the humanities, such as history, English and philosophy, and the sciences, including biology, chemistry and physics. Over time, as questions arose about how to trust political decisions, people split the disciplines into categories: subjective versus objective. The splitting came with the creation of other binary oppositions, including the closely related emotionality/rationality divide. These categories were not simply seen as opposite, but in a hierarchy with objectivity and rationality as superior.

A closer look shows that these binary systems are arbitrary and self-reinforcing.

Science is a human endeavor

The sciences are fields of study conducted by humans. These people, called scientists, are part of cultural systems just like everyone else. We scientists are part of families and have political viewpoints. We watch the same movies and TV shows and listen to the same music as nonscientists. We read the same newspapers, cheer for the same sports teams and enjoy the same hobbies as others.

All of these obviously “cultural” parts of our lives are going to affect how scientists approach our jobs and what we consider “common sense” that does not get questioned when we do our experiments.

Beyond individual scientists, the kinds of studies that get conducted are based on what questions are deemed relevant or not by dominant societal norms.

For example, in my Ph.D. work in neuroscience, I saw how different assumptions about hierarchy could influence specific experiments and even the entire field. Neuroscience focuses on what is called the central nervous system. The name itself describes a hierarchical model, with one part of the body “in charge” of the rest. Even within the central nervous system, there was a conceptual hierarchy with the brain controlling the spinal cord.

My research looked more at what happened peripherally in muscles, but the predominant model had the brain at the top. The taken-for-granted idea that a system needs a boss mirrors cultural assumptions. But I realized we could have analyzed the system differently and asked different questions. Instead of placing the brain at the top, a different model could focus on how the entire system communicates and coordinates as a whole.

Every experiment also has assumptions baked in – things that are taken for granted, including definitions. Scientific experiments can become self-fulfilling prophecies.

For example, billions of dollars have been spent on trying to delineate sex differences. However, the definition of male and female is almost never stated in these research papers. At the same time, evidence mounts that these binary categories are a modern invention not based on clear physical differences.

But the categories are tested so many times that some differences are eventually discovered by chance, since the results of these many comparisons are rarely combined into a single statistical model. Oftentimes, so-called negative findings that don’t identify a significant difference are not even reported. Sometimes, meta-analyses based on multiple studies that investigated the same question reveal these statistical errors, as in the search for sex-related brain differences. Similar patterns of slippery definitions that end up reinforcing taken-for-granted assumptions happen with race, sexuality and other socially created categories of difference.
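The statistical point here can be illustrated with a small simulation (not from the article): when two groups are drawn from the very same distribution and compared many times, a predictable fraction of comparisons will look “significant” purely by chance.

```python
import random

# Illustrative sketch: repeatedly compare two groups drawn from the SAME
# distribution. At a nominal p < 0.05 threshold, about 5% of comparisons
# will appear "significant" even though no real difference exists.
random.seed(42)

def z_statistic(a, b):
    """Two-sample z-statistic for a difference in means (known variance 1)."""
    n = len(a)
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5  # standard error of the difference when variance is 1
    return mean_diff / se

n_tests = 1000
n_per_group = 50
false_positives = 0
for _ in range(n_tests):
    group_a = [random.gauss(0, 1) for _ in range(n_per_group)]
    group_b = [random.gauss(0, 1) for _ in range(n_per_group)]
    if abs(z_statistic(group_a, group_b)) > 1.96:  # nominal p < 0.05
        false_positives += 1

print(f"{false_positives} of {n_tests} identical-group comparisons look 'significant'")
```

Roughly 50 of the 1,000 comparisons come out “significant” despite both groups being identical by construction, which is why unreported negative findings and uncorrected multiple comparisons can manufacture apparent group differences.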

Finally, the end results of experiments can be interpreted in many different ways, adding another point where cultural values are injected into the final scientific conclusions.

Settling on science when there’s no objectivity

Vaccines. Abortion. Climate change. Sex categories. Science is at the center of most of today’s hottest political debates. While there is much disagreement, the desire to separate politics and science seems to be shared. On both sides of the political divide, there are accusations that the other side’s scientists cannot be trusted because of political bias.

It can be easier to spot built-in bias in scientific perspectives that conflict with your own values.
Jim Watson/AFP via Getty Images

Consider the recent controversy over the U.S. Centers for Disease Control and Prevention’s vaccine advisory panel. Secretary of Health and Human Services Robert F. Kennedy Jr. fired all members of the Advisory Committee on Immunization Practices, saying they were biased, while some Democratic lawmakers argued back that his move put in place those who would be biased in pushing his vaccine-skeptical agenda.

If removing all bias is impossible, then, how do people create knowledge that can be trusted?

The understanding that all knowledge is created through cultural processes does allow for two or more differing truths to coexist. You see this reality in action around many of today’s most controversial subjects. However, this does not mean you must believe all truths equally – that’s called total cultural relativism. This perspective ignores the need for people to come to decisions together about truth and reality.

Instead, critical scholars offer democratic processes for people to determine which values are important and for what purposes knowledge should be developed. For example, some of my work has focused on expanding a 1970s Dutch model of the science shop, where community groups come to university settings to share their concerns and needs to help determine research agendas. Other researchers have documented other collaborative practices between scientists and marginalized communities or policy changes, including processes for more interdisciplinary or democratic input, or both.

I argue a more accurate view of science is that pure objectivity is impossible. Once you leave the myth of objectivity behind, though, the way forward is not simple. Instead of a belief in an all-knowing science, we are faced with the reality that humans are responsible for what is researched, how it is researched and what conclusions are drawn from such research.

With this knowledge, we have the opportunity to intentionally set societal values that inform scientific investigations. This requires decisions about how people come to agreements about these values. These agreements need not always be universal but instead can be dependent on the context of who and what a given study might affect. While not simple, using these insights, gained over decades of studying science from both within and outside, may force a more honest conversation between political positions.

Sara Giordano, Associate Professor of Interdisciplinary Studies, Kennesaw State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Alternative views on the relationship between science and culture.
Sara Giordano

Science is a human endeavor

The sciences are fields of study conducted by humans. These people, called scientists, are part of cultural systems just like everyone else. We scientists are part of families and have political viewpoints. We watch the same movies and TV shows and listen to the same music as nonscientists. We read the same newspapers, cheer for the same sports teams and enjoy the same hobbies as others.

All of these obviously “cultural” parts of our lives are going to affect how scientists approach our jobs and what we consider “common sense” that does not get questioned when we do our experiments.

Beyond individual scientists, the kinds of studies that get conducted are based on what questions are deemed relevant or not by dominant societal norms.

For example, in my Ph.D. work in neuroscience, I saw how different assumptions about hierarchy could influence specific experiments and even the entire field. Neuroscience focuses on what is called the central nervous system. The name itself describes a hierarchical model, with one part of the body “in charge” of the rest. Even within the central nervous system, there was a conceptual hierarchy with the brain controlling the spinal cord.

My research looked more at what happened peripherally in muscles, but the predominant model had the brain at the top. The taken-for-granted idea that a system needs a boss mirrors cultural assumptions. But I realized we could have analyzed the system differently and asked different questions. Instead of the brain being at the top, a different model could focus on how the entire system communicates and works together at coordination.

Every experiment also has assumptions baked in – things that are taken for granted, including definitions. Scientific experiments can become self-fulfilling prophecies.

For example, billions of dollars have been spent on trying to delineate sex differences. However, the definition of male and female is almost never stated in these research papers. At the same time, evidence mounts that these binary categories are a modern invention not based on clear physical differences.

But the categories are tested so many times that eventually some differences are discovered without putting these results into a statistical model together. Oftentimes, so-called negative findings that don’t identify a significant difference are not even reported. Sometimes, meta-analyses based on multiple studies that investigated the same question reveal these statistical errors, as in the search for sex-related brain differences. Similar patterns of slippery definitions that end up reinforcing taken-for-granted assumptions happen with race, sexuality and other socially created categories of difference.

Finally, the end results of experiments can be interpreted in many different ways, adding another point where cultural values are injected into the final scientific conclusions.

Settling on science when there’s no objectivity

Vaccines. Abortion. Climate change. Sex categories. Science is at the center of most of today’s hottest political debates. While there is much disagreement, the desire to separate politics and science seems to be shared. On both sides of the political divide, there are accusations that the other side’s scientists cannot be trusted because of political bias.

RFK Jr, Donald Trump and Dr. Oz seated at a table with flags behind them

It can be easier to spot built-in bias in scientific perspectives that conflict with your own values.
Jim Watson/AFP via Getty Images

Consider the recent controversy over the U.S. Centers for Disease Control and Prevention’s vaccine advisory panel. Secretary of Health and Human Services Robert F. Kennedy Jr. fired all members of the Advisory Committee on Immunization Practices, saying they were biased, while some Democratic lawmakers argued back that his move put in place those who would be biased in pushing his vaccine-skeptical agenda.

If removing all bias is impossible, then, how do people create knowledge that can be trusted?

The understanding that all knowledge is created through cultural processes does allow for two or more differing truths to coexist. You see this reality in action around many of today’s most controversial subjects. However, this does not mean you must believe all truths equally – that’s called total cultural relativism. This perspective ignores the need for people to come to decisions together about truth and reality.

Instead, critical scholars offer democratic processes for people to determine which values are important and for what purposes knowledge should be developed. For example, some of my work has focused on expanding a 1970s Dutch model of the science shop, where community groups come to university settings to share their concerns and needs to help determine research agendas. Other researchers have documented other collaborative practices between scientists and marginalized communities or policy changes, including processes for more interdisciplinary or democratic input, or both.

I argue that a more accurate view of science starts from the recognition that pure objectivity is impossible. Once you leave the myth of objectivity behind, though, the way forward is not simple. Instead of a belief in an all-knowing science, we are faced with the reality that humans are responsible for what is researched, how it is researched and what conclusions are drawn from such research.

With this knowledge, we have the opportunity to intentionally set societal values that inform scientific investigations. This requires decisions about how people come to agreements about these values. These agreements need not always be universal but instead can depend on the context of who and what a given study might affect. While not simple, these insights, gained over decades of studying science from both inside and outside, may force a more honest conversation between political positions.


The post Scientific objectivity is a myth – cultural values and beliefs always influence science and the people who do it appeared first on theconversation.com



Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Center-Left

The content emphasizes the influence of cultural and social values on scientific research, challenging the notion of pure scientific objectivity. It highlights themes such as gender equality, critiques of traditional hierarchies, and the social construction of categories like sex and race, which are commonly associated with progressive or center-left perspectives. While it acknowledges political divides and calls for democratic, inclusive approaches to science, the overall framing aligns with a center-left viewpoint that values social context and equity in knowledge production.


AI is transforming weather forecasting − and that could be a game changer for farmers around the world


theconversation.com – Paul Winters, Professor of Sustainable Development, University of Notre Dame – 2025-09-03 07:30:00


Climate change intensifies weather risks for farmers, affecting crop yields and incomes, especially in low- and middle-income countries lacking accurate forecasts due to costly traditional models. AI-powered weather forecasting offers a breakthrough by delivering accurate, localized predictions rapidly and inexpensively, using far less computational power than physics-based systems. Advanced AI models like Pangu-Weather and GraphCast now match or surpass traditional forecasts, enabling timely, high-resolution weather guidance on standard computers. To be effective, AI forecasts must be tailored to local agricultural needs and disseminated through accessible channels. Supported by organizations such as AIM for Scale, AI forecasting can empower developing countries to adapt farming practices and improve resilience amid climate change.

Weather forecasts help farmers figure out when to plant, where to use fertilizer and much more.
Maitreya Shah/Studio India

Paul Winters, University of Notre Dame and Amir Jina, University of Chicago

For farmers, every planting decision carries risks, and many of those risks are increasing with climate change. One of the most consequential is weather, which can damage crop yields and livelihoods. A delayed monsoon, for example, can force a rice farmer in South Asia to replant or switch crops altogether, losing both time and income.

Access to reliable, timely weather forecasts can help farmers prepare for the weeks ahead, find the best time to plant or determine how much fertilizer will be needed, resulting in better crop yields and lower costs.

Yet, in many low- and middle-income countries, accurate weather forecasts remain out of reach, limited by the high technology costs and infrastructure demands of traditional forecasting models.

A new wave of AI-powered weather forecasting models has the potential to change that.

A farmer in a field holds a dried out corn stalk.
A farmer holds dried-up maize stalks in his field in Zimbabwe on March 22, 2024. A drought had caused widespread water shortages and crop failures.
AP Photo/Tsvangirayi Mukwazhi

By using artificial intelligence, these models can deliver accurate, localized predictions at a fraction of the computational cost of conventional physics-based models. This makes it possible for national meteorological agencies in developing countries to provide farmers with the timely, localized information about changing rainfall patterns that the farmers need.

The challenge is getting this technology where it’s needed.

Why AI forecasting matters now

The physics-based weather prediction models used by major meteorological centers around the world are powerful but costly. They simulate atmospheric physics to forecast weather conditions ahead, but they require expensive computing infrastructure. The cost puts them out of reach for most developing countries.

Moreover, these models have mainly been developed by and optimized for northern countries. They tend to focus on temperate, high-income regions and pay less attention to the tropics, where many low- and middle-income countries are located.

A major shift in weather models began in 2022 as industry and university researchers developed deep learning models that could generate accurate short- and medium-range forecasts for locations around the globe up to two weeks ahead.

These models worked at speeds several orders of magnitude faster than physics-based models, and they could run on laptops instead of supercomputers. Newer models, such as Pangu-Weather and GraphCast, have matched or even outperformed leading physics-based systems for some predictions, such as temperature.

A woman in a red sari tosses pellets into a rice field.
A farmer distributes fertilizer in India.
EqualStock IN from Pexels

AI-driven models require dramatically less computing power than the traditional systems.

While physics-based systems may need thousands of CPU hours to run a single forecast cycle, modern AI models can do so in minutes on a single GPU once the model has been trained. This is because the computationally intensive work happens during training, when the model learns relationships in the climate system from data; once trained, the model can apply those learned relationships to produce a forecast without further extensive computation – that’s a major shortcut. In contrast, physics-based models must calculate the physics for each variable in each place and time for every forecast produced.

While training these models from physics-based model data does require significant upfront investment, once the AI is trained, the model can generate large ensemble forecasts — sets of multiple forecast runs — at a fraction of the computational cost of physics-based models.

Even the expensive step of training an AI weather model shows considerable computational savings. One study found that the early model FourCastNet could be trained in about an hour on a supercomputer, making its time to produce a forecast thousands of times faster than that of state-of-the-art physics-based models.

The result of all these advances: high-resolution forecasts globally within seconds on a single laptop or desktop computer.
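The cost asymmetry described above (expensive one-time training, cheap repeated inference, and therefore inexpensive ensembles) can be sketched with a toy example. All names and numbers here are hypothetical stand-ins, not any real weather model:

```python
import random

random.seed(0)

TARGET = 15.0  # hypothetical equilibrium temperature the toy system relaxes toward

def physics_forecast(state, steps=1000):
    """Stand-in for numerical integration: many small update steps per forecast."""
    for _ in range(steps):
        state = [x + 0.001 * (TARGET - x) for x in state]
    return state

# 'Weights' fixed by an expensive, one-time training phase (hypothetical value)
LEARNED_DAMPING = 0.63

def ai_forecast(state, noise=0.0):
    """Stand-in for trained-model inference: one cheap pass, no integration."""
    return [TARGET + LEARNED_DAMPING * (x - TARGET) + noise for x in state]

initial = [10.0, 20.0, 18.0]

single = physics_forecast(initial)  # 1,000 update steps for one forecast
# Ensemble: rerunning the cheap AI step with perturbed inputs samples uncertainty
ensemble = [ai_forecast(initial, noise=random.gauss(0, 0.5)) for _ in range(50)]
mean_first = sum(member[0] for member in ensemble) / len(ensemble)
print(round(mean_first, 1))
```

The point of the sketch is the ratio of work: the physics stand-in repeats its full computation for every forecast, while the AI stand-in does one cheap pass per ensemble member, which is why a 50-member ensemble costs little more than a single run.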

Research is also rapidly advancing to expand the use of AI for forecasts weeks to months ahead, which helps farmers in making planting choices. AI models are already being tested for improving extreme weather prediction, such as for extratropical cyclones and abnormal rainfall.

Tailoring forecasts for real-world decisions

While AI weather models offer impressive technical capabilities, they are not plug-and-play solutions. Their impact depends on how well they are calibrated to local weather, benchmarked against real-world agricultural conditions, and aligned with the actual decisions farmers need to make, such as what and when to plant, or when drought is likely.

To unlock its full potential, AI forecasting must be connected to the people whose decisions it’s meant to guide.

That’s why groups such as AIM for Scale, a collaboration we work with as researchers in public policy and sustainability, are helping governments to develop AI tools that meet real-world needs, including training users and tailoring forecasts to farmers’ needs. International development institutions and the World Meteorological Organization are also working to expand access to AI forecasting models in low- and middle-income countries.

A man sells grain in Dawanau International Market in Kano, Nigeria on July 14, 2023.
Many low-income countries in Africa face harsh effects from climate change, from severe droughts to unpredictable rain and flooding. The shocks worsen conflict and upend livelihoods.
AP Photo/Sunday Alamba

AI forecasts can be tailored to context-specific agricultural needs, such as identifying optimal planting windows, predicting dry spells or planning pest management. Disseminating those forecasts through text messages, radio, extension agents or mobile apps can then help reach farmers who can benefit. This is especially true when the messages themselves are constantly tested and improved to ensure they meet the farmers’ needs.

A recent study in India found that when farmers there received more accurate monsoon forecasts, they made more informed decisions about what and how much to plant – or whether to plant at all – resulting in better investment outcomes and reduced risk.

A new era in climate adaptation

AI weather forecasting has reached a pivotal moment. Tools that were experimental just five years ago are now being integrated into government weather forecasting systems. But technology alone won’t change lives.

With support, low- and middle-income countries can build the capacity to generate, evaluate and act on their own forecasts, providing valuable information to farmers that has long been missing from weather services.

The Conversation

Paul Winters, Professor of Sustainable Development, University of Notre Dame and Amir Jina, Assistant Professor of Public Policy, University of Chicago

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The post AI is transforming weather forecasting − and that could be a game changer for farmers around the world appeared first on theconversation.com



Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Centrist

The content presents a factual and balanced discussion on the use of AI in weather forecasting to aid farmers, particularly in low- and middle-income countries. It emphasizes technological innovation, international collaboration, and practical benefits without promoting a specific political ideology. The focus on climate change and development is handled in a neutral, solution-oriented manner, reflecting a centrist perspective that values science and global cooperation.


What is AI slop? A technologist explains this new and largely unwelcome form of online content


theconversation.com – Adam Nemeroff, Assistant Provost for Innovations in Learning, Teaching, and Technology, Quinnipiac University – 2025-09-02 07:33:00


AI slop refers to low- to mid-quality content—images, videos, audio, text—generated quickly and cheaply by AI tools, often without accuracy. It floods social media and platforms like YouTube, Spotify, and Wikipedia, displacing higher-quality, human-created content. Examples include AI-generated bands, viral images, and videos that exploit internet attention economies for profit. AI slop harms artists by reducing job opportunities and spreads misinformation, as seen during Hurricane Helene with fake images used politically. Platforms struggle to moderate this content, threatening information reliability. Users can report or flag harmful AI slop, but it increasingly degrades the online media environment.

This AI-generated image spread far and wide in the wake of Hurricane Helene in 2024.
AI-generated image circulated on social media

Adam Nemeroff, Quinnipiac University

You’ve probably encountered images in your social media feeds that look like a cross between photographs and computer-generated graphics. Some are fantastical – think Shrimp Jesus – and some are believable at a quick glance – remember the little girl clutching a puppy in a boat during a flood?

These are examples of AI slop, low- to mid-quality content – video, images, audio, text or a mix – created with AI tools, often with little regard for accuracy. It’s fast, easy and inexpensive to make this content. AI slop producers typically place it on social media to exploit the economics of attention on the internet, displacing higher-quality material that could be more helpful.

AI slop has been increasing over the past few years. As the term “slop” indicates, that’s generally not good for people using the internet.

AI slop’s many forms

The Guardian published an analysis in July 2025 examining how AI slop is taking over YouTube’s fastest-growing channels. The journalists found that nine out of the top 100 fastest-growing channels feature AI-generated content like zombie football and cat soap operas.

This song, allegedly recorded by a band called The Velvet Sundown, was AI-generated.

Listening to Spotify? Be skeptical of that new band, The Velvet Sundown, which appeared on the streaming service with a creative backstory and derivative tracks. It’s AI-generated.

In many cases, people submit AI slop that’s just good enough to attract and keep users’ attention, allowing the submitter to profit from platforms that monetize streaming and view-based content.

The ease of generating content with AI enables people to submit low-quality articles to publications. Clarkesworld, an online science fiction magazine that accepts user submissions and pays contributors, stopped taking new submissions in 2024 because of the flood of AI-generated writing it was getting.

These aren’t the only places where this happens — even Wikipedia is dealing with AI-generated low-quality content that strains its entire community moderation system. If the organization is not successful in removing it, a key information resource people depend on is at risk.

This episode of ‘Last Week Tonight with John Oliver’ delves into AI slop. (NSFW)

Harms of AI slop

AI-driven slop is making its way upstream into people’s media diets as well. During Hurricane Helene, opponents of President Joe Biden cited AI-generated images of a displaced child clutching a puppy as evidence of the administration’s purported mishandling of the disaster response. Even when it’s apparent that content is AI-generated, it can still be used to spread misinformation by fooling some people who briefly glance at it.

AI slop also harms artists by causing job and financial losses and crowding out content made by real creators. The algorithms that drive social media consumption often don’t distinguish this lower-quality AI-generated content from human-made work, and it can displace entire classes of creators who previously made their livelihood from online content.

Wherever the option is enabled, you can flag or report content that’s harmful or problematic, and on some platforms you can add community notes to the content to provide context.

Along with forcing us to be on guard for deepfakes and “inauthentic” social media accounts, AI is now leading to piles of dreck degrading our media environment. At least there’s a catchy name for it.

The Conversation

Adam Nemeroff, Assistant Provost for Innovations in Learning, Teaching, and Technology, Quinnipiac University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The post What is AI slop? A technologist explains this new and largely unwelcome form of online content appeared first on theconversation.com



Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Centrist

The content presents a balanced and factual discussion about the rise of low-quality AI-generated content (“AI slop”) and its impacts on media, misinformation, and creators. It references examples involving both political figures and general media platforms without taking a partisan stance or promoting a specific political agenda. The focus is on the technological and social implications rather than ideological viewpoints, resulting in a centrist perspective.
