The Conversation

Experts alone can’t handle AI – social scientists explain why the public needs a seat at the table

Tech leaders like Alphabet CEO Sundar Pichai and OpenAI CEO Sam Altman, seen here entering the White House, are just one piece of the AI regulation puzzle.
AP Photo/Evan Vucci

Dietram A. Scheufele, University of Wisconsin-Madison; Dominique Brossard, University of Wisconsin-Madison, and Todd Newman, University of Wisconsin-Madison

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race between disinformation creation and detection? Or sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already emerging at a rate that overwhelms modern democracies’ ability to collectively work through those problems.

Broad public engagement, or the lack of it, has been a long-running challenge in assimilating emerging technologies, and it is key to tackling the problems they bring.

Ready or not, unintended consequences

Striking a balance between the awe-inspiring possibilities of emerging technologies like AI and the need for societies to think through both intended and unintended outcomes is not a new challenge. Almost 50 years ago, scientists and policymakers met in Pacific Grove, California, for what is often referred to as the Asilomar Conference to decide the future of recombinant DNA research, or transplanting genes from one organism into another. Public participation and input into their deliberations was minimal.

Societies are severely limited in their ability to anticipate and mitigate unintended consequences of rapidly emerging technologies like AI without good-faith engagement from broad cross-sections of public and expert stakeholders. And there are real downsides to limited participation. If Asilomar had sought such wide-ranging input 50 years ago, it is likely that the issues of cost and access would have shared the agenda with the science and the ethics of deploying the technology. If that had happened, the lack of affordability of recent CRISPR-based sickle cell treatments, for example, might’ve been avoided.

AI runs a very real risk of creating similar blind spots when it comes to intended and unintended consequences that will often not be obvious to elites like tech leaders and policymakers. If societies fail to ask “the right questions, the ones people care about,” science and technology studies scholar Sheila Jasanoff said in a 2021 interview, “then no matter what the science says, you wouldn’t be producing the right answers or options for society.”

Ethical debates should be central to efforts to regulate AI.

Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove for another small expert-only meeting, this time to outline principles for future AI research. Senator Chuck Schumer plans to hold the first of a series of AI Insight Forums on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.

Meanwhile, there is a hunger among the public for helping to shape our collective future. Only about a quarter of U.S. adults in our 2020 AI survey agreed that scientists should be able “to conduct their research without consulting the public” (27.8%). Two-thirds (64.6%) felt that “the public should have a say in how we apply scientific research and technology in society.”

The public’s desire for participation goes hand in hand with a widespread lack of trust in government and industry when it comes to shaping the development of AI. In a 2020 national survey by our team, fewer than one in 10 Americans indicated that they “mostly” or “very much” trusted Congress (8.5%) or Facebook (9.5%) to keep society’s best interest in mind in the development of AI.

Algorithmic bias is just one concern about artificial intelligence.

A healthy dose of skepticism?

The public’s deep mistrust of key regulatory and industry players is not entirely unwarranted. Industry leaders have had a hard time disentangling their commercial interests from efforts to develop an effective regulatory system for AI. This has led to a fundamentally messy policy environment.

Tech firms helping regulators think through the potential and complexities of technologies like AI is not always troublesome, especially if they are transparent about potential conflicts of interest. However, tech leaders’ input on technical questions about what AI can or might be used for is only a small piece of the regulatory puzzle.

Much more urgently, societies need to figure out what types of applications AI should be used for, and how. Answers to those questions can only emerge from public debates that engage a broad set of stakeholders about values, ethics and fairness. Meanwhile, the public is growing concerned about the use of AI.

AI might not wipe out humanity anytime soon, but it is likely to increasingly disrupt life as we currently know it. Societies have a finite window of opportunity to find ways to engage in good-faith debates and collaboratively work toward meaningful AI regulation to make sure that these challenges do not overwhelm them.

Dietram A. Scheufele, Professor of Life Sciences Communication, University of Wisconsin-Madison; Dominique Brossard, Professor and Chair of Life Sciences Communication, University of Wisconsin-Madison, and Todd Newman, Assistant Professor of Life Sciences Communication, University of Wisconsin-Madison

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation

How states are placing guardrails around AI in the absence of strong federal regulation

theconversation.com – Anjana Susarla, Professor of Information Systems, Michigan State University – 2025-08-06 07:55:00


U.S. states are actively regulating artificial intelligence (AI) amid minimal federal oversight, with all 50 states introducing AI-related bills in 2025. Key regulatory areas include government use of AI – especially to prevent biases in social services and criminal justice – and AI’s role in health care, focusing on transparency, consumer protection, insurers and clinicians. Facial recognition laws address privacy and racial bias concerns, with 15 states imposing restrictions and requiring bias reporting. Generative AI laws, like those in California and Utah, mandate disclosure about AI use and training data. Despite states’ efforts, the Trump administration has threatened to withhold AI-related federal funding from states whose regulations it deems burdensome, complicating oversight.

The California State Capitol has been the scene of numerous efforts to regulate AI.
AP Photo/Juliana Yamada

Anjana Susarla, Michigan State University

U.S. state legislatures are where the action is for placing guardrails around artificial intelligence technologies, given the lack of meaningful federal regulation. The resounding defeat in Congress of a proposed moratorium on state-level AI regulation means states are free to continue filling the gap.

Several states have already enacted legislation around the use of AI. All 50 states have introduced various AI-related legislation in 2025.

Four aspects of AI in particular stand out from a regulatory perspective: government use of AI, AI in health care, facial recognition and generative AI.

Government use of AI

The oversight and responsible use of AI are especially critical in the public sector. Predictive AI – AI that performs statistical analysis to make forecasts – has transformed many governmental functions, from determining social services eligibility to making recommendations on criminal justice sentencing and parole.

But the widespread use of algorithmic decision-making could have major hidden costs. Potential algorithmic harms posed by AI systems used for government services include racial and gender biases.

Recognizing the potential for algorithmic harms, state legislatures have introduced bills focused on public sector use of AI, with emphasis on transparency, consumer protections and recognizing risks of AI deployment.

Several states have required AI developers to disclose risks posed by their systems. The Colorado Artificial Intelligence Act includes transparency and disclosure requirements for developers of AI systems involved in making consequential decisions, as well as for those who deploy them.

Montana’s new “Right to Compute” law requires AI developers to adopt risk management frameworks – methods for addressing security and privacy in the development process – for AI systems involved in critical infrastructure. Some states have established bodies that provide oversight and regulatory authority, such as those specified in New York’s SB 8755 bill.

AI in health care

In the first half of 2025, 34 states introduced over 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers’ use of AI and clinicians’ use of AI.

Bills about transparency define requirements for the information that AI system developers and the organizations that deploy the systems must disclose.

Consumer protection bills aim to keep AI systems from unfairly discriminating against some people, and ensure that users of the systems have a way to contest decisions made using the technology.

Numerous bills in state legislatures aim to regulate the use of AI in health care, including medical devices like this electrocardiogram recorder.
VCG via Getty Images

Bills covering insurers provide oversight of the payers’ use of AI to make decisions about health care approvals and payments. And bills about clinical uses of AI regulate use of the technology in diagnosing and treating patients.

Facial recognition and surveillance

In the U.S., a long-standing legal doctrine that applies to privacy protection issues, including facial surveillance, is the protection of individual autonomy against interference from the government. In this context, facial recognition technologies pose significant privacy challenges as well as risks from potential biases.

Facial recognition software, commonly used in predictive policing and national security, has exhibited biases against people of color and consequently is often considered a threat to civil liberties. A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software was significantly less likely to correctly identify darker-skinned faces, posing particular risks for Black people and other historically disadvantaged minorities.

Bias also creeps into the data used to train these algorithms, for example when the teams that guide the development of such facial recognition software lack diversity.

By the end of 2024, 15 U.S. states had enacted laws to limit the potential harms from facial recognition. These state-level regulations include requirements that vendors publish bias test reports and data management practices, as well as mandates for human review in the use of these technologies.

Porcha Woodruff was wrongly arrested for a carjacking in 2023 based on facial recognition technology.
AP Photo/Carlos Osorio

Generative AI and foundation models

The widespread use of generative AI has also prompted concerns from lawmakers in many states. Utah’s Artificial Intelligence Policy Act requires individuals and organizations to clearly disclose that they’re using a generative AI system when the person they’re interacting with asks whether AI is being used. The legislature subsequently narrowed the scope to interactions that could involve dispensing advice or collecting sensitive information.

Last year, California passed AB 2013, a generative AI law that requires developers to post information on their websites about the data used to train their AI systems, including foundation models. Foundation models are any AI model that is trained on extremely large datasets and that can be adapted to a wide range of tasks without additional training.

AI developers have typically not been forthcoming about the training data they use. Such legislation could help copyright owners of content used in training AI overcome the lack of transparency.

Trying to fill the gap

In the absence of a comprehensive federal legislative framework, states have tried to address the gap by moving forward with their own legislative efforts. While such a patchwork of laws may complicate AI developers’ compliance efforts, I believe that states can provide important and needed oversight on privacy, civil rights and consumer protections.

Meanwhile, the Trump administration announced its AI Action Plan on July 23, 2025. The plan says “The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations … ”

The move could hinder state efforts to regulate AI if states have to weigh regulations that might run afoul of the administration’s definition of burdensome against needed federal funding for AI.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. – Staff Editor

Political Bias Rating: Center-Left

This article primarily offers a factual overview of AI regulation efforts at the state level, highlighting concerns around transparency, consumer protections, and biases in AI systems. It emphasizes the risks of algorithmic harms, especially relating to racial and gender biases, which aligns with issues often highlighted by progressive or center-left perspectives. The coverage of civil rights, privacy, and regulatory oversight reflects a cautious stance toward unchecked AI deployment. While it reports on federal actions from the Trump administration critically, the overall tone remains measured and policy-focused rather than overtly ideological, placing it slightly left-of-center.

The Conversation

2 spacecraft flew exactly in line to imitate a solar eclipse, capture a stunning image and test new tech

theconversation.com – Christopher Palma, Teaching Professor of Astronomy & Astrophysics, Penn State – 2025-08-04 07:41:00


During a solar eclipse, astronomers can study the Sun’s faint corona, usually hidden by the bright Sun. The European Space Agency’s Proba-3 mission creates artificial eclipses using two spacecraft flying in precise formation about 492 feet apart. One spacecraft blocks the Sun’s bright disk, casting a shadow on the second, which photographs the corona. Launched in 2024, Proba-3 orbits between 372 miles and 37,282 miles from Earth, maintaining alignment within one millimeter at high speeds. The mission aids future satellite technologies and studies space weather to improve forecasting of solar storms that affect Earth’s satellites.

The solar corona, as viewed by Proba-3’s ASPIICS coronagraph.
ESA/Proba-3/ASPIICS/WOW algorithm, CC BY-SA

Christopher Palma, Penn State

During a solar eclipse, astronomers who study heliophysics are able to study the Sun’s corona – its outer atmosphere – in ways they are unable to do at any other time.

The Sun’s disk is so bright that it overwhelms the faint light from the corona, leaving the corona invisible to most of the instruments astronomers use. The exception is when the Moon blocks the Sun, casting a shadow on the Earth during an eclipse. But as an astronomer, I know eclipses are rare, they last only a few minutes, and they are visible only on narrow paths across the Earth. So, researchers have to work hard to get their equipment to the right place to capture these short, infrequent events.

In their quest to learn more about the Sun, scientists at the European Space Agency have built and launched a new probe designed specifically to create artificial eclipses.

Meet Proba-3

This probe, called Proba-3, works just like a real solar eclipse. One spacecraft, which is roughly circular when viewed from the front, orbits closer to the Sun, and its job is to block the bright parts of the Sun, acting as the Moon would in a real eclipse. It casts a shadow on a second probe that has a camera capable of photographing the resulting artificial eclipse.

The two spacecraft of Proba-3 fly in precise formation about 492 feet (150 meters) apart.
ESA-P. Carril, CC BY-NC-ND

Having two separate spacecraft flying independently but in such a way that one casts a shadow on the other is a challenging task. But future missions depend on scientists figuring out how to make this precision choreography technology work, and so Proba-3 is a test.

This technology is helping to pave the way for future missions that could include satellites that dock with and deorbit dead satellites or powerful telescopes with instruments located far from their main mirrors.

The side benefit is that researchers get to practice by taking important scientific photos of the Sun’s corona, allowing them to learn more about the Sun at the same time.

An immense challenge

The two satellites launched in 2024 and entered orbits that approach Earth as close as 372 miles (600 kilometers) – that’s about 50% farther from Earth than the International Space Station – and reach more than 37,282 miles (60,000 km) at their most distant point, about one-sixth of the way to the Moon.

During this orbit, the satellites move at speeds between 5,400 miles per hour (8,690 kilometers per hour) and 79,200 mph (127,460 kph). At their slowest, they’re still moving fast enough to go from New York City to Philadelphia in one minute.

While flying at that speed, they can control themselves automatically, without a human guiding them, and fly 492 feet (150 meters) apart – a separation that is longer than the length of a typical football stadium – while still keeping their locations aligned to about one millimeter.
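
For readers who want to verify the article’s arithmetic, here is a minimal Python sketch. It treats the quoted mission figures as exact; the ~400 km ISS altitude and the roughly 90-mile New York City–Philadelphia distance are assumptions, not figures from the article.

# Sanity-checking the Proba-3 figures quoted above (a sketch, not mission data).

MPH_TO_KPH = 1.609344  # exact statute-mile conversion

slowest_mph = 5_400    # quoted speed at the orbit's far point
fastest_mph = 79_200   # quoted speed at the orbit's near point

# The quoted kph figures match the mph figures:
print(f"{slowest_mph * MPH_TO_KPH:,.0f} kph")   # 8,690 kph
print(f"{fastest_mph * MPH_TO_KPH:,.0f} kph")   # 127,460 kph

# Distance covered in one minute at the slowest speed:
print(f"{slowest_mph / 60:.0f} miles/minute")   # 90 -- roughly NYC to Philadelphia

# Closest approach (600 km) vs. the ISS altitude (~400 km, an assumption here):
print(f"{(600 - 400) / 400:.0%} farther out than the ISS")  # 50%

# Formation precision: 1 mm alignment over a 150 m separation:
print(f"1 part in {150 / 0.001:,.0f}")          # 1 part in 150,000

As the last line shows, holding a 1 millimeter alignment across 150 meters means controlling the spacecraft’s relative positions to about one part in 150,000 while both vehicles travel at thousands of miles per hour.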

They needed to maintain that precise flying pattern for hours in order to take a picture of the Sun’s corona, and they did it in June 2025.

The Proba-3 mission is also studying space weather by observing high-energy particles that the Sun ejects out into space, sometimes in the direction of the Earth. Space weather causes the aurora, also known as the northern lights, on Earth.

While the aurora is beautiful, solar storms can also harm Earth-orbiting satellites. The hope is that Proba-3 will help scientists continue learning about the Sun and better predict dangerous space weather events in time to protect sensitive satellites.

Christopher Palma, Teaching Professor of Astronomy & Astrophysics, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. – Staff Editor

Political Bias Rating: Centrist

The content is a factual and scientific discussion of the Proba-3 space mission and its efforts to study the Sun’s corona through artificial eclipses. It emphasizes technological achievement and scientific advancement without promoting any political ideology or taking a stance on politically charged issues. The tone is neutral, informative, and focused on space exploration and research, which aligns with a centrist, nonpartisan perspective.

The Conversation

Are you really allergic to penicillin? A pharmacist explains why there’s a good chance you’re not − and how you can find out for sure

theconversation.com – Elizabeth W. Covington, Associate Clinical Professor of Pharmacy, Auburn University – 2025-07-31 07:35:00


About 10–20% of Americans report a penicillin allergy, but fewer than 1% actually are allergic. Many people are labeled allergic due to childhood rashes or mild side effects, which are often unrelated to true allergies. Penicillin, discovered in 1928, is a narrow-spectrum antibiotic used to treat many infections safely and effectively. Incorrect allergy labels lead to use of broader, costlier antibiotics that promote resistance and may cause more side effects. Allergy status can be evaluated through detailed medical history and penicillin skin testing or monitored test dosing, allowing many to safely use penicillin again.

Penicillin is a substance produced by penicillium mold. About 80% of people with a penicillin allergy will lose the allergy after about 10 years.
Clouds Hill Imaging Ltd./Corbis Documentary via Getty Images

Elizabeth W. Covington, Auburn University

Imagine this: You’re at your doctor’s office with a sore throat. The nurse asks, “Any allergies?” And without hesitation you reply, “Penicillin.” It’s something you’ve said for years – maybe since childhood, maybe because a parent told you so. The nurse nods, makes a note and moves on.

But here’s the kicker: There’s a good chance you’re not actually allergic to penicillin. About 10% to 20% of Americans report that they have a penicillin allergy, yet fewer than 1% actually do.

I’m a clinical associate professor of pharmacy specializing in infectious disease. I study antibiotics and drug allergies, including ways to determine whether people have penicillin allergies.

I know from my research that incorrectly being labeled as allergic to penicillin can prevent you from getting the most appropriate, safest treatment for an infection. It can also put you at an increased risk of antimicrobial resistance, which is when an antibiotic no longer works against bacteria.

The good news? It’s gotten a lot easier in recent years to pin down the truth of the matter. More and more clinicians now recognize that many penicillin allergy labels are incorrect – and there are safe, simple ways to find out your actual allergy status.

A steadfast lifesaver

Penicillin, the first antibiotic drug, was discovered in 1928 when a physician named Alexander Fleming extracted it from a type of mold called penicillium. It became widely used to treat infections in the 1940s. Penicillin and closely related antibiotics such as amoxicillin and amoxicillin/clavulanate, which goes by the brand name Augmentin, are frequently prescribed to treat common infections such as ear infections, strep throat, urinary tract infections, pneumonia and dental infections.

Penicillin antibiotics are a class of narrow-spectrum antibiotics, which means they target specific types of bacteria. People who report having a penicillin allergy are more likely to receive broad-spectrum antibiotics. Broad-spectrum antibiotics kill many types of bacteria, including helpful ones, making it easier for resistant bacteria to survive and spread. This overuse speeds up the development of antibiotic resistance. Broad-spectrum antibiotics can also be less effective and are often costlier.

Figuring out whether you’re really allergic to penicillin is easier than it used to be.

Why the mismatch?

People often get labeled as allergic to antibiotics as children when they have a reaction such as a rash after taking one. But skin rashes frequently occur alongside infections in childhood, with many viruses and infections actually causing rashes. If a child is taking an antibiotic at the time, they may be labeled as allergic even though the rash may have been caused by the illness itself.

Some side effects such as nausea, diarrhea or headaches can happen with antibiotics, but they don’t always mean you are allergic. These common reactions usually go away on their own or can be managed. A doctor or pharmacist can talk to you about ways to reduce these side effects.

People also often assume penicillin allergies run in families, but having a relative with an allergy doesn’t mean you’re allergic – it’s not hereditary.

Finally, about 80% of patients with a true penicillin allergy will lose the allergy after about 10 years. That means even if you used to be allergic to this antibiotic, you might not be anymore, depending on the timing of your reaction.

Why does it matter if I have a penicillin allergy?

Believing you’re allergic to penicillin when you’re not can negatively affect your health. For one thing, you are more likely to receive stronger, broad-spectrum antibiotics that aren’t always the best fit and can have more side effects. You may also be more likely to get an infection after surgery and to spend longer in the hospital when hospitalized for an infection. What’s more, your medical bills could end up higher due to using more expensive drugs.

Penicillin and its close cousins are often the best tools doctors have to treat many infections. If you’re not truly allergic, figuring that out can open the door to safer, more effective and more affordable treatment options.

A penicillin skin test can safely determine whether you have a penicillin allergy, but a health care professional may also be able to tell by asking you some specific questions.
BSIP/Collection Mix: Subjects via Getty Images

How can I tell if I am really allergic to penicillin?

Start by talking to a health care professional such as a doctor or pharmacist. Allergy symptoms can range from a mild, self-limiting rash to severe facial swelling and trouble breathing. A health care professional may ask you several questions about your allergies, such as what happened, how soon after starting the antibiotic the reaction occurred, whether treatment was needed, and whether you’ve taken similar medications since then.

These questions can help distinguish between a true allergy and a nonallergic reaction. In many cases, this interview is enough to determine you aren’t allergic. But sometimes, further testing may be recommended.

One way to find out whether you’re really allergic to penicillin is through penicillin skin testing, which includes tiny skin pricks and small injections under the skin. These tests use components related to penicillin to safely check for a true allergy. If skin testing doesn’t cause a reaction, the next step is usually to take a small dose of amoxicillin while being monitored at your doctor’s office, just to be sure it’s safe.

A study published in 2023 showed that in many cases, skipping the skin test and going straight to the small test dose can also be a safe way to check for a true allergy. In this method, patients take a low dose of amoxicillin and are observed for about 30 minutes to see whether any reaction occurs.

With the right questions, testing and expertise, many people can safely reclaim penicillin as an option for treating common infections.

Elizabeth W. Covington, Associate Clinical Professor of Pharmacy, Auburn University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. – Staff Editor

Political Bias Rating: Centrist

This content is educational and focused on medical information, specifically on penicillin allergies and their impact on health care. It presents scientific research and clinical practices without promoting any political ideology or partisan perspective. The article emphasizes evidence-based medical facts and encourages discussion with health care professionals, maintaining a neutral and informative tone typical of centrist communication.
