The Conversation

Robots are coming to the kitchen − what that could mean for society and culture

Published on theconversation.com – Patrick Lin, Professor of Philosophy, California Polytechnic State University – 2024-08-29 07:49:27

Robotic kitchens aren’t on homemakers’ must-have lists yet, but they are starting to gain traction in restaurants.

Robert Michael/picture alliance via Getty Images

Patrick Lin, California Polytechnic State University

Automating food is unlike automating anything else. Food is fundamental to life – nourishing body and soul – so how it’s accessed, prepared and consumed can change societies fundamentally.

Automated kitchens aren’t sci-fi visions from “The Jetsons” or “Star Trek.” The technology is real and global. Right now, robots are used to flip burgers, fry chicken, create pizzas, make sushi, prepare salads, serve ramen, bake bread, mix cocktails and much more. AI can invent recipes based on the molecular compatibility of ingredients or whatever a kitchen has in stock. More advanced concepts are in the works to automate the entire kitchen for fine dining.
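
One published line of work behind such recipe systems, often called food-pairing analysis, scores ingredient combinations by how many flavor compounds they share. The Python sketch below is a minimal illustration of that scoring idea only: the compound sets are invented placeholders, not real chemistry data, and production systems draw on databases of hundreds of compounds per ingredient.

```python
from itertools import combinations

# Hypothetical flavor-compound sets. Real systems use databases of
# hundreds of compounds per ingredient; these are invented placeholders.
FLAVOR_COMPOUNDS = {
    "tomato":     {"furaneol", "hexanal", "geranial"},
    "strawberry": {"furaneol", "hexanal", "linalool"},
    "basil":      {"linalool", "estragole", "eugenol"},
    "beef":       {"hexanal", "pyrazine-x", "furan-y"},
}

def pairing_score(a: str, b: str) -> int:
    """Food-pairing heuristic: count the flavor compounds two ingredients share."""
    return len(FLAVOR_COMPOUNDS[a] & FLAVOR_COMPOUNDS[b])

def best_pairings(pantry: list[str]) -> list[tuple[str, str, int]]:
    """Rank every pair of pantry ingredients by shared-compound count, best first."""
    pairs = [(a, b, pairing_score(a, b)) for a, b in combinations(pantry, 2)]
    return sorted(pairs, key=lambda p: p[2], reverse=True)

if __name__ == "__main__":
    for a, b, score in best_pairings(list(FLAVOR_COMPOUNDS)):
        print(f"{a} + {b}: {score} shared compound(s)")
```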

Since technology tends to be expensive at first, the early adopters of AI kitchen technologies are restaurants and other businesses. Over time, prices are likely to fall enough for the home market, possibly changing both home and societal dynamics.

Can food technology really change society? Yes, just consider the seismic impact of the microwave oven. With that technology, it was suddenly possible to make a quick meal for just one person, which can be a benefit but also a social disruptor.

Familiar concerns about the technology include worse nutrition and health from prepackaged meals and microwave-heated plastic containers. Less obviously, that convenience can also transform eating from a communal, cultural and creative event into a utilitarian act of survival – altering relationships, traditions, how people work, the art of cooking and other facets of life for millions of people.

For instance, think about how different life might be without the microwave. Instead of working at your desk over a reheated lunch, you might have to venture out and talk to people, as well as enjoy a break from work. There’s something to be said for living more slowly in a society that’s increasingly frenetic and socially isolated.

Convenience can come at a great cost, so it’s vital to look ahead at the possible ethical and social disruptions that emerging technologies might bring, especially for a deeply human and cultural domain – food – that’s interwoven throughout daily life.

With funding from the U.S. National Science Foundation, my team at California Polytechnic State University is halfway into what we believe is the first study of the effects AI kitchens and robot cooks could have on diverse societies and cultures worldwide. We’ve mapped out three broad areas of benefits and risks to examine.

Video: You aren’t likely to have a robotic home kitchen anytime soon, but several companies are making them and marketing them to early adopters.

Creators and consumers

The benefits of AI kitchens include enabling chefs to be more creative, as well as eliminating repetitive, tedious tasks such as peeling potatoes or standing at a workstation for hours. The technology can free up time. Not having to cook means being able to spend more time with family or focus on more urgent tasks. For personalized eating, AI can cater to countless special diets, allergies and tastes on demand.
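
To make the personalization claim concrete, here is a minimal sketch of how a system might filter a recipe catalog against a diner’s allergens and dietary requirements. Everything here, from the Recipe class to the catalog entries and tag names, is a hypothetical illustration rather than any product’s actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Recipe:
    name: str
    ingredients: set[str]
    tags: set[str] = field(default_factory=set)  # e.g., "vegan", "low-sodium"

# Hypothetical catalog; a real system would hold thousands of entries.
CATALOG = [
    Recipe("peanut noodles", {"noodles", "peanut", "soy sauce"}, {"vegan"}),
    Recipe("miso salmon", {"salmon", "miso", "rice"}, {"low-sodium"}),
    Recipe("caprese salad", {"tomato", "mozzarella", "basil"}, {"vegetarian"}),
]

def suitable(recipe: Recipe, allergens: set[str], required_tags: set[str]) -> bool:
    """Reject any recipe containing an allergen or missing a required diet tag."""
    return recipe.ingredients.isdisjoint(allergens) and required_tags <= recipe.tags

def plan_meals(allergens: set[str], required_tags: set[str]) -> list[str]:
    return [r.name for r in CATALOG if suitable(r, allergens, required_tags)]

if __name__ == "__main__":
    # A diner with a peanut allergy who wants vegetarian meals.
    print(plan_meals(allergens={"peanut"}, required_tags={"vegetarian"}))
```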

However, there are also risks to human well-being. Cooking can be therapeutic and provides opportunities for many things: gratitude, learning, creativity, communication, adventure, self-expression, growth, independence, confidence and more, all of which may be lost if no one needs to cook. Family relationships could be affected if parents and children are no longer working alongside each other in the kitchen – a safe space to chat, in contrast to what can feel like an interrogation at the dining table.

The kitchen is also the science lab of the home, so science education could suffer. The alchemy of cooking involves teaching children and other learners about microbiology, physics, chemistry, materials science, math, cooking techniques and tools, food ingredients and their sourcing, human health and problem-solving. Not having to cook can erode these skills and knowledge.

Community and cultures

AI can help with experimentation and creativity, such as creating elaborate food presentations and novel recipes within the spirit of a culture. Just as AI and robotics help generate new scientific knowledge, they can increase understanding of, say, the properties of food ingredients, their interactions and cooking techniques, including new methods.

But there are risks to culture. For example, AI could bastardize traditional recipes and methods, since it is prone to stereotyping, such as flattening or oversimplifying cultural details and distinctions. This selection bias could lead to reduced diversity in the kinds of cuisine produced by AI and robot cooks. Technology developers could become gatekeepers for food innovation if the limits of their machines lead to homogeneity in cuisines and creativity, similar to the weirdly similar feel of AI art images across different apps.

Also, think about your favorite restaurants and favorite dinners. How might the character of those neighborhoods change with automated kitchens? Would it degrade your own gustatory experience if you knew those cooking for you weren’t your friends and family but instead were robots?

Robotic kitchens are beginning to show up in restaurants, particularly fast-food places.

CFOTO/Future Publishing via Getty Images

The hope with technology is that more jobs will be created than jobs lost. Even if there’s a net gain in jobs, the numbers hide the impact on real human lives. Many in the food service industry – one of the most popular occupations in any economy – could find themselves unable to learn new skills for a different job. Not everyone can be an AI developer or robot technician, and it’s far from clear that supervising a robot is a better job than cooking.

Philosophically, it’s still an open question whether AI is capable of genuine creativity, particularly if that implies inspiration and intuition. Assuming so may be the same mistake as thinking that a chatbot understands what it’s saying, instead of merely generating words that statistically follow the previous words. This has implications for aesthetics and authenticity in AI food, similar to ongoing debates about AI art and music.
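
That “statistically follow” point can be made concrete with a toy model. The sketch below trains a bigram table and generates text by sampling, for each word, one of the words previously observed to follow it; nothing in it represents meaning. Real chatbots use vastly larger neural networks, but the output is still a statistical continuation rather than understood speech.

```python
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict[str, list[str]]:
    """Record, for each word, every word observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows: dict[str, list[str]], start: str, length: int = 8) -> str:
    """Sample a continuation: each word is drawn from what followed the last."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))  # duplicates in the list weight the draw
    return " ".join(out)

if __name__ == "__main__":
    corpus = ("the chef cooks the meal the chef tastes the meal "
              "the robot cooks the meal the robot never tastes")
    model = train_bigrams(corpus)
    print(generate(model, "the"))
```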

Safety and responsibility

Because humans are a key disease vector, robot cooks can improve food safety. Precision trimming and other automation can reduce food waste, along with AI recipes that can make the fullest use of ingredients. Customized meals can be a benefit for nutrition and health, for example, in helping people avoid allergens and excess salt and sugar.

The technology is still emerging, so it’s unclear whether those benefits will be realized. Foodborne illness remains an open question: Will AI and robots be able to smell, taste or otherwise sense whether an ingredient is fresh, and perform other safety checks?

Physical safety is another issue. It’s important to ensure that a robot chef doesn’t accidentally cut, burn or crush someone because of a computer vision failure or other error. AI chatbots have been advising people to eat rocks, glue, gasoline and poisonous mushrooms, so it’s not a stretch to think that AI recipes could be flawed, too. Just as legal regimes are still struggling to sort out liability for autonomous vehicles, it may be similarly tricky to figure out liability for robot cooks, including when they are hacked.

Given the primacy of food, food technologies help shape society. The kitchen has a special place in homes, neighborhoods and cultures, so disrupting that venerable institution requires careful thinking to optimize benefits and reduce risks.

Patrick Lin, Professor of Philosophy, California Polytechnic State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Science requires ethical oversight – without federal dollars, society’s health and safety are at risk

Published on theconversation.com – Christine Coughlin, Professor of Law, Wake Forest University – 2025-05-09 07:51:00


Federal cuts to research funding under the Trump administration threaten both scientific progress and ethical oversight in biomedical research. The National Institutes of Health (NIH) has been pivotal in supporting innovations such as cancer treatments, but cuts and hiring freezes have led to suspended clinical trials and delayed studies. Ethical concerns surrounding emerging biotechnologies like brain organoids underscore the importance of federal research infrastructure in safeguarding scientific integrity. This oversight is vital to prevent exploitation, ensure voluntary consent, and protect participants from harm, maintaining global leadership in biomedical research. The article calls for continued support to sustain medical advancements and safeguard public health.

Brain organoids, pictured here, raise both many medical possibilities and ethical questions.
NIAID/Flickr, CC BY-SA

Christine Coughlin, Wake Forest University and Nancy M. P. King, Wake Forest University

As the Trump administration continues to make significant cuts to NIH budgets and personnel and to freeze billions of dollars of funding to major research universities – citing ideological concerns – there’s more being threatened than just progress in science and medicine. Something valuable but often overlooked is also being hit hard: preventing research abuse.

The National Institutes of Health has been the world’s largest public funder of biomedical research. Its support helps translate basic science into biomedical therapies and technologies, providing funding for nearly all treatments approved by the Food and Drug Administration from 2010 to 2019. This enables the U.S. to lead global research while maintaining transparency and preventing research misconduct.

While the legality of directives to shrink the NIH is unclear, the Trump administration’s actions have already led to suspended clinical trials, institutional hiring freezes and layoffs, rescinded graduate student admissions, and canceled federal grant review meetings. Researchers at affected universities say that the funding cuts will delay or possibly eliminate ongoing studies on critical conditions like cancer and Alzheimer’s.

Video: The Trump administration has deeply culled U.S. science across agencies and institutions.

It is clear to us, as legal and bioethics scholars whose research often focuses on the ethical, legal and social implications of emerging biotechnologies, that these directives will have profoundly negative consequences for medical research and human health, with ripple effects that will last decades. Our scholarship demonstrates that in order to contribute to knowledge and, ultimately, to biomedical treatments, medical research at every stage depends on significant infrastructure support and ethical oversight.

Our recent focus on brain organoid research – 3D lab models grown from human stem cells that simulate brain structure and function – shows how federal support for research is key not only to promoting innovation but also to protecting participants and future patients.

History of NIH and research ethics

The National Institutes of Health began as a one-room laboratory within the Marine Hospital Service in 1887. After World War I, chemists involved in the war effort sought to apply their knowledge to medicine. They partnered with Louisiana Sen. Joseph E. Ransdell who, motivated by the devastation of malaria, yellow fever and the 1928 influenza pandemic, introduced federal legislation to support basic research and fund fellowships focusing on solving medical problems.

By World War II, biomedical advances like surgical techniques and antibiotics had proved vital on the battlefield. Survival rates increased from 4% during World War I to 50% in World War II. Congress passed the 1944 Public Health Services Act to expand NIH’s authority to fund biomedical research at public and private institutions. President Franklin D. Roosevelt called it “as sound an investment as any Government can make; the dividends are payable in human life and health.”

As science advanced, so did the need for guardrails. After World War II, among the top Nazi leaders prosecuted for war crimes were physicians who conducted experiments on people without consent, such as exposure to hypothermia and infectious disease. The verdicts of these Doctors’ Trials included 10 points about ethical human research that became the Nuremberg Code, emphasizing voluntary consent to participation, societal benefit as the goal of human research, and significant limitations on permissible risks of harm. The World Medical Association established complementary international guidelines for physician-researchers in the 1964 Declaration of Helsinki.

At least 100 participants died in the Tuskegee Untreated Syphilis Study.
National Archives

In the 1970s, information about the Tuskegee study – a deceptive and unethical 40-year study of untreated syphilis in Black men – came to light. The researchers told study participants they would be given treatment but did not give them medication. They also prevented participants from accessing a cure when it became available in order to study the disease as it progressed. The men enrolled in the study experienced significant health problems, including blindness, mental impairment and death.

The public outrage that followed starkly demonstrated that the U.S. couldn’t simply rely on international guidelines but needed federal standards on research ethics. As a result, the National Research Act of 1974 led to the Belmont Report, which identified ethical principles essential to human research: respect for persons, beneficence and justice.

Federal regulations reinforced these principles by requiring all federally funded research to comply with rigorous ethical standards for human research. By prohibiting financial conflicts of interest and by implementing an independent ethics review process, new policies helped ensure that federally supported research has scientific and social value, is scientifically valid, selects participants fairly and protects them adequately.

These standards and recommendations guide both federally and nonfederally funded research today. The breadth of NIH’s mandate and budget has provided not only the essential structure for research oversight, but also key resources for ethics consultation and advice.

Brain organoids and the need for ethical inquiry

Biomedical research on cell and animal models requires extensive ethics oversight systems that complement those for human research. Our research on the ethical and policy issues of human brain organoid research provides a good example of the complexities of biomedical research and the infrastructure and oversight mechanisms necessary to support it.

Organoid research is increasing in importance as the FDA moves to expand the use of organoids as an alternative to animal testing of new drugs before they are administered to humans. Because these models can simulate brain structure and function, brain organoid research is integral to developing and testing potential treatments for brain diseases and conditions like Alzheimer’s, Parkinson’s and cancer. Brain organoids are also useful for personalized and regenerative medicine, artificial intelligence, brain-computer interfaces and other biotechnologies.

Brain organoids are built on knowledge about the fundamentals of biology that was developed primarily in universities receiving federal funding. Organoid technology began in 1907 with research on sponge cells, and continued in the 1980s with advances in stem cell research. Since researchers generated the first human organoid in 2009, the field has rapidly expanded.

Brain organoids have come a long way since their beginnings over a century ago.
Madeline Andrews, Arnold Kriegstein’s lab, UCSF, CC BY-ND

These advances were only possible through federally supported research infrastructure, which helps ensure the quality of all biomedical research. Indirect costs cover operational expenses necessary to maintain research safety and ethics, including utilities, administrative support, biohazard handling and regulatory compliance. In these ways, federally supported research infrastructure protects and promotes the scientific and ethical value of biotechnologies like brain organoids.

Brain organoid research requires significant scientific and ethical inquiry to safely reach its future potential. It raises moral and legal questions about donor consent, about the extent to which organoids should be grown and how they should be disposed of, and about consciousness and personhood. As science progresses, infrastructure for oversight can help ensure these ethical and societal issues are addressed.

New frontiers in scientific research

Since World War II, there has been bipartisan support for scientific innovation, in part because it is an economic and national security imperative. As Harvard University President Alan Garber recently wrote, “[n]ew frontiers beckon us with the prospect of life-changing advances. … For the government to retreat from these partnerships now risks not only the health and well-being of millions of individuals but also the economic security and vitality of our nation.”

Cuts to research overhead may seem like easy savings, but they fail to account for the infrastructure that provides essential support for scientific innovation. The investment the NIH has put into academic research is significantly paid forward, adding nearly US$95 billion to local economies in fiscal year 2024, or $2.46 for every $1 of grant funding. NIH funding also supported over 407,700 jobs that year.

President Donald Trump pledged to “unleash the power of American innovation” to battle brain-based diseases when he accepted his second Republican nomination for president. Around 6.7 million Americans live with Alzheimer’s, and over a million more suffer from Parkinson’s. Hundreds of thousands of Americans are diagnosed with aggressive brain cancers each year, and 20% of the population experiences varying forms of mental illness at any one time. These numbers are expected to grow considerably, possibly doubling by 2050.

Organoid research is just one of the essential components in the process of learning about the brain and using that knowledge to find better treatment for diseases affecting the brain.

Science benefits society only if it is rigorous, ethically conducted and fairly funded. Current NIH policy directives and steep cuts to the agency’s size and budget, along with attacks on universities, undermine globally shared goals of increasing understanding and improving human health.

The federal system of overseeing and funding biomedical science may need a scalpel, but to defund efforts based on “efficiency” is to wield a chainsaw.

Christine Coughlin, Professor of Law, Wake Forest University and Nancy M. P. King, Emeritus Professor of Social Sciences and Health Policy, Wake Forest University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Center-Left

This content reflects a center-left perspective by critically examining the Trump administration’s significant cuts to NIH funding and their potentially harmful effects on medical research and ethical oversight. The article emphasizes the importance of federal support for scientific innovation and ethical standards in biomedical research, portraying the administration’s actions as detrimental. While it acknowledges bipartisan support for science historically, it frames recent conservative-led policies as undermining scientific progress and public health. The tone and focus align with a viewpoint that supports government investment in science and regulation to protect ethical standards.


The Conversation

Contaminated milk from one plant in Illinois sickened thousands with Salmonella in 1985 − as outbreaks rise in the US, lessons from this one remain true

Published on theconversation.com – Michael Petros, Clinical Assistant Professor of Environmental and Occupational Health Sciences, University of Illinois Chicago – 2025-05-07 07:34:00

A valve that mixed raw milk with pasteurized milk at Hillfarm Dairy may have been the source of contamination. This was the milk processing area of the plant.
AP Photo/Mark Elias

Michael Petros, University of Illinois Chicago

In 1985, contaminated milk in Illinois led to a Salmonella outbreak that infected hundreds of thousands of people across the United States and caused at least 12 deaths. At the time, it was the largest single outbreak of foodborne illness in the U.S. and remains the worst outbreak of Salmonella food poisoning in American history.

Many questions circulated during the outbreak. How could this contamination occur in a modern dairy farm? Was it caused by a flaw in engineering or processing, or was this the result of deliberate sabotage? What roles, if any, did politics and failed leadership play?

From my 50 years of working in public health, I’ve found that reflecting on the past can help researchers and officials prepare for future challenges. Revisiting this investigation and its outcome provides lessons on how food safety inspections go hand in hand with consumer protection and public health, especially as hospitalizations and deaths from foodborne illnesses rise.

Contamination, investigation and intrigue

The Illinois Department of Public Health and the U.S. Centers for Disease Control and Prevention led the investigation into the outbreak. The public health laboratories of the city of Chicago and state of Illinois were also closely involved in testing milk samples.

Investigators and epidemiologists from local, state and federal public health agencies found that specific lots of milk with expiration dates up to April 17, 1985, were contaminated with Salmonella. The outbreak may have been caused by a valve at a processing plant that allowed pasteurized milk to mix with raw milk, which can carry several harmful microorganisms, including Salmonella.

Overall, labs and hospitals in Illinois and five other Midwest states – Indiana, Iowa, Michigan, Minnesota and Wisconsin – reported over 16,100 cases of suspected Salmonella poisoning to health officials.

To make dairy products, skimmed milk is usually separated from cream and then blended back in at different proportions to achieve the desired fat content. While most dairies pasteurize their products after blending, Hillfarm Dairy in Melrose Park, Illinois, pasteurized the milk first and then blended it into various products such as skim milk and 2% milk.

Subsequent examination of the production process suggested that Salmonella may have grown in the threads of a screw-on cap used to seal an end of a mixing pipe. Investigators also found this strain of Salmonella 10 months earlier in a much smaller outbreak in the Chicago area.

Salmonella is a common cause of food poisoning.
Volker Brinkmann/Max Planck Institute for Infection Biology via PLoS One, CC BY-SA

Finding the source

The contaminated milk was produced at Hillfarm Dairy in Melrose Park, which was operated at the time by Jewel Companies Inc. During an April 3 inspection of the company’s plant, the Food and Drug Administration found 13 health and safety violations.

The legal fallout of the outbreak expanded when the Illinois attorney general filed suit against Jewel Companies Inc., alleging that employees at as many as 18 stores in the grocery chain violated water pollution laws when they dumped potentially contaminated milk into storm sewers. Later, a Cook County judge found Jewel Companies Inc. in violation of the court order to preserve milk products suspected of contamination and maintain a record of what happened to milk returned to the Hillfarm Dairy.

Political fallout also ensued. The Illinois governor at the time, James Thompson, fired the director of the Illinois Department of Public Health when it was discovered that the director had been vacationing in Mexico at the onset of the outbreak and had failed to return to Illinois. Notably, the health director at the time of the outbreak was not a health professional. Following this episode, the governor appointed public health professional and medical doctor Bernard Turnock as director of the Illinois Department of Public Health.

In 1987, after a nine-month trial, a jury determined that Jewel officials did not act recklessly when Salmonella-tainted milk caused one of the largest food poisoning outbreaks in U.S. history. No punitive damages were awarded to victims, and the Illinois Appellate Court later upheld the jury’s decision.

Video: Raw milk is linked to many foodborne illnesses.

Lessons learned

History teaches more than facts, figures and incidents. It provides an opportunity to reflect on how to learn from past mistakes in order to adapt to future challenges. The largest Salmonella outbreak in the U.S. to date provides several lessons.

For one, disease surveillance is indispensable to preventing outbreaks, both then and now. People remain vulnerable to ubiquitous microorganisms such as Salmonella and E. coli, and early detection of an outbreak could stop it from spreading and getting worse.
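
As a concrete illustration of what early detection can look like, the sketch below implements one of the simplest surveillance heuristics: flag a reporting period whose case count exceeds the historical mean by a few standard deviations. The numbers are made up, and the aberration-detection methods health agencies actually use are considerably more sophisticated.

```python
from statistics import mean, stdev

def flag_aberration(history: list[int], current: int, z_threshold: float = 2.0) -> bool:
    """Flag the current count if it exceeds the historical mean
    by more than z_threshold standard deviations."""
    mu, sigma = mean(history), stdev(history)
    return current > mu + z_threshold * sigma

if __name__ == "__main__":
    # Hypothetical weekly Salmonella case counts for one jurisdiction.
    baseline_weeks = [12, 9, 15, 11, 10, 13, 12, 14]
    this_week = 41
    if flag_aberration(baseline_weeks, this_week):
        print("Aberration detected: investigate a possible outbreak")
```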

Additionally, food production facilities can maintain a safe food supply with careful design and monitoring. Revisiting consumer protections can help regulators keep pace with new threats from new or unfamiliar pathogens.

Finally, there is no substitute for professional public health leadership with the competence and expertise to respond effectively to an emergency.

Michael Petros, Clinical Assistant Professor of Environmental and Occupational Health Sciences, University of Illinois Chicago

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Centrist

The article provides an analytical, factual recounting of the 1985 Salmonella outbreak, with an emphasis on public health, safety standards, and lessons learned from past mistakes. It critiques the failures in leadership and oversight during the incident but avoids overt ideological framing. While it highlights political accountability, particularly the firing of a public health official and the appointment of a medical professional, it does so in a balanced manner without assigning blame to a specific political ideology. The content stays focused on the public health aspect and the importance of professional leadership, reflecting a centrist perspective in its delivery.


The Conversation

Predictive policing AI is on the rise − making it accountable to the public could curb its harmful effects

Published on theconversation.com – Maria Lungu, Postdoctoral Researcher of Law and Public Administration, University of Virginia – 2025-05-06 07:35:00

Data like this seven-day crime map from Oakland, Calif., feeds predictive policing AIs.
City of Oakland via CrimeMapping.com

Maria Lungu, University of Virginia

The 2002 sci-fi thriller “Minority Report” depicted a dystopian future where a specialized police unit was tasked with arresting people for crimes they had not yet committed. Directed by Steven Spielberg and based on a short story by Philip K. Dick, the drama revolved around “PreCrime” − a system informed by a trio of psychics, or “precogs,” who anticipated future homicides, allowing police officers to intervene and prevent would-be assailants from claiming their targets’ lives.

The film probes at hefty ethical questions: How can someone be guilty of a crime they haven’t yet committed? And what happens when the system gets it wrong?

While there is no such thing as an all-seeing “precog,” key components of the future that “Minority Report” envisioned have become reality even faster than its creators imagined. For more than a decade, police departments across the globe have been using data-driven systems geared toward predicting when and where crimes might occur and who might commit them.

Far from an abstract or futuristic conceit, predictive policing is a reality. And market analysts are predicting a boom for the technology.

Given the challenges of using predictive machine learning effectively and fairly, predictive policing raises significant ethical concerns. With no technological fix on the horizon, there is another approach to addressing these concerns: Treat government use of the technology as a matter of democratic accountability.

Troubling history

Predictive policing relies on artificial intelligence and data analytics to anticipate potential criminal activity before it happens. It can involve analyzing large datasets drawn from crime reports, arrest records and social or geographic information to identify patterns and forecast where crimes might occur or who may be involved.

Law enforcement agencies have used data analytics to track broad trends for many decades. Today’s powerful AI technologies, however, take in vast amounts of surveillance and crime report data to provide much finer-grained analysis.

Police departments use these techniques to help determine where they should concentrate their resources. Place-based prediction focuses on identifying high-risk locations, also known as hot spots, where crimes are statistically more likely to happen. Person-based prediction, by contrast, attempts to flag individuals who are considered at high risk of committing or becoming victims of crime.
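
As a rough sketch of place-based prediction, consider the simplest possible version: bin past incident reports into a geographic grid and rank cells by count. Real products layer time decay, covariates and machine-learned weights on top of this, but even the toy version makes one risk visible in the code itself: past report density is recycled as “risk,” so reporting bias feeds straight into the forecast. The coordinates below are hypothetical.

```python
from collections import Counter

def hotspots(incidents: list[tuple[float, float]], cell: float = 0.01, top: int = 3):
    """Bin incident coordinates into a lat/lon grid and rank cells by count.
    Note: this simply recycles past report density as 'risk', which is
    exactly how reporting bias gets baked into forecasts."""
    counts = Counter((round(lat // cell), round(lon // cell))
                     for lat, lon in incidents)
    return counts.most_common(top)

if __name__ == "__main__":
    # Hypothetical incident coordinates, not real crime data.
    reports = [(37.804, -122.271), (37.805, -122.272), (37.805, -122.271),
               (37.812, -122.240), (37.799, -122.265)]
    for (cell_lat, cell_lon), n in hotspots(reports):
        print(f"grid cell ({cell_lat}, {cell_lon}): {n} past incidents")
```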

These types of systems have been the subject of significant public concern. Under a so-called “intelligence-led policing” program in Pasco County, Florida, the sheriff’s department compiled a list of people considered likely to commit crimes and then repeatedly sent deputies to their homes. More than 1,000 Pasco residents, including minors, were subject to random visits from police officers and were cited for things such as missing mailbox numbers and overgrown grass.

Video: Lawsuits forced the Pasco County, Fla., Sheriff’s Office to end its troubled predictive policing program.

Four residents sued the county in 2021, and last year they reached a settlement in which the sheriff’s office admitted that it had violated residents’ constitutional rights to privacy and equal treatment under the law. The program has since been discontinued.

This is not just a Florida problem. In 2020, Chicago decommissioned its “Strategic Subject List,” a system in which police used analytics to predict which prior offenders were likely to commit new crimes or become victims of future shootings. In 2021, the Los Angeles Police Department discontinued its use of PredPol, a software program designed to forecast crime hot spots that was criticized for low accuracy rates and for reinforcing racial and socioeconomic biases.

Necessary innovations or dangerous overreach?

The failure of these high-profile programs highlights a critical tension: Even though law enforcement agencies often advocate for AI-driven tools for public safety, civil rights groups and scholars have raised concerns over privacy violations, accountability issues and the lack of transparency. And despite these high-profile retreats from predictive policing, many smaller police departments are using the technology.

Most American police departments lack clear policies on algorithmic decision-making and provide little to no disclosure about how the predictive models they use are developed, trained or monitored for accuracy or bias. A Brookings Institution analysis found that in many cities, local governments had no public documentation on how predictive policing software functioned, what data was used, or how outcomes were evaluated.

Video: Predictive policing can perpetuate racial bias.

This opacity is what’s known in the industry as a “black box.” It prevents independent oversight and raises serious questions about the structures surrounding AI-driven decision-making. If a citizen is flagged as high-risk by an algorithm, what recourse do they have? Who oversees the fairness of these systems? What independent oversight mechanisms are available?

These questions are driving contentious debates in communities about whether predictive policing as a method should be reformed, more tightly regulated or abandoned altogether. Some people view these tools as necessary innovations, while others see them as dangerous overreach.

A better way in San Jose

But there is evidence that data-driven tools grounded in democratic values of due process, transparency and accountability may offer a stronger alternative to today’s predictive policing systems. What if the public could understand how these algorithms function, what data they rely on, and what safeguards exist to prevent discriminatory outcomes and misuse of the technology?

The city of San Jose, California, has embarked on a process that is intended to increase transparency and accountability around its use of AI systems. San Jose maintains a set of AI principles requiring that any AI tools used by city government be effective, transparent to the public and equitable in their effects on people’s lives. City departments also are required to assess the risks of AI systems before integrating them into their operations.

If implemented correctly, these measures can effectively open the black box, dramatically reducing the degree to which AI companies can hide their code or their data behind protections such as trade secrets. Enabling public scrutiny of training data can reveal problems such as racial or economic bias, which can be mitigated but are extremely difficult, if not impossible, to eradicate.
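
One concrete form such scrutiny can take is a disparate-impact audit. The sketch below computes, from hypothetical audit records, the false positive rate per demographic group: of people who did not go on to reoffend, how many the tool flagged as high risk. The field names and data are illustrative assumptions, not any vendor’s real schema.

```python
from collections import defaultdict

def false_positive_rates(records: list[dict]) -> dict[str, float]:
    """FPR per group: of people who did NOT reoffend, how many were flagged?"""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for r in records:
        if not r["reoffended"]:            # ground-truth negatives only
            negatives[r["group"]] += 1
            flagged[r["group"]] += r["flagged_high_risk"]
    return {g: flagged[g] / n for g, n in negatives.items()}

if __name__ == "__main__":
    # Hypothetical audit records, not real data.
    audit = (
        [{"group": "A", "flagged_high_risk": 1, "reoffended": False}] * 30 +
        [{"group": "A", "flagged_high_risk": 0, "reoffended": False}] * 70 +
        [{"group": "B", "flagged_high_risk": 1, "reoffended": False}] * 10 +
        [{"group": "B", "flagged_high_risk": 0, "reoffended": False}] * 90
    )
    for group, fpr in false_positive_rates(audit).items():
        print(f"group {group}: false positive rate {fpr:.0%}")
```

A gap like the one above (30% versus 10%) is exactly the kind of pattern that independent oversight bodies look for when training data and outcomes are open to review.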

Research has shown that when citizens feel that government institutions act fairly and transparently, they are more likely to engage in civic life and support public policies. Law enforcement agencies are likely to have stronger outcomes if they treat technology as a tool – rather than a substitute – for justice.

Maria Lungu, Postdoctoral Researcher of Law and Public Administration, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Center-Left

The article provides an analysis of predictive policing, highlighting both the technological potential and ethical concerns surrounding its use. While it presents factual information, it leans towards caution and skepticism regarding the fairness, transparency, and potential racial biases of these systems. The framing of these issues, along with an emphasis on democratic accountability, transparency, and civil rights, aligns more closely with center-left perspectives that emphasize government oversight, civil liberties, and fairness. The critique of predictive policing technologies without overtly advocating for their abandonment reflects a balanced but cautious stance on technology’s role in law enforcement.
