
The Conversation

Taxpayers spend 22% more per patient to support Medicare Advantage – the private alternative to Medicare that promised to cost less

Published on theconversation.com – Grace McCormack, Postdoctoral researcher of Health Policy and Economics, University of Southern California – 2024-11-26 07:38:00

Grace McCormack, University of Southern California and Erin Duffy, University of Southern California

Medicare Advantage – the commercial alternative to traditional Medicare – is drawing down federal health care funds, costing taxpayers an extra 22% per enrollee to the tune of US$83 billion a year.

Medicare Advantage, also known as Part C, was supposed to save the government money. The competition among private insurance companies, and with traditional Medicare, to manage patient care was meant to give insurance companies an incentive to find efficiencies. Instead, the program’s payment rules overpay insurance companies on the taxpayer’s dime.

We are health care policy experts who study Medicare, including how the structure of the Medicare payment system, in the case of Medicare Advantage, works against taxpayers.

Medicare beneficiaries choose an insurance plan when they turn 65. Younger people can also become eligible for Medicare due to chronic conditions or disabilities. Beneficiaries have a variety of options, including the traditional Medicare program administered by the U.S. government, Medigap supplements to that program administered by private companies, and all-in-one Medicare Advantage plans administered by private companies.

Commercial Medicare Advantage plans are increasingly popular – over half of Medicare beneficiaries are enrolled in them, and this share continues to grow. People are attracted to these plans for their extra benefits and out-of-pocket spending limits. But due to a loophole in most states, enrolling in or switching to Medicare Advantage is effectively a one-way street. The Senate Finance Committee has also found that some plans have used deceptive, aggressive and potentially harmful sales and marketing tactics to increase enrollment.

Baked into the plan

Researchers have found that the overpayment to Medicare Advantage companies, which has grown over time, was, intentionally or not, baked into the Medicare Advantage payment system. Medicare Advantage plans are paid more for enrolling people who seem sicker, because these people typically use more care and so would be more expensive to cover in traditional Medicare.

However, differences in how people’s illnesses are recorded by Medicare Advantage plans cause enrollees to seem sicker and costlier on paper than they are in real life. This issue, alongside other adjustments to payments, leads to overpayment of insurance companies with taxpayer dollars.

Some of this extra money is spent to lower cost sharing, lower prescription drug premiums and increase supplemental benefits like vision and dental care. Though Medicare Advantage enrollees may like these benefits, funding them this way is expensive. For every extra dollar that taxpayers pay to Medicare Advantage companies, only roughly 50 to 60 cents goes to beneficiaries in the form of lower premiums or extra benefits.

As Medicare Advantage becomes increasingly expensive, the Medicare program continues to face funding challenges.

In our view, in order for Medicare to survive long term, Medicare Advantage reform is needed. The way the government pays the private insurers who administer Medicare Advantage plans, which may seem like a black box, is key to why the government overpays Medicare Advantage plans relative to traditional Medicare.

Paying Medicare Advantage

Private plans have been a part of the Medicare system since 1966 and have been paid through several different systems. They garnered only a very small share of enrollment until 2006.

The current Medicare Advantage payment system, implemented in 2006 and heavily reformed by the Affordable Care Act in 2010, had two policy goals. It was designed to encourage private plans to offer the same or better coverage than traditional Medicare at equal or lesser cost. And, to make sure beneficiaries would have multiple Medicare Advantage plans to choose from, the system was also designed to be profitable enough for insurers to entice them to offer multiple plans throughout the country.

To accomplish this, Medicare established benchmark estimates for each county. This benchmark calculation begins with an estimate of what the government-administered traditional Medicare plan would spend on the average county resident. This value is adjusted based on several factors, including enrollee location and plan quality ratings, to give each plan its own benchmark.

Medicare Advantage plans then submit bids, or estimates, of what they expect their plans to spend on the average county enrollee. If a plan’s spending estimate is above the benchmark, enrollees pay the difference as a Part C premium.

Most plans’ spending estimates are below the benchmark, however, meaning they project that the plans will provide coverage that is equivalent to traditional Medicare at a lower cost than the benchmark. These plans don’t charge patients a Part C premium. Instead, they receive a portion of the difference between their spending estimate and the benchmark as a rebate that they are supposed to pass on to their enrollees as extras, like reductions in cost-sharing, lower prescription drug premiums and supplemental benefits.

Finally, in a process known as risk adjustment, Medicare payments to Medicare Advantage health plans are adjusted based on the health of their enrollees. The plans are paid more for enrollees who seem sicker.
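To make the mechanics above concrete, here is a minimal sketch of the bid-versus-benchmark logic using made-up numbers. The dollar figures, the 1.1 risk score and the 65% rebate share are illustrative assumptions, not actual CMS parameters.

```python
# Simplified sketch of the Medicare Advantage payment flow described above.
# All figures are illustrative assumptions, not actual CMS parameters.

def ma_payment(benchmark, bid, risk_score, rebate_share=0.65):
    """Return (payment to the plan, enrollee Part C premium) for one enrollee."""
    risk_adjusted_benchmark = benchmark * risk_score

    if bid >= risk_adjusted_benchmark:
        # The plan expects to spend more than the benchmark: it is paid the
        # benchmark, and the enrollee pays the difference as a Part C premium.
        return risk_adjusted_benchmark, bid - risk_adjusted_benchmark

    # The plan bids below the benchmark: it is paid its bid plus a rebate,
    # a portion of the difference it is supposed to pass on as extra benefits.
    rebate = rebate_share * (risk_adjusted_benchmark - bid)
    return bid + rebate, 0.0

# Example: a $1,000 monthly benchmark, an enrollee who looks 10% costlier
# than average (risk score 1.1) and a plan bid of $950 per month.
payment, premium = ma_payment(benchmark=1000, bid=950, risk_score=1.1)
print(round(payment, 2), premium)  # 1047.5 0.0 -> the plan is paid more than its own bid
```

In this hypothetical, the plan bids below its risk-adjusted benchmark, so the enrollee pays no Part C premium and the plan collects its bid plus a rebate that it is supposed to return to enrollees as extra benefits.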


The government pays Medicare Advantage plans based on Medicare’s cost estimates for a given county. The benchmark is an estimate from the Centers for Medicare & Medicaid Services of what it would cost to cover an average county enrollee in traditional Medicare, plus adjustments including quartile payments and quality bonuses. The risk-adjusted benchmark also takes into consideration an enrollee’s health.

Samantha Randall at USC, CC BY-ND

Theory versus reality

In theory, this payment system should save the Medicare system money because the risk-adjusted benchmark that Medicare estimates for each plan should run, on average, equal to what Medicare would actually spend on a plan’s enrollees if they had enrolled in traditional Medicare instead.

In reality, the risk-adjusted benchmark estimates are far above traditional Medicare costs. This causes Medicare – really, taxpayers – to spend more for each person who is enrolled in Medicare Advantage than if that person had enrolled in traditional Medicare.

Why are payment estimates so high? There are two main culprits: benchmark modifications designed to encourage Medicare Advantage plan availability, and risk adjustments that overestimate how sick Medicare Advantage enrollees are.


High risk-adjusted benchmarks lead to overpayments from the government to the private companies that administer Medicare Advantage plans.

Samantha Randall at USC, CC BY-ND

Benchmark modifications

Since the current Medicare Advantage payment system started in 2006, policymakers’ modifications have made Medicare’s benchmark estimates less closely tied to what traditional Medicare would spend on each enrollee.

In 2012, as part of the Affordable Care Act, Medicare Advantage benchmark estimates received another layer: “quartile adjustments.” These made the benchmark estimates, and therefore payments to Medicare Advantage companies, higher in areas with low traditional Medicare spending and lower in areas with high traditional Medicare spending. This benchmark adjustment was meant to encourage more equitable access to Medicare Advantage options.

In that same year, Medicare Advantage plans also began receiving “quality bonus payments”: plans with higher “star ratings,” based on quality factors such as enrollee health outcomes and care for chronic conditions, receive larger bonuses.

However, research shows that ratings have not necessarily improved quality and may have exacerbated racial inequality.

Even before fully taking into account risk adjustment, recent estimates peg the benchmarks, on average, at 8% higher than average traditional Medicare spending. This means that a Medicare Advantage plan’s spending estimate could be below the benchmark and the plan would still get paid more for its enrollees than it would have cost the government to cover those same enrollees in traditional Medicare.
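A small back-of-the-envelope illustration of that point, using the 8% figure above and invented dollar amounts:

```python
# Illustration of the point above, with made-up dollar amounts.
traditional_medicare_cost = 1000              # what traditional Medicare would spend
benchmark = traditional_medicare_cost * 1.08  # benchmarks average roughly 8% higher
bid = 1050                                    # a plan bid below that benchmark...

assert bid < benchmark                  # the plan looks cheaper than its benchmark
assert bid > traditional_medicare_cost  # ...yet still costs more than traditional Medicare
```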

Overestimating enrollee sickness

The second major source of overpayment is health risk adjustment, which tends to overestimate how sick Medicare Advantage enrollees are.

Each year, Medicare studies traditional Medicare diagnoses, such as diabetes, depression and arthritis, to understand which have higher treatment costs. Medicare uses this information to adjust its payments for Medicare Advantage plans. Payments are lowered for plans with lower predicted costs based on diagnoses and raised for plans with higher predicted costs. This process is known as risk adjustment.

But there is a critical bias baked into risk adjustment. Medicare Advantage companies know that they’re paid more if their enrollees seem sicker, so they diligently make sure each enrollee has as many diagnoses recorded as possible.

This can include legal activities like reviewing enrollee charts to ensure that diagnoses are recorded accurately. It can also occasionally entail outright fraud, where charts are “upcoded” to include diagnoses that patients don’t actually have.

In traditional Medicare, most providers – the exception being Accountable Care Organizations – are not paid more for recording diagnoses. This difference means that the same beneficiary is likely to have fewer recorded diagnoses if they are enrolled in traditional Medicare rather than a private insurer’s Medicare Advantage plan. Policy experts refer to this phenomenon as a difference in “coding intensity” between Medicare Advantage and traditional Medicare.


The same person is likely to be documented with more illnesses if they enroll in Medicare Advantage rather than traditional Medicare – and cost taxpayers more money.

Samantha Randall at USC, CC BY-ND
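The toy calculation below shows how this coding difference feeds into payments. The condition weights are hypothetical stand-ins, not actual CMS risk-adjustment coefficients; the point is only that more recorded diagnoses mean a higher risk score and a larger payment for the same person.

```python
# Toy illustration of "coding intensity": the same person, with more diagnoses
# recorded, produces a higher risk score and therefore a higher payment.
# The weights below are hypothetical, not actual CMS risk-adjustment coefficients.

CONDITION_WEIGHTS = {"diabetes": 0.5, "depression": 0.25, "arthritis": 0.125}
BASE_RISK = 1.0
BENCHMARK = 1000  # illustrative monthly benchmark in dollars

def risk_score(recorded_diagnoses):
    return BASE_RISK + sum(CONDITION_WEIGHTS.get(d, 0.0) for d in recorded_diagnoses)

# Traditional Medicare: only the diagnoses that surface in routine care get recorded.
print(risk_score(["diabetes"]) * BENCHMARK)                              # 1500.0

# Medicare Advantage: chart reviews surface every codable diagnosis.
print(risk_score(["diabetes", "depression", "arthritis"]) * BENCHMARK)   # 1875.0
```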

In addition, Medicare Advantage plans often try to recruit beneficiaries whose health care costs will be lower than their diagnoses would predict, such as someone with a very mild form of arthritis. This is known as “favorable selection.”

The differences in coding and favorable selection make beneficiaries look sicker when they enroll in Medicare Advantage instead of traditional Medicare. This makes cost estimates higher than they should be. Research shows that this mismatch – and resulting overpayment – is likely only going to get worse as Medicare Advantage grows.

Where the money goes

Some of the excess payments to Medicare Advantage are returned to enrollees through extra benefits, funded by rebates. Extra benefits include cost-sharing reductions for medical care and prescription drugs, lower Part B and D premiums, and extra “supplemental benefits” like hearing aids and dental care that traditional Medicare doesn’t cover.

Medicare Advantage enrollees may enjoy these benefits, which can be seen as a reward for choosing plans that, unlike traditional Medicare, impose prior authorization requirements and limited provider networks.

However, according to some policy experts, the current means of funding these extra benefits is unnecessarily expensive and inequitable.

It also makes it difficult for traditional Medicare to compete with Medicare Advantage.

Traditional Medicare, which tends to cost the Medicare program less per enrollee, is only allowed to provide the standard Medicare benefits package. If its enrollees want dental coverage or hearing aids, they have to purchase these separately, alongside a Part D plan for prescription drugs and a Medigap plan to lower their deductibles and co-payments.


Medicare Advantage plans offer extras, but at a high cost to the Medicare system – and taxpayers. Only 50-60 cents of a dollar spent is returned to enrollees as decreased costs or increased benefits.

AP Photo/Pablo Martinez Monsivais

The system sets up Medicare Advantage plans to not only be overpaid but also be increasingly popular, all on the taxpayers’ dime. Plans heavily advertise to prospective enrollees who, once enrolled in Medicare Advantage, will likely have difficulty switching into traditional Medicare, even if they decide the extra benefits are not worth the prior authorization hassles and the limited provider networks. In contrast, traditional Medicare typically does not engage in as much direct advertising. The federal government only accounts for 7% of Medicare-related ads.

At the same time, some people who need more health care and are having trouble getting it through their Medicare Advantage plan – and are able to switch back to traditional Medicare – are doing so, according to an investigation by The Wall Street Journal. This leaves taxpayers to pick up the cost of these patients’ care just as their needs rise.

Where do we go from here?

Many researchers have proposed ways to reduce excess government spending on Medicare Advantage, including expanding risk adjustment audits, reducing or eliminating quality bonus payments or using more data to improve benchmark estimates of enrollee costs. Others have proposed even more fundamental reforms to the Medicare Advantage payment system, including changing the basis of plan payments so that Medicare Advantage plans will compete more with each other.

Reducing payments to plans may have to be traded off with reductions in plan benefits, though projections suggest the reductions would be modest.

There is a long-running debate over what type of coverage should be required under both traditional Medicare and Medicare Advantage. Recently, policy experts have advocated for introducing an out-of-pocket maximum to traditional Medicare. There have also been multiple unsuccessful efforts to make dental, vision, and hearing services part of the standard Medicare benefits package.

Although all older people require regular dental care and many of them require hearing aids, providing these benefits to everyone enrolled in traditional Medicare would not be cheap. One approach to providing these important benefits without significantly raising costs is to make these benefits means-tested. This would allow people with lower incomes to purchase them at a lower price than higher-income people. However, means-testing in Medicare can be controversial.

There is also debate over how much Medicare Advantage plans should be allowed to vary. The average Medicare beneficiary has over 40 Medicare Advantage plans to choose from, making it overwhelming to compare plans. For instance, right now, the average person eligible for Medicare would have to sift through the fine print of dozens of different plans to compare important factors, such as out-of-pocket maximums for medical care, coverage for dental cleanings, cost-sharing for inpatient stays, and provider networks.

Although millions of people are in suboptimal plans, 70% of people don’t even compare plans, let alone switch plans, during the annual enrollment period at the end of the year, likely because the process of comparing plans and switching is difficult, especially for older Americans.

MedPAC, a congressional advising committee, suggests that limiting variation in certain important benefits, like out-of-pocket maximums and dental, vision and hearing benefits, could help the plan selection process work better, while still allowing for flexibility in other benefits. The challenge is figuring out how to standardize without unduly reducing consumers’ options.

The Medicare Advantage program enrolls over half of Medicare beneficiaries. However, the $83-billion-per-year overpayment of plans, which amounts to more than 8% of Medicare’s total budget, is unsustainable. We believe the Medicare Advantage payment system needs a broad reform that aligns insurers’ incentives with the needs of Medicare beneficiaries and American taxpayers.

This article is part of an occasional series examining the U.S. Medicare system.

Past articles in the series:

Medicare vs. Medicare Advantage: sales pitches are often from biased sources, the choices can be overwhelming and impartial help is not equally available to all

Grace McCormack, Postdoctoral researcher of Health Policy and Economics, University of Southern California and Erin Duffy, Research Scientist and Director of Research Training in Health Policy and Economics, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Science requires ethical oversight – without federal dollars, society’s health and safety are at risk

Published on theconversation.com – Christine Coughlin, Professor of Law, Wake Forest University – 2025-05-09 07:51:00


Federal cuts to research funding under the Trump administration threaten both scientific progress and ethical oversight in biomedical research. The National Institutes of Health (NIH) has been pivotal in supporting innovations such as cancer treatments, but cuts and hiring freezes have led to suspended clinical trials and delayed studies. Ethical concerns surrounding emerging biotechnologies like brain organoids underscore the importance of federal research infrastructure in safeguarding scientific integrity. This oversight is vital to prevent exploitation, ensure voluntary consent, and protect participants from harm, maintaining global leadership in biomedical research. The article calls for continued support to sustain medical advancements and safeguard public health.

Brain organoids, pictured here, raise many medical possibilities as well as ethical questions.
NIAID/Flickr, CC BY-SA

Christine Coughlin, Wake Forest University and Nancy M. P. King, Wake Forest University

As the Trump administration continues to make significant cuts to NIH budgets and personnel and to freeze billions of dollars of funding to major research universities – citing ideological concerns – there’s more being threatened than just progress in science and medicine. Something valuable but often overlooked is also being hit hard: preventing research abuse.

The National Institutes of Health has been the world’s largest public funder of biomedical research. Its support helps translate basic science into biomedical therapies and technologies, providing funding for nearly all treatments approved by the Food and Drug Administration from 2010 to 2019. This enables the U.S. to lead global research while maintaining transparency and preventing research misconduct.

While the legality of directives to shrink the NIH is unclear, the Trump administration’s actions have already led to suspended clinical trials, institutional hiring freezes and layoffs, rescinded graduate student admissions, and canceled federal grant review meetings. Researchers at affected universities say that the funding cuts will delay or possibly eliminate ongoing studies on critical conditions like cancer and Alzheimer’s.

Video: The Trump administration has deeply culled U.S. science across agencies and institutions.

It is clear to us, as legal and bioethics scholars whose research often focuses on the ethical, legal and social implications of emerging biotechnologies, that these directives will have profoundly negative consequences for medical research and human health, with ripple effects that will last decades. Our scholarship demonstrates that in order to contribute to knowledge and, ultimately, to biomedical treatments, medical research at every stage depends on significant infrastructure support and ethical oversight.

Our recent focus on brain organoid research – 3D lab models grown from human stem cells that simulate brain structure and function – shows how federal support for research is key to not only promote innovation, but to protect participants and future patients.

History of NIH and research ethics

The National Institutes of Health began as a one-room laboratory within the Marine Hospital Service in 1887. After World War I, chemists involved in the war effort sought to apply their knowledge to medicine. They partnered with Louisiana Sen. Joseph E. Ransdell who, motivated by the devastation of malaria, yellow fever and the 1928 influenza pandemic, introduced federal legislation to support basic research and fund fellowships focusing on solving medical problems.

By World War II, biomedical advances like surgical techniques and antibiotics had proved vital on the battlefield. Survival rates increased from 4% during World War I to 50% in World War II. Congress passed the 1944 Public Health Services Act to expand NIH’s authority to fund biomedical research at public and private institutions. President Franklin D. Roosevelt called it “as sound an investment as any Government can make; the dividends are payable in human life and health.”

As science advanced, so did the need for guardrails. After World War II, among the top Nazi leaders prosecuted for war crimes were physicians who conducted experiments on people without consent, such as exposure to hypothermia and infectious disease. The verdicts of these Doctors’ Trials included 10 points about ethical human research that became the Nuremberg Code, emphasizing voluntary consent to participation, societal benefit as the goal of human research, and significant limitations on permissible risks of harm. The World Medical Association established complementary international guidelines for physician-researchers in the 1964 Declaration of Helsinki.

At least 100 participants died in the Tuskegee Untreated Syphilis Study.
National Archives

In the 1970s, information about the Tuskegee study – a deceptive and unethical 40-year study of untreated syphilis in Black men – came to light. The researchers told study participants they would be given treatment but did not give them medication. They also prevented participants from accessing a cure when it became available in order to study the disease as it progressed. The men enrolled in the study experienced significant health problems, including blindness, mental impairment and death.

The public outrage that followed starkly demonstrated that the U.S. couldn’t simply rely on international guidelines but needed federal standards on research ethics. As a result, the National Research Act of 1974 led to the Belmont Report, which identified ethical principles essential to human research: respect for persons, beneficence and justice.

Federal regulations reinforced these principles by requiring all federally funded research to comply with rigorous ethical standards for human research. By prohibiting financial conflicts of interest and by implementing an independent ethics review process, new policies helped ensure that federally supported research has scientific and social value, is scientifically valid, fairly selects and adequately protects participants.

These standards and recommendations guide both federally and nonfederally funded research today. The breadth of NIH’s mandate and budget has provided not only the essential structure for research oversight, but also key resources for ethics consultation and advice.

Brain organoids and the need for ethical inquiry

Biomedical research on cell and animal models requires extensive ethics oversight systems that complement those for human research. Our research on the ethical and policy issues of human brain organoid research provides a good example of the complexities of biomedical research and the infrastructure and oversight mechanisms necessary to support it.

Organoid research is increasing in importance, as the FDA wants to expand its use as an alternative to using animals to test new drugs before administering them to humans. Because these models can simulate brain structure and function, brain organoid research is integral to developing and testing potential treatments for brain diseases and conditions like Alzheimer’s, Parkinson’s and cancer. Brain organoids are also useful for personalized and regenerative medicine, artificial intelligence, brain-computer interfaces and other biotechnologies.

Brain organoids are built on knowledge about the fundamentals of biology that was developed primarily in universities receiving federal funding. Organoid technology began in 1907 with research on sponge cells, and continued in the 1980s with advances in stem cell research. Since researchers generated the first human organoid in 2009, the field has rapidly expanded.

Brain organoids have come a long way since their beginnings over a century ago.
Madeline Andrews, Arnold Kriegstein’s lab, UCSF, CC BY-ND

These advances were only possible through federally supported research infrastructure, which helps ensure the quality of all biomedical research. Indirect costs cover operational expenses necessary to maintain research safety and ethics, including utilities, administrative support, biohazard handling and regulatory compliance. In these ways, federally supported research infrastructure protects and promotes the scientific and ethical value of biotechnologies like brain organoids.

Brain organoid research requires significant scientific and ethical inquiry to safely reach its future potential. It raises potential moral and legal questions about donor consent, the extent to which organoids should be grown and how they should be disposed of, and consciousness and personhood. As science progresses, infrastructure for oversight can help ensure these ethical and societal issues are addressed.

New frontiers in scientific research

Since World War II, there has been bipartisan support for scientific innovation, in part because it is an economic and national security imperative. As Harvard University President Alan Garber recently wrote, “[n]ew frontiers beckon us with the prospect of life-changing advances. … For the government to retreat from these partnerships now risks not only the health and well-being of millions of individuals but also the economic security and vitality of our nation.”

Cuts to research overhead may seem like easy savings, but they fail to account for the infrastructure that provides essential support for scientific innovation. The investment the NIH has put into academic research is significantly paid forward, adding nearly US$95 billion to local economies in fiscal year 2024, or $2.46 for every $1 of grant funding. NIH funding also supported over 407,700 jobs that year.

President Donald Trump pledged to “unleash the power of American innovation” to battle brain-based diseases when he accepted his second Republican nomination for president. Around 6.7 million Americans live with Alzheimer’s, and over a million more suffer from Parkinson’s. Hundreds of thousands of Americans are diagnosed with aggressive brain cancers each year, and 20% of the population experiences varying forms of mental illness at any one time. These numbers are expected to grow considerably, possibly doubling by 2050.

Organoid research is just one of the essential components in the process of learning about the brain and using that knowledge to find better treatment for diseases affecting the brain.

Science benefits society only if it is rigorous, ethically conducted and fairly funded. Current NIH policy directives and steep cuts to the agency’s size and budget, along with attacks on universities, undermine globally shared goals of increasing understanding and improving human health.

The federal system of overseeing and funding biomedical science may need a scalpel, but to defund efforts based on “efficiency” is to wield a chainsaw.

Christine Coughlin, Professor of Law, Wake Forest University and Nancy M. P. King, Emeritus Professor of Social Sciences and Health Policy, Wake Forest University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. – Staff Editor

Political Bias Rating: Center-Left

This content reflects a center-left perspective by critically examining the Trump administration’s significant cuts to NIH funding and their potentially harmful effects on medical research and ethical oversight. The article emphasizes the importance of federal support for scientific innovation and ethical standards in biomedical research, portraying the administration’s actions as detrimental. While it acknowledges bipartisan support for science historically, it frames recent conservative-led policies as undermining scientific progress and public health. The tone and focus align with a viewpoint that supports government investment in science and regulation to protect ethical standards.


The Conversation

Contaminated milk from one plant in Illinois sickened thousands with Salmonella in 1985 − as outbreaks rise in the US, lessons from this one remain true

Published

on

theconversation.com – Michael Petros, Clinical Assistant Professor of Environmental and Occupational Health Sciences, University of Illinois Chicago – 2025-05-07 07:34:00

A valve that mixed raw milk with pasteurized milk at Hillfarm Dairy may have been the source of contamination. This was the milk processing area of the plant.
AP Photo/Mark Elias

Michael Petros, University of Illinois Chicago

In 1985, contaminated milk in Illinois led to a Salmonella outbreak that infected hundreds of thousands of people across the United States and caused at least 12 deaths. At the time, it was the largest single outbreak of foodborne illness in the U.S. and remains the worst outbreak of Salmonella food poisoning in American history.

Many questions circulated during the outbreak. How could this contamination occur in a modern dairy farm? Was it caused by a flaw in engineering or processing, or was this the result of deliberate sabotage? What roles, if any, did politics and failed leadership play?

From my 50 years of working in public health, I’ve found that reflecting on the past can help researchers and officials prepare for future challenges. Revisiting this investigation and its outcome provides lessons on how food safety inspections go hand in hand with consumer protection and public health, especially as hospitalizations and deaths from foodborne illnesses rise.

Contamination, investigation and intrigue

The Illinois Department of Public Health and the U.S. Centers for Disease Control and Prevention led the investigation into the outbreak. The public health laboratories of the city of Chicago and state of Illinois were also closely involved in testing milk samples.

Investigators and epidemiologists from local, state and federal public health agencies found that specific lots of milk with expiration dates up to April 17, 1985, were contaminated with Salmonella. The outbreak may have been caused by a valve at a processing plant that allowed pasteurized milk to mix with raw milk, which can carry several harmful microorganisms, including Salmonella.

Overall, labs and hospitals in Illinois and five other Midwest states – Indiana, Iowa, Michigan, Minnesota and Wisconsin – reported over 16,100 cases of suspected Salmonella poisoning to health officials.

To make dairy products, skimmed milk is usually separated from cream, then blended back together in different levels to achieve the desired fat content. While most dairies pasteurize their products after blending, Hillfarm Dairy in Melrose Park, Illinois, pasteurized the milk first before blending it into various products such as skim milk and 2% milk.

Subsequent examination of the production process suggested that Salmonella may have grown in the threads of a screw-on cap used to seal an end of a mixing pipe. Investigators also found this strain of Salmonella 10 months earlier in a much smaller outbreak in the Chicago area.

Salmonella is a common cause of food poisoning.
Volker Brinkmann/Max Planck Institute for Infection Biology via PLoS One, CC BY-SA

Finding the source

The contaminated milk was produced at Hillfarm Dairy in Melrose Park, which was operated at the time by Jewel Companies Inc. During an April 3 inspection of the company’s plant, the Food and Drug Administration found 13 health and safety violations.

The legal fallout of the outbreak expanded when the Illinois attorney general filed suit against Jewel Companies Inc., alleging that employees at as many as 18 stores in the grocery chain violated water pollution laws when they dumped potentially contaminated milk into storm sewers. Later, a Cook County judge found Jewel Companies Inc. in violation of the court order to preserve milk products suspected of contamination and maintain a record of what happened to milk returned to the Hillfarm Dairy.

Political fallout also ensued. The Illinois governor at the time, James Thompson, fired the director of the Illinois Public Health Department when it was discovered that he was vacationing in Mexico at the onset of the outbreak and failed to return to Illinois. Notably, the health director at the time of the outbreak was not a health professional. Following this episode, the governor appointed public health professional and medical doctor Bernard Turnock as director of the Illinois Department of Public Health.

In 1987, after a nine-month trial, a jury determined that Jewel officials did not act recklessly when Salmonella-tainted milk caused one of the largest food poisoning outbreaks in U.S. history. No punitive damages were awarded to victims, and the Illinois Appellate Court later upheld the jury’s decision.

Video: Raw milk is linked to many foodborne illnesses.

Lessons learned

History teaches more than facts, figures and incidents. It provides an opportunity to reflect on how to learn from past mistakes in order to adapt to future challenges. The largest Salmonella outbreak in the U.S. to date provides several lessons.

For one, disease surveillance is indispensable to preventing outbreaks, both then and now. People remain vulnerable to ubiquitous microorganisms such as Salmonella and E. coli, and early detection of an outbreak could stop it from spreading and getting worse.

Additionally, food production facilities can maintain a safe food supply with careful design and monitoring. Revisiting consumer protections can help regulators keep pace with new threats from new or unfamiliar pathogens.

Finally, there is no substitute for professional public health leadership with the competence and expertise to respond effectively to an emergency.

Michael Petros, Clinical Assistant Professor of Environmental and Occupational Health Sciences, University of Illinois Chicago

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. – Staff Editor

Political Bias Rating: Centrist

The article provides an analytical, factual recounting of the 1985 Salmonella outbreak, with an emphasis on public health, safety standards, and lessons learned from past mistakes. It critiques the failures in leadership and oversight during the incident but avoids overt ideological framing. While it highlights political accountability, particularly the firing of a public health official and the appointment of a medical professional, it does so in a balanced manner without assigning blame to a specific political ideology. The content stays focused on the public health aspect and the importance of professional leadership, reflecting a centrist perspective in its delivery.


The Conversation

Predictive policing AI is on the rise − making it accountable to the public could curb its harmful effects

Published

on

theconversation.com – Maria Lungu, Postdoctoral Researcher of Law and Public Administration, University of Virginia – 2025-05-06 07:35:00

Data like this seven-day crime map from Oakland, Calif., feeds predictive policing AIs.
City of Oakland via CrimeMapping.com

Maria Lungu, University of Virginia

The 2002 sci-fi thriller “Minority Report” depicted a dystopian future where a specialized police unit was tasked with arresting people for crimes they had not yet committed. Directed by Steven Spielberg and based on a short story by Philip K. Dick, the drama revolved around “PreCrime” − a system informed by a trio of psychics, or “precogs,” who anticipated future homicides, allowing police officers to intervene and prevent would-be assailants from claiming their targets’ lives.

The film probes at hefty ethical questions: How can someone be guilty of a crime they haven’t yet committed? And what happens when the system gets it wrong?

While there is no such thing as an all-seeing “precog,” key components of the future that “Minority Report” envisioned have become reality even faster than its creators imagined. For more than a decade, police departments across the globe have been using data-driven systems geared toward predicting when and where crimes might occur and who might commit them.

Far from an abstract or futuristic conceit, predictive policing is a reality. And market analysts are predicting a boom for the technology.

Given the challenges in using predictive machine learning effectively and fairly, predictive policing raises significant ethical concerns. With no technological fixes on the horizon, there is an approach to addressing these concerns: treat government use of the technology as a matter of democratic accountability.

Troubling history

Predictive policing relies on artificial intelligence and data analytics to anticipate potential criminal activity before it happens. It can involve analyzing large datasets drawn from crime reports, arrest records and social or geographic information to identify patterns and forecast where crimes might occur or who may be involved.

Law enforcement agencies have used data analytics to track broad trends for many decades. Today’s powerful AI technologies, however, take in vast amounts of surveillance and crime report data to provide much finer-grained analysis.

Police departments use these techniques to help determine where they should concentrate their resources. Place-based prediction focuses on identifying high-risk locations, also known as hot spots, where crimes are statistically more likely to happen. Person-based prediction, by contrast, attempts to flag individuals who are considered at high risk of committing or becoming victims of crime.
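To give a concrete, if greatly simplified, sense of what place-based prediction involves, the sketch below counts recent incidents per map grid cell and flags the busiest cells. Real systems use far richer data and models; the incident coordinates here are invented.

```python
# Toy sketch of place-based ("hot spot") prediction: count recent incidents
# per grid cell and flag the highest-count cells for extra attention.
# Real deployments use far richer features and models; this data is invented.
from collections import Counter

# (x, y) grid cell of each reported incident over some look-back window
incidents = [(3, 7), (3, 7), (3, 7), (5, 2), (5, 2), (8, 1)]

counts = Counter(incidents)
hot_spots = [cell for cell, count in counts.most_common(2)]
print(hot_spots)  # [(3, 7), (5, 2)] -> cells that would be flagged as hot spots
```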

These types of systems have been the subject of significant public concern. Under a so-called “intelligence-led policing” program in Pasco County, Florida, the sheriff’s department compiled a list of people considered likely to commit crimes and then repeatedly sent deputies to their homes. More than 1,000 Pasco residents, including minors, were subject to random visits from police officers and were cited for things such as missing mailbox numbers and overgrown grass.

Video: Lawsuits forced the Pasco County, Fla., Sheriff’s Office to end its troubled predictive policing program.

Four residents sued the county in 2021, and last year they reached a settlement in which the sheriff’s office admitted that it had violated residents’ constitutional rights to privacy and equal treatment under the law. The program has since been discontinued.

This is not just a Florida problem. In 2020, Chicago decommissioned its “Strategic Subject List,” a system where police used analytics to predict which prior offenders were likely to commit new crimes or become victims of future shootings. In 2021, the Los Angeles Police Department discontinued its use of PredPol, a software program designed to forecast crime hot spots that was criticized for low accuracy rates and for reinforcing racial and socioeconomic biases.

Necessary innovations or dangerous overreach?

The failure of these high-profile programs highlights a critical tension: Even though law enforcement agencies often advocate for AI-driven tools for public safety, civil rights groups and scholars have raised concerns over privacy violations, accountability issues and the lack of transparency. And despite these high-profile retreats from predictive policing, many smaller police departments are using the technology.

Most American police departments lack clear policies on algorithmic decision-making and provide little to no disclosure about how the predictive models they use are developed, trained or monitored for accuracy or bias. A Brookings Institution analysis found that in many cities, local governments had no public documentation on how predictive policing software functioned, what data was used, or how outcomes were evaluated.

Video: Predictive policing can perpetuate racial bias.

This opacity is what’s known in the industry as a “black box.” It prevents independent oversight and raises serious questions about the structures surrounding AI-driven decision-making. If a citizen is flagged as high-risk by an algorithm, what recourse do they have? Who oversees the fairness of these systems? What independent oversight mechanisms are available?

These questions are driving contentious debates in communities about whether predictive policing as a method should be reformed, more tightly regulated or abandoned altogether. Some people view these tools as necessary innovations, while others see them as dangerous overreach.

A better way in San Jose

But there is evidence that data-driven tools grounded in democratic values of due process, transparency and accountability may offer a stronger alternative to today’s predictive policing systems. What if the public could understand how these algorithms function, what data they rely on, and what safeguards exist to prevent discriminatory outcomes and misuse of the technology?

The city of San Jose, California, has embarked on a process that is intended to increase transparency and accountability around its use of AI systems. San Jose maintains a set of AI principles requiring that any AI tools used by city government be effective, transparent to the public and equitable in their effects on people’s lives. City departments also are required to assess the risks of AI systems before integrating them into their operations.

If implemented correctly, these measures can effectively open the black box, dramatically reducing the degree to which AI companies can hide their code or their data behind things such as protections for trade secrets. Enabling public scrutiny of training data can reveal problems such as racial or economic bias, which can be mitigated but are extremely difficult if not impossible to eradicate.

Research has shown that when citizens feel that government institutions act fairly and transparently, they are more likely to engage in civic life and support public policies. Law enforcement agencies are likely to have stronger outcomes if they treat technology as a tool – rather than a substitute – for justice.

Maria Lungu, Postdoctoral Researcher of Law and Public Administration, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. – Staff Editor

Political Bias Rating: Center-Left

The article provides an analysis of predictive policing, highlighting both the technological potential and ethical concerns surrounding its use. While it presents factual information, it leans towards caution and skepticism regarding the fairness, transparency, and potential racial biases of these systems. The framing of these issues, along with an emphasis on democratic accountability, transparency, and civil rights, aligns more closely with center-left perspectives that emphasize government oversight, civil liberties, and fairness. The critique of predictive policing technologies without overtly advocating for their abandonment reflects a balanced but cautious stance on technology’s role in law enforcement.
