The Conversation

Fake papers are contaminating the world’s scientific literature, fueling a corrupt industry and slowing legitimate lifesaving medical research

Published on theconversation.com – Frederik Joelving, Contributing editor, Retraction Watch – 2025-01-29 07:53:00

Assistant professor Frank Cackowski, left, and researcher Steven Zielske at Wayne State University in Detroit became suspicious of a paper on cancer research that was eventually retracted.

Amy Sacka, CC BY-ND

Frederik Joelving, Retraction Watch; Cyril Labbé, Université Grenoble Alpes (UGA), and Guillaume Cabanac, Institut de Recherche en Informatique de Toulouse

Over the past decade, furtive commercial entities around the world have industrialized the production, sale and dissemination of bogus scholarly research, undermining the literature that everyone from doctors to engineers relies on to make decisions about human lives.

It is exceedingly difficult to get a handle on exactly how big the problem is. Around 55,000 scholarly papers have been retracted to date, for a variety of reasons, but scientists and companies who screen the scientific literature for telltale signs of fraud estimate that there are many more fake papers circulating – possibly as many as several hundred thousand. This fake research can confound legitimate researchers who must wade through dense equations, evidence, images and methodologies only to find that they were made up.

Even when the bogus papers are spotted – usually by amateur sleuths on their own time – academic journals are often slow to retract the papers, allowing the articles to taint what many consider sacrosanct: the vast global library of scholarly work that introduces new ideas, reviews other research and discusses findings.

These fake papers are slowing down research that has delivered lifesaving medicines and therapies for illnesses from cancer to COVID-19. Analysts’ data shows that fields related to cancer and medicine are particularly hard hit, while areas like philosophy and art are less affected. Some scientists have abandoned their life’s work because they cannot keep pace with the number of fake papers they must bat down.

The problem reflects a worldwide commodification of science. Universities, and their research funders, have long used regular publication in academic journals as a requirement for promotions and job security, spawning the mantra “publish or perish.”

But now, fraudsters have infiltrated the academic publishing industry to prioritize profits over scholarship. Equipped with technological prowess, agility and vast networks of corrupt researchers, they are churning out papers on everything from obscure genes to artificial intelligence in medicine.

These papers are absorbed into the worldwide library of research faster than they can be weeded out. About 119,000 scholarly journal articles and conference papers are published globally every week, or more than 6 million a year. Publishers estimate that, at most journals, about 2% of the papers submitted – but not necessarily published – are likely fake, although this number can be much higher at some publications.

While no country is immune to this practice, it is particularly pronounced in emerging economies where resources to do bona fide science are limited – and where governments, eager to compete on a global scale, push particularly strong “publish or perish” incentives.

As a result, there is a bustling online underground economy for all things scholarly publishing. Authorship, citations, even academic journal editors, are up for sale. This fraud is so prevalent that it has its own name: paper mills, a phrase that harks back to “term-paper mills”, where students cheat by getting someone else to write a class paper for them.

The impact on publishers is profound. In high-profile cases, fake articles can hurt a journal’s bottom line. Important scientific indexes – databases of academic publications that many researchers rely on to do their work – may delist journals that publish too many compromised papers. There is growing criticism that legitimate publishers could do more to track and blacklist journals and authors who regularly publish fake papers that are sometimes little more than artificial intelligence-generated phrases strung together.

To better understand the scope, ramifications and potential solutions of this metastasizing assault on science, we – a contributing editor at Retraction Watch, a website that reports on retractions of scientific papers and related topics, and two computer scientists at France’s Université Toulouse III–Paul Sabatier and Université Grenoble Alpes who specialize in detecting bogus publications – spent six months investigating paper mills.

This included, by some of us at different times, trawling websites and social media posts and interviewing publishers, editors, research-integrity experts, scientists, doctors, sociologists and scientific sleuths engaged in the Sisyphean task of cleaning up the literature. It also involved, by some of us, screening scientific articles for signs of fakery.

What emerged is a deep-rooted crisis that has many researchers and policymakers calling for a new way for universities and many governments to evaluate and reward academics and health professionals across the globe.

Just as highly biased websites dressed up to look like objective reporting are gnawing away at evidence-based journalism and threatening elections, fake science is grinding down the knowledge base on which modern society rests.

As part of our work detecting these bogus publications, co-author Guillaume Cabanac developed the Problematic Paper Screener, which filters 130 million new and old scholarly papers every week looking for nine types of clues that a paper might be fake or contain errors. A key clue is a tortured phrase – an awkward wording generated by software that replaces common scientific terms with synonyms to avoid direct plagiarism from a legitimate paper.
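The tortured-phrase check can be illustrated with a minimal sketch: scan a text for known software-generated rewordings of standard scientific terms. This is the idea behind one of the screener's detectors, not its actual code, and the fingerprint list below is just a small sample of documented tortured phrases; the real tool draws on a much larger, curated list.

```python
# Minimal sketch of tortured-phrase detection -- the concept behind one of
# the Problematic Paper Screener's checks, not its actual implementation.
# The fingerprints below are a few documented examples of tortured phrases.
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "bosom peril": "breast cancer",
    "colossal information": "big data",
    "irregular woodland": "random forest",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in the text."""
    lowered = text.lower()
    return [(phrase, expected)
            for phrase, expected in TORTURED_PHRASES.items()
            if phrase in lowered]

# A hypothetical abstract with three telltale rewordings:
abstract = ("We apply profound learning and counterfeit consciousness "
            "to detect bosom peril in medical images.")
for phrase, expected in flag_tortured_phrases(abstract):
    print(f"suspect: '{phrase}' (likely rewording of '{expected}')")
```

A real screener must also handle inflected forms and phrases split across line breaks; plain substring matching is only the simplest version of the idea.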

Problematic Paper Screener: Trawling for fraud in the scientific literature

An obscure molecule

Frank Cackowski at Detroit’s Wayne State University was confused.

The oncologist was studying a sequence of chemical reactions in cells to see if they could be a target for drugs against prostate cancer. A 2018 paper in the American Journal of Cancer Research piqued his interest when he read that a little-known molecule called SNHG1 might interact with the chemical reactions he was exploring. He and fellow Wayne State researcher Steven Zielske began a series of experiments to learn more about the link. Surprisingly, they found there wasn’t a link.

Meanwhile, Zielske had grown suspicious of the paper. Two graphs showing results for different cell lines were identical, he noticed, which “would be like pouring water into two glasses with your eyes closed and the levels coming out exactly the same.” Another graph and a table in the article also inexplicably contained identical data.

Zielske described his misgivings in an anonymous post in 2020 at PubPeer, an online forum where many scientists report potential research misconduct, and also contacted the journal’s editor. Shortly thereafter, the journal pulled the paper, citing “falsified materials and/or data.”

“Science is hard enough as it is if people are actually being genuine and trying to do real work,” says Cackowski, who also works at the Karmanos Cancer Institute in Michigan. “And it’s just really frustrating to waste your time based on somebody’s fraudulent publications.”

Wayne State scientists Frank Cackowski and Steven Zielske carried out experiments based on a paper they later found to contain false data.

Amy Sacka, CC BY-ND

He worries that the bogus publications are slowing down “legitimate research that down the road is going to impact patient care and drug development.”

The two researchers eventually found that SNHG1 did appear to play a part in prostate cancer, though not in the way the suspect paper suggested. But it was a tough topic to study. Zielske combed through all the studies on SNHG1 and cancer – some 150 papers, nearly all from Chinese hospitals – and concluded that “a majority” of them looked fake. Some reported using experimental reagents known as primers that were “just gibberish,” for instance, or targeted a different gene than what the study said, according to Zielske. He contacted several of the journals, he said, but received little response. “I just stopped following up.”

The many questionable articles also made it harder to get funding, Zielske said. The first time he submitted a grant application to study SNHG1, it was rejected, with one reviewer saying “the field was crowded,” Zielske recalled. The following year, he explained in his application how most of the literature likely came from paper mills. He got the grant.

Today, Zielske said, he approaches new research differently than he used to: “You can’t just read an abstract and have any faith in it. I kind of assume everything’s wrong.”

Legitimate academic journals evaluate papers before they are published by having other researchers in the field carefully read them over. This peer review process is designed to stop flawed research from being disseminated, but is far from perfect.

Reviewers volunteer their time, typically assume research is real and so don’t look for signs of fraud. And some publishers may try to pick reviewers they deem more likely to accept papers, because rejecting a manuscript can mean losing out on thousands of dollars in publication fees.

“Even good, honest reviewers have become apathetic” because of “the volume of poor research coming through the system,” said Adam Day, who directs Clear Skies, a company in London that develops data-based methods to help spot falsified papers and academic journals. “Any editor can recount seeing reports where it’s obvious the reviewer hasn’t read the paper.”

With AI, they don’t have to: New research shows that many reviews are now written by ChatGPT and similar tools.

To expedite the publication of one another’s work, some corrupt scientists form peer review rings. Paper mills may even create fake peer reviewers impersonating real scientists to ensure their manuscripts make it through to publication. Others bribe editors or plant agents on journal editorial boards.

María de los Ángeles Oviedo-García, a professor of marketing at the University of Seville in Spain, spends her spare time hunting for suspect peer reviews from all areas of science, hundreds of which she has flagged on PubPeer. Some of these reviews are the length of a tweet, others ask authors to cite the reviewer’s work even if it has nothing to do with the science at hand, and many closely resemble other peer reviews for very different studies – evidence, in her eyes, of what she calls “review mills.”

PubPeer comment from María de los Ángeles Oviedo-García pointing out that a peer review report is very similar to two other reports. She also points out that authors and citations for all three are either anonymous or the same person – both hallmarks of fake papers.

Screen capture by The Conversation, CC BY-ND

“One of the demanding fights for me is to keep faith in science,” says Oviedo-García, who tells her students to look up papers on PubPeer before relying on them too heavily. Her research has been slowed down, she adds, because she now feels compelled to look for peer review reports for studies she uses in her work. Often there aren’t any, because “very few journals publish those review reports,” Oviedo-García says.

An ‘absolutely huge’ problem

It is unclear when paper mills began to operate at scale. The earliest article retracted due to suspected involvement of such agencies was published in 2004, according to the Retraction Watch Database, which contains details about tens of thousands of retractions. (The database is operated by The Center for Scientific Integrity, the parent nonprofit of Retraction Watch.) Nor is it clear exactly how many low-quality, plagiarized or made-up articles paper mills have spawned.

But the number is likely to be significant and growing, experts say. One Russia-linked paper mill in Latvia, for instance, claims on its website to have published “more than 12,650 articles” since 2012.

An analysis of 53,000 papers submitted to six publishers – but not necessarily published – found the proportion of suspect papers ranged from 2% to 46% across journals. And the American publisher Wiley, which has retracted more than 11,300 compromised articles and closed 19 heavily affected journals in its erstwhile Hindawi division, recently said its new paper-mill detection tool flags up to 1 in 7 submissions.

Day, of Clear Skies, estimates that as many as 2% of the several million scientific works published in 2022 were milled. Some fields are more problematic than others. The number is closer to 3% in biology and medicine, and in some subfields, like cancer, it may be much larger, according to Day. Despite increased awareness today, “I do not see any significant change in the trend,” he said. With improved methods of detection, “any estimate I put out now will be higher.”

The paper-mill problem is “absolutely huge,” said Sabina Alam, director of Publishing Ethics and Integrity at Taylor & Francis, a major academic publisher. In 2019, none of the 175 ethics cases that editors escalated to her team was about paper mills, Alam said. Ethics cases include submissions and already published papers. In 2023, “we had almost 4,000 cases,” she said. “And half of those were paper mills.”

Jennifer Byrne, an Australian scientist who now heads up a research group to improve the reliability of medical research, submitted testimony for a hearing of the U.S. House of Representatives’ Committee on Science, Space, and Technology in July 2022. She noted that 700, or nearly 6%, of 12,000 cancer research papers screened had errors that could signal paper mill involvement. Byrne shuttered her cancer research lab in 2017 because the genes she had spent two decades researching and writing about became the target of an enormous number of fake papers. A rogue scientist fudging data is one thing, she said, but a paper mill could churn out dozens of fake studies in the time it took her team to publish a single legitimate one.

“The threat of paper mills to scientific publishing and integrity has no parallel over my 30-year scientific career …. In the field of human gene science alone, the number of potentially fraudulent articles could exceed 100,000 original papers,” she wrote to lawmakers, adding, “This estimate may seem shocking but is likely to be conservative.”

In one area of genetics research – the study of noncoding RNA in different types of cancer – “We’re talking about more than 50% of papers published are from mills,” Byrne said. “It’s like swimming in garbage.”

In 2022, Byrne and colleagues, including two of us, found that suspect genetics research, despite not having an immediate impact on patient care, still informs the work of other scientists, including those running clinical trials. Publishers, however, are often slow to retract tainted papers, even when alerted to obvious signs of fraud. We found that 97% of the 712 problematic genetics research articles we identified remained uncorrected within the literature.

When retractions do happen, it is often thanks to the efforts of a small international community of amateur sleuths like Oviedo-García and those who post on PubPeer.

Jillian Goldfarb, an associate professor of chemical and biomolecular engineering at Cornell University and a former editor of the Elsevier journal Fuel, laments the publisher’s handling of the threat from paper mills.

“I was assessing upwards of 50 papers every day,” she said in an email interview. While she had technology to detect plagiarism, duplicate submissions and suspicious author changes, it was not enough. “It’s unreasonable to think that an editor – for whom this is not usually their full-time job – can catch these things reading 50 papers at a time. The time crunch, plus pressure from publishers to increase submission rates and citations and decrease review time, puts editors in an impossible situation.”

In October 2023, Goldfarb resigned from her position as editor of Fuel. In a LinkedIn post about her decision, she cited the company’s failure to move on dozens of potential paper-mill articles she had flagged; its hiring of a principal editor who reportedly “engaged in paper and citation milling”; and its proposal of candidates for editorial positions “with longer PubPeer profiles and more retractions than most people have articles on their CVs, and whose names appear as authors on papers-for-sale websites.”

“This tells me, our community, and the public, that they value article quantity and profit over science,” Goldfarb wrote.

In response to questions about Goldfarb’s resignation, an Elsevier spokesperson told The Conversation that it “takes all claims about research misconduct in our journals very seriously” and is investigating Goldfarb’s claims. The spokesperson added that Fuel’s editorial team has “been working to make other changes to the journal to benefit authors and readers.”

That’s not how it works, buddy

Business proposals had been piling up for years in the inbox of João de Deus Barreto Segundo, managing editor of six journals published by the Bahia School of Medicine and Public Health in Salvador, Brazil. Several came from suspect publishers on the prowl for new journals to add to their portfolios. Others came from academics suggesting fishy deals or offering bribes to publish their paper.

In one email from February 2024, an assistant professor of economics in Poland explained that he ran a company that worked with European universities. “Would you be interested in collaboration on the publication of scientific articles by scientists who collaborate with me?” Artur Borcuch inquired. “We will then discuss possible details and financial conditions.”

A university administrator in Iraq was more candid: “As an incentive, I am prepared to offer a grant of $500 for each accepted paper submitted to your esteemed journal,” wrote Ahmed Alkhayyat, head of the Islamic University Centre for Scientific Research, in Najaf, and manager of the school’s “world ranking.”

“That’s not how it works, buddy,” Barreto Segundo shot back.

In an email to The Conversation, Borcuch denied any improper intent. “My role is to mediate in the technical and procedural aspects of publishing an article,” Borcuch said, adding that, when working with multiple scientists, he would “request a discount from the editorial office on their behalf.” Informed that the Brazilian publisher had no publication fees, Borcuch said a “mistake” had occurred because an “employee” sent the email for him “to different journals.”

Academic journals have different payment models. Many are subscription-based and don’t charge authors for publishing, but have hefty fees for reading articles. Libraries and universities also pay large sums for access.

A fast-growing open-access model – where anyone can read the paper – includes expensive publication fees levied on authors to make up for the loss of revenue in selling the articles. These payments are not meant to influence whether or not a manuscript is accepted.

The Bahia School of Medicine and Public Health, among others, doesn’t charge authors or readers, but Barreto Segundo’s employer is a small player in the scholarly publishing business, which brings in close to $30 billion a year on profit margins as high as 40%. Academic publishers make money largely from subscription fees from institutions like libraries and universities, individual payments to access paywalled articles, and open-access fees paid by authors to ensure their articles are free for anyone to read.

The industry is lucrative enough that it has attracted unscrupulous actors eager to find a way to siphon off some of that revenue.

Ahmed Torad, a lecturer at Kafr El Sheikh University in Egypt and editor-in-chief of the Egyptian Journal of Physiotherapy, asked for a 30% kickback for every article he passed along to the Brazilian publisher. “This commission will be calculated based on the publication fees generated by the manuscripts I submit,” Torad wrote, noting that he specialized “in connecting researchers and authors with suitable journals for publication.”

Excerpt from Ahmed Torad’s email suggesting a kickback.

Screenshot by The Conversation, CC BY-ND

Apparently, he failed to notice that Bahia School of Medicine and Public Health doesn’t charge author fees.

Like Borcuch, Alkhayyat denied any improper intent. He said there had been a “misunderstanding” on the editor’s part, explaining that the payment he offered was meant to cover presumed article-processing charges. “Some journals ask for money. So this is normal,” Alkhayyat said.

Torad explained that he had sent his offer to source papers in exchange for a commission to some 280 journals, but had not forced anyone to accept the manuscripts. Some had balked at his proposition, he said, despite regularly charging authors thousands of dollars to publish. He suggested that the scientific community wasn’t comfortable admitting that scholarly publishing has become a business like any other, even if it’s “obvious to many scientists.”

The unwelcome advances all targeted one of the journals Barreto Segundo managed, The Journal of Physiotherapy Research, soon after it was indexed in Scopus, a database of abstracts and citations owned by the publisher Elsevier.

Along with Clarivate’s Web of Science, Scopus has become an important quality stamp for scholarly publications globally. Articles in indexed journals are money in the bank for their authors: They help secure jobs, promotions, funding and, in some countries, even trigger cash rewards. For academics or physicians in poorer countries, they can be a ticket to the global north.

Consider Egypt, a country plagued by dubious clinical trials. Universities there commonly pay employees large sums for international publications, with the amount depending on the journal’s impact factor. A similar incentive structure is hardwired into national regulations: To earn the rank of full professor, for example, candidates must have at least five publications in two years, according to Egypt’s Supreme Council of Universities. Studies in journals indexed in Scopus or Web of Science not only receive extra points, but they also are exempt from further scrutiny when applicants are evaluated. The higher a publication’s impact factor, the more points the studies get.

With such a focus on metrics, it has become common for Egyptian researchers to cut corners, according to a physician in Cairo who requested anonymity for fear of retaliation. Authorship is frequently gifted to colleagues who then return the favor later, or studies may be created out of whole cloth. Sometimes an existing legitimate paper is chosen from the literature, and key details such as the type of disease or surgery are then changed and the numbers slightly modified, the source explained.

It affects clinical guidelines and medical care, “so it’s a shame,” the physician said.

Ivermectin, a drug used to treat parasites in animals and humans, is a case in point. Early in the pandemic, it was hailed as a “miracle drug” after some studies suggested it was effective against COVID-19. Prescriptions surged, and along with them calls to U.S. poison centers; one man spent nine days in the hospital after downing an injectable formulation of the drug that was meant for cattle, according to the Centers for Disease Control and Prevention. As it turned out, nearly all of the research that showed a positive effect on COVID-19 had indications of fakery, the BBC and others reported – including a now-withdrawn Egyptian study. With no apparent benefit, patients were left with just the side effects.

Research misconduct isn’t limited to emerging economies, having recently felled university presidents and top scientists at government agencies in the United States. Neither is the emphasis on publications. In Norway, for example, the government allocates funding to research institutes, hospitals and universities based on how many scholarly works employees publish, and in which journals. The country has decided to partly halt this practice starting in 2025.

“There’s a huge academic incentive and profit motive,” says Lisa Bero, a professor of medicine and public health at the University of Colorado Anschutz Medical Campus and the senior research-integrity editor at the Cochrane Collaboration, an international nonprofit organization that produces evidence reviews about medical treatments. “I see it at every institution I’ve worked at.”

But in the global south, the publish-or-perish edict runs up against underdeveloped research infrastructures and education systems, leaving scientists in a bind. For a Ph.D., the Cairo physician who requested anonymity conducted an entire clinical trial single-handedly – from purchasing study medication to randomizing patients, collecting and analyzing data and paying article-processing fees. In wealthier nations, entire teams work on such studies, with the tab easily running into the hundreds of thousands of dollars.

“Research is quite challenging here,” the physician said. That’s why scientists “try to manipulate and find easier ways so they get the job done.”

Institutions, too, have gamed the system with an eye to international rankings. In 2011, the journal Science described how prolific researchers in the United States and Europe were offered hefty payments for listing Saudi universities as secondary affiliations on papers. And in 2023, the magazine, in collaboration with Retraction Watch, uncovered a massive self-citation ploy by a top-ranked dental school in India that forced undergraduate students to publish papers referencing faculty work.

The root – and solutions

Such unsavory schemes can be traced back to the introduction of performance-based metrics in academia, a development driven by the New Public Management movement that swept across the Western world in the 1980s, according to Canadian sociologist of science Yves Gingras of the Université du Québec à Montréal. When universities and public institutions adopted corporate management, scientific papers became “accounting units” used to evaluate and reward scientific productivity rather than “knowledge units” advancing our insight into the world around us, Gingras wrote.

This transformation led many researchers to compete on numbers instead of content, which made publication metrics poor measures of academic prowess. As Gingras has shown, the controversial French microbiologist Didier Raoult, who now has more than a dozen retractions to his name, has an h-index – a measure combining publication and citation numbers – that is twice as high as that of Albert Einstein – “proof that the index is absurd,” Gingras said.

Worse, a sort of scientific inflation, or “scientometric bubble,” has ensued, with each new publication representing an increasingly small increment in knowledge. “We publish more and more superficial papers, we publish papers that have to be corrected, and we push people to do fraud,” said Gingras.

For the career prospects of individual academics, too, the average value of a publication has plummeted, triggering a rise in the number of hyperprolific authors. One of the most notorious cases is Spanish chemist Rafael Luque, who in 2023 reportedly published a study every 37 hours.

In 2024, Landon Halloran, a geoscientist at the University of Neuchâtel, in Switzerland, received an unusual job application for an opening in his lab. A researcher with a Ph.D. from China had sent him his CV. At 31, the applicant had amassed 160 publications in Scopus-indexed journals, 62 of them in 2022 alone, the same year he obtained his doctorate. Although the applicant was not the only one “with a suspiciously high output,” according to Halloran, he stuck out. “My colleagues and I have never come across anything quite like it in the geosciences,” he said.

According to industry insiders and publishers, there is more awareness now of threats from paper mills and other bad actors. Some journals routinely check for image fraud. A bad AI-generated image showing up in a paper can be a sign either of a scientist taking an ill-advised shortcut or of a paper mill at work.

The Cochrane Collaboration has a policy excluding suspect studies from its analyses of medical evidence. The organization also has been developing a tool to help its reviewers spot problematic medical trials, just as publishers have begun to screen submissions and share data and technologies among themselves to combat fraud.

This image, generated by AI, is a visual gobbledygook of concepts around transporting and delivering drugs in the body. For instance, the upper left figure is a nonsensical mix of a syringe, an inhaler and pills. And the pH-sensitive carrier molecule on the lower left is huge, rivaling the size of the lungs. After scientist sleuths pointed out that the published image made no sense, the journal issued a correction.

Screen capture by The Conversation, CC BY-ND

This graphic is the corrected image that replaced the AI image above. In this case, according to the correction, the journal determined that the paper was legitimate but the scientists had used AI to generate the image describing it.

Screen capture by The Conversation, CC BY-ND

“People are realizing like, wow, this is happening in my field, it’s happening in your field,” said the Cochrane Collaboration’s Bero. “So we really need to get coordinated and, you know, develop a method and a plan overall for stamping these things out.”

What jolted Taylor & Francis into paying attention, according to Alam, the director of Publishing Ethics and Integrity, was a 2020 investigation of a Chinese paper mill by sleuth Elisabeth Bik and three of her peers who go by the pseudonyms Smut Clyde, Morty and Tiger BB8. With 76 compromised papers, the U.K.-based company’s Artificial Cells, Nanomedicine, and Biotechnology was the most affected journal identified in the probe.

“It opened up a minefield,” says Alam, who also co-chairs United2Act, a project launched in 2023 that brings together publishers, researchers and sleuths in the fight against paper mills. “It was the first time we realized that stock images essentially were being used to represent experiments.”

Taylor & Francis decided to audit the hundreds of articles in its portfolio that contained similar types of images. It doubled Alam’s team, which now has 14.5 positions dedicated to doing investigations, and also began monitoring submission rates. Paper mills, it seemed, weren’t picky customers.

“What they’re trying to do is find a gate, and if they get in, then they just start kind of slamming in the submissions,” Alam said. Seventy-six fake papers suddenly seemed like a drop in the ocean. At one Taylor & Francis journal, for instance, Alam’s team identified nearly 1,000 manuscripts that bore all the marks of coming from a mill, she said.

And in 2023, it rejected about 300 dodgy proposals for special issues. “We’ve blocked a hell of a lot from coming through,” Alam said.

Fraud checkers

A small industry of technology startups has sprung up to help publishers, researchers and institutions spot potential fraud. The website Argos, launched in September 2024 by Scitility, an alert service based in Sparks, Nevada, allows authors to check if new collaborators are trailed by retractions or misconduct concerns. It has flagged tens of thousands of “high-risk” papers, according to the journal Nature.

Red Rejected stamped on white paper

Fraud-checker tools sift through papers to point to those that should be manually checked and possibly rejected.

solidcolours/iStock via Getty Images

Morressier, a scientific conference and communications company based in Berlin, “aims to restore trust in science by improving the way scientific research is published”, according to its website. It offers integrity tools that target the entire research life cycle. Other new paper-checking tools include Signals, by London-based Research Signals, and Clear Skies’ Papermill Alarm.

The fraudsters have not been idle, either. In 2022, when Clear Skies released the Papermill Alarm, the first academic to inquire about the new tool was a paper miller, according to Day. The person wanted access so he could check his papers before firing them off to publishers, Day said. “Paper mills have proven to be adaptive and also quite quick off the mark.”

Given the ongoing arms race, Alam acknowledges that the fight against paper mills won’t be won as long as the booming demand for their products remains.

According to a Nature analysis, the retraction rate tripled from 2012 to 2022 to close to 0.02%, or around 1 in 5,000 papers. It then nearly doubled in 2023, in large part because of Wiley’s Hindawi debacle. Today’s commercial publishing is part of the problem, Byrne said. For one, cleaning up the literature is a vast and expensive undertaking with no direct financial upside. “Journals and publishers will never, at the moment, be able to correct the literature at the scale and in the timeliness that’s required to solve the paper-mill problem,” Byrne said. “Either we have to monetize corrections such that publishers are paid for their work, or forget the publishers and do it ourselves.”

But that still wouldn’t fix the fundamental bias built into for-profit publishing: Journals don’t get paid for rejecting papers. “We pay them for accepting papers,” said Bodo Stern, a former editor of the journal Cell and chief of Strategic Initiatives at Howard Hughes Medical Institute, a nonprofit research organization and major funder in Chevy Chase, Maryland. “I mean, what do you think journals are going to do? They’re going to accept papers.”

With more than 50,000 journals on the market, even if some are trying hard to get it right, bad papers that are shopped around long enough eventually find a home, Stern added. “That system cannot function as a quality-control mechanism,” he said. “We have so many journals that everything can get published.”

In Stern’s view, the way to go is to stop paying journals for accepting papers and begin looking at them as public utilities that serve a greater good. “We should pay for transparent and rigorous quality-control mechanisms,” he said.

Peer review, meanwhile, “should be recognized as a true scholarly product, just like the original article, because the authors of the article and the peer reviewers are using the same skills,” Stern said. By the same token, journals should make all peer-review reports publicly available, even for manuscripts they turn down. “When they do quality control, they can’t just reject the paper and then let it be published somewhere else,” Stern said. “That’s not a good service.”

Better measures

Stern isn’t the first scientist to bemoan the excessive focus on bibliometrics. “We need less research, better research, and research done for the right reasons,” wrote the late statistician Douglas G. Altman in a much-cited editorial from 1994. “Abandoning using the number of publications as a measure of ability would be a start.”

Nearly two decades later, a group of some 150 scientists and 75 science organizations released the San Francisco Declaration on Research Assessment, or DORA, discouraging the use of the journal impact factor and other measures as proxies for quality. The 2013 declaration has since been signed by more than 25,000 individuals and organizations in 165 countries.

Despite the declaration, metrics remain in wide use today, and scientists say there is a new sense of urgency.

“We’re getting to the point where people really do feel they have to do something” because of the vast number of fake papers, said Richard Sever, assistant director of Cold Spring Harbor Laboratory Press, in New York, and co-founder of the preprint servers bioRxiv and medRxiv.

Stern and his colleagues have tried to make improvements at their institution. Researchers who wish to renew their seven-year contract have long been required to write a short paragraph describing the importance of their major results. Since the end of 2023, they also have been asked to remove journal names from their applications.

That way, “you can never do what all reviewers do – I’ve done it – look at the bibliography and in just one second decide, ‘Oh, this person has been productive because they have published many papers and they’re published in the right journals,’” says Stern. “What matters is, did it really make a difference?”

Shifting the focus away from convenient performance metrics seems possible not just for wealthy private institutions like Howard Hughes Medical Institute, but also for large government funders. In Australia, for example, the National Health and Medical Research Council in 2022 launched the “top 10 in 10” policy, aiming, in part, to “value research quality rather than quantity of publications.”

Rather than submitting their entire bibliographies, researchers applying to the agency, which assesses thousands of grant applications every year, were asked to list no more than 10 publications from the past decade and explain the contribution each had made to science. According to an evaluation report from April 2024, close to three-quarters of grant reviewers said the new policy allowed them to concentrate more on research quality than quantity. And more than half said it reduced the time they spent on each application.

Gingras, the Canadian sociologist, advocates giving scientists the time they need to produce work that matters, rather than a gushing stream of publications. He is a signatory to the Slow Science Manifesto: “Once you get slow science, I can predict that the number of corrigenda, the number of retractions, will go down,” he says.

At one point, Gingras was involved in evaluating a research organization whose mission was to improve workplace security. An employee presented his work. “He had a sentence I will never forget,” Gingras recalls. The employee began by saying, “‘You know, I’m proud of one thing: My h-index is zero.’ And it was brilliant.” The scientist had developed a technology that prevented fatal falls among construction workers. “He said, ‘That’s useful, and that’s my job.’ I said, ‘Bravo!’”

Learn more about how the Problematic Paper Screener uncovers compromised papers. The Conversation

Frederik Joelving, Contributing editor, Retraction Watch; Cyril Labbé, Professor of Computer Science, Université Grenoble Alpes (UGA), and Guillaume Cabanac, Professor of Computer Science, Institut de Recherche en Informatique de Toulouse

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More

The post Fake papers are contaminating the world’s scientific literature, fueling a corrupt industry and slowing legitimate lifesaving medical research appeared first on theconversation.com

The Conversation

I’m a physician who has looked at hundreds of studies of vaccine safety, and here’s some of what RFK Jr. gets wrong

Published

on

theconversation.com – Jake Scott, Clinical Associate Professor of Infectious Diseases, Stanford University – 2025-06-26 07:31:00


Robert F. Kennedy Jr., since becoming Health and Human Services secretary, has made many false claims about vaccines, including exaggerating the number of mandatory shots for children and alleging conflicts of interest among vaccine advisers. In reality, children receive about 30-32 required vaccine doses protecting against 10-12 diseases, far fewer than his claimed 92. Modern vaccines contain far fewer antigens and improved adjuvants, reducing immune burden. Controlled trials, including placebo comparisons, have tested all routine vaccines extensively. U.S. monitoring systems track vaccine safety continuously. Allegations of widespread conflicts of interest among advisers are unfounded, and vaccines have significantly reduced childhood illnesses and deaths.

Public health experts worry that factually inaccurate statements by Robert F. Kennedy Jr. threaten the public’s confidence in vaccines.
Andrew Harnik/Getty Images

Jake Scott, Stanford University

In the four months since he began serving as secretary of the Department of Health and Human Services, Robert F. Kennedy Jr. has made many public statements about vaccines that have cast doubt on their safety and on the objectivity of long-standing processes established to evaluate them.

Many of these statements are factually incorrect. For example, in a newscast aired on June 12, 2025, Kennedy told Fox News viewers that 97% of federal vaccine advisers are on the take. In the same interview, he also claimed that children receive 92 mandatory shots. He has also widely claimed that only COVID-19 vaccines, not other vaccines in use by both children and adults, were ever tested against placebos and that “nobody has any idea” how safe routine immunizations are.

As an infectious disease physician who curates an open database of hundreds of controlled vaccine trials involving over 6 million participants, I am intimately familiar with the decades of research on vaccine safety. I believe it is important to correct the record – especially because these statements come from the official who now oversees the agencies charged with protecting Americans’ health.

Do children really receive 92 mandatory shots?

In 1986, the childhood vaccine schedule contained about 11 doses protecting against seven diseases. Today, it includes roughly 50 injections covering 16 diseases. State school entry laws typically require 30 to 32 shots across 10 to 12 diseases. No state mandates COVID-19 vaccination. Where Kennedy’s “92 mandatory shots” figure comes from is unclear, but the actual number is significantly lower.

From a safety standpoint, the more important question is whether today’s schedule, with its additional vaccines, might be too taxing for children’s immune systems. It isn’t: as vaccine technology has improved over the past several decades, the number of antigens in each vaccine dose has fallen far below what it once was.

Antigens are the molecules in vaccines that trigger a response from the immune system, training it to identify the specific pathogen. Some vaccines contain a minute amount of aluminum salt that serves as an adjuvant – a helper ingredient that improves the quality and staying power of the immune response, so each dose can protect with less antigen.

Those 11 doses in 1986 delivered more than 3,000 antigens and 1.5 milligrams of aluminum over 18 years. Today’s complete schedule delivers roughly 165 antigens – which is a 95% reduction – and 5-6 milligrams of aluminum in the same time frame. A single smallpox inoculation in 1900 exposed a child to more antigens than today’s complete series.

A black-and-white photo of a doctor in a white coat giving an injection to a boy who is held by a female nurse.
Jonas Salk, the inventor of the polio vaccine, administers a dose to a boy in 1954.
Underwood Archives via Getty Images

Since 1986, the United States has introduced vaccines against Haemophilus influenzae type b, hepatitis A and B, chickenpox, pneumococcal disease, rotavirus and human papillomavirus. Each addition represents a life-saving advance.

The incidence of Haemophilus influenzae type b, a bacterial infection that can cause pneumonia, meningitis and other severe diseases, has dropped by 99% in infants. Pediatric hepatitis infections are down more than 90%, and chickenpox hospitalizations are down about 90%. The Centers for Disease Control and Prevention estimates that vaccinating children born from 1994 to 2023 will avert 508 million illnesses and 1,129,000 premature deaths.

Placebo testing for vaccines

Kennedy has asserted that only COVID-19 vaccines have undergone rigorous safety trials in which they were tested against placebos. This is categorically wrong.

Of the 378 controlled trials in our database, 195 compared volunteers’ response to a vaccine with their response to a placebo. Of those, 159 gave volunteers only a salt water solution or another inert substance. Another 36 gave them just the adjuvant without any viral or bacterial material, as a way to isolate whether side effects came from the antigen itself rather than from the injection. Every routine childhood vaccine antigen appears in at least one such study.

The 1954 Salk polio trial, one of the largest clinical trials in medical history, enrolled more than 600,000 children and tested the vaccine by comparing it with a salt water control. Similar trials, which used a substance that has no biological effect as a control, were used to test Haemophilus influenzae type b, pneumococcal, rotavirus, influenza and HPV vaccines.

Once an effective vaccine exists, ethics boards require new versions be compared against that licensed standard because withholding proven protection from children would be unethical.

How unknown is the safety of widely used vaccines?

Kennedy has insisted on multiple occasions that “nobody has any idea” about vaccine safety profiles. Of the 378 trials in our database, the vast majority published detailed safety outcomes.

Beyond trials, the U.S. operates the Vaccine Adverse Event Reporting System, the Vaccine Safety Datalink and the PRISM network to monitor hundreds of millions of doses for rare problems. The Vaccine Adverse Event Reporting System works like an open mailbox where anyone – patients, parents, clinicians – can report a post-shot problem; the Vaccine Safety Datalink analyzes anonymized electronic health records from large health care systems to spot patterns; and PRISM scans billions of insurance claims in near-real time to confirm or rule out rare safety signals.

These systems led health officials to pull the first rotavirus vaccine in 1999 after it was linked to bowel obstruction, and to restrict the Johnson & Johnson COVID-19 vaccine in 2021 after rare clotting events. Few drug classes undergo such continuous surveillance and are subject to such swift corrective action when genuine risks emerge.

The conflicts of interest claim

On June 9, Kennedy took the unprecedented step of dismissing all vetted members of the Advisory Committee on Immunization Practices, the expert body that advises the CDC on national vaccine policy. He has claimed repeatedly that the vast majority of the committee’s serving members – 97% – had extensive conflicts of interest because of their entanglements with the pharmaceutical industry. Kennedy bases that number on a 2009 federal audit of conflict-of-interest paperwork, but that report looked at 17 CDC advisory committees, not specifically this vaccine committee. And it found no pervasive wrongdoing: 97% of disclosure forms contained only routine paperwork mistakes, such as information in the wrong box or a missing initial, not hidden financial ties.

Reuters examined data from Open Payments, a government website that discloses health care providers’ relationships with industry, for all 17 voting members of the committee who were dismissed. Six received no more than US$80 from drugmakers over seven years, and four had no payments at all.

The remaining seven members accepted between $4,000 and $55,000 over seven years, mostly for modest consulting or travel. In other words, just 41% of the committee received anything more than pocket change from drugmakers. Committee members must divest vaccine company stock and recuse themselves from votes involving conflicts.

A term without a meaning

Kennedy has warned that vaccines cause “immune deregulation,” a term that has no basis in immunology. Vaccines train the immune system, and the diseases they prevent are the real threats to immune function.

Measles can wipe immune memory, leaving children vulnerable to other infections for years. COVID-19 can trigger multisystem inflammatory syndrome in children. Chronic hepatitis B can cause immune-mediated organ damage. Preventing these conditions protects people from immune system damage.

Today’s vaccine schedule doesn’t just prevent infections; it deters doctor visits and thereby reduces unnecessary prescriptions for “just-in-case” antibiotics. It’s one of the rare places in medicine where physicians like me now do more good with less biological burden than we did 40 years ago.

The evidence is clear and publicly available: Vaccines have dramatically reduced childhood illness, disability and death on a historic scale. The Conversation

Jake Scott, Clinical Associate Professor of Infectious Diseases, Stanford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More

The post I’m a physician who has looked at hundreds of studies of vaccine safety, and here’s some of what RFK Jr. gets wrong appeared first on theconversation.com



Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Center-Left

This content presents a science-based and fact-checked critique of Robert F. Kennedy Jr.’s statements on vaccines, emphasizing the importance of established public health measures and vaccine safety. It supports mainstream medical consensus and public health institutions like the CDC, while challenging anti-vaccine rhetoric associated with certain political or ideological positions. The tone is objective but leans toward defending regulatory agencies and vaccine advocacy, which aligns more closely with Center-Left perspectives favoring public health expertise and government intervention in health policy.

Continue Reading

The Conversation

Cyberattacks shake voters’ trust in elections, regardless of party

Published

on

theconversation.com – Ryan Shandler, Professor of Cybersecurity and International Relations, Georgia Institute of Technology – 2025-06-27 07:29:00


American democracy faces a crisis of trust, with nearly half of Americans doubting election fairness. This mistrust stems not only from polarization and misinformation but also from unease about the digital infrastructure behind voting. While over 95% of ballots are now counted electronically, this complexity fuels skepticism, especially amid foreign disinformation campaigns that amplify doubts about election security. A study during the 2024 election showed that exposure to cyberattack reports, even unrelated to elections, significantly undermines voter confidence, particularly among those using digital voting machines. To protect democracy, it’s vital to pair secure technology with public education and treat trust as a national asset.

An election worker installs a touchscreen voting machine.
Ethan Miller/Getty Images

Ryan Shandler, Georgia Institute of Technology; Anthony J. DeMattee, Emory University, and Bruce Schneier, Harvard Kennedy School

American democracy runs on trust, and that trust is cracking.

Nearly half of Americans, both Democrats and Republicans, question whether elections are conducted fairly. Some voters accept election results only when their side wins. The problem isn’t just political polarization – it’s a creeping erosion of trust in the machinery of democracy itself.

Commentators blame ideological tribalism, misinformation campaigns and partisan echo chambers for this crisis of trust. But these explanations miss a critical piece of the puzzle: a growing unease with the digital infrastructure that now underpins nearly every aspect of how Americans vote.

The digital transformation of American elections has been swift and sweeping. Just two decades ago, most people voted using mechanical levers or punch cards. Today, over 95% of ballots are counted electronically. Digital systems have replaced poll books, taken over voter identity verification processes and are integrated into registration, counting, auditing and voting systems.

This technological leap has made voting more accessible and efficient, and sometimes more secure. But these new systems are also more complex. And that complexity plays into the hands of those looking to undermine democracy.

In recent years, authoritarian regimes have refined a chillingly effective strategy to chip away at Americans’ faith in democracy by relentlessly sowing doubt about the tools U.S. states use to conduct elections. It’s a sustained campaign to fracture civic faith and make Americans believe that democracy is rigged, especially when their side loses.

This is not cyberwar in the traditional sense. There’s no evidence that anyone has managed to break into voting machines and alter votes. But cyberattacks on election systems don’t need to succeed to have an effect. Even a single failed intrusion, magnified by sensational headlines and political echo chambers, is enough to shake public trust. By feeding into existing anxiety about the complexity and opacity of digital systems, adversaries create fertile ground for disinformation and conspiracy theories.

Just before the 2024 presidential election, Director of the Cybersecurity and Infrastructure Security Agency Jen Easterly explains how foreign influence campaigns erode trust in U.S. elections.

Testing cyber fears

To test this dynamic, we launched a study to uncover precisely how cyberattacks corroded trust in the vote during the 2024 U.S. presidential race. We surveyed more than 3,000 voters before and after election day, showing them a series of fictional but highly realistic breaking news reports depicting cyberattacks against critical infrastructure. We randomly assigned participants to watch different types of news reports: some depicting cyberattacks on election systems, others on unrelated infrastructure such as the power grid, and a third, neutral control group.

The results, which are under peer review, were both striking and sobering. Mere exposure to reports of cyberattacks undermined trust in the electoral process – regardless of partisanship. Voters who supported the losing candidate experienced the greatest drop in trust, with two-thirds of Democratic voters showing heightened skepticism toward the election results.

But winners too showed diminished confidence. Even though most Republican voters, buoyed by their victory, accepted the overall security of the election, the majority of those who viewed news reports about cyberattacks remained suspicious.

The attacks didn’t even have to be related to the election. Even cyberattacks against critical infrastructure such as utilities had spillover effects. Voters seemed to extrapolate: “If the power grid can be hacked, why should I believe that voting machines are secure?”

Strikingly, voters who used digital machines to cast their ballots were the most rattled. For this group of people, belief in the accuracy of the vote count fell by nearly twice as much as that of voters who cast their ballots by mail and who didn’t use any technology. Their firsthand experience with the sorts of systems being portrayed as vulnerable personalized the threat.

It’s not hard to see why. When you’ve just used a touchscreen to vote, and then you see a news report about a digital system being breached, the leap in logic isn’t far.

Our data suggests that in a digital society, perceptions of trust – and distrust – are fluid, contagious and easily activated. The cyber domain isn’t just about networks and code. It’s also about emotions: fear, vulnerability and uncertainty.

Firewall of trust

Does this mean we should scrap electronic voting machines? Not necessarily.

Every election system, digital or analog, has flaws. And in many respects, today’s high-tech systems have solved the problems of the past with voter-verifiable paper ballots. Modern voting machines reduce human error, increase accessibility and speed up the vote count. No one misses the hanging chads of 2000.

But technology, no matter how advanced, cannot confer legitimacy on its own. It must be paired with something harder to code: public trust. In an environment where foreign adversaries amplify every flaw, cyberattacks can trigger spirals of suspicion. It is no longer enough for elections to be secure; voters must also perceive them to be secure.

That’s why public education surrounding elections is now as vital to election security as firewalls and encrypted networks. It’s vital that voters understand how elections are run, how they’re protected and how failures are caught and corrected. Election officials, civil society groups and researchers can teach how audits work, host open-source verification demonstrations and ensure that high-tech electoral processes are comprehensible to voters.

We believe this is an essential investment in democratic resilience. But it needs to be proactive, not reactive. By the time the doubt takes hold, it’s already too late.

Just as crucially, we are convinced that it’s time to rethink the very nature of cyber threats. People often imagine them in military terms. But that framework misses the true power of these threats. The danger of cyberattacks is not only that they can destroy infrastructure or steal classified secrets, but that they chip away at societal cohesion, sow anxiety and fray citizens’ confidence in democratic institutions. These attacks erode the very idea of truth itself by making people doubt that anything can be trusted.

If trust is the target, then we believe that elected officials should start to treat trust as a national asset: something to be built, renewed and defended. Because in the end, elections aren’t just about votes being counted – they’re about people believing that those votes count.

And in that belief lies the true firewall of democracy. The Conversation

Ryan Shandler, Professor of Cybersecurity and International Relations, Georgia Institute of Technology; Anthony J. DeMattee, Data Scientist and Adjunct Instructor, Emory University, and Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy School

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More

The post Cyberattacks shake voters’ trust in elections, regardless of party appeared first on theconversation.com



Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Centrist

This article presents a balanced and fact-focused analysis of trust issues surrounding American elections, emphasizing concerns shared across the political spectrum. It highlights the complexity of digital voting infrastructure and the external threats posed by misinformation and foreign influence without promoting partisan viewpoints. The tone is neutral, grounded in data and research, avoiding ideological framing or advocacy. The piece calls for bipartisan solutions like public education and institutional trust-building, reflecting a centrist perspective that prioritizes democratic resilience over partisan blame.

Continue Reading

The Conversation

Toxic algae blooms are lasting longer than before in Lake Erie − why that’s a worry for people and pets

Published

on

theconversation.com – Gregory J. Dick, Professor of Biology, University of Michigan – 2025-06-26 14:38:00


Federal scientists forecast a mild to moderate harmful algal bloom season in Lake Erie for 2025, though even moderate blooms pose health risks. Harmful algal blooms, mainly caused by excess phosphorus and nitrogen runoff from agriculture, produce toxins harmful to humans, pets, and ecosystems. Recent DNA research revealed new toxins, including microcystins and saxitoxins, raising emerging concerns. Climate change exacerbates blooms by increasing water temperatures and heavy rainfall. Blooms now start earlier and last longer. Reducing nutrient runoff through improved farming practices and wetland restoration, like Ohio’s H2Ohio program, is essential to mitigating future blooms and protecting water quality.

A satellite image from Aug. 13, 2024, shows an algal bloom covering approximately 320 square miles (830 square km) of Lake Erie. By Aug. 22, it had nearly doubled in size.
NASA Earth Observatory

Gregory J. Dick, University of Michigan

Federal scientists released their annual forecast for Lake Erie’s harmful algal blooms on June 26, 2025, and they expect a mild to moderate season. However, anyone who comes in contact with toxic algae can face health risks. And 2014, when toxins from algae blooms contaminated the water supply in Toledo, Ohio, was a moderate year, too.

We asked Gregory J. Dick, who leads the Cooperative Institute for Great Lakes Research, a federally funded center at the University of Michigan that studies harmful algal blooms among other Great Lakes issues, why they’re such a concern.

A bar chart shows 2025's forecast to be more severe than 2023 but less than 2024.
The National Oceanic and Atmospheric Administration’s prediction for harmful algal bloom severity in Lake Erie compared with past years.
NOAA

1. What causes harmful algal blooms?

Harmful algal blooms are dense patches of excessive algae growth that can occur in any type of water body, including ponds, reservoirs, rivers, lakes and oceans. When you see them in freshwater, you’re typically seeing cyanobacteria, also known as blue-green algae.

These photosynthetic bacteria have inhabited our planet for billions of years. In fact, they were responsible for oxygenating Earth’s atmosphere, which enabled plant and animal life as we know it.

An illustration of algae bloom sources shows a farm field, city and large body of water.
The leading source of harmful algal blooms today is nutrient runoff from fertilized farm fields.
Michigan Sea Grant

Algae are natural components of ecosystems, but they cause trouble when they proliferate to high densities, creating what we call blooms.

Harmful algal blooms form scums at the water surface and produce toxins that can harm ecosystems, water quality and human health. They have been reported in all 50 U.S. states, all five Great Lakes and nearly every country around the world. Blue-green algae blooms are becoming more common in inland waters.

The main sources of harmful algal blooms are excess nutrients in the water, typically phosphorus and nitrogen.

Historically, these excess nutrients mainly came from sewage and phosphorus-based detergents used in laundry machines and dishwashers that ended up in waterways. U.S. environmental laws in the early 1970s addressed this by requiring sewage treatment and banning phosphorus detergents, with spectacular success.

How pollution affected Lake Erie in the 1960s, before clean water regulations.

Today, agriculture is the main source of excess nutrients from chemical fertilizer or manure applied to farm fields to grow crops. Rainstorms wash these nutrients into streams and rivers that deliver them to lakes and coastal areas, where they fertilize algal blooms. In the U.S., most of these nutrients come from industrial-scale corn production, which is largely used as animal feed or to produce ethanol for gasoline.

Climate change also exacerbates the problem in two ways. First, cyanobacteria grow faster at higher temperatures. Second, climate-driven increases in precipitation, especially large storms, cause more nutrient runoff, which has led to record-setting blooms.

2. What does your team’s DNA testing tell us about Lake Erie’s harmful algal blooms?

Harmful algal blooms contain a mixture of cyanobacterial species that can produce an array of different toxins, many of which are still being discovered.

When my colleagues and I recently sequenced DNA from Lake Erie water, we found new types of microcystins, the notorious toxins that were responsible for contaminating Toledo’s drinking water supply in 2014.

These novel molecules cannot be detected with traditional methods and show some signs of causing toxicity, though further studies are needed to confirm their human health effects.

A young woman and dog walk along a shoreline with blue-green algae in the water.
Blue-green algae blooms in freshwater, like this one near Toledo in 2014, can be harmful to humans, causing gastrointestinal symptoms, headache, fever and skin irritation. They can be lethal for pets.
Ty Wright for The Washington Post via Getty Images

We also found organisms responsible for producing saxitoxin, a potent neurotoxin that is well known for causing paralytic shellfish poisoning on the Pacific Coast of North America and elsewhere.

Saxitoxins have been detected at low concentrations in the Great Lakes for some time, but the recent discovery of hot spots of saxitoxin-producing genes makes them an emerging concern.

Our research suggests warmer water temperatures could boost its production, which raises concerns that saxitoxin will become more prevalent with climate change. However, the controls on toxin production are complex, and more research is needed to test this hypothesis. Federal monitoring programs are essential for tracking and understanding emerging threats.

3. Should people worry about these blooms?

Harmful algal blooms are unsightly and smelly, making them a concern for recreation, property values and businesses. They can disrupt food webs and harm aquatic life, though a recent study suggested that their effects on the Lake Erie food web so far are not substantial.

But the biggest impact is from the toxins these algae produce that are harmful to humans and lethal to pets.

The toxins can cause acute health problems such as gastrointestinal symptoms, headache, fever and skin irritation. Dogs can die from ingesting lake water with harmful algal blooms. Emerging science suggests that long-term exposure to harmful algal blooms, for example over months or years, can cause or exacerbate chronic respiratory, cardiovascular and gastrointestinal problems and may be linked to liver cancers, kidney disease and neurological issues.

A large round structure offshore is surrounded by blue-green algae.
The water intake system for the city of Toledo, Ohio, is surrounded by an algae bloom in 2014. Toxic algae got into the water system, resulting in residents being warned not to touch or drink their tap water for three days.
AP Photo/Haraz N. Ghanbari

In addition to exposure through direct ingestion or skin contact, recent research indicates that inhaling toxins that get into the air may also harm health, raising concerns for coastal residents and boaters. More research is needed to understand these risks.

The Toledo drinking water crisis of 2014 illustrated the vast potential for algal blooms to cause harm in the Great Lakes. Toxins infiltrated the drinking water system and were detected in processed municipal water, resulting in a three-day “do not drink” advisory. The episode affected residents, hospitals and businesses, and it ultimately cost the city an estimated US$65 million.

4. Blooms seem to be starting earlier in the year and lasting longer – why is that happening?

Warmer waters are extending the duration of the blooms.

In 2025, NOAA detected these toxins in Lake Erie on April 28, earlier than ever before. The 2022 bloom in Lake Erie persisted into November, which is rare if not unprecedented.

Scientific studies of western Lake Erie show that the potential cyanobacterial growth rate has increased by up to 30% and the length of the bloom season has expanded by up to a month from 1995 to 2022, especially in warmer, shallow waters. These results are consistent with our understanding of cyanobacterial physiology: Blooms like it hot – cyanobacteria grow faster at higher temperatures.

5. What can be done to reduce the likelihood of algal blooms in the future?

The best and perhaps only hope of reducing the size and occurrence of harmful algal blooms is to reduce the amount of nutrients reaching the Great Lakes.

In Lake Erie, where nutrients come primarily from agriculture, that means improving agricultural practices and restoring wetlands to reduce the amount of nutrients flowing off of farm fields and into the lake. Early indications suggest that Ohio’s H2Ohio program, which works with farmers to reduce runoff, is making some gains in this regard, but future funding for H2Ohio is uncertain.

In places like Lake Superior, where harmful algal blooms appear to be driven by climate change, the solution likely requires halting and reversing the rapid human-driven increase in greenhouse gases in the atmosphere.

The Conversation

Gregory J. Dick, Professor of Biology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The post Toxic algae blooms are lasting longer than before in Lake Erie − why that’s a worry for people and pets appeared first on theconversation.com



Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Centrist

This article presents a neutral and factual overview of the harmful algal blooms in Lake Erie, relying on scientific data and expert analysis without promoting a political agenda. It references federal and academic research, explains causes like agricultural runoff and climate change, and discusses practical mitigation efforts such as agricultural practice improvements and wetland restoration. The tone is informative and balanced, avoiding partisan framing or ideological language. While it touches on environmental issues that can be politically charged, the article remains focused on evidence-based explanations and policy-neutral recommendations.
