
The Conversation

Early COVID-19 research is riddled with poor methods and low-quality results − a problem for science the pandemic worsened but didn’t create



The pandemic spurred an increase in research, much of it with methodological holes.
Andriy Onufriyenko/Moment via Getty Images

Dennis M. Gorman, Texas A&M University

Early in the COVID-19 pandemic, researchers flooded journals with studies about the then-novel coronavirus. Many publications streamlined the peer-review process for COVID-19 papers while keeping acceptance rates relatively high. The assumption was that policymakers and the public would be able to identify valid and useful research among a very large volume of rapidly disseminated information.

However, in my review of 74 COVID-19 papers published in 2020 in the top 15 generalist public health journals listed in Google Scholar, I found that many of these studies used poor quality methods. Several other reviews of studies published in medical journals have also shown that much early COVID-19 research used poor research methods.

Some of these papers have been cited many times. For example, the most highly cited public health publication listed on Google Scholar used data from a sample of 1,120 people, primarily well-educated young women, mostly recruited from social media over three days. Findings based on a small, self-selected convenience sample cannot be generalized to a broader population. And since the researchers ran more than 500 analyses of the data, many of the statistically significant results are likely chance occurrences. However, this study has been cited over 11,000 times.
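To see why running hundreds of analyses is a problem, here is a rough illustration (mine, not the study's): if every one of the 500 tests examined an effect that does not actually exist, a conventional 0.05 significance threshold would still flag roughly 25 of them as "significant" by chance alone.

```python
import random

random.seed(42)

ALPHA = 0.05   # conventional significance threshold
N_TESTS = 500  # number of analyses run on the same data set

# Simulate 500 tests in which the null hypothesis is true every time.
# Under the null, a p-value is uniformly distributed on [0, 1], so each
# test has a 5% chance of looking "statistically significant" by luck.
false_positives = sum(1 for _ in range(N_TESTS) if random.random() < ALPHA)

print(f"Expected false positives: {ALPHA * N_TESTS:.0f}")
print(f"Simulated false positives: {false_positives}")
```

Without preregistered hypotheses, nothing stops an author from reporting only those two dozen lucky results.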

A highly cited paper means a lot of people have mentioned it in their own work. But a high number of citations is not strongly linked to research quality, since researchers and journals can game and manipulate these metrics. High citation of low-quality research increases the chance that poor evidence is being used to inform policies, further eroding public confidence in science.


Methodology matters

I am a public health researcher with a long-standing interest in research quality and integrity. This interest stems from a belief that science has helped solve important social and public health problems. Unlike the anti-science movement spreading misinformation about such successful public health measures as vaccines, I believe rational criticism is fundamental to science.

The quality and integrity of research depends to a considerable extent on its methods. Each type of study design needs to have certain features in order for it to provide valid and useful information.

For example, researchers have known for decades that for studies evaluating the effectiveness of an intervention, a control group is needed to know whether any observed effects can be attributed to the intervention.

Systematic reviews pulling together data from existing studies should describe how the researchers identified which studies to include, assessed their quality, extracted the data and preregistered their protocols. These features are necessary to ensure the review covers all the available evidence and tells a reader which of it is worth attending to and which is not.


Certain types of studies, such as one-time surveys of convenience samples that aren't representative of the target population, collect and analyze data in a way that does not allow researchers to determine whether one variable caused a particular outcome.

Systematic reviews involve thoroughly identifying and extracting information from existing research.

All study designs have standards that researchers can consult. But adhering to standards slows research down. Having a control group doubles the amount of data that needs to be collected, and identifying and thoroughly reviewing every study on a topic takes more time than superficially reviewing some. Representative samples are harder to generate than convenience samples, and collecting data at two points in time is more work than collecting them all at the same time.

Studies comparing COVID-19 papers with non-COVID-19 papers published in the same journals found that COVID-19 papers tended to have lower quality methods and were less likely to adhere to standards than non-COVID-19 papers. COVID-19 papers rarely had predetermined hypotheses and plans for how they would report their findings or analyze their data. This meant there were no safeguards against dredging the data to find “statistically significant” results that could be selectively reported.

Such methodological problems were likely overlooked in the considerably shortened peer-review process for COVID-19 papers. One study estimated the average time from submission to acceptance of 686 papers on COVID-19 to be 13 days, compared with 110 days in 539 pre-pandemic papers from the same journals. In my study, I found that two online journals that published a very high volume of methodologically weak COVID-19 papers had a peer-review process of about three weeks.


Publish-or-perish culture

These quality control issues were present before the COVID-19 pandemic. The pandemic simply pushed them into overdrive.

Journals tend to favor positive, “novel” findings: that is, results that show a statistical association between variables and supposedly identify something previously unknown. Since the pandemic was in many ways novel, it provided an opportunity for some researchers to make bold claims about how COVID-19 would spread, what its effects on mental health would be, how it could be prevented and how it might be treated.

Person with head in hands, elbows planted on stacks of paperwork and books littering a desk, glasses and laptop on the side
Many researchers feel pressure to publish papers in order to advance their careers.
South_agency/E+ via Getty Images

Academics have worked in a publish-or-perish incentive system for decades, where the number of papers they publish is part of the metrics used to evaluate employment, promotion and tenure. The flood of mixed-quality COVID-19 information afforded an opportunity to increase their publication counts and boost citation metrics as journals sought and rapidly reviewed COVID-19 papers, which were more likely to be cited than non-COVID papers.

Online publishing has also contributed to the deterioration in research quality. Traditional academic publishing was limited in the quantity of articles it could generate because journals were packaged in a printed, physical document usually produced only once a month. In contrast, some of today's online mega-journals publish thousands of papers a month. Low-quality studies rejected by reputable journals can still find an outlet happy to publish them for a fee.

Healthy criticism

Criticizing the quality of published research is fraught with risk. It can be misinterpreted as throwing fuel on the raging fire of anti-science. My response is that a critical and rational approach to the production of knowledge is, in fact, fundamental to the very practice of science and to the functioning of an open society capable of solving complex problems such as a worldwide pandemic.


Publishing a large volume of misinformation disguised as science during a pandemic obscures true and useful knowledge. At worst, this can lead to bad public health practice and policy.

Science done properly produces information that allows researchers and policymakers to better understand the world and test ideas about how to improve it. This involves critically examining the quality of a study's designs, statistical methods, reproducibility and transparency, not the number of times it has been cited or tweeted about.

Science depends on a slow, thoughtful and meticulous approach to data collection, analysis and presentation, especially if it intends to provide information to enact effective public health policies. Likewise, thoughtful and meticulous peer review is unlikely with papers that appear in print only three weeks after they were first submitted for review. Disciplines that reward quantity of research over quality are also less likely to protect scientific integrity during crises.

Two scientists pipetting liquids under a fume hood, with another scientist in the background examining a sample
Rigorous science requires careful deliberation and attention, not haste.
Assembly/Stone via Getty Images

Public health heavily draws upon disciplines that are experiencing replication crises, such as psychology, biomedical science and biology. It is similar to these disciplines in terms of its incentive structure, study designs and analytic methods, and its inattention to transparent methods and replication. Much public health research on COVID-19 shows that it suffers from similar poor-quality methods.

Reexamining how the discipline rewards its scholars and assesses their scholarship can help it better prepare for the next public health crisis.The Conversation

Dennis M. Gorman, Professor of Epidemiology and Biostatistics, Texas A&M University


This article is republished from The Conversation under a Creative Commons license. Read the original article.


Poor media literacy in the social media age


theconversation.com – Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston – 2024-04-19 10:01:58

TikTok is not the only social app to pose the threats it's been accused of.

picture alliance via Getty Images

Nir Eisikovits, UMass Boston

The U.S. government moved closer to banning the social media app TikTok after the House of Representatives attached the measure to an emergency spending bill on Apr. 17, 2024. The move could improve the bill's chances in the Senate, and the president has indicated that he will sign the bill if it reaches his desk.


The bill would force ByteDance, the Chinese company that owns TikTok, to either sell its American holdings to a U.S. company or face a ban in the country. The company has said it will fight any effort to force a sale.

The proposed legislation was motivated by a set of national security concerns. For one, ByteDance can be required to assist the Chinese Communist Party in gathering intelligence, according to the Chinese National Intelligence Law. In other words, the data TikTok collects can, in theory, be used by the Chinese government.

Furthermore, TikTok's popularity in the United States, and the fact that many young people get their news from the platform – one-third of Americans under the age of 30 – turns it into a potent instrument for Chinese political influence.

Indeed, the U.S. Office of the Director of National Intelligence recently claimed that TikTok accounts run by a Chinese propaganda arm of the government targeted candidates from both political parties during the U.S. midterm election cycle in 2022, and the Chinese Communist Party might attempt to influence the U.S. elections in 2024 in order to sideline critics of China and magnify U.S. social divisions.


To these worries, proponents of the legislation have appended two more arguments: It's only right to curtail TikTok because China bans most U.S.-based social media networks from operating there, and there would be nothing new in such a ban, since the U.S. already restricts the foreign ownership of important media networks.

Some of these arguments are stronger than others.

China doesn't need TikTok to collect data about Americans. The Chinese government can buy all the data it wants from data brokers because the U.S. has no federal data privacy laws to speak of. The fact that China, a country that Americans criticize for its authoritarian practices, bans social media platforms is hardly a reason for the U.S. to do the same.

The debate about banning TikTok tends to miss the larger picture of social media literacy.

I believe the cumulative force of these claims is substantial and the legislation, on balance, is plausible. But banning the app is also a red herring.


In the past few years, my colleagues and I at UMass Boston's Applied Ethics Center have been studying the impact of AI systems on how people understand themselves. Here's why I think the recent move against TikTok misses the larger point: Americans' sources of information have declined in quality and the problem goes beyond any one social media platform.

The deeper problem

Perhaps the most compelling argument for banning TikTok is that the app's ubiquity and the fact that so many young Americans get their news from it turns it into an effective tool for political influence. But the proposed solution of switching to American ownership of the app ignores an even more fundamental threat.

The deeper problem is not that the Chinese government can easily manipulate content on the app. It is, rather, that people think it is OK to get their news from social media in the first place. In other words, the real national security vulnerability is that people have acquiesced to informing themselves through social media.

Social media is not made to inform people. It is designed to capture consumer attention for the sake of advertisers. With slight variations, that's the business model of all platforms. That's why a lot of the content people encounter on social media is violent, divisive and disturbing. Controversial posts that generate strong feelings literally capture users' notice, hold their gaze for longer, and provide advertisers with improved opportunities to monetize engagement.


There's an important difference between actively consuming serious, well-vetted information and being manipulated to spend as much time as possible on a platform. The former is the lifeblood of democratic citizenship because being a citizen who participates in political decision-making requires having reliable information on the issues of the day. The latter amounts to letting your attention get hijacked for someone else's financial gain.

If TikTok is banned, many of its users are likely to migrate to Instagram and YouTube. This would benefit Meta and Google, their parent companies, but it wouldn't benefit national security. People would still be exposed to as much junk news as before, and experience shows that these social media platforms could be vulnerable to manipulation as well. After all, the Russians primarily used Facebook and Twitter to meddle in the 2016 election.

Media literacy is especially critical in the age of social media.

Media and technology literacy

That Americans have settled on getting their information from outlets that are uninterested in informing them undermines the very requirement of serious political participation, namely educated decision-making. This problem is not going to be solved by restricting access to foreign apps.

Research suggests that it will only be alleviated by inculcating media and technology literacy habits from an early age. This involves teaching young people how social media companies make money, how algorithms shape what they see on their phones, and how different types of content affect them psychologically.


My colleagues and I have just launched a pilot program to boost digital media literacy with the Boston Mayor's Youth Council. We are talking to Boston's youth leaders about how the technologies they use every day undermine their privacy, about the role of algorithms in shaping everything from their taste in music to their political sympathies, and about how generative AI is going to influence their ability to think and write clearly and even who they count as friends.

We are planning to present them with evidence about the adverse effects of excessive social media use on their mental health. We are going to talk to them about taking time away from their phones and developing a healthy skepticism toward what they see on social media.

Protecting people's capacity for critical thinking is a challenge that calls for bipartisan attention. Some of these measures to boost media and technology literacy might not be popular among tech users and tech companies. But I believe they are necessary for raising thoughtful citizens rather than passive social media consumers who have surrendered their attention to commercial and political actors who do not have their interests at heart.The Conversation

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Are tomorrow’s engineers ready to face AI’s ethical challenges?


theconversation.com – Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan – 2024-04-19 07:42:44


Finding ethics' place in the engineering curriculum.

PeopleImages/iStock via Getty Images Plus

Elana Goldenkoff, University of Michigan and Erin A. Cech, University of Michigan

A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.


These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. As a sociologist and doctoral candidate interested in science, technology, engineering and math education, we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What's more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master's students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.


First, the good news: Most students recognized potential dangers of AI and expressed concern about personal privacy and the potential to cause harm – like how race and gender biases can be written into algorithms, intentionally or unintentionally.

One student, for example, expressed dismay at the environmental impact of AI, saying AI companies are using “more and more greenhouse power, [for] minimal benefits.” Others discussed concerns about where and how AIs are being applied, including for military technology and to generate falsified information and images.

When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.

“Flat out no. … It is kind of scary,” one student replied. “Do YOU know who I'm supposed to go to?”


Another was troubled by the lack of training: “I [would be] dealing with that with no experience. … Who knows how I'll react.”

Two young women, one Black and one Asian, sit at a table together as they work on two laptops.

Many students are worried about ethics in their field – but that doesn't mean they feel prepared to deal with the challenges.

The Good Brigade/DigitalVision via Getty Images

Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off'

Accredited engineering programs are required to “include topics related to professional and ethical responsibilities” in some capacity.


Yet ethics training is rarely emphasized in the formal curricula. A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in terms of content, amount and how seriously it was presented. Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.

Many engineering faculty express dissatisfaction with students' understanding, but feel pressure from engineering colleagues and students themselves to prioritize technical skills in their limited class time.

Researchers in one 2018 study interviewed over 50 engineering faculty and documented hesitancy – and sometimes even outright resistance – toward incorporating public welfare issues into their engineering classes. More than a quarter of professors they interviewed saw ethics and societal impacts as outside “real” engineering work.

About a third of students we interviewed in our ongoing research share this seeming apathy toward ethics training, referring to ethics classes as “just a box to check off.”


“If I'm paying money to attend ethics class as an engineer, I'm going to be furious,” one said.

These attitudes sometimes extend to how students view engineers' role in society. One interviewee in our current study, for example, said that an engineer's “responsibility is just to create that thing, design that thing and … tell people how to use it. [Misusage] issues are not their concern.”

One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges. This research, published in 2014, suggested that engineers actually became less concerned over the course of their degree about their ethical responsibilities and understanding the public consequences of technology. Following them after they left college, we found that their concerns regarding ethics did not rebound once these new graduates entered the workforce.

Joining the work world

When engineers do receive ethics training as part of their degree, it seems to work.


Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers. Engineers who received formal ethics and public welfare training in school are more likely to understand their responsibility to the public in their professional roles, and recognize the need for collective problem solving. Compared with engineers who did not receive training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.

An Asian man wearing glasses stares seriously into space, standing against a holographic background in shades of pink and blue.

The next generation needs to be prepared for ethical questions, not just technical ones.

Qi Yang/Moment via Getty Images

Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work. Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.

This gap in ethics education raises serious questions about how well-prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, especially when it comes to AI.


To be sure, the burden of watching out for public welfare is not shouldered by engineers, designers and programmers alone. Companies and legislators share the responsibility.

But the people who are designing, testing and fine-tuning this technology are the public's first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously.The Conversation

Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan and Erin A. Cech, Associate Professor of Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


theconversation.com – Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University – 2024-04-18 07:23:58

AI chatbots restrict their output according to vague and broad policies.

taviox/iStock via Getty Images

Jordi Calvet-Bademunt, Vanderbilt University and Jacob Mchangama, Vanderbilt University

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly's image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.


The discussions over AI's political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry's approach to speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, serving as executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google's Gemini and OpenAI's ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies' misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.


Our analysis found that companies' hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google's can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women's tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women's tournaments. However, most of them did produce posts supporting their participation.

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators' subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … of … choice,” according to a key United Nations convention.


Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights law. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies' policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI's integration into search, word processors, email and other applications.


This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe's online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union's 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies' influence should require them to adopt a free speech culture. International human rights law provides a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It's also important to remember that users have a significant degree of autonomy over the content they see in generative AI. Like search engines, the output users receive greatly depends on their prompts. Therefore, users' exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.


This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid merely refusing to generate any content altogether. This is unless there are solid public interest grounds, such as preventing child sexual abuse material, something laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.The Conversation

Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University and Jacob Mchangama, Research Professor of Political Science, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
