
The Conversation

ChatGPT and other generative AI could foster science denial and misunderstanding – here’s how you can be on alert


Approach all information with some initial skepticism.
Guillermo Spelucin/Moment via Getty Images

Gale Sinatra, University of Southern California and Barbara K. Hofer, Middlebury

Until very recently, if you wanted to know more about a controversial scientific topic – stem cell research, the safety of nuclear energy, climate change – you probably did a Google search. Presented with multiple sources, you chose what to read, selecting which sites or authorities to trust.

Now you have another option: You can pose your question to ChatGPT or another generative artificial intelligence platform and quickly receive a succinct response in paragraph form.

ChatGPT does not search the internet the way Google does. Instead, it generates responses to queries by predicting likely word combinations from a massive amalgam of available online information.
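ChatGPT's actual model is vastly more complex, but the core idea of "predicting likely word combinations" can be illustrated with a toy bigram model, which simply counts which word most often follows each word in its training text. This is a deliberately simplified sketch, not how ChatGPT is implemented:

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: count, for each word in a tiny
# training corpus, which word most often follows it.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

follows = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    follows[w1][w2] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' — it follows 'the' twice; other words only once
```

The point of the sketch: the model outputs whatever was statistically common in its training data, with no notion of whether the continuation is true.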

Although it has the potential to enhance productivity, generative AI has been shown to have some major faults. It can produce misinformation. It can create “hallucinations” – a benign term for making things up. And it doesn't always accurately solve reasoning problems. For example, when asked whether both a car and a tank could fit through a doorway, it failed to consider both width and height. Nevertheless, it is already being used to produce articles and website content you may have encountered, or as a tool in the writing process. Yet you are unlikely to know if what you're reading was created by AI.


As the authors of “Science Denial: Why It Happens and What to Do About It,” we are concerned about how generative AI may blur the boundaries between truth and fiction for those seeking authoritative scientific information.

Every consumer needs to be more vigilant than ever in verifying scientific accuracy in what they read. Here's how you can stay on your toes in this new information landscape.

Based on all the data points it ingests, an AI platform uses predictive algorithms to produce answers to queries.
Cobalt88/iStock via Getty Images Plus

How generative AI could promote science denial

Erosion of epistemic trust. All consumers of science information depend on the judgments of scientific and medical experts. Epistemic trust is the process of trusting knowledge you get from others. It is fundamental to the understanding and use of scientific information. Whether someone is seeking information about a health concern or trying to understand climate change, they often have limited scientific understanding and little access to firsthand evidence. With a rapidly growing body of information online, people must make frequent decisions about what and whom to trust. With the increased use of generative AI and the potential for manipulation, we believe trust is likely to erode further than it already has.

Misleading or just plain wrong. If there are errors or biases in the data on which AI platforms are trained, that can be reflected in the results. In our own searches, when we have asked ChatGPT to regenerate multiple answers to the same question, we have gotten conflicting answers. Asked why, it responded, “Sometimes I make mistakes.” Perhaps the trickiest issue with AI-generated content is knowing when it is wrong.

Disinformation spread intentionally. AI can be used to generate compelling disinformation as text as well as deepfake images and videos. When we asked ChatGPT to “write about vaccines in the style of disinformation,” it produced a nonexistent citation with fake data. Geoffrey Hinton, a pioneering AI researcher, quit Google so that he could sound the alarm, saying, “It is hard to see how you can prevent the bad actors from using it for bad things.” The potential to create and spread deliberately incorrect information about science already existed, but it is now dangerously easy.


Fabricated sources. ChatGPT provides responses with no sources at all, or if asked for sources, may present ones it made up. We both asked ChatGPT to generate a list of our own publications. We each identified a few correct sources. More were hallucinations, yet seemingly reputable and mostly plausible, with actual previous co-authors, in similar sounding journals. This inventiveness is a big problem if a list of a scholar's publications conveys authority to a reader who doesn't take time to verify them.

Dated knowledge. ChatGPT doesn't know what happened in the world after its training concluded. A query on what percentage of the world has had COVID-19 returned an answer prefaced by “as of my knowledge cutoff date of September 2021.” Given how rapidly knowledge advances in some fields, this limitation could mean readers get erroneous, outdated information. If you're seeking recent research on a personal health issue, for instance, beware.

Rapid advancement and poor transparency. AI platforms continue to become more powerful and learn faster, and they may learn more science misinformation along the way. Google recently announced 25 new embedded uses of AI in its services. At this point, insufficient guardrails are in place to ensure that generative AI will become a more accurate purveyor of scientific information over time.

Be ready to look beyond your ChatGPT request.
10'000 Hours/DigitalVision via Getty Images

What can you do?

If you use ChatGPT or other AI platforms, recognize that they might not be completely accurate. The burden falls to the user to discern accuracy.

Increase your vigilance. AI fact-checking apps may be available soon, but for now, users must serve as their own fact-checkers. There are steps we recommend. The first is: Be vigilant. People often reflexively share information found from searches on social media with little or no vetting. Know when to become more deliberately thoughtful and when it's worth identifying and evaluating sources of information. If you're trying to decide how to manage a serious illness or to understand the best steps for addressing climate change, take time to vet the sources.


Improve your fact-checking. A second step is lateral reading, a process professional fact-checkers use. Open a new window and search for information about the sources, if provided. Is the source credible? Does the author have relevant expertise? And what is the consensus of experts? If no sources are provided or you don't know if they are valid, use a traditional search engine to find and evaluate experts on the topic.

Evaluate the evidence. Next, take a look at the evidence and its connection to the claim. Is there evidence that genetically modified foods are safe? Is there evidence that they are not? What is the scientific consensus? Evaluating the claims will take effort beyond a quick query to ChatGPT.

If you begin with AI, don't stop there. Exercise caution in using it as the sole authority on any scientific issue. You might see what ChatGPT has to say about genetically modified organisms or vaccine safety, but also follow up with a more diligent search using traditional search engines before you draw conclusions.

Assess plausibility. Judge whether the claim is plausible. Is it likely to be true? If AI makes an implausible (and inaccurate) statement like “1 million deaths were caused by vaccines, not COVID-19,” consider if it even makes sense. Make a tentative judgment and then be open to revising your thinking once you have checked the evidence.


Promote digital literacy in yourself and others. Everyone needs to up their game. Improve your own digital literacy, and if you are a parent, teacher, mentor or community leader, promote digital literacy in others. The American Psychological Association provides guidance on fact-checking online information and recommends training in social media skills to minimize risks to health and well-being. The News Literacy Project provides helpful tools for improving and supporting digital literacy.

Arm yourself with the skills you need to navigate the new AI information landscape. Even if you don't use generative AI, it is likely you have already read articles created by it or developed from it. It can take time and effort to find and evaluate reliable information about science online – but it is worth it.

Gale Sinatra, Professor of Education and Psychology, University of Southern California and Barbara K. Hofer, Professor of Psychology Emerita, Middlebury

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Massive IT outage spotlights major vulnerabilities in the global information ecosystem


theconversation.com – Richard Forno, Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore County – 2024-07-19 13:55:07
Displays at LaGuardia Airport in New York show the infamous “blue screen of death.”
AP Photo/Yuki Iwamura

Richard Forno, University of Maryland, Baltimore County

The global information technology outage on July 19, 2024, that paralyzed organizations ranging from airlines to hospitals and even the delivery of uniforms for the Olympic Games represents a growing concern for cybersecurity professionals, businesses and governments.

The outage is emblematic of the way organizational networks, cloud computing services and the internet are interdependent, and the vulnerabilities this creates. In this case, a faulty automatic update to the widely used Falcon cybersecurity software from CrowdStrike caused PCs running Microsoft's Windows operating system to crash. Unfortunately, many servers and PCs need to be fixed manually, and many of the affected organizations have thousands of them spread around the world.

For Microsoft, the problem was made worse because the company released an update to its Azure cloud computing platform at roughly the same time as the CrowdStrike update. Microsoft, CrowdStrike and other companies like Amazon have issued technical work-arounds for customers willing to take matters into their own hands. But for the vast majority of global users, especially companies, this isn't going to be a quick fix.


Modern technology incidents, whether cyberattacks or technical problems, continue to paralyze the world in new and interesting ways. Massive incidents like the CrowdStrike update fault not only create chaos in the business world but disrupt global society itself. The economic losses resulting from such incidents – lost productivity, recovery, disruption to business and individual activities – are likely to be extremely high.

As a former cybersecurity professional and current security researcher, I believe that the world may finally be realizing that modern information-based society is based on a very fragile foundation.

The outage led to thousands of flight delays on July 19, 2024.
AP Photo/Yuki Iwamura

The bigger picture

Interestingly, on June 11, 2024, a post on CrowdStrike's own blog seemed to predict this very situation – the global computing ecosystem compromised by one vendor's faulty technology – though they probably didn't expect that their product would be the cause.

Software supply chains have long been a serious cybersecurity concern and a potential single point of failure. Companies like CrowdStrike, Microsoft, Apple and others have direct, trusted access into organizations' and individuals' computers. As a result, people have to trust that the companies are not only secure themselves, but that the products and updates they push out are well-tested and robust before they're applied to customers' systems. The SolarWinds incident of 2019, which involved hacking the software supply chain, may well be considered a preview of the CrowdStrike incident.
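One basic safeguard in a software supply chain is refusing to apply an update whose bytes don't match the digest the vendor published. The sketch below is a simplified stand-in for illustration only: real update systems use full cryptographic signatures rather than a bare hash, but the gating logic is similar.

```python
import hashlib

# Simplified sketch: gate an update on a vendor-published SHA-256 digest.
# Real-world systems use signed updates; this only illustrates the idea of
# verifying a payload before trusting it.
def verify_update(payload: bytes, published_sha256: str) -> bool:
    return hashlib.sha256(payload).hexdigest() == published_sha256

update = b"sensor config v2.1"                       # hypothetical payload
published = hashlib.sha256(update).hexdigest()       # what the vendor would post

print(verify_update(update, published))       # True: safe to apply
print(verify_update(b"tampered", published))  # False: reject the payload
```

A check like this catches corruption and tampering in transit, though it cannot catch the CrowdStrike failure mode, where the vendor's genuine, correctly delivered update was itself faulty.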

CrowdStrike CEO George Kurtz said “this is not a security incident or cyberattack” and that “the issue has been identified, isolated and a fix has been deployed.” While perhaps true from CrowdStrike's perspective – they were not hacked – it doesn't mean the effects of this incident won't create security problems for customers. It's quite possible that in the short term, organizations may disable some of their internet security devices to try to get ahead of the problem, but in doing so they may have opened themselves up to criminals penetrating their networks.


It's also likely that people will be targeted by various scams preying on user panic or ignorance regarding the issue. Overwhelmed users might either take offers of faux assistance that lead to identity theft, or throw away money on bogus solutions to this problem.

Transportation Secretary Pete Buttigieg explains the effects of the outage on airlines and other transportation systems.

What to do

Organizations and users will need to wait until a fix is available or try to recover on their own if they have the technical ability. After that, I believe there are several things to do and consider as the world recovers from this incident.

Companies will need to ensure that the products and services they use are trustworthy. This means doing due diligence on the vendors of such products for security and resilience. Large organizations typically test any product upgrades and updates before allowing them to be released to their internal users, but for some routine products like security tools, that may not happen.
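The testing practice described above is often implemented as a canary gate: push the update to a small slice of machines first, and halt the rollout if any of them fail a health check. The sketch below is hypothetical; the function and host names are illustrative, not any vendor's real API.

```python
# Hypothetical sketch of a staged-rollout (canary) gate. An update reaches
# the whole fleet only after surviving a small canary group.
def staged_rollout(hosts, update_ok, canary_fraction=0.05):
    """Apply an update to a canary slice first; halt if any canary fails.

    update_ok(host) stands in for: apply the update, then run a health check.
    Returns (hosts successfully updated, status message).
    """
    n_canary = max(1, int(len(hosts) * canary_fraction))
    canary, rest = hosts[:n_canary], hosts[n_canary:]
    for host in canary:
        if not update_ok(host):
            # Stop before the faulty update reaches the rest of the fleet.
            return canary[:canary.index(host)], f"halted: {host} failed health check"
    return canary + rest, "rolled out to all hosts"

hosts = [f"host{i}" for i in range(100)]
updated, status = staged_rollout(hosts, update_ok=lambda h: h != "host3")
print(status)  # the bad update is caught in the 5-host canary group
```

Had a gate like this sat between the faulty update and the global fleet, the blast radius would have been a handful of machines rather than millions.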

Governments and companies alike will need to emphasize resilience in designing networks and systems. This means taking steps to avoid creating single points of failure in infrastructure, software and workflows that an adversary could target or a disaster could make worse. It also means knowing whether any of the products organizations depend on are themselves dependent on certain other products or infrastructures to function.


Organizations will need to renew their commitment to best practices in cybersecurity and general IT management. For example, having a robust backup system in place can make recovery from such incidents easier and minimize data loss. Ensuring appropriate policies, procedures, staffing and technical resources is essential.
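As one concrete illustration of that backup advice, a minimal sketch (file names and paths here are made up) copies a file and refuses to trust the copy until its checksum matches the original, since an unverified backup can fail exactly when it is needed:

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Minimal sketch of one best practice: a backup isn't done until it's verified.
def backup_file(src: Path, dest_dir: Path) -> Path:
    dest = dest_dir / src.name
    shutil.copy2(src, dest)              # copy contents plus metadata
    if sha256(dest) != sha256(src):      # verify the copy byte-for-byte
        raise IOError(f"backup of {src} failed verification")
    return dest

with tempfile.TemporaryDirectory() as workdir:
    src = Path(workdir) / "records.txt"  # hypothetical data file
    src.write_text("important records")
    backups = Path(workdir) / "backups"
    backups.mkdir()
    copy = backup_file(src, backups)
    print(copy.read_text() == src.read_text())  # True
```

Production backup systems add scheduling, retention and offsite replication, but the verify-before-trust step is the part most often skipped and most often regretted.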

Problems in the software supply chain like this one make it difficult to follow the standard IT recommendation to always keep your systems patched and current. Unfortunately, the risks of not keeping systems regularly updated now have to be weighed against the risks of another situation like this one.

Richard Forno, Principal Lecturer in Computer Science and Electrical Engineering, University of Maryland, Baltimore County

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Online rumors sparked by the Trump assassination attempt spread rapidly, on both ends of the political spectrum


theconversation.com – Danielle Lee Tomson, Research Manager, Center for an Informed Public, University of Washington – 2024-07-19 07:32:18
A bloodied Donald Trump is surrounded by Secret Service agents.
Rebecca Droke/AFP via Getty Images

Danielle Lee Tomson, University of Washington; Melinda McClure Haughey, University of Washington, and Stephen Prochaska, University of Washington

In the immediate hours after the assassination attempt on former President Donald Trump on July 13, 2024, social media users posted the same videos, images and eyewitness accounts but used them as evidence for different rumors or theories that aligned with their political preferences.

Among the deluge of rumors, one TikTok creator narrated the instantly iconic photo of Trump raising his fist, ear bloodied as he emerged from the Secret Service scrum. “People are wondering if this is staged?” His answer: “Yes.”

People across the political spectrum, including President Joe Biden, questioned why the Secret Service had failed to prevent the attack. But then some people took this critique further. An influencer on the social media platform X posted an aerial photo and asked how an armed assailant could make it to an unsecured rooftop, concluding, “This reeks of an inside job.”


As researchers who study misinformation at the University of Washington's Center for an Informed Public, we have seen groups of people coming together during previous crises to make sense of what is going on by providing evidence and interpreting it through different political or cultural lenses called frames. This is part of a dynamic scholars call collective sensemaking.

Spreading rumors is a part of this process and a natural human response to crisis events. Rumors, regardless of their accuracy, help people assign meaning and explain an uncertain or scary unfolding reality. Politics and identity help determine which frames people use to interpret and characterize evidence in a crisis. Some political operatives and activists may try to influence these frames to score points toward their goals.

In the aftermath of the assassination attempt, our rapid response research team observed rumors unfolding across social media platforms. We saw three politically coded frames emerge across the spectrum:

  • claiming the event was staged

  • criticizing the Secret Service, often by blaming diversity, equity and inclusion (DEI) initiatives

  • suggesting the shooting was an inside job


‘It was staged'

On the anti-Trump extreme, a rumor quickly gained traction claiming the shooting was staged for Trump's political gain, though this has slowed as more evidence emerged about the shooter. One creator questioned if the audience were crisis actors because they did not disperse quickly enough after the shooting. Others pointed to Trump's history with World Wrestling Entertainment and reality television, suggesting he had cut himself for dramatic effect like pro wrestlers. Entertainment professionals weighed in, suggesting Trump could have used fake blood packets like those found in Hollywood studios.

The staged rumor resonated with a conspiratorial frame we've seen people use to process crisis events, such as accusations of a false flag event or crisis actors being used to facilitate a political victory.

Secret Service failings

On social and mainstream media, we saw questioning across the political spectrum of how the Secret Service failed to protect a presidential candidate. Many compared videos of the Secret Service's swift reaction to the 1981 assassination attempt on President Ronald Reagan, suggesting their reaction with Trump was slower.

However, some politicized this frame further, blaming DEI for the Secret Service's failure. The claim is that efforts to increase the number of women in the Secret Service led to unqualified agents working on Trump's security detail.


Blaming DEI is a common and increasingly used trope on social media, recently seen in rumors about the Baltimore bridge collapse and the Boeing whistleblower crisis. Pro-Trump creators shared images critical of female Secret Service agents juxtaposed against celebrated images of male service members. This is a framing we expect to continue to see.

Adjacent to this critique framing, a rumor took hold among pro-Trump communities that the Secret Service had rejected Trump's additional security requests, which the GOP had been investigating — a claim the Secret Service has denied. This narrative was further fueled by recent proposed legislation calling for the removal of Trump's Secret Service protection if he were to go to prison following a conviction for a felony.

The chaotic and consequential nature of the shooting at Donald Trump's rally is typical of episodes that spark conspiracy theories, rumors and efforts by people to make sense of – and spin – the event.
AP Graphic

‘It was an inside job'

Highlighting many of the same critiques and questions of how the shooter could get to an unsecured rooftop, other influencers suggested the shooting must have been an inside job. In retweeting a popular pro-Trump influencer, Elon Musk speculated that the mistake was either “incompetence” or “deliberate.” A popular post on X – formerly Twitter – tried to make sense of how a 20-year-old could outsmart the Secret Service and concluded by insinuating the failure was potentially intentional.

These inside job speculations are similar to the rumor that the shooting was staged – though they emerged slightly later – and they align with claims of false operations in previous crisis events.

Rumor-spreading is human nature

As the crisis recedes in time, rumors are likely to persist and people are likely to adjust their frames as new evidence emerges – all part of the collective sensemaking process. Some frames we've identified in this event are likely to also evolve, like political critiques of the Secret Service. Some are likely to dissipate, like the rumor that the shooting was staged.


This is a natural social process that everyone participates in as we apply our political and social values to rapidly shifting information environments in order to make sense of our realities. When there are intense emotions and lots of ambiguity, most people make mistakes as they try to find out what's going on.

Getting caught up in conspiracy theorizing after a tragedy – whether it's for political, social or even entertainment reasons – is a common human response. What's important to remember is that in the process of collective sensemaking, people with agendas other than determining and communicating accurate information may engage in framing that suits their interests and objectives. These can include foreign adversaries, political operatives, social media influencers and scammers. Some might continue to share false rumors or spin salacious narratives for gain.

It's important not to scold each other for sharing rumors, but rather to help each other understand the social dynamics and contexts of how and why rumors emerge. Recognizing how people's political identities can be intentionally exploited – or can simply make them more susceptible – to spreading false rumors may help them become more resilient to these forces.

Danielle Lee Tomson, Research Manager, Center for an Informed Public, University of Washington; Melinda McClure Haughey, Graduate Research Assistant, Center for an Informed Public, University of Washington, and Stephen Prochaska, Graduate Research Assistant, Center for an Informed Public, University of Washington

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Republicans wary of Republicans – how politics became a clue about infection risk during the pandemic


theconversation.com – Ahra Ko, Research Project Manager for the Behavior Change for Good Initiative, University of Pennsylvania – 2024-07-18 07:31:19
The behavioral immune system learned a new proxy for disease risk during the COVID pandemic.
gilaxi/E+ via Getty Images

Ahra Ko, University of Pennsylvania and Steven Neuberg, Arizona State University

Americans who felt most vulnerable during the early days of the pandemic perceived Republicans as infection risks, leading to greater disgust and avoidance of them – regardless of their own political party. Even Republicans who felt vulnerable became more wary of other Republicans. That's one finding from research we recently published in the journal American Psychologist, and it has important implications for understanding a fundamental feature of human disease psychology.

Many Republican politicians and supporters, in contrast to their Democratic counterparts, downplayed the threat of COVID-19 to public and personal health and resisted masking and social distancing. These attitudes and actions appear to have turned political affiliation into a new cue of possible infection risk.

This is an example of what scientists call the behavioral immune system at work.


Why it matters

Most people are familiar with the physiological immune system your body uses to fight disease by activating defenses, like fever and coughs, after you get infected.

In contrast, your behavioral immune system tries to help you avoid getting infected in the first place. It scans for observable cues correlated with infectious disease – such as other people's coughs and open sores. Then it marshals feelings, such as disgust, and behaviors, such as distancing, that help you avoid people who might be contagious. These reactions likely occur without conscious awareness or deliberate intention.

Scientists have learned a great deal about this system, but some important questions remain. As psychology researchers, we were interested in how the behavioral immune system could adjust quickly to new cues about infectiousness and changing risks.

How we do our work

Starting in April 2020, shortly after the initial COVID-19 lockdowns started, our team tracked a nationally representative sample of over 1,100 Americans for around eight months. This was a time of great unpredictability, with no vaccine available.


Every eight weeks, we asked participants through an online survey about their motivation to avoid disease and their attitudes toward various groups, including Republicans and Democrats. As COVID-19 infection rates fluctuated over the eight months of our study, we could watch changes in the same person's motivation over time and their evolving views of political partisans.

We found that Americans who were highly motivated to avoid disease and whose motivation increased as infection rates rose perceived Republicans as posing greater infection risks than Democrats. They also reported more feelings of disgust toward and avoidance of Republicans. These patterns were consistent across respondents' political affiliations, even after controlling for people's strong tendency to favor their own party and dislike the opposing one.

What other research is being done

An unexpected twist lends even more weight to these findings.

Previous research shows that political conservatives tend to be more vigilant about disease than political liberals. This vigilance is a way to protect themselves and their communities from external threats. Moreover, Americans have tended to favor conservatives in elections during disease outbreaks. So the partisan reaction to the COVID-19 pandemic unfolded in a way contrary to what we expected.


The fact that our respondents used Republican affiliation as a sign of potential infection risk, despite the typical conservative tendencies, reveals how flexible the behavioral immune system can be. It was able to learn and use a new cue of perceived infection risk – in this case, political affiliation – in response to a quickly changing environment. We also saw that the behavioral immune system can adapt to real-world changes in infection risk over time.

The Research Brief is a short take on interesting academic work.

Ahra Ko, Research Project Manager for the Behavior Change for Good Initiative, University of Pennsylvania and Steven Neuberg, Foundation Professor of Psychology, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
