
The Conversation

6 ways AI can make political campaigns more deceptive than ever


There are real fears that AI will make politics more deceptive than it already is.
Westend61/Getty Images

David E. Clementson, University of Georgia

Political campaign ads and donor solicitations have long been deceptive. In 2004, for example, U.S. presidential candidate John Kerry, a Democrat, aired an ad stating that Republican opponent George W. Bush "says sending jobs overseas 'makes sense' for America."

Bush never said such a thing.

The next day Bush responded by releasing an ad saying Kerry "supported higher taxes over 350 times." This too was a false claim.

These days, the internet has gone wild with deceptive political ads. Ads often pose as polls and have misleading clickbait headlines.


Campaign fundraising solicitations are also rife with deception. An analysis of 317,366 political emails sent during the 2020 election in the U.S. found that deception was the norm. For example, a campaign might manipulate recipients into opening its emails by lying about the sender's identity, using subject lines that trick the recipient into thinking the sender is replying to the donor, or claiming the email is "NOT asking for money" but then asking for money. Both Republicans and Democrats do it.

Campaigns are now rapidly embracing artificial intelligence for composing and producing ads and donor solicitations. The results are impressive: Democratic campaigns found that donor letters written by AI were more effective than letters written by humans at producing personalized text that persuades recipients to click and send donations.

A pro-Ron DeSantis super PAC used an AI-generated imitation of Donald Trump's voice in an ad.

And AI has benefits for democracy, such as helping staffers organize their emails from constituents or helping summarize testimony.

But there are fears that AI will make politics more deceptive than ever.


Here are six things to look out for. I base this list on my own experiments testing the effects of political deception. My hope is that voters will know what to expect and what to watch out for, and become more skeptical, as the U.S. heads into the next presidential campaign.

Bogus custom campaign promises

My research on the 2020 presidential election revealed that the choice voters made between Biden and Trump was driven by their perceptions of which candidate "proposes realistic solutions to problems" and "says out loud what I am thinking," based on 75 items in a survey. These are two of the most important qualities for a candidate to have to project a presidential image and win.

AI chatbots, such as ChatGPT by OpenAI, Bing Chat by Microsoft, and Bard by Google, could be used by politicians to generate customized campaign promises deceptively microtargeting voters and donors.

Currently, when people scroll through their feeds, the articles they read are logged in their browsing history, which is tracked by sites such as Facebook. The user is tagged as liberal or conservative, and also tagged as holding certain interests. Political campaigns can place an ad spot in real time on the person's feed with a customized title.


Campaigns can use AI to develop a repository of articles written in different styles making different campaign promises. Campaigns could then embed an AI algorithm in the process – courtesy of automated commands already plugged in by the campaign – to generate bogus tailored campaign promises at the end of the ad posing as a news article or donor solicitation.

ChatGPT, for instance, could hypothetically be prompted to add material based on text from the last articles that the voter was reading online. The voter then scrolls down and reads the candidate promising exactly what the voter wants to see, word for word, in a tailored tone. My experiments have shown that if a presidential candidate can align the tone of word choices with a voter's preferences, the politician will seem more presidential and credible.

Exploiting the tendency to believe one another

Humans tend to automatically believe what they are told. They have what scholars call a "truth-default." They even fall prey to seemingly implausible lies.

In my experiments I found that people who are exposed to a presidential candidate's deceptive messaging believe the untrue statements. Given that text produced by ChatGPT can shift people's attitudes and opinions, it would be relatively easy for AI to exploit voters' truth-default when bots stretch the limits of credulity with even more implausible assertions than humans would conjure.


More lies, less accountability

Chatbots such as ChatGPT are prone to making up stuff that is factually inaccurate or totally nonsensical. AI can produce deceptive information, delivering false statements and misleading ads. While the most unscrupulous human campaign operative may still have a smidgen of accountability, AI has none. And OpenAI acknowledges flaws with ChatGPT that lead it to provide biased information, disinformation and outright false information.

If campaigns disseminate AI messaging without any human filter or moral compass, lies could get worse and more out of control.

Coaxing voters to cheat on their candidate

A New York Times columnist had a lengthy chat with Microsoft's Bing chatbot. Eventually, the bot tried to get him to leave his wife. "Sydney" told the reporter repeatedly "I'm in love with you," and "You're married, but you don't love your spouse … you love me. … Actually you want to be with me."

Imagine millions of these sorts of encounters, but with a bot trying to ply voters to leave their candidate for another.


AI chatbots can exhibit partisan bias. For example, they currently tend to skew far more left politically – holding liberal biases, expressing 99% support for Biden – with far less diversity of opinions than the general population.

In 2024, Republicans and Democrats will have the opportunity to fine-tune models that inject political bias and even chat with voters to sway them.

Two men in dark suits debating each other from different lecterns.
In 2004, a campaign ad for Democratic presidential candidate John Kerry, left, lied about his opponent, Republican George W. Bush, right. Bush's campaign lied about Kerry, too.
AP Photo/Wilfredo Lee

Manipulating candidate photos

AI can change images. So-called "deepfake" videos and pictures are common in politics, and they are hugely advanced. Donald Trump has used AI to create a fake photo of himself down on one knee, praying.

Photos can be tailored more precisely to influence voters more subtly. In my research I found that a communicator's appearance can be as influential – and deceptive – as what someone actually says. My research also revealed that Trump was perceived as "presidential" in the 2020 election when voters thought he seemed "sincere." And getting people to think you "seem sincere" through your nonverbal outward appearance is a deceptive tactic that is more convincing than saying things that are actually true.

Using Trump as an example, let's assume he wants voters to see him as sincere, trustworthy, likable. Certain alterable features of his appearance make him look insincere, untrustworthy and unlikable: He bares his lower teeth when he speaks and rarely smiles, which makes him look threatening.


The campaign could use AI to tweak a Trump image to make him appear smiling and friendly, which would make voters think he is more reassuring and a winner, and ultimately sincere and believable.

Evading blame

AI provides campaigns with added deniability when they mess up. Typically, if politicians get in trouble they blame their staff. If staffers get in trouble they blame the intern. If interns get in trouble they can now blame ChatGPT.

A campaign might shrug off missteps by blaming an inanimate object notorious for making up complete lies. When Ron DeSantis' campaign tweeted deepfake photos of Trump hugging and kissing Anthony Fauci, staffers did not even acknowledge the malfeasance or respond to reporters' requests for comment. No human needed to, it appears, if a robot could hypothetically take the fall.

Not all of AI's contributions to politics are potentially harmful. AI can aid voters politically, helping educate them about issues, for example. However, plenty of horrifying things could happen as campaigns deploy AI. I hope these six points will help you prepare for, and avoid, deception in ads and donor solicitations.

David E. Clementson, Assistant Professor, Grady College of Journalism and Mass Communication, University of Georgia


This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Conversation

Black holes are mysterious, yet also deceptively simple – a new space mission may help physicists answer hairy questions about these astronomical objects


theconversation.com – Gaurav Khanna, Professor of Physics, University of Rhode Island – 2024-05-15 07:16:18

An illustration of a supermassive black hole.

NASA/JPL

Gaurav Khanna, University of Rhode Island

Physicists consider black holes one of the most mysterious objects that exist. Ironically, they're also considered one of the simplest. For years, physicists like me have been looking to prove that black holes are more complex than they seem. And a newly approved European space mission called LISA will help us with this hunt.


Research from the 1970s suggests that you can comprehensively describe a black hole using only three physical attributes – their mass, charge and spin. All the other properties of these massive dying stars, like their detailed composition, density and temperature profiles, disappear as they transform into a black hole. That is how simple they are.

The idea that black holes have only three attributes is called the "no-hair" theorem, implying that they don't have any "hairy" details that make them complicated.

Black holes are massive, mysterious astronomical objects.

Hairy black holes?

For decades, researchers in the astrophysics community have exploited loopholes or work-arounds within the no-hair theorem's assumptions to come up with potential hairy black hole scenarios. A hairy black hole has a physical property that scientists can measure – in principle – that's beyond its mass, charge or spin. This property has to be a permanent part of its structure.

About a decade ago, Stefanos Aretakis, a physicist currently at the University of Toronto, showed mathematically that a black hole containing the maximum charge it could hold – called an extremal charged black hole – would develop "hair" at its horizon. A black hole's horizon is the boundary where anything that crosses it, even light, can't escape.


Aretakis' analysis was more of a thought experiment using a highly simplified physical scenario, so it's not something scientists expect to observe astrophysically. But supercharged black holes might not be the only kind that could have hair.

Since astrophysical objects such as stars and planets are known to spin, scientists expect that black holes would spin as well, based on how they form. Astronomical evidence has shown that black holes do have spin, though researchers don't know what the typical spin value is for an astrophysical black hole.

Using computer simulations, my team has recently discovered similar types of hair in black holes that are spinning at the maximum rate. This hair has to do with the rate of change, or the gradient, of space-time's curvature at the horizon. We also discovered that a black hole wouldn't actually have to be maximally spinning to have hair, which is significant because these maximally spinning black holes probably don't form in nature.

Detecting and measuring hair

My team wanted to develop a way to potentially measure this hair – a new fixed property that might characterize a black hole beyond its mass, spin and charge. We started looking into how such a new property might leave a signature on a gravitational wave emitted from a fast-spinning black hole.


A gravitational wave is a tiny disturbance in space-time typically caused by violent astrophysical events in the universe. The collisions of compact astrophysical objects such as black holes and neutron stars emit strong gravitational waves. An international network of gravitational observatories, including the Laser Interferometer Gravitational-wave Observatory in the United States, routinely detects these waves.

Our recent studies suggest that one can measure these hairy attributes from gravitational wave data for fast-spinning black holes. Looking at the gravitational wave data offers an opportunity to find a signature of sorts that could indicate whether the black hole has this type of hair.

Our ongoing studies and recent progress made by Som Bishoyi, a student on the team, are based on a blend of theoretical and computational models of fast-spinning black holes. Our findings have not been tested in the field yet or observed in real black holes out in space. But we hope that will soon change.

LISA gets a go-ahead

In January 2024, the European Space Agency formally adopted the space-based Laser Interferometer Space Antenna, or LISA, mission. LISA will look for gravitational waves, and the data from the mission could help my team with our hairy black hole questions.



The LISA spacecrafts observing gravitational waves from a distant source while orbiting the Sun.

Simon Barke/Univ. Florida, CC BY

Formal adoption means that the mission has the go-ahead to move to the construction phase, with a planned 2035 launch. LISA consists of three spacecrafts configured in a perfect equilateral triangle that will trail behind the Earth around the Sun. The spacecrafts will each be 1.6 million miles (2.5 million kilometers) apart, and they will exchange laser beams to measure the distance between each other down to about a billionth of an inch.

LISA will detect gravitational waves from supermassive black holes that are millions or even billions of times more massive than our Sun. It will build a map of the space-time around rotating black holes, which will help physicists understand how gravity works in the close vicinity of black holes to an unprecedented level of accuracy. Physicists hope that LISA will also be able to measure any hairy attributes that black holes might have.

With LIGO making new observations every day and LISA set to offer a glimpse into the space-time around black holes, now is one of the most exciting times to be a black hole physicist.

Gaurav Khanna, Professor of Physics, University of Rhode Island


This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Viruses are doing mysterious things everywhere – AI can help researchers understand what they're up to in the oceans and in your gut


theconversation.com – Libusha Kelly, Associate Professor of Systems and Computational Biology, Microbiology and Immunology, Albert Einstein College of Medicine – 2024-05-15 07:16:41

Many viral genetic sequences code for proteins that researchers haven't seen before.

KTSDesign/Science Photo Library via Getty Images

Libusha Kelly, Albert Einstein College of Medicine

Viruses are a mysterious and poorly understood force in microbial ecosystems. Researchers know they can infect, kill and manipulate human and bacterial cells in nearly every environment, from the oceans to your gut. But scientists don't yet have a full picture of how viruses affect their surrounding environments in large part because of their extraordinary diversity and ability to rapidly evolve.


Communities of microbes are difficult to study in a laboratory setting. Many microbes are challenging to cultivate, and their natural environment has many more features influencing their success or failure than scientists can replicate in a lab.

So systems biologists like me often sequence all the DNA present in a sample – for example, a fecal sample from a patient – separate out the viral DNA sequences, then annotate the sections of the viral genome that code for proteins. These notes on the location, structure and other features of genes help researchers understand the functions viruses might carry out in the environment and help identify different kinds of viruses. Researchers annotate viruses by matching viral sequences in a sample to previously annotated sequences available in public databases of viral genetic sequences.

However, scientists are identifying viral sequences in DNA collected from the environment at a rate that far outpaces our ability to annotate those genes. This means researchers are publishing findings about viruses in microbial ecosystems using unacceptably small fractions of available data.

To improve researchers' ability to study viruses around the globe, my team and I have developed a novel approach to annotate viral sequences using artificial intelligence. Through protein language models akin to large language models like ChatGPT but specific to proteins, we were able to classify previously unseen viral sequences. This opens the door for researchers to not only learn more about viruses, but also to address biological questions that are difficult to answer with current techniques.


Annotating viruses with AI

Large language models use relationships between words in large datasets of text to generate potential answers to questions they are not explicitly "taught" the answer to. When you ask a chatbot "What is the capital of France?" for example, the model is not looking up the answer in a table of capital cities. Rather, it is using its training on huge datasets of documents and information to infer the answer: "The capital of France is Paris."

Similarly, protein language models are AI algorithms that are trained to recognize relationships between billions of protein sequences from environments around the world. Through this training, they may be able to infer something about the essence of viral proteins and their functions.

We wondered whether protein language models could answer this question: "Given all annotated viral genetic sequences, what is this new sequence's function?"

In our proof of concept, we trained neural networks on previously annotated viral protein sequences in pre-trained protein language models and then used them to predict the annotation of new viral protein sequences. Our approach allows us to probe what the model is "seeing" in a particular viral sequence that leads it to a particular annotation. This helps identify candidate proteins of interest either based on their specific functions or how their genome is arranged, winnowing down the search of vast datasets.
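The general pattern can be sketched in a few lines of Python. The sketch below is illustrative only, not our actual pipeline: the embeddings, labels and classifier are hypothetical stand-ins, and it assumes each protein sequence has already been converted into a fixed-length vector by a pre-trained protein language model.

```python
# Minimal sketch: predict protein annotations from pre-trained
# protein-language-model embeddings. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for embeddings produced by a protein language model:
# 1,000 proteins, each represented by a 320-dimensional vector.
embeddings = rng.normal(size=(1000, 320))

# Stand-in for curated annotations (e.g., 0 = capsid, 1 = integrase,
# 2 = other/unknown), normally drawn from public databases.
labels = rng.integers(0, 3, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0)

# A simple classifier on top of the embeddings; the heavy lifting is
# done by the language model that produced them.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

# A never-before-seen viral protein, once embedded, can then be
# assigned a candidate annotation.
new_protein = rng.normal(size=(1, 320))
print("predicted annotation class:", clf.predict(new_protein)[0])
```

In a real workflow, the random arrays would be replaced by embeddings from a model such as ESM and by curated annotations, and the classifier would typically be a neural network rather than logistic regression.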



Prochlorococcus is one of the many species of marine bacteria with proteins that researchers haven't seen before.

Anne Thompson/Chisholm Lab, MIT via Flickr

By identifying more distantly related viral gene functions, protein language models can complement current methods to provide new insights into microbiology. For example, my team and I were able to use our model to discover a previously unrecognized integrase – a type of protein that can move genetic information in and out of cells – in the globally abundant marine picocyanobacteria Prochlorococcus and Synechococcus. Notably, this integrase may be able to move genes in and out of these populations of bacteria in the oceans and enable these microbes to better adapt to changing environments.

Our language model also identified a novel viral capsid protein that is widespread in the global oceans. We produced the first picture of how its genes are arranged, showing it can contain different sets of genes that we believe indicates this virus serves different functions in its environment.

These preliminary findings represent only two of thousands of annotations our approach has provided.


Analyzing the unknown

Most of the hundreds of thousands of newly discovered viruses remain unclassified. Many viral genetic sequences match protein families with no known function or have never been seen before. Our work shows that similar protein language models could help study the threat and promise of our planet's many uncharacterized viruses.

While our study focused on viruses in the global oceans, improved annotation of viral proteins is critical for better understanding the role viruses play in health and disease in the human body. We and other researchers have hypothesized that viral activity in the human gut microbiome might be altered when you're sick. This means that viruses may help identify stress in microbial communities.

However, our approach is also limited because it requires high-quality annotations. Researchers are developing newer protein language models that incorporate other "tasks" as part of their training, particularly predicting protein structures to detect similar proteins, to make them more powerful.

Making all AI tools available via FAIR Data Principles – data that is findable, accessible, interoperable and reusable – can help researchers at large realize the potential of these new ways of annotating protein sequences, leading to discoveries that benefit human health.

Libusha Kelly, Associate Professor of Systems and Computational Biology, Microbiology and Immunology, Albert Einstein College of Medicine


This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Human differences in judgment lead to problems for AI


theconversation.com – Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California – 2024-05-14 07:14:06

Bias isn't the only human imperfection turning up in AI.

Emrah Turudu/Photodisc via Getty Images

Mayank Kejriwal, University of Southern California

Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.


If society could somehow rid itself of bias, would all of its problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Statistical noise

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than initially meets the eye. Seminal work, dating back all the way to the Great Depression, found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.


Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Behavioral economist Daniel Kahneman explains the concept of noise in human judgment.

Noise in the data

On the surface, it doesn't seem likely that noise could affect the performance of AI systems. After all, machines aren't affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: "If I place a heavy rock on a paper table, will it collapse? Yes or No." If there is high agreement between the two – in the best case, perfect agreement – the machine is approaching human-level common sense, according to the test.


So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: "Is the sentence plausible or implausible? My dog plays volleyball." In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don't account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than if the answers diverge – in other words, where there is noise. Researchers still don't know whether or how to weigh AI's answers in that situation, but a first step is acknowledging that the problem exists.
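To make the idea of agreement weighting concrete, here is a minimal sketch in Python. The labels and answers are hypothetical, and the weighting scheme shown – weighting each question by the fraction of human labelers who agree with the majority answer – is just one simple option among many.

```python
# Sketch: score an AI system against human labels, weighting each
# question by how much the human labelers agree with one another.
# All labels and answers here are hypothetical.
import numpy as np

# Each row: answers from five independent human labelers to one question.
human_labels = [
    ["yes", "yes", "yes", "yes", "yes"],  # unanimous: no noise
    ["yes", "yes", "yes", "yes", "no"],   # mild disagreement
    ["no",  "no",  "yes", "no",  "no"],   # mild disagreement
    ["yes", "no",  "yes", "no",  "yes"],  # 3-2 split: a noisy question
]
model_answers = ["yes", "yes", "no", "no"]  # the AI system's answers

majorities, agreements = [], []
for labels in human_labels:
    values, counts = np.unique(labels, return_counts=True)
    majorities.append(values[np.argmax(counts)])   # consensus answer
    agreements.append(counts.max() / len(labels))  # 1.0 down to 0.6

correct = np.array([a == m for a, m in zip(model_answers, majorities)],
                   dtype=float)
weights = np.array(agreements)

print(f"unweighted accuracy:         {correct.mean():.2f}")  # 0.75
weighted = (weights * correct).sum() / weights.sum()
print(f"agreement-weighted accuracy: {weighted:.2f}")        # 0.81
```

In this toy example the system's only error falls on the noisiest question, so the weighted score exceeds the unweighted one; a system that failed on unanimous questions would instead be penalized more heavily.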

Tracking down noise in the machine

Theory aside, the question still remains whether all of the above is hypothetical or if in real tests of common sense there is noise. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers and get multiple people to independently label them, meaning provide answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven't been any studies of possible noise in AI tests.


To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was how inevitable noise is, and whether it is just a matter of quality control.

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high – even universal – agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4% and 10% of a system's performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85% on a test, and you built an AI system that achieved 91%. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we're not sure anymore that the 6% improvement means much. For all we know, there may be no real improvement.


On AI leaderboards, where large language models like the one that powers ChatGPT are ranked, performance differences between rival systems are far narrower, typically less than 1%. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.

Noise audits

What is the way forward? Returning to Kahneman's book, he proposed the concept of a "noise audit" for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, leads to their adoption.

Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.
