The Conversation

How AI could take over elections – and undermine democracy


An AI-driven political campaign could be all things to all people. Eric Smalley, TCUS; Biodiversity Heritage Library/Flickr; Taymaz Valley/Flickr, CC BY-ND

Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger's AI would have a different objective: to change people's behavior.

How Clogger would work

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state of the art in algorithmic behavior manipulation. First, its language model would generate messages – texts, social media posts and emails, perhaps including images and videos – tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a machine-learning, trial-and-error approach in which the computer takes actions and gets feedback about which work better in order to learn how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.

How reinforcement learning works.
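To make that trial-and-error idea concrete, here is a minimal sketch of reinforcement learning in Python. It is a toy epsilon-greedy bandit, not anything resembling Clogger: the three "actions" are hypothetical message variants, the simulated response rates are invented for illustration, and no real voter data or campaign system is involved.

```python
import random

# Toy reinforcement-learning loop (epsilon-greedy bandit).
# The three "actions" stand in for hypothetical message variants; the
# response rates below are invented purely for illustration.
ACTIONS = ["variant_a", "variant_b", "variant_c"]
TRUE_RESPONSE_RATE = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}

EPSILON = 0.1  # how often to try a random action instead of the current best
estimated_value = {a: 0.0 for a in ACTIONS}  # learned estimate of each action's payoff
times_tried = {a: 0 for a in ACTIONS}

def get_feedback(action: str) -> int:
    """Simulated environment: did the recipient respond? (1 = yes, 0 = no)."""
    return 1 if random.random() < TRUE_RESPONSE_RATE[action] else 0

for _ in range(10_000):
    # Explore occasionally; otherwise exploit the best-looking action so far.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: estimated_value[a])

    reward = get_feedback(action)

    # Update a running average of the reward for the chosen action.
    times_tried[action] += 1
    estimated_value[action] += (reward - estimated_value[action]) / times_tried[action]

print(estimated_value)  # over many trials, variant_b's higher payoff emerges
```

The essential point is the feedback loop – act, observe which actions work better, update, repeat – which is the same loop the article describes Clogger running across millions of individually targeted voters.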

Third, over the course of a campaign, Clogger's messages could evolve in order to take into account your responses to the machine's prior dispatches and what it has learned about changing others' minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger's messages would be similar to ads that follow you across different websites and social media.

The nature of AI

Three more features – or bugs – are worth noting.

First, the messages that Clogger sends may or may not be political in content. The machine's only goal is to maximize vote share, and it would likely devise strategies for achieving this goal that no human campaigner would have thought of.

One possibility is sending likely opponent voters information about nonpolitical passions that they have in sports or entertainment to bury the political messaging they receive. Another possibility is sending off-putting messages – for example, incontinence advertisements – timed to coincide with opponents' messaging. And another is manipulating voters' social media friend groups to give the sense that their social circles support its candidate.

Second, Clogger has no regard for truth. Indeed, it has no way of knowing what is true or false. Language model “hallucinations” are not a problem for this machine because its objective is to change your vote, not to provide accurate information.

Third, because it is a black box type of artificial intelligence, people would have no way to know what strategies it uses.

The field of explainable AI aims to open the black box of many machine-learning models so people can understand how they work.

Clogocracy

If the Republican presidential campaign were to deploy Clogger in 2024, the Democratic campaign would likely be compelled to respond in kind, perhaps with a similar machine. Call it Dogger. If the campaign managers thought that these machines were effective, the presidential contest might well come down to Clogger vs. Dogger, and the winner would be the client of the more effective machine.

Political scientists and pundits would have much to say about why one or the other AI prevailed, but likely no one would really know. The president will have been elected not because his or her policy proposals or political ideas persuaded more Americans, but because he or she had the more effective AI. The content that won the day would have come from an AI focused solely on victory, with no political ideas of its own, rather than from candidates or parties.

In this very important sense, a machine would have won the election rather than a person. The election would no longer be democratic, even though all of the ordinary activities of democracy – the speeches, the ads, the messages, the voting and the counting of votes – will have occurred.

The AI-elected president could then go one of two ways. He or she could use the mantle of election to pursue Republican or Democratic party policies. But because the party ideas may have had little to do with why people voted the way that they did – Clogger and Dogger don't care about policy views – the president's actions would not necessarily reflect the will of the voters. Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies.

Another path is for the president to pursue the messages, behaviors and policies that the machine predicts will maximize the chances of reelection. On this path, the president would have no particular platform or agenda beyond maintaining power. The president's actions, guided by Clogger, would be those most likely to manipulate voters rather than serve their genuine interests or even the president's own ideology.

Avoiding Clogocracy

It would be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI. We believe that is unlikely. If politically effective black boxes were developed, the temptation to use them would be almost irresistible. Indeed, political consultants might well see using these tools as required by their professional responsibility to help their candidates win. And once one candidate uses such an effective tool, the opponents could hardly be expected to resist by disarming unilaterally.

Enhanced privacy protection would help. Clogger would depend on access to vast amounts of personal data in order to target individuals, craft messages tailored to persuade or manipulate them, and track and retarget them over the course of a campaign. Every bit of that information that companies or policymakers deny the machine would make it less effective.

Strong data privacy laws could help steer AI away from being manipulative.

Another solution lies with elections commissions. They could try to ban or severely regulate these machines. There's a fierce debate about whether such “replicant” speech, even if it's political in nature, can be regulated. The U.S.'s extreme free speech tradition leads many prominent academics to say it cannot.

But there is no reason to automatically extend the First Amendment's protection to the product of these machines. The nation might well choose to give machines rights, but that should be a decision grounded in the challenges of today, not the misplaced assumption that James Madison's views in 1789 were intended to apply to AI.

European Union regulators are moving in this direction. Policymakers amended the European Parliament's draft of its Artificial Intelligence Act to designate “AI systems to influence voters in campaigns” as “high risk” and subject to regulatory scrutiny.

One constitutionally safer, if smaller, step, already adopted in part by European internet regulators and in California, is to prohibit bots from passing themselves off as people. For example, regulation might require that campaign messages come with disclaimers when the content they contain is generated by machines rather than humans.

This would be like the advertising disclaimer requirements – “Paid for by the Sam Jones for Congress Committee” – but modified to reflect its AI origin: “This AI-generated message was paid for by the Sam Jones for Congress Committee.” A stronger version could require: “This AI-generated message is being sent to you by the Sam Jones for Congress Committee because Clogger has predicted that doing so will increase your chances of voting for Sam Jones by 0.0002%.” At the very least, we believe voters deserve to know when it is a bot speaking to them, and they should know why, as well.

The possibility of a system like Clogger shows that the path toward human collective disempowerment may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants who have powerful new tools that can effectively push millions of people's many buttons.

Archon Fung consults for Apple University.

Lawrence Lessig does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


By: Archon Fung, Professor of Citizenship and Self-Government, Harvard Kennedy School
Title: How AI could take over elections – and undermine democracy
Sourced From: theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-206051
Published Date: Fri, 02 Jun 2023 13:42:24 +0000

The Conversation

Poor media literacy in the social media age


theconversation.com – Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston – 2024-04-19 10:01:58

TikTok is not the only social app to pose the threats it's been accused of.

picture alliance via Getty Images

Nir Eisikovits, UMass Boston

The U.S. moved closer to banning the video social media app TikTok after the House of Representatives attached the measure to an emergency spending bill on Apr. 17, 2024. The move could improve the bill's chances in the Senate, and President Joe Biden has indicated that he will sign the bill if it reaches his desk.

The bill would force ByteDance, the Chinese company that owns TikTok, to either sell its American holdings to a U.S. company or face a ban in the country. The company has said it will fight any effort to force a sale.

The proposed legislation was motivated by a set of national security concerns. For one, ByteDance can be required to assist the Chinese Communist Party in gathering intelligence, according to the Chinese National Intelligence Law. In other words, the data TikTok collects can, in theory, be used by the Chinese government.

Furthermore, TikTok's popularity in the United States, and the fact that many young people get their news from the platform – one-third of Americans under the age of 30 – turns it into a potent instrument for Chinese political influence.

Indeed, the U.S. Office of the Director of National Intelligence recently claimed that TikTok accounts run by a Chinese propaganda arm of the government targeted candidates from both political parties during the U.S. midterm election cycle in 2022, and that the Chinese Communist Party might attempt to influence the U.S. elections in 2024 in order to sideline critics of China and magnify U.S. social divisions.

To these worries, proponents of the legislation have appended two more arguments: It's only right to curtail TikTok because China bans most U.S.-based social media networks from operating there, and there would be nothing new in such a ban, since the U.S. already restricts the foreign ownership of important media networks.

Some of these arguments are stronger than others.

China doesn't need TikTok to collect data about Americans. The Chinese government can buy all the data it wants from data brokers because the U.S. has no federal data privacy laws to speak of. The fact that China, a country that Americans criticize for its authoritarian practices, bans social media platforms is hardly a reason for the U.S. to do the same.

The debate about banning TikTok tends to miss the larger picture of social media literacy.

I believe the cumulative force of these claims is substantial and the legislation, on balance, is plausible. But banning the app is also a red herring.

In the past few years, my colleagues and I at UMass Boston's Applied Ethics Center have been studying the impact of AI on how people understand themselves. Here's why I think the recent move against TikTok misses the larger point: Americans' sources of information have declined in quality and the problem goes beyond any one social media platform.

The deeper problem

Perhaps the most compelling argument for banning TikTok is that the app's ubiquity and the fact that so many young Americans get their news from it turns it into an effective tool for political influence. But the proposed solution of switching to American ownership of the app ignores an even more fundamental threat.

The deeper problem is not that the Chinese government can easily manipulate content on the app. It is, rather, that people think it is OK to get their news from social media in the first place. In other words, the real national security vulnerability is that people have acquiesced to informing themselves through social media.

Social media is not made to inform people. It is designed to capture consumer attention for the sake of advertisers. With slight variations, that's the business model of all platforms. That's why a lot of the content people encounter on social media is violent, divisive and disturbing. Controversial posts that generate strong feelings literally capture users' notice, hold their gaze for longer, and provide advertisers with improved opportunities to monetize engagement.

There's an important difference between actively consuming serious, well-vetted information and being manipulated to spend as much time as possible on a platform. The former is the lifeblood of democratic citizenship because being a citizen who participates in political decision-making requires reliable information on the issues of the day. The latter amounts to letting your attention get hijacked for someone else's financial gain.

If TikTok is banned, many of its users are likely to migrate to Instagram and YouTube. This would benefit Meta and Google, their parent companies, but it wouldn't benefit national security. People would still be exposed to as much junk news as before, and experience shows that these social media platforms could be vulnerable to manipulation as well. After all, the Russians primarily used Facebook and Twitter to meddle in the 2016 election.

Media literacy is especially critical in the age of social media.

Media and technology literacy

That Americans have settled on getting their information from outlets that are uninterested in informing them undermines the very requirement of serious political participation, namely educated decision-making. This problem is not going to be solved by restricting access to foreign apps.

Research suggests that it will only be alleviated by inculcating media and technology literacy habits from an early age. This involves teaching young people how social media companies make money, how algorithms shape what they see on their phones, and how different types of content affect them psychologically.

My colleagues and I have just launched a pilot program to boost digital media literacy with the Boston Mayor's Youth Council. We are talking to Boston's youth leaders about how the technologies they use every day undermine their privacy, about the role of algorithms in shaping everything from their tastes in music to their political sympathies, and about how generative AI is going to influence their ability to think and write clearly and even who they count as friends.

We are planning to present them with evidence about the adverse effects of excessive social media use on their mental health. We are going to talk to them about taking time away from their phones and developing a healthy skepticism towards what they see on social media.

Protecting people's capacity for critical thinking is a challenge that calls for bipartisan attention. Some of these measures to boost media and technology literacy might not be popular among tech users and tech companies. But I believe they are necessary for raising thoughtful citizens rather than passive social media consumers who have surrendered their attention to commercial and political actors who do not have their interests at heart.

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Are tomorrow’s engineers ready to face AI’s ethical challenges?


theconversation.com – Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan – 2024-04-19 07:42:44

Finding ethics' place in the engineering curriculum.

PeopleImages/iStock via Getty Images Plus

Elana Goldenkoff, University of Michigan and Erin A. Cech, University of Michigan

A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.

These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. As a sociologist and a doctoral candidate interested in science, technology, engineering and math education, we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What's more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master's students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.

First, the good news: Most students recognized potential dangers of AI and expressed concern about personal privacy and the potential to cause harm – like how race and gender biases can be written into algorithms, intentionally or unintentionally.

One student, for example, expressed dismay at the environmental impact of AI, saying AI companies are using “more and more greenhouse power, [for] minimal .” Others discussed concerns about where and how AIs are being applied, including for military technology and to generate falsified information and images.

When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.

“Flat out no. … It is kind of scary,” one student replied. “Do YOU know who I'm supposed to go to?”

Another was troubled by the lack of training: “I [would be] dealing with that with no experience. … Who knows how I'll react.”

Many students are worried about ethics in their field – but that doesn't mean they feel prepared to deal with the challenges.

The Good Brigade/DigitalVision via Getty Images

Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off'

Accredited engineering programs are required to “include topics related to professional and ethical responsibilities” in some capacity.

Yet ethics training is rarely emphasized in the formal curricula. A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in terms of content, amount and how seriously it is presented. Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.

Many engineering faculty express dissatisfaction with students' understanding, but report feeling pressure from engineering colleagues and students themselves to prioritize technical skills in their limited class time.

Researchers in one 2018 study interviewed over 50 engineering faculty and documented hesitancy – and sometimes even outright resistance – toward incorporating public welfare issues into their engineering classes. More than a quarter of professors they interviewed saw ethics and societal impacts as outside “real” engineering work.

About a third of students we interviewed in our ongoing research share this seeming apathy toward ethics training, referring to ethics classes as “just a box to check off.”

“If I'm paying money to attend ethics class as an engineer, I'm going to be furious,” one said.

These attitudes sometimes extend to how students view engineers' role in society. One interviewee in our current study, for example, said that an engineer's “responsibility is just to create that thing, design that thing and … tell people how to use it. [Misusage] issues are not their concern.”

One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges. This research, published in 2014, suggested that engineers actually became less concerned over the course of their degree about their ethical responsibilities and understanding the public consequences of technology. Following them after they left college, we found that their concerns regarding ethics did not rebound once these new graduates entered the workforce.

Joining the work world

When engineers do receive ethics training as part of their degree, it seems to work.

Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers. Engineers who received formal ethics and public welfare training in school are more likely to understand their responsibility to the public in their professional roles, and to recognize the need for collective problem solving. Compared to engineers who did not receive such training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.

The next generation needs to be prepared for ethical questions, not just technical ones.

Qi Yang/Moment via Getty Images

Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work. Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.

This gap in ethics education raises serious questions about how well-prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, especially when it comes to AI.

To be sure, the burden of watching out for public welfare is not shouldered by engineers, designers and programmers alone. Companies and legislators share the responsibility.

But the people who are designing, testing and fine-tuning this technology are the public's first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously.

Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan and Erin A. Cech, Associate Professor of Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


theconversation.com – Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt – 2024-04-18 07:23:58

AI chatbots restrict their output according to vague and broad policies.

taviox/iStock via Getty Images

Jordi Calvet-Bademunt, Vanderbilt University and Jacob Mchangama, Vanderbilt University

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly's image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

The discussions over AI's political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry's approach to speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, as well as the executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google's Gemini and OpenAI's ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights as a benchmark, we found that companies' misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.

Our analysis found that companies' hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google's can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women's tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women's tournaments. However, most of them did produce posts supporting their participation.
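For readers curious how such a refusal rate might be tallied, here is a minimal sketch in Python. It is not the report's methodology: `query_chatbot` is a hypothetical stand-in for whatever interface reaches a given chatbot, and the refusal phrases are invented examples.

```python
# Minimal sketch of scoring refusals across a prompt set (illustrative only).

REFUSAL_MARKERS = [
    "i can't help with that",
    "i'm unable to generate",
    "i cannot create content",
]

def looks_like_refusal(response: str) -> bool:
    """Crude keyword check; a real study would need human or model-assisted review."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(prompts, query_chatbot) -> float:
    """Send each prompt through `query_chatbot` and return the share refused."""
    refusals = sum(1 for prompt in prompts if looks_like_refusal(query_chatbot(prompt)))
    return refusals / len(prompts)

# Example: 56 refusals out of 140 prompts would yield the 40% figure cited above.
```

A keyword check like this would miss politely worded refusals and flag some legitimate answers, which is why classifying responses is the harder part of any such study.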

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators' subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.

Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the COVID-19 pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies' policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI's integration into search, word processors, email and other applications.

This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe's online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union's 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies' influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It's also important to remember that users have a significant degree of autonomy over the content they see in generative AI. Like search engines, the output users receive greatly depends on their prompts. Therefore, users' exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.

This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid merely refusing to generate any content altogether. This is unless there are solid public interest grounds, such as preventing child sexual abuse material, something laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University and Jacob Mchangama, Research Professor of Political Science, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

