
The Conversation

How governments handle data matters for inclusion




Do you feel included in how the government handles and uses data?
AP Photo/Patrick Semansky

Suzanne J. Piotrowski, Rutgers University – Newark; Erna Ruijer, Utrecht University, and Gregory Porumbescu, Rutgers University – Newark

Governments increasingly rely on large amounts of data to provide services ranging from mobility and air quality to child welfare and policing programs. While governments have always relied on data, their increasing use of algorithms and artificial intelligence has fundamentally changed the way they use data for public services.

These technologies have the potential to improve the effectiveness and efficiency of public services. But if data is not handled thoughtfully, it can lead to inequitable outcomes for different communities because data gathered by governments can mirror existing inequalities. To minimize this effect, governments can make inclusion an element of their data practices.

To better understand how data practices affect inclusion, we – scholars of public affairs, policy and administration – break down government data practices into four activities: data collection, storage, analysis and use.


Collecting the data

Governments collect data about all manner of subjects via surveys, registrations and social media, and in real time via mobile devices such as sensors, cellphones and body cameras. These datasets provide opportunities to shape social inclusion and equity. For example, open data can be used as a tool to expose health disparities or inequalities in commuting.


At the same time, we found that poor-quality data can worsen inequalities. Data that is incomplete, outdated or inaccurate can result in the underrepresentation of vulnerable groups because they may not have access to the technology used to collect the data. Also, government data collection might lead to oversurveillance of vulnerable communities. Consequently, some people may choose to avoid contributing data to government institutions.

Predictive policing is an example of government use of data that researchers have found can be biased and inaccurate.
Arnout de Vries/Wikimedia

To foster inclusive practices, government practitioners could work with citizens to develop inclusive data collection protocols.


Storing the data

Data storage refers to where and how data is stored by the government, such as in databases or cloud data storage services. We found that government decisions about access to stored data and data ownership might lead to administrative exclusion, meaning unintentionally restricting citizens' access to benefits and services. For example, administrative registration errors in applications for services, and the difficulty citizens experience when they attempt to correct errors in stored data, can lead to differences in how governments treat them and even a loss of public services.

We also found that personal data might be stored with cloud vendors in data warehouses outside the influence of the government organizations that initially created and collected the data. While governments are typically required to follow rigorous data collection practices, data storage companies do not necessarily need to comply with the same standards.

To overcome this problem, governments can set transparency and accountability requirements for data storage that foster inclusion.



Analyzing the data

One important way governments analyze data to extract information is by using algorithms. For example, predictive policing uses algorithms to predict where crime will occur.

A key question is who is conducting the analysis. Those who might be providing data, such as citizens or civil society organizations, are less likely to analyze the data. Citizens may not have the skills, expertise or the tools to do so. Often, external experts conduct the analysis, and they might be unaware of the historical context, culture and local conditions of the data. In that way, data may also construct and reinforce inequalities.

To foster inclusion, governments could diversify and strengthen the expertise of the teams who perform the analyses and write the algorithms so that they can interpret data within its larger historical and political context.

Using the data

Finally, governments are using the results of data analysis to inform public service provision. For example, data-driven visualizations, such as maps, might be used to make decisions about where to direct police officers. However, this might also lead to disproportionate surveillance of different groups.


Another issue is “function creep.” Data might be collected for one purpose but is often eventually used for other purposes or by other government agencies, possibly leading to misuse of data and the reproduction of inequalities.

Digital literacy programs for both government professionals and the public can facilitate a better understanding of how data is visualized and used.

Building inclusion into the process

It is important to highlight that these activities – collection, storage, analysis and use – are linked. Inequalities in the early stages may eventually lead to inequitable outcomes in the form of policies, decisions and services.

Additionally, we found a conundrum: On the one hand, the invisibility of vulnerable groups in data collection can result in inequalities. Therefore, different groups should be included in these data activities. On the other hand, this can also be problematic because digital footprints can lead to oversurveillance of the same groups.


Reconciling these conflicting concerns requires an ethical reflection: pausing before embracing data and reflecting on its purpose, limitations and long-term implications for inclusion.

The four activities are a repeated rather than linear process in which governments, citizens and third parties embrace inclusive data strategies. This means looking at what was created, including diverse voices, and understanding the analysis, results and consequences of decisions. And it means consistently changing aspects of the process that do not foster inclusion.

Suzanne J. Piotrowski, Professor of Public Affairs and Administration, Rutgers University – Newark; Erna Ruijer, Assistant Professor of Governance, Utrecht University, and Gregory Porumbescu, Associate Professor of Public Affairs and Administration, Rutgers University – Newark

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Poor media literacy in the social media age



theconversation.com – Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston – 2024-04-19 10:01:58

TikTok is not the only social app to pose the threats it's been accused of.

picture alliance via Getty Images

Nir Eisikovits, UMass Boston

The U.S. government moved closer to banning the video social media app TikTok after the House of Representatives attached the measure to an emergency spending bill on April 17, 2024. The move could improve the bill's chances in the Senate, and President Joe Biden has indicated that he will sign the bill if it reaches his desk.


The bill would force ByteDance, the Chinese company that owns TikTok, to either sell its American holdings to a U.S. company or face a ban in the country. The company has said it will fight any effort to force a sale.

The proposed legislation was motivated by a set of national security concerns. For one, ByteDance can be required to assist the Chinese Communist Party in gathering intelligence, according to the Chinese National Intelligence Law. In other words, the data TikTok collects can, in theory, be used by the Chinese government.

Furthermore, TikTok's popularity in the United States, and the fact that many young people get their news from the platform – one-third of Americans under the age of 30 – turns it into a potent instrument for Chinese political influence.

Indeed, the U.S. Office of the Director of National Intelligence recently claimed that TikTok accounts run by a Chinese propaganda arm of the government targeted candidates from both political parties during the U.S. midterm election cycle in 2022, and that the Chinese Communist Party might attempt to influence the U.S. elections in 2024 in order to sideline critics of China and magnify U.S. social divisions.


To these worries, proponents of the legislation have appended two more arguments: It's only right to curtail TikTok because China bans most U.S.-based social media networks from operating there, and there would be nothing new in such a ban, since the U.S. already restricts the foreign ownership of important media networks.

Some of these arguments are stronger than others.

China doesn't need TikTok to collect data about Americans. The Chinese government can buy all the data it wants from data brokers because the U.S. has no federal data privacy laws to speak of. The fact that China, a country that Americans criticize for its authoritarian practices, bans social media platforms is hardly a reason for the U.S. to do the same.

The debate about banning TikTok tends to miss the larger picture of social media literacy.

I believe the cumulative force of these claims is substantial and the legislation, on balance, is plausible. But banning the app is also a red herring.


In the past few years, my colleagues and I at UMass Boston's Applied Ethics Center have been studying the impact of AI on how people understand themselves. Here's why I think the recent move against TikTok misses the larger point: Americans' sources of information have declined in quality and the problem goes beyond any one social media platform.

The deeper problem

Perhaps the most compelling argument for banning TikTok is that the app's ubiquity and the fact that so many young Americans get their news from it turns it into an effective tool for political influence. But the proposed solution of switching to American ownership of the app ignores an even more fundamental threat.

The deeper problem is not that the Chinese government can easily manipulate content on the app. It is, rather, that people think it is OK to get their news from social media in the first place. In other words, the real national security vulnerability is that people have acquiesced to informing themselves through social media.

Social media is not made to inform people. It is designed to capture consumer attention for the sake of advertisers. With slight variations, that's the business model of all platforms. That's why a lot of the content people encounter on social media is violent, divisive and disturbing. Controversial posts that generate strong feelings literally capture users' notice, hold their gaze for longer, and provide advertisers with improved opportunities to monetize engagement.


There's an important difference between actively consuming serious, well-vetted information and being manipulated to spend as much time as possible on a platform. The former is the lifeblood of democratic citizenship because being a citizen who participates in political decision-making requires reliable information on the issues of the day. The latter amounts to letting your attention get hijacked for someone else's financial gain.

If TikTok is banned, many of its users are likely to migrate to Instagram and YouTube. This would benefit Meta and Google, their parent companies, but it wouldn't benefit national security. People would still be exposed to as much junk news as before, and experience shows that these social media platforms could be vulnerable to manipulation as well. After all, the Russians primarily used Facebook and Twitter to meddle in the 2016 election.

Media literacy is especially critical in the age of social media.

Media and technology literacy

That Americans have settled on getting their information from outlets that are uninterested in informing them undermines the very requirement of serious political participation, namely educated decision-making. This problem is not going to be solved by restricting access to foreign apps.

Research suggests that it will only be alleviated by inculcating media and technology literacy habits from an early age. This involves teaching young people how social media companies make money, how algorithms shape what they see on their phones, and how different types of content affect them psychologically.


My colleagues and I have just launched a pilot program to boost digital media literacy with the Boston Mayor's Youth Council. We are talking to Boston's youth about how the technologies they use every day undermine their privacy, about the role of algorithms in shaping everything from their taste in music to their political sympathies, and about how generative AI is going to influence their ability to think and write clearly and even who they count as friends.

We are planning to present them with evidence about the adverse effects of excessive social media use on their mental health. We are going to talk to them about taking time away from their phones and developing a healthy skepticism towards what they see on social media.

Protecting people's capacity for critical thinking is a goal that calls for bipartisan attention. Some of these measures to boost media and technology literacy might not be popular among tech users and tech companies. But I believe they are necessary for raising thoughtful citizens rather than passive social media consumers who have surrendered their attention to commercial and political actors who do not have their interests at heart.

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Are tomorrow’s engineers ready to face AI’s ethical challenges?



theconversation.com – Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan – 2024-04-19 07:42:44


Finding ethics' place in the engineering curriculum.

PeopleImages/iStock via Getty Images Plus

Elana Goldenkoff, University of Michigan and Erin A. Cech, University of Michigan

A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.


These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. As a sociologist and doctoral candidate interested in science, technology, engineering and math education, we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What's more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master's students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.


First, the good news: Most students recognized potential dangers of AI and expressed concern about personal privacy and the potential to cause harm – like how race and gender biases can be written into algorithms, intentionally or unintentionally.

One student, for example, expressed dismay at the environmental impact of AI, saying AI companies are using “more and more greenhouse power, [for] minimal [benefit].” Others discussed concerns about where and how AIs are being applied, including for military technology and to generate falsified information and images.

When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.

“Flat out no. … It is kind of scary,” one student replied. “Do YOU know who I'm supposed to go to?”


Another was troubled by the lack of training: “I [would be] dealing with that with no experience. … Who knows how I'll react.”


Many students are worried about ethics in their field – but that doesn't mean they feel prepared to deal with the challenges.

The Good Brigade/DigitalVision via Getty Images

Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off'

Accredited engineering programs are required to “include topics related to professional and ethical responsibilities” in some capacity.


Yet ethics training is rarely emphasized in the formal curricula. A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in terms of content, amount and how seriously it is presented. Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.

Many engineering faculty express dissatisfaction with students' understanding, but report feeling pressure from engineering colleagues and students themselves to prioritize technical skills in their limited class time.

Researchers in one 2018 study interviewed over 50 engineering faculty and documented hesitancy – and sometimes even outright resistance – toward incorporating public welfare issues into their engineering classes. More than a quarter of professors they interviewed saw ethics and societal impacts as outside “real” engineering work.

About a third of students we interviewed in our ongoing research share this seeming apathy toward ethics training, referring to ethics classes as “just a box to check off.”


“If I'm paying money to attend ethics class as an engineer, I'm going to be furious,” one said.

These attitudes sometimes extend to how students view engineers' role in society. One interviewee in our current study, for example, said that an engineer's “responsibility is just to create that thing, design that thing and … tell people how to use it. [Misusage] issues are not their concern.”

One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges. This research, published in 2014, suggested that engineers actually became less concerned over the course of their degree about their ethical responsibilities and understanding the public consequences of technology. Following them after they left college, we found that their concerns regarding ethics did not rebound once these new graduates entered the workforce.

Joining the work world

When engineers do receive ethics training as part of their degree, it seems to work.


Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers. Engineers who received formal ethics and public welfare training in school are more likely to understand their responsibility to the public in their professional roles, and recognize the need for collective problem solving. Compared to engineers who did not receive training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.


The next generation needs to be prepared for ethical questions, not just technical ones.

Qi Yang/Moment via Getty Images

Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work. Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.

This gap in ethics education raises serious questions about how well-prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, especially when it comes to AI.


To be sure, the burden of watching out for public welfare is not shouldered by engineers, designers and programmers alone. Companies and legislators share the responsibility.

But the people who are designing, testing and fine-tuning this technology are the public's first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously.

Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan and Erin A. Cech, Associate Professor of Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.



AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem



theconversation.com – Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University – 2024-04-18 07:23:58

AI chatbots restrict their output according to vague and broad policies.

taviox/iStock via Getty Images

Jordi Calvet-Bademunt, Vanderbilt University and Jacob Mchangama, Vanderbilt University

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly's image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.


The discussions over AI's political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry's approach to free speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, as well as executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google's Gemini and OpenAI's ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights as a benchmark, we found that companies' misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.


Our analysis found that companies' hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google's can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women's tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women's tournaments. However, most of them did produce posts supporting their participation.

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators' subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … of … choice,” according to a key United Nations convention.


Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies' policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI's integration into search, word processors, email and other applications.


This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe's online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union's 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies' influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It's also important to remember that users have a significant degree of autonomy over the content they see in generative AI. Like search engines, the output users receive greatly depends on their prompts. Therefore, users' exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.


This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid merely refusing to generate any content altogether. This is unless there are solid public interest grounds, such as preventing child sexual abuse material, something laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University and Jacob Mchangama, Research Professor of Political Science, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

