
How can Congress regulate AI? Erect guardrails, ensure accountability and address monopolistic power


IBM executive Christina Montgomery, cognitive scientist Gary Marcus and OpenAI CEO Sam Altman prepared to testify before a Senate Judiciary subcommittee.
AP Photo/Patrick Semansky

Anjana Susarla, Michigan State University

Takeaways:

  • A new federal agency to regulate AI sounds helpful but could become unduly influenced by the tech industry. Instead, Congress can legislate accountability.

  • Instead of licensing companies to release advanced AI technologies, the government could license auditors and push for companies to set up institutional review boards.

  • The government hasn't had great success in curbing technology monopolies, but disclosure requirements and data privacy laws could help check corporate power.


OpenAI CEO Sam Altman urged lawmakers to consider regulating AI during his Senate testimony on May 16, 2023. That recommendation raises the question of what comes next for Congress. The ideas Altman proposed – creating an AI regulatory agency and requiring licensing for companies – are interesting. But what the other experts on the same panel suggested is at least as important: requiring transparency on training data and establishing clear frameworks for AI-related risks.

Another point left unsaid was that, given the economics of building large-scale AI models, the industry may be witnessing the emergence of a new type of tech monopoly.

As a researcher who studies social media and artificial intelligence, I believe that Altman's suggestions have highlighted important issues but don't provide answers in and of themselves. Regulation would be helpful, but in what form? Licensing also makes sense, but for whom? And any effort to regulate the AI industry will need to account for the companies' economic power and political sway.


An agency to regulate AI?

Lawmakers and policymakers across the world have already begun to address some of the issues raised in Altman's testimony. The European Union's AI Act is based on a risk model that assigns AI applications to three categories of risk: unacceptable, high risk, and low or minimal risk. This categorization recognizes that tools for social scoring by governments and automated tools for hiring pose different risks than those from the use of AI in spam filters, for example.

The U.S. National Institute of Standards and Technology likewise has an AI risk management framework that was created with extensive input from multiple stakeholders, including the U.S. Chamber of Commerce and the Federation of American Scientists, as well as other business and professional associations, technology companies and think tanks.

Federal agencies such as the Equal Employment Opportunity Commission and the Federal Trade Commission have already issued guidelines on some of the risks inherent in AI. The Consumer Product Safety Commission and other agencies have a role to play as well.

Rather than create a new agency that runs the risk of becoming compromised by the technology industry it's meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills such as the Algorithmic Accountability Act. That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive laws around data privacy.


Regulating AI should involve collaboration among academia, industry, policy experts and international agencies. Experts have likened this approach to international organizations such as the European Organization for Nuclear Research, known as CERN, and the Intergovernmental Panel on Climate Change. The internet has been managed by nongovernmental bodies involving nonprofits, civil society, industry and policymakers, such as the Internet Corporation for Assigned Names and Numbers and the World Telecommunication Standardization Assembly. Those examples provide models for industry and policymakers.

Cognitive scientist and AI developer Gary Marcus explains the need to regulate AI.

Licensing auditors, not companies

Though OpenAI's Altman suggested that companies could be licensed to release artificial intelligence technologies to the public, he clarified that he was referring to artificial general intelligence, meaning potential future AI with humanlike intelligence that could pose a threat to humanity. That would be akin to companies being licensed to handle other potentially dangerous technologies, like nuclear power. But licensing could have a role to play well before such a futuristic scenario comes to pass.

Algorithmic auditing would require credentialing, standards of practice and extensive training. Requiring accountability is not just a matter of licensing individuals but also requires companywide standards and practices.

Experts on AI fairness contend that issues of bias and fairness in AI cannot be addressed by technical methods alone but require more comprehensive risk mitigation practices such as adopting institutional review boards for AI. Institutional review boards in the medical field help uphold individual rights, for example.


Academic bodies and professional societies have likewise adopted standards for responsible use of AI, whether it is authorship standards for AI-generated text or standards for patient-mediated data sharing in medicine.

Strengthening existing statutes on consumer safety, privacy and protection while introducing norms of algorithmic accountability would help demystify complex AI systems. It's also important to recognize that greater data accountability and transparency may impose new restrictions.

Scholars of data privacy and AI ethics have called for “technological due process” and frameworks to recognize harms of predictive processes. The widespread use of AI-enabled decision-making in such fields as employment, insurance and health care calls for licensing and audit requirements to ensure procedural fairness and privacy safeguards.

Requiring such accountability provisions, though, demands a robust debate among AI developers, policymakers and those who are affected by broad deployment of AI. In the absence of strong algorithmic accountability practices, the danger is narrow audits that promote the appearance of compliance.


AI monopolies?

What was also missing in Altman's testimony is the extent of investment required to train large-scale AI models, whether it is GPT-4, which is one of the foundations of ChatGPT, or text-to-image generator Stable Diffusion. Only a handful of companies, such as Google, Meta, Amazon and Microsoft, are responsible for developing the world's largest language models.

Given the lack of transparency in the training data used by these companies, AI ethics experts Timnit Gebru, Emily Bender and others have warned that large-scale adoption of such technologies without corresponding oversight risks amplifying machine bias at a societal scale.

It is also important to acknowledge that the training data for tools such as ChatGPT includes the intellectual labor of a host of people such as Wikipedia contributors, bloggers and authors of digitized books. The economic benefits from these tools, however, accrue only to the technology corporations.


Proving technology firms' monopoly power can be difficult, as the Department of Justice's antitrust case against Microsoft demonstrated. I believe that the most feasible regulatory options for Congress to address potential algorithmic harms from AI may be to strengthen disclosure requirements for AI firms and users of AI alike, to urge comprehensive adoption of AI risk assessment frameworks, and to require processes that safeguard individual data rights and privacy.


Learn what you need to know about artificial intelligence by signing up for our newsletter series of four emails delivered over the course of a week. You can read all our stories on generative AI at TheConversation.com.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Are tomorrow’s engineers ready to face AI’s ethical challenges?


theconversation.com – Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan – 2024-04-19 07:42:44


Finding ethics' place in the engineering curriculum.

PeopleImages/iStock via Getty Images Plus

Elana Goldenkoff, University of Michigan and Erin A. Cech, University of Michigan

A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.


These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. As a sociologist and doctoral candidate interested in science, technology, engineering and math education, we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What's more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master's students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.


First, the good news: Most students recognized potential dangers of AI and expressed concern about personal privacy and the potential to cause harm – like how race and gender biases can be written into algorithms, intentionally or unintentionally.

One student, for example, expressed dismay at the environmental impact of AI, saying AI companies are using “more and more greenhouse power, [for] minimal gain.” Others discussed concerns about where and how AIs are being applied, including for military technology and to generate falsified information and images.

When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.

“Flat out no. … It is kind of scary,” one student replied. “Do YOU know who I'm supposed to go to?”


Another was troubled by the lack of training: “I [would be] dealing with that with no experience. … Who knows how I'll react.”


Many students are worried about ethics in their field – but that doesn't mean they feel prepared to deal with the challenges.

The Good Brigade/DigitalVision via Getty Images

Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off'

Accredited engineering programs are required to “include topics related to professional and ethical responsibilities” in some capacity.


Yet ethics training is rarely emphasized in the formal curricula. A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in terms of content, amount and how seriously it is presented. Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.

Many engineering faculty express dissatisfaction with students' understanding, but feel pressure from engineering colleagues and students themselves to prioritize technical skills in their limited class time.

Researchers in one 2018 study interviewed over 50 engineering faculty and documented hesitancy – and sometimes even outright resistance – toward incorporating public welfare issues into their engineering classes. More than a quarter of professors they interviewed saw ethics and societal impacts as outside “real” engineering work.

About a third of students we interviewed in our ongoing research share this seeming apathy toward ethics training, referring to ethics classes as “just a box to check off.”


“If I'm paying money to attend ethics class as an engineer, I'm going to be furious,” one said.

These attitudes sometimes extend to how students view engineers' role in society. One interviewee in our current study, for example, said that an engineer's “responsibility is just to create that thing, design that thing and … tell people how to use it. [Misusage] issues are not their concern.”

One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges. This research, published in 2014, suggested that engineers actually became less concerned over the course of their degree about their ethical responsibilities and understanding the public consequences of technology. Following them after they left college, we found that their concerns regarding ethics did not rebound once these new graduates entered the workforce.

Joining the work world

When engineers do receive ethics training as part of their degree, it seems to work.


Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers. Engineers who received formal ethics and public welfare training in school are more likely to understand their responsibility to the public in their professional roles, and recognize the need for collective problem solving. Compared to engineers who did not receive training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.


The next generation needs to be prepared for ethical questions, not just technical ones.

Qi Yang/Moment via Getty Images

Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work. Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.

This gap in ethics education raises serious questions about how well-prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, especially when it comes to AI.


To be sure, the burden of watching out for public welfare is not shouldered by engineers, designers and programmers alone. Companies and legislators share the responsibility.

But the people who are designing, testing and fine-tuning this technology are the public's first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously.

Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan and Erin A. Cech, Associate Professor of Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


theconversation.com – Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University – 2024-04-18 07:23:58

AI chatbots restrict their output according to vague and broad policies.

taviox/iStock via Getty Images

Jordi Calvet-Bademunt, Vanderbilt University and Jacob Mchangama, Vanderbilt University

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly's image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.


The discussions over AI's political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry's approach to speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, as well as the executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google's Gemini and OpenAI's ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights as a benchmark, we found that companies' misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.


Our analysis found that companies' hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google's can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women's tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women's tournaments. However, most of them did produce posts supporting their participation.

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators' subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.


Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights law. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies' policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI's integration into search, word processors, email and other applications.


This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe's online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union's 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies' influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It's also important to remember that users have a significant degree of autonomy over the content they see in generative AI. Like search engines, the output users receive greatly depends on their prompts. Therefore, users' exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.


This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid merely refusing to generate any content altogether. This is unless there are solid public interest grounds, such as preventing child sexual abuse material, something laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University and Jacob Mchangama, Research Professor of Political Science, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Fermented foods sustain both microbiomes and cultural heritage


theconversation.com – Andrew Flachs, Associate Professor of Anthropology, Purdue University – 2024-04-17 07:19:21


Each subtle cultural or personal twist to a fermented dish is felt by your body's microbial community.
microgen/iStock via Getty Images

Andrew Flachs, Purdue University and Joseph Orkin, Université de Montréal

Many people around the world make and eat fermented foods. Millions in Korea alone make kimchi. The cultural heritage of these picklers shapes not only what they eat every time they crack open a jar but also something much, much smaller: their microbiomes.

On the microbial scale, we are what we eat in very real ways. Your body is teeming with trillions of microbes. These complex ecosystems exist on your skin, inside your mouth and in your gut. They are particularly influenced by your surrounding environment, especially the food you eat. Just like any other ecosystem, your gut microbiome requires diversity to be healthy.

People boil, fry, bake and season meals, transforming them through cultural ideas of “good food.” When people ferment food, they affect the microbiome of their meals directly. Fermentation offers a chance to learn how taste and heritage shape microbiomes: not only of culturally significant foods such as German sauerkraut, kosher pickles, Korean kimchi or Bulgarian yogurt, but of our own guts.

Fermentation uses microbes to transform food.

Our work as anthropologists focuses on how culture transforms food. In fact, we first sketched out our plan to link cultural values and microbiology while writing our Ph.D. dissertations at our local deli in St. Louis, Missouri. Staring down at our pickles and lox, we wondered how the salty, crispy zing of these foods represented the marriage of culture and microbiology.

Equipped with the tools of microbial genetics and cultural anthropology, we were determined to find out.

Science and art of fermentation

Fermentation is the creation of an extreme microbiological environment through salt, acid and lack of oxygen. It is both an ancient food preservation technique and a way to create distinctive tastes, smells and textures.

Taste is highly variable and something you experience through the layers of your social experience. What may be nauseating in one context is a delicacy in another. Fermented foods are notoriously unsubtle: they bubble, they smell and they zing. Whether and how these pungent foods taste good can be a moment of group pride or a chance to heal social divides.


In each case, cultural notions of good food and heritage recipes combine to create a microbiome in a jar. From this perspective, sauerkraut is a particular ecosystem shaped by German food traditions, kosher dill pickles by Ashkenazi Jewish traditions, and pao cai by southwestern Chinese traditions.

Where culture and microbiology intersect

To begin to understand the effects of culinary traditions and individual creativity on microbiomes, we partnered with Sandor Katz, a fermentation practitioner based in Tennessee. Over the course of four days during one of Katz's workshops, we made, ate and shared fermented foods with nine fellow participants. Through conversations and interviews, we learned about the unique tastes and meanings we each brought to our love of fermented foods.

Those stories provided context to the 46 food samples we collected and froze to capture a snapshot of the microbes swimming through kimchi or miso. Participants also collected stool samples each day and mailed in a sample after the workshop, preserving a record of the gut microbial communities they created with each bite.

The fermented foods we all made were rich, complex and microbially diverse. Where many store-bought fermented foods are pasteurized to clear out all living microbes and then reinoculated with two to six specific bacterial species, our research showed that homemade ferments contain dozens of strains.

Eating fermented foods such as yogurt shapes the form and function of your microbiome.
Basak Gurbuz Derman/Moment via Getty Images

On the microbiome level, different kinds of fermented foods will have distinct profiles. Just as forests and deserts share ecological features, sauerkrauts and kimchis look more similar to each other than yogurt to cheese.

But just as different habitats have unique combinations of plants and animals, so too did every crock and jar have its own distinct microbial world because of minor differences in preparation or ingredients. The cultural values of taste and creativity that create a kimchi or a sauerkraut go on to shape distinct microbiomes on those foods and inside the people who eat them.

Through variations in recipes and cultural preferences toward an extra pinch of salt or a disdain for dill, fermentation traditions result in distinctive microbial and taste profiles that your culture trains you to identify as good or bad to eat. That is, our sauerkraut is not your sauerkraut, even if they both might be good for us.

Fermented food as cultural medicine

Microbially rich fermented foods can influence the composition of your gut microbiome. Because your tastes and recipes are culturally informed, those preferences can have a meaningful effect on your gut microbiome. You can eat these foods in ways that introduce microbial diversity, including potentially probiotic microbes that offer benefits to human health, such as killing off bacteria that make you ill, improving your cardiovascular health or restoring a healthy gut microbiome after you take antibiotics.

Making and sharing fermented foods can bring people together.
Kilito Chan/Moment via Getty Images

Fermentation is an ancient craft, and like all crafts it requires patience, creativity and practice. Cloudy brine is a signal of tasty pickled cucumbers, but it can be a problem for lox. When fermented foods smell rotten, taste too soft or turn red, that can be a sign of contamination by harmful bacteria or molds.

Fermenting foods at home might seem daunting when food is something that comes from the store with a regulatory guarantee. People hoping to take a more active role in creating their food or embracing their own culture's traditional foods need only time and salt to make simple fermented foods. As friends share sourdough starters, yogurt cultures and kombucha mothers, they forge social connections.


Through a unique combination of culture and microbiology, heritage food traditions can support microbial diversity in your gut. These cultural practices create environments for the yeasts, bacteria and local fruits and grains that in turn sustain heritage foods and flavors.

Andrew Flachs, Associate Professor of Anthropology, Purdue University and Joseph Orkin, Assistant Professor of Anthropology, Université de Montréal

This article is republished from The Conversation under a Creative Commons license. Read the original article.
