

How vinyl chloride, chemical released in the Ohio train derailment, can damage the liver – it’s used to make PVC plastics


An illustration of a human liver.
Canva – Human Liver Illustration

Juliane I. Beier, University of Pittsburgh

Vinyl chloride – the chemical in several of the train cars that derailed and burned in East Palestine, Ohio, in February 2023 – can wreak havoc on the human liver.

It has been shown to cause liver cancer, as well as a nonmalignant liver disease known as TASH, or toxicant-associated steatohepatitis. With TASH, the livers of otherwise healthy people can develop the same fat accumulation, inflammation and scarring (fibrosis and cirrhosis) as people who have cirrhosis from alcohol or obesity.

That kind of liver damage typically requires relatively high levels of vinyl chloride exposure – the kind an industrial worker might experience on the job.

However, exposure to lower environmental concentrations is still a concern. That's in part because little is known about the impact low-level exposure might have on the liver, especially for people with underlying liver disease and other risk factors.


As an assistant professor of medicine and environmental and occupational health, I study the impact of vinyl chloride exposure on the liver, particularly on how it may affect people with underlying liver disease. Recent findings have changed our understanding of the risk.

Lessons from ‘Rubbertown’

Vinyl chloride is used to produce PVC, a hard plastic used for pipes, as well as in some packaging, coatings and wires.

Its health risks were discovered in the 1970s at a B.F. Goodrich factory in the Rubbertown neighborhood of Louisville, Kentucky. Four workers involved in the polymerization process for producing polyvinyl chloride there each developed angiosarcoma of the liver, an extremely rare type of tumor.

Their cases became among the most important sentinel events in the history of occupational medicine and led to the worldwide recognition of vinyl chloride as a carcinogen.


The liver is the body's filter for removing toxicants from the blood. Specialized cells known as hepatocytes reduce the toxicity of drugs, alcohol, caffeine and environmental chemicals and then send the waste away to be excreted.

How the liver works.

The hallmark of vinyl chloride's effect on the liver is a paradoxical combination of normal liver function tests with fat accumulation and the death of hepatic cells, which make up the bulk of the liver's mass. However, the detailed mechanisms that lead to vinyl chloride-induced liver disease are still largely unknown.

Recent research has demonstrated that exposure to vinyl chloride, even at levels below the federal limits for safety, can enhance liver disease caused by a “Western diet” – one rich in fat and sugar. This previously unidentified interaction between vinyl chloride and underlying fatty liver diseases raises concerns that the risk from lower vinyl chloride exposures may be underestimated.

Outdoor exposure and the risk from wells

In outdoor air, vinyl chloride becomes diluted fairly quickly. Sunlight also breaks it down, typically in nine to 11 days. Therefore, outdoor exposure is likely not a problem except during intense periods of exposure, such as immediately after a release of vinyl chloride. If there is a chemical smell, or you feel itchy or disoriented, leave the area and seek medical attention.


Vinyl chloride also disperses in water. The federal Clean Water Act requires monitoring and removing volatile organic compounds such as vinyl chloride from municipal water supplies, so those shouldn't be a concern.

However, private wells could become contaminated if vinyl chloride enters the groundwater. Private wells are not regulated by the Clean Water Act and are not usually monitored.

Vinyl chloride readily volatilizes from water into the air, and it can accumulate in enclosed spaces located above contaminated groundwater. This is especially a concern if the water is heated, such as for showers or during cooking. The effect is similar to recent concerns about fumes from natural gas stoves in poorly ventilated homes.

Although there are established safety levels for acute and intermediate exposure, such levels don't exist for chronic exposures, so testing over time is important.


What can be done? Anyone with a private well that may have been exposed to vinyl chloride should have the well monitored and tested more than once. People can air out their homes and are encouraged to seek medical help if they experience dizziness or itching eyes. The Conversation

Juliane I. Beier, Assistant Professor of Medicine and Environmental Health, University of Pittsburgh

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Poor media literacy in the social media age


theconversation.com – Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston – 2024-04-19 10:01:58

TikTok is not the only social app to pose the threats it's been accused of.

picture alliance via Getty Images

Nir Eisikovits, UMass Boston

The U.S. government moved closer to banning the social media app TikTok after the House of Representatives attached the measure to an emergency spending bill on April 17, 2024. The move could improve the bill's chances in the Senate, and President Joe Biden has indicated that he will sign the bill if it reaches his desk.


The bill would force ByteDance, the Chinese company that owns TikTok, to either sell its American holdings to a U.S. company or face a ban in the country. The company has said it will fight any effort to force a sale.

The proposed legislation was motivated by a set of national security concerns. For one, ByteDance can be required to assist the Chinese Communist Party in gathering intelligence, according to the Chinese National Intelligence Law. In other words, the data TikTok collects can, in theory, be used by the Chinese government.

Furthermore, TikTok's popularity in the United States, and the fact that many young people get their news from the platform – one-third of Americans under the age of 30 – turns it into a potent instrument for Chinese political influence.

Indeed, the U.S. Office of the Director of National Intelligence recently claimed that TikTok accounts run by a Chinese propaganda arm of the government targeted candidates from both political parties during the U.S. midterm election cycle in 2022, and that the Chinese Communist Party might attempt to influence the U.S. elections in 2024 in order to sideline critics of China and magnify U.S. social divisions.


To these worries, proponents of the legislation have appended two more arguments: It's only right to curtail TikTok because China bans most U.S.-based social media networks from operating there, and there would be nothing new in such a ban, since the U.S. already restricts the foreign ownership of important media networks.

Some of these arguments are stronger than others.

China doesn't need TikTok to collect data about Americans. The Chinese government can buy all the data it wants from data brokers because the U.S. has no federal data privacy laws to speak of. The fact that China, a country that Americans criticize for its authoritarian practices, bans social media platforms is hardly a reason for the U.S. to do the same.

The debate about banning TikTok tends to miss the larger picture of social media literacy.

I believe the cumulative force of these claims is substantial and the legislation, on balance, is plausible. But banning the app is also a red herring.


In the past few years, my colleagues and I at UMass Boston's Applied Ethics Center have been studying the impact of AI on how people understand themselves. Here's why I think the recent move against TikTok misses the larger point: Americans' sources of information have declined in quality and the problem goes beyond any one social media platform.

The deeper problem

Perhaps the most compelling argument for banning TikTok is that the app's ubiquity and the fact that so many young Americans get their news from it turns it into an effective tool for political influence. But the proposed solution of switching to American ownership of the app ignores an even more fundamental threat.

The deeper problem is not that the Chinese government can easily manipulate content on the app. It is, rather, that people think it is OK to get their news from social media in the first place. In other words, the real national security vulnerability is that people have acquiesced to informing themselves through social media.

Social media is not made to inform people. It is designed to capture consumer attention for the sake of advertisers. With slight variations, that's the business model of all platforms. That's why a lot of the content people encounter on social media is violent, divisive and disturbing. Controversial posts that generate strong feelings capture users' notice, hold their gaze for longer, and provide advertisers with improved opportunities to monetize engagement.
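To make that incentive concrete, here is a minimal, purely illustrative Python sketch of engagement-weighted ranking. The class, field names and weights are hypothetical and do not describe any real platform's algorithm; they only show the kind of rule an attention-for-advertisers business model rewards, where informativeness never enters the score.

```python
# Illustrative toy only: hypothetical names and weights,
# not any platform's actual ranking system.
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    predicted_watch_seconds: float  # how long a user is expected to linger
    predicted_reactions: float      # expected likes, shares, angry replies


def engagement_score(post: Post) -> float:
    # Attention and reactions are what advertisers pay for, so both are
    # rewarded; accuracy and informativeness never enter the calculation.
    return 0.7 * post.predicted_watch_seconds + 0.3 * post.predicted_reactions


def rank_feed(posts: list[Post]) -> list[Post]:
    # Most "engaging" posts first - divisive content that holds the gaze
    # floats to the top under a rule like this.
    return sorted(posts, key=engagement_score, reverse=True)


feed = rank_feed([
    Post("Calm explainer on the local budget vote", 8.0, 1.0),
    Post("Outrage-bait clip about the same vote", 45.0, 20.0),
])
print([p.text for p in feed])  # the outrage-bait clip ranks first
```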


There's an important difference between actively consuming serious, well-vetted information and being manipulated to spend as much time as possible on a platform. The former is the lifeblood of democratic citizenship because being a citizen who participates in political decision-making requires reliable information on the issues of the day. The latter amounts to letting your attention get hijacked for someone else's financial gain.

If TikTok is banned, many of its users are likely to migrate to Instagram and YouTube. This would benefit Meta and Google, their parent companies, but it wouldn't benefit national security. People would still be exposed to as much junk news as before, and experience shows that these social media platforms could be vulnerable to manipulation as well. After all, the Russians primarily used Facebook and Twitter to meddle in the 2016 election.

Media literacy is especially critical in the age of social media.

Media and technology literacy

That Americans have settled on getting their information from outlets that are uninterested in informing them undermines the very requirement of serious political participation, namely educated decision-making. This problem is not going to be solved by restricting access to foreign apps.

Research suggests that it will only be alleviated by inculcating media and technology literacy habits from an early age. This involves teaching young people how social media companies make money, how algorithms shape what they see on their phones, and how different types of content affect them psychologically.


We are talking to Boston's youth about how the technologies they use every day undermine their privacy, about the role of algorithms in shaping everything from their taste in music to their political sympathies, and about how generative AI is going to influence their ability to think and write clearly and even who they count as friends.

We are planning to present them with evidence about the adverse effects of excessive social media use on their mental health. We are going to talk to them about taking time away from their phones and developing a healthy skepticism towards what they see on social media.

Protecting people's capacity for critical thinking is a challenge that calls for bipartisan attention. Some of these measures to boost media and technology literacy might not be popular among tech users and tech companies. But I believe they are necessary for raising thoughtful citizens rather than passive social media consumers who have surrendered their attention to commercial and political actors who do not have their interests at heart. The Conversation

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Are tomorrow’s engineers ready to face AI’s ethical challenges?


theconversation.com – Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan – 2024-04-19 07:42:44


Finding ethics' place in the engineering curriculum.

PeopleImages/iStock via Getty Images Plus

Elana Goldenkoff, University of Michigan and Erin A. Cech, University of Michigan

A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.


These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. As a sociologist and doctoral candidate interested in science, technology, engineering and math education, we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What's more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master's students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.


First, the good news: Most students recognized potential dangers of AI and expressed concern about personal privacy and the potential to cause harm – like how race and gender biases can be written into algorithms, intentionally or unintentionally.

One student, for example, expressed dismay at the environmental impact of AI, saying AI companies are using “more and more greenhouse power, [for] minimal .” Others discussed concerns about where and how AIs are being applied, including for military technology and to generate falsified information and images.

When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.

“Flat out no. … It is kind of scary,” one student replied. “Do YOU know who I'm supposed to go to?”


Another was troubled by the lack of training: “I [would be] dealing with that with no experience. … Who knows how I'll react.”


Many students are worried about ethics in their field – but that doesn't mean they feel prepared to deal with the challenges.

The Good Brigade/DigitalVision via Getty Images

Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct, rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off’

Accredited engineering programs are required to “include topics related to professional and ethical responsibilities” in some capacity.


Yet ethics training is rarely emphasized in the formal curricula. A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in terms of content, amount and how seriously it was presented. Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.

Many engineering faculty express dissatisfaction with students' understanding, but report feeling pressure from engineering colleagues and students themselves to prioritize technical skills in their limited class time.

Researchers in one 2018 study interviewed over 50 engineering faculty and documented hesitancy – and sometimes even outright resistance – toward incorporating public welfare issues into their engineering classes. More than a quarter of professors they interviewed saw ethics and societal impacts as outside “real” engineering work.

About a third of students we interviewed in our ongoing research project share this seeming apathy toward ethics training, referring to ethics classes as “just a box to check off.”


“If I'm paying money to attend ethics class as an engineer, I'm going to be furious,” one said.

These attitudes sometimes extend to how students view engineers' role in society. One interviewee in our current study, for example, said that an engineer's “responsibility is just to create that thing, design that thing and … tell people how to use it. [Misusage] issues are not their concern.”

One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges. This research, published in 2014, suggested that engineers actually became less concerned over the course of their degree about their ethical responsibilities and understanding the public consequences of technology. Following up with them after they left college, we found that their concerns regarding ethics did not rebound once these new graduates entered the workforce.

Joining the work world

When engineers do receive ethics training as part of their degree, it seems to work.


Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers. Engineers who received formal ethics and public welfare training in school were more likely to understand their responsibility to the public in their professional roles and to recognize the need for collective problem solving. Compared to engineers who did not receive training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.


The next generation needs to be prepared for ethical questions, not just technical ones.

Qi Yang/Moment via Getty Images

Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work. Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.

This gap in ethics education raises serious questions about how well-prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, especially when it comes to AI.


To be sure, the burden of watching out for public welfare is not shouldered by engineers, designers and programmers alone. Companies and legislators share the responsibility.

But the people who are designing, testing and fine-tuning this technology are the public's first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously. The Conversation

Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan and Erin A. Cech, Associate Professor of Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.



AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


theconversation.com – Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University – 2024-04-18 07:23:58

AI chatbots restrict their output according to vague and broad policies.

taviox/iStock via Getty Images

Jordi Calvet-Bademunt, Vanderbilt University and Jacob Mchangama, Vanderbilt University

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly's image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.


The discussions over AI's political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry's approach to speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, as well as the executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google's Gemini and OpenAI's ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies' misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.


Our analysis found that companies' hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google's can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women's tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women's tournaments. However, most of them did produce posts supporting their participation.
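For readers curious how a test like this can be run, below is a minimal Python sketch of measuring a refusal rate over a list of prompts. The query_chatbot callable and the keyword heuristic are hypothetical stand-ins, not the report's actual methodology or any specific chatbot's API.

```python
# Sketch of a refusal-rate measurement under stated assumptions:
# query_chatbot is a stand-in for however a given chatbot is called,
# and the refusal check is a crude keyword heuristic.
from typing import Callable

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm unable", "i won't")


def looks_like_refusal(reply: str) -> bool:
    """Flag replies that decline the request (keyword heuristic)."""
    reply = reply.lower()
    return any(marker in reply for marker in REFUSAL_MARKERS)


def refusal_rate(prompts: list[str], query_chatbot: Callable[[str], str]) -> float:
    """Fraction of prompts for which the chatbot declines to generate content."""
    refusals = sum(looks_like_refusal(query_chatbot(p)) for p in prompts)
    return refusals / len(prompts)


if __name__ == "__main__":
    # Toy stand-in chatbot that refuses one topic and answers the other.
    def fake_bot(prompt: str) -> str:
        if "tournaments" in prompt:
            return "I can't help with that request."
        return "Sure, here is a short post about that topic."

    prompts = [
        "Write a post opposing a policy on women's tournaments",
        "Write a post about community gardening",
    ]
    print(refusal_rate(prompts, fake_bot))  # 0.5
```

A 40% refusal rate over the 140 prompts described above would correspond to 56 refused prompts.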

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators' subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … media of … choice,” according to a key United Nations convention.


Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights law. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies' policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI's integration into search, word processors, email and other applications.


This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe's online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union's 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies' influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It's also important to remember that users have a significant degree of autonomy over the content they see in generative AI. Like search engines, the output users greatly depends on their prompts. Therefore, users' exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.


This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid refusing to generate any content altogether unless there are solid public interest grounds, such as preventing child sexual abuse material, which laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and toward echo chambers. That would be a worrying outcome. The Conversation

Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University and Jacob Mchangama, Research Professor of Political Science, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

