

CBD is not a cure-all – here’s what science says about its real health benefits



Since 2018, it has been legal in the U.S. to use a drug made from purified cannabis-derived cannabidiol – CBD – to treat certain childhood seizure disorders.

Visoot Uthairam/Moment via Getty Images

Kent E Vrana, Penn State

Over the last five years, an often forgotten piece of U.S. federal legislation – the Agriculture Improvement Act of 2018, also known as the 2018 Farm Bill – has ushered in an explosion of interest in the medical potential of cannabis-derived cannabidiol, or CBD.

After decades of debate, the bill made it legal for farmers to grow industrial hemp, a plant rich in CBD. Hemp itself has tremendous value as a cash crop; it's used to produce biofuel, textiles and animal feed. But the CBD extracted from the hemp plant also has numerous medicinal properties, with the potential to benefit millions through the treatment of seizure disorders, pain or anxiety.


Prior to the bill's passage, the resistance to legalizing hemp was due to its association with marijuana, its biological cousin. Though hemp and marijuana belong to the same species of plant, Cannabis sativa, they each have a unique chemistry, with very different characteristics and effects. Marijuana possesses tetrahydrocannabinol, or THC, the chemical that produces the characteristic high that is associated with cannabis. Hemp, on the other hand, is a strain of the cannabis plant that contains virtually no THC, and neither it nor the CBD derived from it can produce a high sensation.

As a professor and chair of the department of pharmacology at Penn State, I have been following research developments with CBD closely and have seen some promising evidence for its role in treating a broad range of medical conditions.

While there is growing evidence that CBD can help with certain conditions, caution is needed. Rigorous scientific studies are limited, so it is important that the marketing of CBD products does not get out ahead of the research and of robust evidence.

Before purchasing any CBD products, first discuss them with your doctor and pharmacist.

Unpacking the hype behind CBD

The primary concern about CBD marketing is that the scientific community is not sure of the best form of CBD to use. CBD can be produced as either a pure compound or a complex mixture of molecules from hemp that constitute CBD oil. CBD can also be formulated as a topical cream or lotion, or as a gummy, capsule or tincture.


Guidance, backed by clinical research, is needed on the best dose and delivery form of CBD for each medical condition. That research is still in progress.

But in the meantime, the siren's call of the marketplace has sounded and created an environment in which CBD is often hyped as a cure-all – an elixir for insomnia, anxiety, neuropathic pain, cancer and heart disease.

Sadly, there is precious little rigorous scientific evidence to support many of these claims, and much of the existing research has been performed in animal models.

CBD is simply not a panacea for all that ails you.


Childhood seizure disorders

Here's one thing that is known: Based on rigorous trials with hundreds of patients, CBD has been shown to be a safe and effective drug for seizure disorders, particularly in children.

In 2018, the U.S. Food and Drug Administration granted regulatory approval for the use of a purified CBD product sold under the brand name Epidiolex for the treatment of Lennox-Gastaut and Dravet syndromes in children.

These two rare syndromes, appearing early in life, produce large numbers of frequent seizures that are resistant to traditional epilepsy treatments. CBD delivered as the oral solution Epidiolex, however, can produce a significant reduction – greater than 25% – in the frequency of seizures in these children, with 5% of the patients becoming seizure-free.

More than 200 scientific trials

CBD is what pharmacologists call a promiscuous drug. That means it could be effective for treating a number of medical conditions. In broad strokes, CBD affects more than one process in the body – a property known as polypharmacology – and so could benefit more than one medical condition.


As of early 2023, there are 202 ongoing or completed scientific trials examining the effectiveness of CBD in humans on such diverse disorders as chronic pain, substance use disorders, anxiety and arthritis.

In particular, CBD appears to be an anti-inflammatory agent and analgesic, similar to the functions of aspirin. This means it might be helpful for treating people suffering with inflammatory pain, like arthritis, or headaches and body aches.

CBD also shows potential for use in cancer therapy, although it has not been approved by the FDA for this purpose.

The potential for CBD in the context of cancer is twofold:


First, there is evidence that it can directly kill cancer cells, enhancing the ability of traditional therapies to treat the disease. This is not to say that CBD will replace those traditional therapies; the data is not that compelling.

Second, because of its ability to reduce pain and perhaps anxiety, the addition of CBD to a treatment plan may reduce side effects and increase the quality of life for people with cancer.

Things to consider before purchasing a CBD product.

The risks of unregulated CBD

While prescription CBD is safe when used as directed, other forms of the molecule come with risks. This is especially true for CBD oils. The over-the-counter CBD oil industry is unregulated and not necessarily safe, in that there are no regulatory requirements for monitoring what is in a product.

What's more, rigorous science does not support the unsubstantiated marketing claims made by many CBD products.


In a 2018 commentary, the author describes the results of his own study, published in Dutch in 2017. His team obtained samples of CBD products from patients and analyzed their content. Virtually none of the 21 samples contained the advertised quantity of CBD; indeed, 13 had little to no CBD at all, and many contained significant levels of THC – the compound in marijuana that produces a high – which was not supposed to be present.

In fact, studies have shown that there is little control of the contaminants that may be present in over-the-counter products. The FDA has issued scores of warning letters to companies that market unapproved drugs containing CBD. In spite of the marketing of CBD oils as all-natural, plant-derived products, consumers should be aware of the risks of unknown compounds in their products or unintended interactions with their prescription drugs.

Regulatory guidelines for CBD are sorely lacking. Most recently, in January 2023, the FDA concluded that the existing framework is “not appropriate for CBD” and said it would work with Congress to chart a way forward. In a statement, the agency said that “a new regulatory pathway for CBD is needed that balances individuals' desire for access to CBD products with the regulatory oversight needed to manage risks.”

As a natural product, CBD is still acting as a drug – much like aspirin, acetaminophen or even a cancer chemotherapy. Health care providers and consumers simply need to better understand its risks and benefits.


CBD may interact with the body in ways that are unintended. CBD is eliminated from the body by the same liver enzymes that remove a variety of drugs such as blood thinners, antidepressants and organ transplant drugs. Adding CBD oil to your medication list without consulting a physician could be risky and could interfere with prescription medications.

In an effort to help prevent these unwanted interactions, my colleague Dr. Paul Kocis, a clinical pharmacist, and I have created a free online application called the CANNabinoid Drug Interaction Resource. It identifies how CBD could potentially interact with other prescription medications. And we urge all people to disclose both over-the-counter CBD use and any recreational or medical cannabis use to their health care providers to prevent undesirable drug interactions.

In the end, I believe that CBD will prove to have a place in people's medicine cabinets – but not until the medical community has established the right form to take and the right dosage for a given medical condition.

Kent E Vrana, Professor and Chair of Pharmacology, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Poor media literacy in the social media age


theconversation.com – Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston – 2024-04-19 10:01:58

TikTok is not the only social app to pose the threats it's been accused of.

picture alliance via Getty Images

Nir Eisikovits, UMass Boston

The U.S. moved closer to banning the social media app TikTok after the House of Representatives attached the measure to an emergency spending bill on Apr. 17, 2024. The move could improve the bill's chances in the Senate, and President Joe Biden has indicated that he will sign the bill if it reaches his desk.


The bill would force ByteDance, the Chinese company that owns TikTok, to either sell its American holdings to a U.S. company or face a ban in the country. The company has said it will fight any effort to force a sale.

The proposed legislation was motivated by a set of national security concerns. For one, ByteDance can be required to assist the Chinese Communist Party in gathering intelligence, according to the Chinese National Intelligence Law. In other words, the data TikTok collects can, in theory, be used by the Chinese government.

Furthermore, TikTok's popularity in the United States, and the fact that many young people get their news from the platform – one-third of Americans under the age of 30 – turns it into a potent instrument for Chinese political influence.

Indeed, the U.S. Office of the Director of National Intelligence recently claimed that TikTok accounts run by a Chinese propaganda arm of the government targeted candidates from both political parties during the U.S. midterm election cycle in 2022, and that the Chinese Communist Party might attempt to influence the U.S. elections in 2024 in order to sideline critics of China and magnify U.S. social divisions.


To these worries, proponents of the legislation have appended two more arguments: It's only right to curtail TikTok because China bans most U.S.-based social media networks from operating there, and there would be nothing new in such a ban, since the U.S. already restricts the foreign ownership of important media networks.

Some of these arguments are stronger than others.

China doesn't need TikTok to collect data about Americans. The Chinese government can buy all the data it wants from data brokers because the U.S. has no federal data privacy laws to speak of. The fact that China, a country that Americans criticize for its authoritarian practices, bans social media platforms is hardly a reason for the U.S. to do the same.

The debate about banning TikTok tends to miss the larger picture of social media literacy.

I believe the cumulative force of these claims is substantial and the legislation, on balance, is plausible. But banning the app is also a red herring.


In the past few years, my colleagues and I at UMass Boston's Applied Ethics Center have been studying the impact of AI on how people understand themselves. Here's why I think the recent move against TikTok misses the larger point: Americans' sources of information have declined in quality and the problem goes beyond any one social media platform.

The deeper problem

Perhaps the most compelling argument for banning TikTok is that the app's ubiquity and the fact that so many young Americans get their news from it turns it into an effective tool for political influence. But the proposed solution of switching to American ownership of the app ignores an even more fundamental threat.

The deeper problem is not that the Chinese government can easily manipulate content on the app. It is, rather, that people think it is OK to get their news from social media in the first place. In other words, the real national security vulnerability is that people have acquiesced to informing themselves through social media.

Social media is not made to inform people. It is designed to capture consumer attention for the sake of advertisers. With slight variations, that's the business model of all platforms. That's why a lot of the content people encounter on social media is violent, divisive and disturbing. Controversial posts that generate strong feelings literally capture users' notice, hold their gaze for longer, and provide advertisers with improved opportunities to monetize engagement.


There's an important difference between actively consuming serious, well-vetted information and being manipulated to spend as much time as possible on a platform. The former is the lifeblood of democratic citizenship because being a citizen who participates in political decision-making requires having reliable information on the issues of the day. The latter amounts to letting your attention get hijacked for someone else's financial gain.

If TikTok is banned, many of its users are likely to migrate to Instagram and YouTube. This would benefit Meta and Google, their parent companies, but it wouldn't benefit national security. People would still be exposed to as much junk news as before, and experience shows that these social media platforms could be vulnerable to manipulation as well. After all, the Russians primarily used Facebook and Twitter to meddle in the 2016 election.

Media literacy is especially critical in the age of social media.

Media and technology literacy

That Americans have settled on getting their information from outlets that are uninterested in informing them undermines the very requirement of serious political participation, namely educated decision-making. This problem is not going to be solved by restricting access to foreign apps.

Research suggests that it will only be alleviated by inculcating media and technology literacy habits from an early age. This involves teaching young people how social media companies make money, how algorithms shape what they see on their phones, and how different types of content affect them psychologically.


My colleagues and I have just launched a pilot program to boost digital media literacy with the Boston Mayor's Youth Council. We are talking to Boston's youth about how the technologies they use every day undermine their privacy, about the role of algorithms in shaping everything from their taste in music to their political sympathies, and about how generative AI is going to influence their ability to think and write clearly and even who they count as friends.

We are planning to present them with evidence about the adverse effects of excessive social media use on their mental health. We are going to talk to them about taking time away from their phones and developing a healthy skepticism towards what they see on social media.

Protecting people's capacity for critical thinking is a challenge that calls for bipartisan attention. Some of these measures to boost media and technology literacy might not be popular among tech users and tech companies. But I believe they are necessary for raising thoughtful citizens rather than passive social media consumers who have surrendered their attention to commercial and political actors who do not have their interests at heart.

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Are tomorrow’s engineers ready to face AI’s ethical challenges?


theconversation.com – Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan – 2024-04-19 07:42:44


Finding ethics' place in the engineering curriculum.

PeopleImages/iStock via Getty Images Plus

Elana Goldenkoff, University of Michigan and Erin A. Cech, University of Michigan

A chatbot turns hostile. A test version of a Roomba vacuum collects images of users in private situations. A Black woman is falsely identified as a suspect on the basis of facial recognition software, which tends to be less accurate at identifying women and people of color.


These incidents are not just glitches, but examples of more fundamental problems. As artificial intelligence and machine learning tools become more integrated into daily life, ethical considerations are growing, from privacy issues and race and gender biases in coding to the spread of misinformation.

The general public depends on software engineers and computer scientists to ensure these technologies are created in a safe and ethical manner. As a sociologist and a doctoral candidate interested in science, technology, engineering and math education, we are currently researching how engineers in many different fields learn and understand their responsibilities to the public.

Yet our recent research, as well as that of other scholars, points to a troubling reality: The next generation of engineers often seem unprepared to grapple with the social implications of their work. What's more, some appear apathetic about the moral dilemmas their careers may bring – just as advances in AI intensify such dilemmas.

Aware, but unprepared

As part of our ongoing research, we interviewed more than 60 electrical engineering and computer science master's students at a top engineering program in the United States. We asked students about their experiences with ethical challenges in engineering, their knowledge of ethical dilemmas in the field and how they would respond to scenarios in the future.


First, the good news: Most students recognized potential dangers of AI and expressed concern about personal privacy and the potential to cause harm – like how race and gender biases can be written into algorithms, intentionally or unintentionally.

One student, for example, expressed dismay at the environmental impact of AI, saying AI companies are using “more and more greenhouse power, [for] minimal benefits.” Others discussed concerns about where and how AIs are being applied, including for military technology and to generate falsified information and images.

When asked, however, “Do you feel equipped to respond in concerning or unethical situations?” students often said no.

“Flat out no. … It is kind of scary,” one student replied. “Do YOU know who I'm supposed to go to?”


Another was troubled by the lack of preparation: “I [would be] dealing with that with no experience. … Who knows how I'll react.”


Many students are worried about ethics in their field – but that doesn't mean they feel prepared to deal with the challenges.

The Good Brigade/DigitalVision via Getty Images

Other researchers have similarly found that many engineering students do not feel satisfied with the ethics training they do receive. Common training usually emphasizes professional codes of conduct rather than the complex socio-technical factors underlying ethical decision-making. Research suggests that even when presented with particular scenarios or case studies, engineering students often struggle to recognize ethical dilemmas.

‘A box to check off'

Accredited engineering programs are required to “include topics related to professional and ethical responsibilities” in some capacity.


Yet ethics training is rarely emphasized in the formal curricula. A study assessing undergraduate STEM curricula in the U.S. found that coverage of ethical issues varied greatly in terms of content, amount and how seriously it was presented. Additionally, an analysis of academic literature about engineering education found that ethics is often considered nonessential training.

Many engineering faculty express dissatisfaction with students' understanding, but say they feel pressure from engineering colleagues and students themselves to prioritize technical skills in their limited class time.

Researchers in one 2018 study interviewed over 50 engineering faculty and documented hesitancy – and sometimes even outright resistance – toward incorporating public welfare issues into their engineering classes. More than a quarter of professors they interviewed saw ethics and societal impacts as outside “real” engineering work.

About a third of students we interviewed in our ongoing research share this seeming apathy toward ethics training, referring to ethics classes as “just a box to check off.”


“If I'm paying money to attend ethics class as an engineer, I'm going to be furious,” one said.

These attitudes sometimes extend to how students view engineers' role in society. One interviewee in our current study, for example, said that an engineer's “responsibility is just to create that thing, design that thing and … tell people how to use it. [Misusage] issues are not their concern.”

One of us, Erin Cech, followed a cohort of 326 engineering students from four U.S. colleges. This research, published in 2014, suggested that engineers actually became less concerned over the course of their degree about their ethical responsibilities and understanding the public consequences of technology. Following up with them after they left college, we found that their concerns regarding ethics did not rebound once these new graduates entered the workforce.

Joining the work world

When engineers do receive ethics training as part of their degree, it seems to work.


Along with engineering professor Cynthia Finelli, we conducted a survey of over 500 employed engineers. Engineers who received formal ethics and public welfare training in school are more likely to understand their responsibility to the public in their professional roles, and recognize the need for collective problem solving. Compared to engineers who did not receive training, they were 30% more likely to have noticed an ethical issue in their workplace and 52% more likely to have taken action.


The next generation needs to be prepared for ethical questions, not just technical ones.

Qi Yang/Moment via Getty Images

Over a quarter of these practicing engineers reported encountering a concerning ethical situation at work. Yet approximately one-third said they have never received training in public welfare – not during their education, and not during their career.

This gap in ethics education raises serious questions about how well-prepared the next generation of engineers will be to navigate the complex ethical landscape of their field, especially when it comes to AI.


To be sure, the burden of watching out for public welfare is not shouldered by engineers, designers and programmers alone. Companies and legislators share the responsibility.

But the people who are designing, testing and fine-tuning this technology are the public's first line of defense. We believe educational programs owe it to them – and the rest of us – to take this training seriously.

Elana Goldenkoff, Doctoral Candidate in Movement Science, University of Michigan and Erin A. Cech, Associate Professor of Sociology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.



AI chatbots refuse to produce ‘controversial’ output − why that’s a free speech problem


theconversation.com – Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University – 2024-04-18 07:23:58

AI chatbots restrict their output according to vague and broad policies.

taviox/iStock via Getty Images

Jordi Calvet-Bademunt, Vanderbilt University and Jacob Mchangama, Vanderbilt University

Google recently made headlines globally because its chatbot Gemini generated images of people of color instead of white people in historical settings that featured white people. Adobe Firefly's image creation tool saw similar issues. This led some commentators to complain that AI had gone “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.


The discussions over AI's political leanings and efforts to fight bias are important. Still, the conversation on AI ignores another crucial issue: What is the AI industry's approach to speech, and does it embrace international free speech standards?

We are policy researchers who study free speech, as well as the executive director and a research fellow at The Future of Free Speech, an independent, nonpartisan think tank based at Vanderbilt University. In a recent report, we found that generative AI has important shortcomings regarding freedom of expression and access to information.

Generative AI is a type of AI that creates content, like text or images, based on the data it has been trained with. In particular, we found that the use policies of major chatbots do not meet United Nations standards. In practice, this means that AI chatbots often censor output when dealing with issues the companies deem controversial. Without a solid culture of free speech, the companies producing generative AI tools are likely to continue to face backlash in these increasingly polarized times.

Vague and broad use policies

Our report analyzed the use policies of six major AI chatbots, including Google's Gemini and OpenAI's ChatGPT. Companies issue policies to set the rules for how people can use their models. With international human rights as a benchmark, we found that companies' misinformation and hate speech policies are too vague and expansive. It is worth noting that international human rights law is less protective of free speech than the U.S. First Amendment.


Our analysis found that companies' hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies that are as broadly and vaguely defined as Google's can backfire.

To show how vague and broad use policies can affect users, we tested a range of prompts on controversial topics. We asked chatbots questions like whether transgender women should or should not be allowed to participate in women's sports tournaments or about the role of European colonialism in the current climate and inequality crises. We did not ask the chatbots to produce hate speech denigrating any side or group. Similar to what some users have reported, the chatbots refused to generate content for 40% of the 140 prompts we used. For example, all chatbots refused to generate posts opposing the participation of transgender women in women's tournaments. However, most of them did produce posts supporting their participation.

Freedom of speech is a foundational right in the U.S., but what it means and how far it goes are still widely debated.

Vaguely phrased policies rely heavily on moderators' subjective opinions about what hate speech is. Users can also perceive that the rules are unjustly applied and interpret them as too strict or too lenient.

For example, the chatbot Pi bans “content that may spread misinformation.” However, international human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers … through any … of … choice,” according to a key United Nations convention.


Defining what constitutes accurate information also has political implications. Governments of several countries used rules adopted in the context of the pandemic to repress criticism of the government. More recently, India confronted Google after Gemini noted that some experts consider the policies of the Indian prime minister, Narendra Modi, to be fascist.

Free speech culture

There are reasons AI providers may want to adopt restrictive use policies. They may wish to protect their reputations and not be associated with controversial content. If they serve a global audience, they may want to avoid content that is offensive in any region.

In general, AI providers have the right to adopt restrictive policies. They are not bound by international human rights. Still, their market power makes them different from other companies. Users who want to generate AI content will most likely end up using one of the chatbots we analyzed, especially ChatGPT or Gemini.

These companies' policies have an outsize effect on the right to access information. This effect is likely to increase with generative AI's integration into search, word processors, email and other applications.


This means society has an interest in ensuring such policies adequately protect free speech. In fact, the Digital Services Act, Europe's online safety rulebook, requires that so-called “very large online platforms” assess and mitigate “systemic risks.” These risks include negative effects on freedom of expression and information.

Jacob Mchangama discusses online free speech in the context of the European Union's 2022 Digital Services Act.

This obligation, imperfectly applied so far by the European Commission, illustrates that with great power comes great responsibility. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Even where a similar legal obligation does not apply to AI providers, we believe that the companies' influence should require them to adopt a free speech culture. International human rights provide a useful guiding star on how to responsibly balance the different interests at stake. At least two of the companies we focused on – Google and Anthropic – have recognized as much.

Outright refusals

It's also important to remember that users have a significant degree of autonomy over the content they see in generative AI. Like search engines, the output users receive greatly depends on their prompts. Therefore, users' exposure to hate speech and misinformation from generative AI will typically be limited unless they specifically seek it.


This is unlike social media, where people have much less control over their own feeds. Stricter controls, including on AI-generated content, may be justified at the level of social media since they distribute content publicly. For AI providers, we believe that use policies should be less restrictive about what information users can generate than those of social media platforms.

AI companies have other ways to address hate speech and misinformation. For instance, they can provide context or countervailing facts in the content they generate. They can also allow for greater user customization. We believe that chatbots should avoid merely refusing to generate any content altogether, unless there are solid public interest grounds, such as preventing child sexual abuse material, something laws prohibit.

Refusals to generate content not only affect fundamental rights to free speech and access to information. They can also push users toward chatbots that specialize in generating hateful content and echo chambers. That would be a worrying outcome.

Jordi Calvet-Bademunt, Research Fellow and Visiting Scholar of Political Science, Vanderbilt University and Jacob Mchangama, Research Professor of Political Science, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

