The Conversation

Social media design is key to protecting kids online

Published on theconversation.com – Abdulmalik Alluhidan, Ph.D. student in Computer Science, Vanderbilt University – 2025-03-19 07:50:00

How social media apps are designed has a lot to do with whether teens have good or bad experiences.
Daniel de la Hoz/Moment via Getty Images

Abdulmalik Alluhidan, Vanderbilt University

Social media is a complex environment that presents both opportunities and threats for adolescents, with self-expression and emotional support on the one hand and body-shaming, cyberbullying and addictive behaviors on the other. This complexity underscores the challenge of regulating teen social media use, but it also points to another avenue for protecting young people online: how social media platforms are designed.

The debate around teen social media use has intensified, with recent bipartisan policy efforts in the U.S., such as the Kids Online Safety Act, seeking to protect young people from digital harms. These efforts reflect legitimate concerns. However, broad restrictions on social media could also limit benefits for teens, throwing the baby out with the bathwater.

I am a researcher who studies online safety and digital well-being. My recent work with colleagues in computer scientist Pamela Wisniewski’s Socio-Technical Interaction Research Lab underscores a critical point: social media is neither inherently harmful nor entirely beneficial. It is a tool shaped by its design, how teens use it, and the context of their experiences.

In other words, social media’s impact is shaped by its affordances – how platforms are designed and what they enable users to do or constrain them from doing. Some features foster connection while others amplify harms.

As society moves toward practical solutions for online safety, it is important to use evidence-based research on how these features shape teens’ social media experiences and how they could be redesigned to be age appropriate for young people. It’s also important to incorporate teens’ perspectives to pinpoint what policies and design choices should be made to protect young people using social media.

My colleagues and I analyzed over 2,000 posts from teens ages 15-17 on an online peer-support platform. Teens openly discussed their experiences with popular social media platforms such as Instagram, YouTube, Snapchat and TikTok. Their voices highlight a potential path forward: focusing on safety by design – an approach that improves platform features to amplify benefits and mitigate harms. This approach respects young people’s agency while prioritizing their digital well-being.

What teens say about social media

While social media’s worst outcomes, such as cyberbullying and mental health crises, are often in the spotlight, our research shows that teens’ experiences are far more nuanced: platforms enable diverse outcomes depending on their features and design.

Teens commonly described negative experiences involving social drama, cyberbullying and privacy violations. For example, Instagram was a focal point for body-shaming and self-esteem issues, driven by its emphasis on curated visual content. Facebook triggered complaints about privacy violations, such as parents sharing private information without teens’ consent. Snapchat, meanwhile, exposed teens to risky interactions due to its ephemeral messaging, which fosters intimate but potentially unsafe connections.

Research – and teens themselves – indicates that social media has negative and positive effects on young people.

At the same time, teens expressed that social media provides a space for support, inspiration and self-expression, particularly when offline spaces feel isolating. Teens used social media to cope with stress or seek out uplifting content.

Platforms such as Snapchat and WhatsApp were key spaces for seeking connection, enabling teens to build relationships and find emotional support. Snapchat, in particular, was the go-to platform for fostering close personal connections, while YouTube empowered teens to promote their creativity and identity by sharing videos.

Many praised Instagram and Snapchat for providing inspiration, distraction or emotional relief during stressful times. Teens also used social media to seek information, turning to YouTube and Twitter to learn new things, verify information or troubleshoot technical problems.

These findings underscore a critical insight: Platform design matters. Features such as algorithms, privacy controls and content-sharing mechanisms directly shape how teens experience social media. These findings also challenge the perception of social media as a purely negative force. Instead, teens’ experiences highlight its dual nature: a space for both risk and opportunity.

Key to safer social media

The concept of affordances – design and features – helps explain why teens’ experiences differ across platforms and provides a path toward safer design. For example, Instagram’s affordances such as image sharing and algorithmic content promotion amplify social comparison, leading to body-shaming and self-esteem issues. Snapchat’s affordances, such as ephemeral messaging and visibility of “best friends,” encourage personal connections but can foster risky interactions. Meanwhile, YouTube’s affordances, such as easy content creation and discovery, promote self-expression but can contribute to time-management struggles due to its endless scroll design.

By understanding these platform-specific designs and features, it is possible to mitigate risks without losing the benefits. For example, Facebook could allow for appropriate levels of parental oversight of teen accounts while preserving privacy. Instagram could reduce algorithmic promotion of harmful content. And Snapchat could improve safety features.

This safety by design approach moves beyond restricting access to focus on improving the platforms themselves. By thoughtfully redesigning social media features, tech companies can empower teens to use these tools safely and meaningfully. Policymakers can focus on holding social media companies responsible for their platforms’ impact, while simultaneously promoting the digital rights of teens to benefit from social media use.

Call for safety by design

It’s important for policymakers to recognize that social media’s risks and rewards coexist. Instead of viewing social media as a monolith, policymakers can target the features of social media platforms most likely to cause harm. For example, they could require platform companies to conduct safety audits or disclose algorithmic risks. These steps could encourage safer design without limiting access.

By addressing platform affordances and adopting safety by design, it is possible to create digital spaces that protect teens from harm while preserving the connection, creativity and support that social media enables. The tools to build a future where teens can thrive are already available; they just need to be designed better.

Pamela Wisniewski contributed to the writing of this article.

Abdulmalik Alluhidan, Ph.D. student in Computer Science, Vanderbilt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.



How states are placing guardrails around AI in the absence of strong federal regulation

Published on theconversation.com – Anjana Susarla, Professor of Information Systems, Michigan State University – 2025-08-06 07:55:00


U.S. states are actively regulating artificial intelligence (AI) amid minimal federal oversight, with all 50 states proposing AI-related laws in 2025. Key regulatory areas include government use of AI—especially to prevent biases in social services and criminal justice—and AI’s role in healthcare, focusing on transparency, consumer protection, insurers, and clinicians. Facial recognition laws address privacy and racial bias concerns, with 15 states imposing restrictions and requiring bias reporting. Generative AI laws, like those in California and Utah, mandate disclosure about AI use and training data. Despite states’ efforts, federal policy under the Trump administration threatens to limit regulations deemed burdensome, complicating oversight.

The California State Capitol has been the scene of numerous efforts to regulate AI.
AP Photo/Juliana Yamada

Anjana Susarla, Michigan State University

U.S. state legislatures are where the action is for placing guardrails around artificial intelligence technologies, given the lack of meaningful federal regulation. The resounding defeat in Congress of a proposed moratorium on state-level AI regulation means states are free to continue filling the gap.

Several states have already enacted legislation around the use of AI. All 50 states have introduced various AI-related legislation in 2025.

Four aspects of AI in particular stand out from a regulatory perspective: government use of AI, AI in health care, facial recognition and generative AI.

Government use of AI

The oversight and responsible use of AI are especially critical in the public sector. Predictive AI – AI that performs statistical analysis to make forecasts – has transformed many governmental functions, from determining social services eligibility to making recommendations on criminal justice sentencing and parole.

But the widespread use of algorithmic decision-making could have major hidden costs. Potential algorithmic harms posed by AI systems used for government services include racial and gender biases.

Recognizing the potential for algorithmic harms, state legislatures have introduced bills focused on public sector use of AI, with emphasis on transparency, consumer protections and recognizing risks of AI deployment.

Several states have required AI developers to disclose risks posed by their systems. The Colorado Artificial Intelligence Act includes transparency and disclosure requirements for developers of AI systems involved in making consequential decisions, as well as for those who deploy them.

Montana’s new “Right to Compute” law sets requirements that AI developers adopt risk management frameworks – methods for addressing security and privacy in the development process – for AI systems involved in critical infrastructure. Some states have established bodies that provide oversight and regulatory authority, such as those specified in New York’s SB 8755 bill.

AI in health care

In the first half of 2025, 34 states introduced over 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers’ use of AI and clinicians’ use of AI.

Bills about transparency define requirements for information that AI system developers and organizations that deploy the systems disclose.

Consumer protection bills aim to keep AI systems from unfairly discriminating against some people, and ensure that users of the systems have a way to contest decisions made using the technology.

Numerous bills in state legislatures aim to regulate the use of AI in health care, including medical devices like this electrocardiogram recorder.
VCG via Getty Images

Bills covering insurers provide oversight of the payers’ use of AI to make decisions about health care approvals and payments. And bills about clinical uses of AI regulate use of the technology in diagnosing and treating patients.

Facial recognition and surveillance

In the U.S., privacy protections, including those against facial surveillance, rest on a long-standing legal doctrine of protecting individual autonomy against interference from the government. In this context, facial recognition technologies pose significant privacy challenges as well as risks from potential biases.

Facial recognition software, commonly used in predictive policing and national security, has exhibited biases against people of color and consequently is often considered a threat to civil liberties. A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software was significantly less likely to correctly identify the faces of Black people and other historically disadvantaged minorities, particularly people with darker skin.

Bias also creeps into the data used to train these algorithms, for example when the teams that guide the development of such facial recognition software lack diversity.

By the end of 2024, 15 states in the U.S. had enacted laws to limit the potential harms from facial recognition. Some elements of state-level regulations are requirements on vendors to publish bias test reports and data management practices, as well as the need for human review in the use of these technologies.

Porcha Woodruff was wrongly arrested for a carjacking in 2023 based on facial recognition technology.
AP Photo/Carlos Osorio

Generative AI and foundation models

The widespread use of generative AI has also prompted concerns from lawmakers in many states. Utah’s Artificial Intelligence Policy Act requires individuals and organizations to clearly disclose, when asked, that they’re using generative AI systems to interact with someone. The legislature subsequently narrowed the scope to interactions that could involve dispensing advice or collecting sensitive information.

Last year, California passed AB 2013, a generative AI law that requires developers to post information on their websites about the data used to train their AI systems, including foundation models. Foundation models are any AI model that is trained on extremely large datasets and that can be adapted to a wide range of tasks without additional training.

AI developers have typically not been forthcoming about the training data they use. Such legislation could help copyright owners of content used in training AI overcome the lack of transparency.

Trying to fill the gap

In the absence of a comprehensive federal legislative framework, states have tried to address the gap by moving forward with their own legislative efforts. While such a patchwork of laws may complicate AI developers’ compliance efforts, I believe that states can provide important and needed oversight on privacy, civil rights and consumer protections.

Meanwhile, the Trump administration announced its AI Action Plan on July 23, 2025. The plan says “The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations … ”

The move could hinder state efforts to regulate AI if states have to weigh regulations that might run afoul of the administration’s definition of burdensome against needed federal funding for AI.

Anjana Susarla, Professor of Information Systems, Michigan State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I. based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader –Staff Editor.

Political Bias Rating: Center-Left

This article primarily offers a factual overview of AI regulation efforts at the state level, highlighting concerns around transparency, consumer protections, and biases in AI systems. It emphasizes the risks of algorithmic harms, especially relating to racial and gender biases, which aligns with issues often highlighted by progressive or center-left perspectives. The coverage of civil rights, privacy, and regulatory oversight reflects a cautious stance toward unchecked AI deployment. While it reports on federal actions from the Trump administration critically, the overall tone remains measured and policy-focused rather than overtly ideological, placing it slightly left-of-center.


2 spacecraft flew exactly in line to imitate a solar eclipse, capture a stunning image and test new tech

Published on theconversation.com – Christopher Palma, Teaching Professor of Astronomy & Astrophysics, Penn State – 2025-08-04 07:41:00


During a solar eclipse, astronomers can study the Sun’s faint corona, usually hidden by the bright Sun. The European Space Agency’s Proba-3 mission creates artificial eclipses using two spacecraft flying in precise formation about 492 feet apart. One spacecraft blocks the Sun’s bright disk, casting a shadow on the second, which photographs the corona. Launched in 2024, Proba-3 orbits between 372 miles and 37,282 miles from Earth, maintaining alignment within one millimeter at high speeds. The mission aids future satellite technologies and studies space weather to improve forecasting of solar storms that affect Earth’s satellites.

The solar corona, as viewed by Proba-3’s ASPIICS coronagraph.
ESA/Proba-3/ASPIICS/WOW algorithm, CC BY-SA

Christopher Palma, Penn State

During a solar eclipse, astronomers who study heliophysics are able to study the Sun’s corona – its outer atmosphere – in ways they are unable to do at any other time.

The Sun’s disk is so bright that it overwhelms the faint light from the corona, leaving the corona invisible to most of the instruments astronomers use. The exception is when the Moon blocks the Sun, casting a shadow on the Earth during an eclipse. But as an astronomer, I know eclipses are rare: they last only a few minutes, and they are visible only along narrow paths across the Earth. So researchers have to work hard to get their equipment to the right place to capture these short, infrequent events.

In their quest to learn more about the Sun, scientists at the European Space Agency have built and launched a new probe designed specifically to create artificial eclipses.

Meet Proba-3

This probe, called Proba-3, works just like a real solar eclipse. One spacecraft, which is roughly circular when viewed from the front, orbits closer to the Sun, and its job is to block the bright parts of the Sun, acting as the Moon would in a real eclipse. It casts a shadow on a second probe that has a camera capable of photographing the resulting artificial eclipse.

The two spacecraft of Proba-3 fly in precise formation about 492 feet (150 meters) apart.
ESA-P. Carril, CC BY-NC-ND

Having two separate spacecraft flying independently but in such a way that one casts a shadow on the other is a challenging task. But future missions depend on scientists figuring out how to make this precision choreography technology work, and so Proba-3 is a test.

This technology is helping to pave the way for future missions that could include satellites that dock with and deorbit dead satellites or powerful telescopes with instruments located far from their main mirrors.

The side benefit is that researchers get to practice by taking important scientific photos of the Sun’s corona, allowing them to learn more about the Sun at the same time.

An immense challenge

The two satellites launched in 2024 and entered orbits that approach Earth as close as 372 miles (600 kilometers) – that’s about 50% farther from Earth than the International Space Station – and reach more than 37,282 miles (60,000 km) at their most distant point, about one-sixth of the way to the Moon.

During this orbit, the satellites move at speeds between 5,400 miles per hour (8,690 kilometers per hour) and 79,200 mph (127,460 kph). At their slowest, they’re still moving fast enough to go from New York City to Philadelphia in one minute.

While flying at that speed, they can control themselves automatically, without a human guiding them, and fly 492 feet (150 meters) apart – a separation that is longer than the length of a typical football stadium – while still keeping their locations aligned to about one millimeter.
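The unit conversions in the figures above are easy to verify. A quick sanity check (the roughly 90-mile New York City–Philadelphia distance is an assumption used here for illustration):

```python
MI_PER_KM = 1 / 1.609344  # exact definition: 1 mile = 1.609344 km
FT_PER_M = 3.28084

# Orbit altitudes: 600 km closest approach, 60,000 km most distant
print(int(600 * MI_PER_KM))       # 372 miles
print(int(60000 * MI_PER_KM))     # 37,282 miles

# Orbital speeds: 8,690 kph slowest, 127,460 kph fastest
print(round(8690 * MI_PER_KM))    # 5,400 mph
print(round(127460 * MI_PER_KM))  # 79,200 mph

# Spacecraft separation: 150 meters
print(round(150 * FT_PER_M))      # 492 feet

# ~90 miles from New York City to Philadelphia (assumed distance)
# at the slowest orbital speed:
minutes = 90 / 5400 * 60
print(minutes)                    # 1.0 minute
```

The numbers in the article are mutually consistent; the metric values are the primary figures and the imperial ones are truncated or rounded conversions.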

They needed to maintain that precise flying pattern for hours in order to take a picture of the Sun’s corona, and they did it in June 2025.

The Proba-3 mission is also studying space weather by observing high-energy particles that the Sun ejects out into space, sometimes in the direction of the Earth. Space weather causes the aurora, also known as the northern lights, on Earth.

While the aurora is beautiful, solar storms can also harm Earth-orbiting satellites. The hope is that Proba-3 will help scientists continue learning about the Sun and better predict dangerous space weather events in time to protect sensitive satellites.

Christopher Palma, Teaching Professor of Astronomy & Astrophysics, Penn State

This article is republished from The Conversation under a Creative Commons license. Read the original article.





Political Bias Rating: Centrist

The content is a factual and scientific discussion of the Proba-3 space mission and its efforts to study the Sun’s corona through artificial eclipses. It emphasizes technological achievement and scientific advancement without promoting any political ideology or taking a stance on politically charged issues. The tone is neutral, informative, and focused on space exploration and research, which aligns with a centrist, nonpartisan perspective.


Are you really allergic to penicillin? A pharmacist explains why there’s a good chance you’re not − and how you can find out for sure

Published on theconversation.com – Elizabeth W. Covington, Associate Clinical Professor of Pharmacy, Auburn University – 2025-07-31 07:35:00


About 10–20% of Americans report a penicillin allergy, but fewer than 1% actually are allergic. Many people are labeled allergic due to childhood rashes or mild side effects, which are often unrelated to true allergies. Penicillin, discovered in 1928, is a narrow-spectrum antibiotic used to treat many infections safely and effectively. Incorrect allergy labels lead to use of broader, costlier antibiotics that promote resistance and may cause more side effects. Allergy status can be evaluated through detailed medical history and penicillin skin testing or monitored test dosing, allowing many to safely use penicillin again.

Penicillin is a substance produced by Penicillium mold. About 80% of people with a penicillin allergy will lose the allergy after about 10 years.
Clouds Hill Imaging Ltd./Corbis Documentary via Getty Images

Elizabeth W. Covington, Auburn University

Imagine this: You’re at your doctor’s office with a sore throat. The nurse asks, “Any allergies?” And without hesitation you reply, “Penicillin.” It’s something you’ve said for years – maybe since childhood, maybe because a parent told you so. The nurse nods, makes a note and moves on.

But here’s the kicker: There’s a good chance you’re not actually allergic to penicillin. About 10% to 20% of Americans report that they have a penicillin allergy, yet fewer than 1% actually do.

I’m a clinical associate professor of pharmacy specializing in infectious disease. I study antibiotics and drug allergies, including ways to determine whether people have penicillin allergies.

I know from my research that incorrectly being labeled as allergic to penicillin can prevent you from getting the most appropriate, safest treatment for an infection. It can also put you at an increased risk of antimicrobial resistance, which is when an antibiotic no longer works against bacteria.

The good news? It’s gotten a lot easier in recent years to pin down the truth of the matter. More and more clinicians now recognize that many penicillin allergy labels are incorrect – and there are safe, simple ways to find out your actual allergy status.

A steadfast lifesaver

Penicillin, the first antibiotic drug, was discovered in 1928 when a physician named Alexander Fleming extracted it from a type of mold called Penicillium. It became widely used to treat infections in the 1940s. Penicillin and closely related antibiotics such as amoxicillin and amoxicillin/clavulanate, which goes by the brand name Augmentin, are frequently prescribed to treat common infections such as ear infections, strep throat, urinary tract infections, pneumonia and dental infections.

Penicillin antibiotics are a class of narrow-spectrum antibiotics, which means they target specific types of bacteria. People who report having a penicillin allergy are more likely to receive broad-spectrum antibiotics. Broad-spectrum antibiotics kill many types of bacteria, including helpful ones, making it easier for resistant bacteria to survive and spread. This overuse speeds up the development of antibiotic resistance. Broad-spectrum antibiotics can also be less effective and are often costlier.

Figuring out whether you’re really allergic to penicillin is easier than it used to be.

Why the mismatch?

People often get labeled as allergic to antibiotics as children when they have a reaction such as a rash after taking one. But skin rashes frequently accompany childhood illnesses: many viruses and infections cause rashes on their own. If a child is taking an antibiotic at the time, the child may be labeled as allergic even though the rash was caused by the illness itself.

Some side effects such as nausea, diarrhea or headaches can happen with antibiotics, but they don’t always mean you are allergic. These common reactions usually go away on their own or can be managed. A doctor or pharmacist can talk to you about ways to reduce these side effects.

People also often assume penicillin allergies run in families, but having a relative with an allergy doesn’t mean you’re allergic – it’s not hereditary.

Finally, about 80% of patients with a true penicillin allergy will lose the allergy after about 10 years. That means even if you used to be allergic to this antibiotic, you might not be anymore, depending on the timing of your reaction.

Why does it matter if I have a penicillin allergy?

Believing you’re allergic to penicillin when you’re not can negatively affect your health. For one thing, you are more likely to receive stronger, broad-spectrum antibiotics that aren’t always the best fit and can have more side effects. You may also be more likely to get an infection after surgery and to spend longer in the hospital when hospitalized for an infection. What’s more, your medical bills could end up higher due to using more expensive drugs.

Penicillin and its close cousins are often the best tools doctors have to treat many infections. If you’re not truly allergic, figuring that out can open the door to safer, more effective and more affordable treatment options.

A penicillin skin test can safely determine whether you have a penicillin allergy, but a health care professional may also be able to tell by asking you some specific questions.
BSIP/Collection Mix: Subjects via Getty Images

How can I tell if I am really allergic to penicillin?

Start by talking to a health care professional such as a doctor or pharmacist. Allergy symptoms can range from a mild, self-limiting rash to severe facial swelling and trouble breathing. A health care professional may ask you several questions about your allergies, such as what happened, how soon after starting the antibiotic the reaction occurred, whether treatment was needed, and whether you’ve taken similar medications since then.

These questions can help distinguish between a true allergy and a nonallergic reaction. In many cases, this interview is enough to determine you aren’t allergic. But sometimes, further testing may be recommended.

One way to find out whether you’re really allergic to penicillin is through penicillin skin testing, which includes tiny skin pricks and small injections under the skin. These tests use components related to penicillin to safely check for a true allergy. If skin testing doesn’t cause a reaction, the next step is usually to take a small dose of amoxicillin while being monitored at your doctor’s office, just to be sure it’s safe.

A study published in 2023 showed that in many cases, skipping the skin test and going straight to the small test dose can also be a safe way to check for a true allergy. In this method, patients take a low dose of amoxicillin and are observed for about 30 minutes to see whether any reaction occurs.

With the right questions, testing and expertise, many people can safely reclaim penicillin as an option for treating common infections.

Elizabeth W. Covington, Associate Clinical Professor of Pharmacy, Auburn University

This article is republished from The Conversation under a Creative Commons license. Read the original article.





Political Bias Rating: Centrist

This content is educational and focused on medical information, specifically on penicillin allergies and their impact on health care. It presents scientific research and clinical practices without promoting any political ideology or partisan perspective. The article emphasizes evidence-based medical facts and encourages discussion with health care professionals, maintaining a neutral and informative tone typical of centrist communication.
