How much time do kids spend on devices – playing games, watching videos, texting and using the phone?

Nearly all U.S. teens have a smartphone.
MoMo Productions/Digital Vision via Getty Images

David Rosenberg, Wayne State University and Natalia Szura, Wayne State University

Curious Kids is a series for children of all ages. If you have a question you'd like an expert to answer, send it to curiouskidsus@theconversation.com.


How many hours does the average American spend on devices each year? – Maxwell P., age 10


Think about your favorite devices – your smartphone, laptop, tablet, computer or console – the things you use to play cool games, watch hilarious videos and connect and chat with friends.

Many young people spend a lot of time looking at them. Turns out that teens spend an average of 8½ hours on screens per day, and tweens – that's ages 8 to 12 – are not far behind, at 5½ hours.

Keep in mind those numbers cover only social media, gaming and texting. They do not include the time kids spend on screens for schoolwork or homework.

What's more, much of the time taken up by social media and texting is apparently not even enjoyable, much less productive. A 2017 study of teens ages 13 to 18 suggests they spend most of those hours on the phone in their bedroom, alone and distressed.

These lonely feelings correlate with the rise in the use of digital media. In 2022, 95% of teens had smartphones, up from only 23% in 2011. And 46% of today's teens say they use the internet almost constantly, compared with 24% of teenagers who said the same in 2014 and 2015.

Our team of psychiatrists treats young people with digital addiction. Many of our patients spend over 40 hours per week on screens – and some, up to 80 hours.

Think about it: If you spend “just” an average of 50 hours per week on devices from ages 13 to 18, the total time you spend on screens will equal more than 12 years of school!
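Curious how that adds up? Here is a quick back-of-the-envelope check in Python. The length of a school day and school year are rough assumptions of ours, not figures from this article:

```python
# Screen time at 50 hours per week from age 13 to 18.
screen_hours = 50 * 52 * 6             # hours/week * weeks/year * 6 years

# Assumed school schedule: 6.5-hour days, 180 days per year.
school_hours_per_year = 6.5 * 180      # = 1,170 hours of school per year

print(screen_hours / school_hours_per_year)  # about 13.3 "school years"
```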

The U.S. surgeon general says too much screen time can increase anxiety and depression in teens and tweens.

Find the right balance

All this is not to say that everything about devices is bad. In this digital age, people embark on exciting journeys through the screens of their devices. Sometimes, screens are the windows to a magical adventure.

But too much screen time can lead to problems. As human beings, we function best when we're in a state of balance. That happens when we eat well, exercise regularly and get enough sleep.

But spending too much time using digital devices can cause changes in the way you think and behave. Many teens and tweens develop the “fear of missing out” – known as FOMO. And one study shows some people develop nomophobia – the fear of being without your phone, or feeling anxious when you can't use it.

Moreover, digital addiction in high school may predict serious depression, anxiety and sleep disruption in college.

Rates of depression and anxiety are skyrocketing among college students. The fear of missing out is pervasive, resulting in sleep disruption; too many college students sleep with smartphones turned on and near their bed – and wake up to respond to texts and notifications during the night. Sleep disruption itself is a core symptom of both depression and anxiety.

How to avoid device addiction

A 2016 poll indicated that half of teens felt they were addicted to their mobile devices.

Getting hooked on screens means missing out on healthy activities. To achieve a better balance, some experts recommend the following: Turn off all screens during family meals and outings. Don't complain when your parents use parental controls. And turn off all the screens in your bedroom 30 to 60 minutes before bedtime – this step will improve sleep.

You may be a “screen addict” if you:

  • Feel uneasy or grumpy when you can't use your device.
  • Don't take breaks while spending hours on your device.
  • Ignore other fun activities you enjoy, like going outside or reading a book.
  • Have trouble sleeping or falling asleep because your screen time is too close to bedtime.
  • Experience eye, lower back and neck strain.
  • Struggle with weight gain or obesity because you're inactive.
  • Have difficulty with real-life, face-to-face social interactions.

If you notice these signs, do not dismiss them. But also realize you're not alone and help is out there. You can find balance again.

A kid breaks his addiction to gaming and social media.

A healthy approach

Exercise – riding a bike, playing sports, lifting weights or going for a jog or walk – keeps your brain healthy and protects it against depression and anxiety, while also limiting your screen time.

Another way to be happier and healthier is to spend time with people – face to face, not via a screen. Seeing and talking with people in person is the best way to bond with others, and it may be even better for life span than exercise.

Creative hobbies are good, too. Cooking, playing an instrument, dancing, any arts and crafts, and thousands of other fun things make people happier and more creative. What's more, hobbies make you well rounded and more attractive to others – which leads to more face-to-face interactions.

It's also critical for parents to practice healthy screen habits. But about one-third of adults say they use screens “constantly.” This is not exactly a great example for kids; when adults take responsibility for minimizing their own screen time, the whole family does better.

Our research team used magnetic resonance imaging, also known as MRI, to scan the brains of teens who had digital addiction. We found impairment in the brain's decision-making, processing and reward centers. But after a digital fast – meaning the addicted teens unplugged for two weeks – those brain abnormalities reversed, and the damage was undone.

Our findings also showed that kids with a desire to overcome digital addiction did better with a digital fast than those who were less willing or who denied their addiction.

A balanced lifestyle in the digital age is all about finding joy in screenless activities – being active, connecting with others and exploring your offline interests.


Hello, curious kids! Do you have a question you'd like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you're wondering, too. We won't be able to answer every question, but we will do our best.

David Rosenberg, Professor of Psychiatry and Neuroscience, Wayne State University and Natalia Szura, Research Assistant in Psychiatry, Wayne State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Viruses are doing mysterious things everywhere – AI can help researchers understand what they’re up to in the oceans and in your gut


theconversation.com – Libusha Kelly, Associate Professor of Systems and Computational Biology, Microbiology and Immunology, Albert Einstein College of Medicine – 2024-05-15 07:16:41

Many viral genetic sequences code for proteins that researchers haven't seen before.

KTSDesign/Science Photo Library via Getty Images

Libusha Kelly, Albert Einstein College of Medicine

Viruses are a mysterious and poorly understood force in microbial ecosystems. Researchers know they can infect, kill and manipulate human and bacterial cells in nearly every environment, from the oceans to your gut. But scientists don't yet have a full picture of how viruses affect their surrounding environments, in large part because of their extraordinary diversity and ability to rapidly evolve.

Communities of microbes are difficult to study in a laboratory setting. Many microbes are challenging to cultivate, and their natural environment has many more features influencing their success or failure than scientists can replicate in a lab.

So systems biologists like me often sequence all the DNA present in a sample – for example, a fecal sample from a patient – separate out the viral DNA sequences, then annotate the sections of the viral genome that code for proteins. These notes on the location, structure and other features of genes help researchers understand the functions viruses might carry out in the environment and help identify different kinds of viruses. Researchers annotate viruses by matching viral sequences in a sample to previously annotated sequences available in public databases of viral genetic sequences.

However, scientists are identifying viral sequences in DNA collected from the environment at a rate that far outpaces our ability to annotate those genes. This means researchers are publishing findings about viruses in microbial ecosystems using unacceptably small fractions of available data.

To improve researchers' ability to study viruses around the globe, my team and I have developed a novel approach to annotate viral sequences using artificial intelligence. Through protein language models akin to large language models like ChatGPT but specific to proteins, we were able to classify previously unseen viral sequences. This opens the door for researchers to not only learn more about viruses, but also to address biological questions that are difficult to answer with current techniques.

Annotating viruses with AI

Large language models use relationships between words in large datasets of text to generate potential answers to questions they are not explicitly “taught” the answer to. When you ask a chatbot “What is the capital of France?” for example, the model is not looking up the answer in a table of capital cities. Rather, it is using its training on huge datasets of documents and information to infer the answer: “The capital of France is Paris.”

Similarly, protein language models are AI algorithms that are trained to recognize relationships between billions of protein sequences from environments around the world. Through this training, they may be able to infer something about the essence of viral proteins and their functions.

We wondered whether protein language models could answer this question: “Given all annotated viral genetic sequences, what is this new sequence's function?”

In our proof of concept, we trained neural networks on previously annotated viral protein sequences in pre-trained protein language models and then used them to predict the annotation of new viral protein sequences. Our approach allows us to probe what the model is “seeing” in a particular viral sequence that leads to a particular annotation. This helps identify candidate proteins of interest either based on their specific functions or how their genome is arranged, winnowing down the search of vast datasets.
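To make the idea concrete, here is a minimal sketch – not the authors' actual pipeline – of training a small classifier to predict annotations from protein language model embeddings. The embeddings and labels below are random placeholders; in practice each vector would come from running an annotated viral protein through a pre-trained protein language model:

```python
# Hypothetical sketch: predict viral protein annotations from
# protein-language-model embeddings. All data here is placeholder data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 320))    # stand-in embeddings, one per protein
y = rng.integers(0, 5, size=1000)   # stand-in annotation classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network maps embeddings to annotations.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# "Annotate" unseen viral proteins by predicting their labels.
print(clf.predict(X_test[:5]))
```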


Prochlorococcus is one of the many species of marine bacteria with proteins that researchers haven't seen before.

Anne Thompson/Chisholm Lab, MIT via Flickr

By identifying more distantly related viral gene functions, protein language models can complement current methods to provide new insights into microbiology. For example, my team and I were able to use our model to discover a previously unrecognized integrase – a type of protein that can move genetic information in and out of cells – in the globally abundant marine picocyanobacteria Prochlorococcus and Synechococcus. Notably, this integrase may be able to move genes in and out of these populations of bacteria in the oceans and enable these microbes to better adapt to changing environments.

Our language model also identified a novel viral capsid protein that is widespread in the global oceans. We produced the first picture of how its genes are arranged, showing that it can contain different sets of genes, which we believe indicates this virus serves different functions in its environment.

These preliminary findings represent only two of thousands of annotations our approach has provided.

Analyzing the unknown

Most of the hundreds of thousands of newly discovered viruses remain unclassified. Many viral genetic sequences match protein families with no known function or have never been seen before. Our work shows that similar protein language models could help study the threat and promise of our planet's many uncharacterized viruses.

While our study focused on viruses in the global oceans, improved annotation of viral proteins is critical for better understanding the role viruses play in health and disease in the human body. We and other researchers have hypothesized that viral activity in the human gut microbiome might be altered when you're sick. This means that viruses may help identify stress in microbial communities.

However, our approach is also limited because it requires high-quality annotations. Researchers are developing newer protein language models that incorporate other “tasks” as part of their training, particularly predicting protein structures to detect similar proteins, to make them more powerful.

Making all AI tools available via FAIR Data Principles – data that is findable, accessible, interoperable and reusable – can help researchers at large realize the potential of these new ways of annotating protein sequences, leading to discoveries that benefit human health.

Libusha Kelly, Associate Professor of Systems and Computational Biology, Microbiology and Immunology, Albert Einstein College of Medicine

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Human differences in judgment lead to problems for AI


theconversation.com – Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California – 2024-05-14 07:14:06

Bias isn't the only human imperfection turning up in AI.

Emrah Turudu/Photodisc via Getty Images

Mayank Kejriwal, University of Southern California

Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.

If society could somehow rid itself of bias, would all problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Statistical noise

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than initially meets the eye. Seminal work dating all the way back to the Great Depression found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.

Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Behavioral economist Daniel Kahneman explains the concept of noise in human judgment.

Noise in the data

On the surface, it doesn't seem likely that noise could affect the performance of AI systems. After all, machines aren't affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: “If I place a heavy rock on a paper table, will it collapse? Yes or No.” If there is high agreement between the two – in the best case, perfect agreement – the machine is approaching human-level common sense, according to the test.

So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: “Is the sentence plausible or implausible? My dog plays volleyball.” In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don't account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than if the answers diverge – in other words, where there is noise. Researchers still don't know whether or how to weigh AI's answers in that situation, but a first step is acknowledging that the problem exists.

Tracking down noise in the machine

Theory aside, the question still remains whether all of the above is hypothetical or whether noise shows up in real tests of common sense. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers and get multiple people to independently label it – that is, provide their own answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.
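As a toy illustration of that procedure – with made-up answers, not the study's data – the snippet below scores each question by how strongly independent labelers agree; low agreement signals a noisy question:

```python
# Toy example: quantify noise as disagreement among independent labelers.
from collections import Counter

# Each row holds five labelers' independent answers to one question.
questions = [
    ["yes", "yes", "yes", "yes", "yes"],  # unanimous: little or no noise
    ["yes", "no", "yes", "no", "yes"],    # split: a noisy question
    ["no", "no", "no", "yes", "no"],
]

for i, answers in enumerate(questions):
    majority, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    print(f"question {i}: majority '{majority}', agreement {agreement:.0%}")
```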

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven't been any studies of possible noise in AI tests.

To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was how inevitable noise is, and whether it is just a matter of quality control.

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high – even universal – agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4% and 10% of a system's performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85% on a test, and you built an AI system that achieved 91%. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we're not sure anymore that the 6% improvement means much. For all we know, there may be no real improvement.
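A toy simulation makes this concrete. In the sketch below – our own illustration with assumed numbers, not figures from the paper – two systems with true accuracies of 85% and 91% are scored against “gold” labels that are themselves wrong 7% of the time, on a 200-question test:

```python
# Toy simulation: label noise plus a modest test size can blur a real
# 6-point gap. The noise rate and test size are illustrative assumptions.
import random

def measured_accuracy(true_acc, label_noise, n_questions=200):
    correct = 0
    for _ in range(n_questions):
        system_right = random.random() < true_acc
        label_flipped = random.random() < label_noise
        # A flipped gold label makes right answers look wrong, and vice versa.
        correct += system_right != label_flipped
    return correct / n_questions

random.seed(1)
for trial in range(5):
    a = measured_accuracy(0.85, 0.07)
    b = measured_accuracy(0.91, 0.07)
    print(f"trial {trial}: system A {a:.1%}, system B {b:.1%}")
```

Run it a few times: the measured gap shrinks and bounces around from trial to trial, so a few points of difference can reflect noise rather than genuine improvement.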

On AI leaderboards, where large language models like the one that powers ChatGPT are ranked, performance differences between rival systems are far narrower, typically less than 1%. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.

Noise audits

What is the way forward? Returning to Kahneman's book, he proposed the concept of a “noise audit” for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, will lead to their adoption.

Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Iron fuels immune cells – and it could make asthma worse


theconversation.com – Benjamin Hurrell, Assistant Professor of Research in Molecular Microbiology and Immunology, University of Southern California – 2024-05-14 07:13:50

Iron carries oxygen throughout the body, but ironically, it can also make it harder to breathe for people with asthma.

Hiroshi Watanabe/Stone via Getty Images

Benjamin Hurrell, University of Southern California and Omid Akbari, University of Southern California

You've likely heard that you can get iron from eating spinach and steak. You might also know that it's an essential trace element that is a major component of hemoglobin, a protein in red blood cells that carries oxygen from your lungs to all parts of the body.

A lesser-known but important function of iron is its involvement in generating energy for certain immune cells.

In our lab's newly published research, we found that blocking or limiting iron uptake in immune cells could potentially ease the symptoms of an asthma attack caused by allergens.

Immune cells that need iron

During an asthma attack, harmless allergens activate immune cells in your lungs called ILC2s. This causes them to multiply, release large amounts of cytokines – messengers that immune cells use to communicate – and trigger unwanted inflammation. The result is symptoms such as coughing and wheezing that make it feel like someone is squeezing your airways.

To assess the role iron plays in how ILC2s function in the lungs, we conducted a series of experiments with ILC2s in the lab. We then confirmed our findings in mice with allergic asthma and in patients with different severities of asthma.

First, we found that ILC2s use a protein called transferrin receptor 1, or TfR1, to take up iron. When we blocked this protein as the ILC2s were undergoing activation, the cells were unable to use iron and could no longer multiply and cause inflammation as well as they did before.

We then used a chemical called an iron chelator to prevent ILC2s from using any iron at all. Iron chelators are like superpowered magnets for iron and are used in medical treatments to manage conditions where there's too much iron in the body.

When we deprived ILC2s of iron with an iron chelator, the cells had to change their metabolism and switch to a different way of getting energy, like trading in a car for a bicycle. The cells weren't as effective at causing inflammation in the lungs anymore.

An asthma attack can feel like someone is squeezing your airways.

Mariia Siurtukova/Moment via Getty Images

Next, we limited cellular iron in mice with sensitive airways due to ILC2s. We did this in three different ways: by inhibiting TfR1, adding an iron chelator or inducing low overall iron levels using a synthetic protein called mini-hepcidin. Each of these methods helped reduce the mice's airway hyperreactivity – basically reducing the severity of their asthma symptoms.

Lastly, we looked at cells from patients with asthma. We noticed something interesting: the more TfR1 protein on their ILC2 cells, the worse their asthma symptoms. In other words, iron was playing a big role in how bad their asthma got. Blocking TfR1 and administering iron chelators both reduced ILC2 proliferation and cytokine production, suggesting that our findings in mice apply to human cells. This means we can move these findings from the lab to clinical trials as quickly as possible.

Iron therapy for asthma

Iron is like the conductor of an orchestra, instructing immune cells such as ILC2s how to behave during an asthma attack. Without enough iron, these cells can't cause as much trouble, which could mean fewer asthma symptoms.

Next, we're working on targeting a patient's immune cells during an asthma attack. If we can lower the amount of iron available to ILC2s without depleting overall iron levels in the body, this could mean a new therapy for asthma that tackles the root cause of the disease, not just the symptoms. Available treatments can control symptoms to keep patients alive, but they are not curing the disease. Iron-related therapies may offer a better solution for patients with asthma.

Our discovery applies to more than just asthma. It could be a game-changer for other diseases where ILC2s are involved, such as eczema and type 2 diabetes. Who knew iron could be such a big deal to your immune system?

Benjamin Hurrell, Assistant Professor of Research in Molecular Microbiology and Immunology, University of Southern California and Omid Akbari, Professor of Molecular Microbiology and Immunology, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.
