Enigmatic human fossil jawbone may be evidence of an early Homo sapiens presence in Europe – and adds mystery about who those humans were

Close examination of digital and 3D-printed models suggested the fossil needs to be reclassified.
Brian A. Keeling

Brian Anthony Keeling, Binghamton University, State University of New York and Rolf Quam, Binghamton University, State University of New York

Homo sapiens, our own species, evolved in Africa sometime between 300,000 and 200,000 years ago. Anthropologists are pretty confident in that estimate, based on fossil, genetic and archaeological evidence.

Then what happened? How modern humans spread throughout the rest of the world is one of the most active areas of research in human evolutionary studies.

The earliest fossil evidence of our species outside of Africa is found at a site called Misliya cave, in the Middle East, and dates to around 185,000 years ago. While additional H. sapiens fossils are found from around 120,000 years ago in this same region, it seems modern humans reached Europe much later.

Understanding when our species migrated out of Africa can reveal insights into present-day biological, behavioral and cultural diversity. While we Homo sapiens are the only humans alive today, our species coexisted with different human lineages in the past, including Neandertals and Denisovans. Scientists are interested in when and where H. sapiens encountered these other kinds of humans.

Our recent reanalysis of a fossil jawbone from a Spanish site called Banyoles is raising new questions about when our species may have migrated to Europe.

Homo sapiens fossils found in Europe

The first documented discoveries of human fossils were in Europe, just before Darwin’s 1859 publication of “On the Origin of Species.” Ideas of evolution were being actively debated within European universities and scientific societies.

Many of the earliest fossil findings were Neandertals, a species that evolved in Europe by 250,000 years ago and became extinct around 40,000 years ago. They are also our closest evolutionary relatives and, because of ancient interbreeding, the genomes of people today include Neandertal DNA. Because of their early historical presence, Neandertal fossils had a big influence on how early researchers thought about human evolution.

The first fossil evidence of Neandertals was found in 1856 during quarrying in the Neander Tal (Neander Valley) in Germany. Paleontologists took the hint and started to search for human fossils in other caves and exposed areas that preserved ancient sediments.

More than a decade later, in 1868, paleontologists uncovered H. sapiens fossils at the site of Cro-Magnon in southern France. For much of the 20th century, the 30,000-year-old Cro-Magnon fossils represented the earliest fossil evidence of our species in Europe.

More recently, evidence for an earlier H. sapiens presence in Europe has come from two sites in Eastern Europe: a partial skull from Zlatý kůň Cave in Czechia dating to 45,000 years ago and more fragmentary remains from Bacho Kiro Cave in Bulgaria dating to around 44,000 years ago. Ancient DNA analysis has confirmed that the fossils from these sites represent H. sapiens. Additional, potentially earlier, evidence is represented by a single tooth dating to 54,000 years ago from Grotte Mandrin in France.

This is where the human fossil from Banyoles comes into the story.

Video: A new look at an old fossil find potentially pushes back the date when Homo sapiens lived in Europe.

Reinvestigating a ‘Neandertal’ mandible

Over a century ago, in 1889, a fossil human lower jaw, or mandible, was found at a quarry near the town of Banyoles, in northeastern Spain. Pere Alsius, a prominent local pharmacist, first studied the mandible, and the fossil has been curated by his family ever since.

A number of anthropologists have studied the fossil over time, but it has not usually been included in discussions about H. sapiens in Europe. Most researchers instead argued it represented a Neandertal or showed Neandertal-like features, in part because the Banyoles fossil lacks a feature considered typical and diagnostic of our own species: a bony chin on the front of the mandible.

Researchers did not have a good idea of how old the Banyoles mandible was, with most believing it likely dated to the Middle Pleistocene (780,000-130,000 years ago). That age made it seem too old to represent H. sapiens. Thus, with the absence of a chin and the presumed early date, the designation as a Neandertal seemed to make sense.

Map of the Iberian Peninsula indicating where the Banyoles mandible (yellow star) was found, along with Late Pleistocene Neandertal (orange triangles) and H. sapiens (white squares) sites.
Brian A. Keeling

Based on recent uranium-series and electron spin resonance (ESR) dating, researchers now believe the Banyoles mandible is between 45,000 and 66,000 years old. This younger estimate overlaps with the early H. sapiens fossils from Eastern Europe.
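Electron spin resonance is a trapped-charge method: a buried tooth accumulates radiation damage at a roughly steady rate, so its age falls out of a simple ratio of accumulated dose to annual dose rate. The sketch below is a minimal illustration with hypothetical numbers, not the values measured at Banyoles:

```python
def esr_age_years(equivalent_dose_gray: float, dose_rate_gray_per_year: float) -> float:
    """Trapped-charge age = accumulated radiation dose / annual dose rate."""
    return equivalent_dose_gray / dose_rate_gray_per_year

# Example: a tooth that absorbed 110 grays of radiation in sediment
# delivering 0.002 grays per year dates to roughly 55,000 years.
print(f"{esr_age_years(110.0, 0.002):,.0f} years")
```

In practice, the dose rate must itself be estimated from the radioactivity of the fossil and its surroundings, which is a major source of the uncertainty reflected in the 45,000-66,000-year range.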

Working with Spanish paleoanthropologists and archaeologists, we took another look at what species the fossil might represent. We relied on a CT scan to virtually reconstruct damaged or missing portions of the mandible and generated a 3D model of the complete fossil. Then, we studied its overall shape and distinctive anatomical features, comparing it to H. sapiens, Neandertals and other earlier human species.

Virtual reconstruction of the 3D model of the Banyoles mandible. The piece highlighted in blue is a mirrored element that researchers used to fill in missing sections.
Brian A. Keeling

In contrast to earlier analyses, our results revealed that the Banyoles jawbone was most similar to H. sapiens fossils – not Neandertals.

When we examined the mandible’s bony features where muscle tendons and ligaments would have attached, it most closely resembled H. sapiens. We also found no unique bony features shared with the Neandertals. Additionally, when we used sophisticated 3D analysis techniques, we found that Banyoles’ overall shape was a better match with H. sapiens than with Neandertal individuals.
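Analyses like these typically come from geometric morphometrics: homologous landmark points are digitized on each specimen, the point sets are superimposed to remove differences in position, scale and rotation, and what remains is a pure measure of shape difference. The sketch below illustrates the idea with random stand-in coordinates and SciPy’s Procrustes routine; it is not the pipeline used in the study:

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(42)
n_landmarks = 20  # hypothetical number of homologous 3D points per mandible

# Stand-in landmark sets; a real study digitizes these on scans of fossils.
banyoles = rng.normal(size=(n_landmarks, 3))
sapiens_mean = banyoles + rng.normal(scale=0.05, size=(n_landmarks, 3))
neandertal_mean = banyoles + rng.normal(scale=0.20, size=(n_landmarks, 3))

# Procrustes superimposition strips position, scale and rotation, returning
# a 'disparity' that reflects pure shape difference.
_, _, d_sapiens = procrustes(sapiens_mean, banyoles)
_, _, d_neandertal = procrustes(neandertal_mean, banyoles)

print(f"shape distance to H. sapiens mean:  {d_sapiens:.4f}")
print(f"shape distance to Neandertal mean:  {d_neandertal:.4f}")
# The smaller disparity marks the closer overall shape match.
```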

While nearly all of our evidence suggests this prehistoric human was indeed a member of our species, the lack of a chin remains puzzling. This feature is present in all human populations today and should be present in Banyoles if it is a member of our species.

Figuring out the closest match

How do we reconcile our results showing that Banyoles is a modern human with the fact that it lacks one of the most distinctive modern human features? We considered several possible scenarios.

When the mandible was discovered, it was still encased in a hard travertine block and only partially exposed. During initial cleaning and preparation of the specimen, it was accidentally dropped and the chin region was damaged. The fossil was subsequently reconstructed, with the damaged fragments aligned in their correct anatomical position, and the current state of the fossil does seem to accurately reflect an original chinless shape. Thus, the lack of a chin in Banyoles cannot be attributed to this initial incident.

Could the lack of a chin in the Banyoles fossil be a result of interbreeding with Neandertals, who also lacked a chin? Genetic evidence suggests that H. sapiens most likely interbred with Neandertals between 45,000 and 65,000 years ago, making this a possibility.

To assess this hypothesis, we compared Banyoles with an early H. sapiens mandible dating to about 42,000 years ago from a Romanian site called Peştera cu Oase. Ancient DNA analysis has revealed that the Oase individual had a Neandertal ancestor between four and six generations back, making it close to a hybrid individual. However, unlike Banyoles, this mandible shows a full chin along with some other Neandertal features. Since Banyoles shared no distinctive features with Neandertals, we ruled out the possibility of this individual representing interbreeding between Neandertals and H. sapiens.

Comparison of mandibles between H. sapiens, at left; Banyoles, center; and a Neandertal, at right.
Brian A. Keeling

We’re left with two possibilities. Banyoles may represent a hybrid individual between H. sapiens and a non-Neandertal archaic human lineage. This scenario might account for the absence of the chin as well as the lack of any other Neandertal features in Banyoles. However, scientists haven’t identified any such non-Neandertal archaic group in the fossil record of the European Late Pleistocene (129,000-11,700 years ago), making this hypothesis less likely.

Alternatively, Banyoles may document a previously unknown lineage of largely chinless H. sapiens in Europe. Possible support for this hypothesis comes from the fact that early H. sapiens fossils from Africa and the Middle East show a less prominent chin than do living humans.

Additionally, ancient DNA research has shown that H. sapiens populations in Europe before 35,000 years ago did not contribute to the modern European gene pool. Thus, we believe the least unlikely hypothesis is that Banyoles represents an individual from one of these early H. sapiens populations.

Our study of Banyoles demonstrates how new discoveries about our evolutionary past do not solely rely on new fossil discoveries, but can also come about through applying new methodologies to previously discovered fossils. If Banyoles is really a member of our species, it would potentially represent the earliest H. sapiens lineage documented to date in Europe. Future ancient DNA analysis could confirm or refute this surprising result. In the meantime, the 3D model of Banyoles is available for other researchers to study and form their own conclusions.

Brian Anthony Keeling, Doctoral Candidate in Anthropology, Binghamton University, State University of New York and Rolf Quam, Associate Professor of Anthropology, Binghamton University, State University of New York

This article is republished from The Conversation under a Creative Commons license. Read the original article.

AI is giving a boost to efforts to monitor health via radar

AI-powered radar could enable contactless health monitoring in the home.
Chandler Bauder

Chandler Bauder, U.S. Naval Research Laboratory and Aly Fathy, University of Tennessee

If you wanted to check someone’s pulse from across the room, for example to remotely monitor an elderly relative, how could you do it? You might think it’s impossible, because common health-monitoring devices such as fingertip pulse oximeters and smartwatches have to be in contact with the body.

However, researchers are developing technologies that can monitor a person’s vital signs at a distance. One of those technologies is radar.

We are electrical engineers who study radar systems. We have combined advances in radar technology and artificial intelligence to reliably monitor breathing and heart rate without contacting the body.

Noncontact health monitoring has the potential to be more comfortable and easier to use than traditional methods, particularly for people looking to monitor their vital signs at home.

How radar works

Radar is commonly known for measuring the speed of cars, making weather forecasts and detecting obstacles at sea and in the air. It works by sending out electromagnetic waves that travel at the speed of light, waiting for them to bounce off objects in their path, and sensing them when they return to the device.

Radar can tell how far away things are, how fast they’re moving, and even their shape by analyzing the properties of the reflected waves.

Radar can also be used to monitor vital signs such as breathing and heart rate. Each breath or heartbeat causes your chest to move ever so slightly – movement that’s hard for people to see or feel. However, today’s radars are sensitive enough to detect these tiny movements, even from across a room.
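Two numbers make this concrete: range follows from the echo’s round-trip time (distance = c × t / 2), and at millimeter-wave frequencies even a sub-millimeter chest movement shifts the phase of the reflected wave by an easily measured amount. The sketch below assumes a 60 GHz carrier purely for illustration; it is not a claim about any particular device:

```python
import numpy as np

c = 3e8  # speed of light in m/s

# 1) Ranging: distance from round-trip time of flight.
t_round_trip = 20e-9             # a 20-nanosecond echo delay (hypothetical)
distance = c * t_round_trip / 2  # divide by 2 for the round trip -> 3.0 m
print(f"target range: {distance:.1f} m")

# 2) Vital signs: chest displacement appears as a phase shift of the echo.
f_carrier = 60e9                 # assumed 60 GHz millimeter-wave carrier
wavelength = c / f_carrier       # ~5 mm
chest_motion = 0.3e-3            # 0.3 mm of heartbeat motion (hypothetical)
phase_shift = 4 * np.pi * chest_motion / wavelength
print(f"echo phase shift: {np.degrees(phase_shift):.0f} degrees")
```

On these assumed numbers, a 0.3 mm heartbeat produces a phase swing of roughly 43 degrees, which is why motion far too small to see is well within a radar’s reach.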

Advantages of radar

There are other technologies that can be used to measure health remotely. Camera-based techniques can use infrared light to monitor changes in the surface of the skin in the same manner as pulse oximeters, revealing information about your heart’s activity. Computer vision systems can also monitor breathing and other activities, such as sleep, and they can detect when someone falls.

However, cameras often fail in cases where the body is obstructed by blankets or clothes, or when lighting is inadequate. There are also concerns that different skin tones reflect infrared light differently, causing inaccurate readings for people with darker skin. Additionally, depending on high-resolution cameras for long-term health monitoring brings up serious concerns about patient privacy.

Radar sees the world in terms of how strongly objects in its view reflect the transmitted signals. The resolution of the images it can generate is much lower than that of images cameras produce.
Chandler Bauder

Radar, on the other hand, solves many of these problems. The wavelengths of the transmitted waves are much longer than those of visible or infrared light, allowing the waves to pass through blankets, clothing and even walls. The measurements aren’t affected by lighting or skin tone, making them more reliable in different conditions.

Radar imagery is also extremely low resolution – think old Game Boy graphics versus a modern 4K TV – so it doesn’t capture enough detail to be used to identify someone, but it can still monitor important activities. While radar does emit energy, the amount does not pose a health hazard: health-monitoring radars operate at frequencies and power levels similar to those of the phone in your pocket.

Radar + AI

Radar is powerful, but it has a big challenge: It picks up everything that moves. Since it can detect tiny chest movements from the heart beating, it also picks up larger movements from the head, limbs or other people nearby. This makes it difficult for traditional processing techniques to extract vital signs clearly.

To address this problem, we created a kind of “brain” to make the radar smarter. This brain, which we named mm-MuRe, is a neural network – a type of artificial intelligence – that learns directly from raw radar signals and estimates chest movements. This approach is called end-to-end learning. It means that, unlike other radar-plus-AI techniques, the network figures out on its own how to ignore the noise and focus only on the important signals.
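To make “end-to-end” concrete, the toy network below maps raw multi-channel radar samples directly to an output waveform, with no hand-designed filtering in between. It is a minimal stand-in written in PyTorch, not the actual mm-MuRe architecture:

```python
import torch
import torch.nn as nn

class ToyVitalsNet(nn.Module):
    """Raw radar samples in, estimated chest-motion waveform out."""

    def __init__(self, radar_channels: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(radar_channels, 32, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=1),  # collapse to one output waveform
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, radar_channels, time) raw samples
        return self.net(x)

model = ToyVitalsNet()
raw = torch.randn(1, 4, 1024)  # hypothetical snippet of raw radar data
waveform = model(raw)          # shape (1, 1, 1024): estimated chest motion
print(waveform.shape)
```

During training, such a network’s output is compared against a reference waveform from a contact sensor, so the mapping from noisy signals to vital signs is learned entirely from data.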

In our study, we used AI to transform raw, unprocessed radar signals into vital signs waveforms of one or two people.
Chandler Bauder

We found that this AI enhancement not only gives more accurate results, it also works faster than traditional methods. It handles multiple people at once, for example an elderly couple, and adapts to new situations, even those it didn’t see during training – such as when people are sitting at different heights, riding in a car or standing close together.

Implications for health care

Reliable remote health monitoring using radar and AI could be a major boon for health care. With no need to touch the patient’s skin, risks of rashes, contamination and discomfort could be greatly reduced. It’s especially helpful in long-term care, where reducing wires and devices can make life significantly easier for patients and caregivers.

Imagine a nursing home where radar quietly watches over residents, alerting caregivers immediately if someone has breathing trouble, falls or needs help. It can be implemented as a home system that checks your breathing while you sleep – no wearables required. Doctors could even use radar to remotely monitor patients recovering from surgery or illness.

This technology is moving quickly toward real-world use. In the future, checking your health could be as simple as walking into a room, with invisible waves and smart AI working silently to take your vital signs.

Chandler Bauder, Electronics Engineer, U.S. Naval Research Laboratory and Aly Fathy, Professor of Electrical Engineering, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.






Forensics tool ‘reanimates’ the ‘brains’ of AIs that fail in order to understand what went wrong


Tesla crashes are only the most glaring of AI failures.
South Jordan Police Department via AP

David Oygenblik, Georgia Institute of Technology and Brendan Saltaformaggio, Georgia Institute of Technology

From drones delivering medical supplies to digital assistants performing everyday tasks, AI-powered systems are becoming increasingly embedded in everyday life. The creators of these innovations promise transformative benefits. For some people, mainstream applications such as ChatGPT and Claude can seem like magic. But these systems are not magical, nor are they foolproof – they can and do regularly fail to work as intended.

AI systems can malfunction due to technical design flaws or biased training data. They can also suffer from vulnerabilities in their code, which can be exploited by malicious hackers. Isolating the cause of an AI failure is imperative for fixing the system.

But AI systems are typically opaque, even to their creators. The challenge is how to investigate AI systems after they fail or fall victim to attack. There are techniques for inspecting AI systems, but they require access to the AI system’s internal data. This access is not guaranteed, especially to forensic investigators called in to determine the cause of a proprietary AI system failure, making investigation impossible.

We are computer scientists who study digital forensics. Our team at the Georgia Institute of Technology has built a system, AI Psychiatry, or AIP, that can recreate the scenario in which an AI failed in order to determine what went wrong. The system addresses the challenges of AI forensics by recovering and “reanimating” a suspect AI model so it can be systematically tested.

Uncertainty of AI

Imagine a self-driving car veers off the road for no easily discernible reason and then crashes. Logs and sensor data might suggest that a faulty camera caused the AI to misinterpret a road sign as a command to swerve. After a mission-critical failure such as an autonomous vehicle crash, investigators need to determine exactly what caused the error.

Was the crash triggered by a malicious attack on the AI? In this hypothetical case, the camera’s faultiness could be the result of a security vulnerability or bug in its software that was exploited by a hacker. If investigators find such a vulnerability, they have to determine whether that caused the crash. But making that determination is no small feat.

Although there are forensic methods for recovering some evidence from failures of drones, autonomous vehicles and other so-called cyber-physical systems, none can capture the clues required to fully investigate the AI in that system. Advanced AIs can even update their decision-making – and consequently the clues – continuously, making it impossible to investigate the most up-to-date models with existing methods.

Video: Researchers are working on making AI systems more transparent, but unless and until those efforts transform the field, there will be a need for forensics tools to at least understand AI failures.

Pathology for AI

AI Psychiatry applies a series of forensic algorithms to isolate the data behind the AI system’s decision-making. These pieces are then reassembled into a functional model that performs identically to the original model. Investigators can “reanimate” the AI in a controlled environment and test it with malicious inputs to see whether it exhibits harmful or hidden behaviors.

AI Psychiatry takes in as input a memory image, a snapshot of the bits and bytes loaded when the AI was operational. The memory image at the time of the crash in the autonomous vehicle scenario holds crucial clues about the internal state and decision-making processes of the AI controlling the vehicle. With AI Psychiatry, investigators can now lift the exact AI model from memory, dissect its bits and bytes, and load the model into a secure environment for testing.

Our team tested AI Psychiatry on 30 AI models, 24 of which were intentionally “backdoored” to produce incorrect outcomes under specific triggers. The system was successfully able to recover, rehost and test every model, including models commonly used in real-world scenarios such as street sign recognition in autonomous vehicles.
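AI Psychiatry’s own interfaces aren’t reproduced here, but the last step of such a test is simple to illustrate in outline: once a suspect model has been rehosted in a sandbox, compare its predictions on clean inputs against the same inputs with a candidate trigger stamped on them. In this generic, hypothetical sketch, `model` stands for any rehosted image classifier that exposes a `predict` method:

```python
import numpy as np

def apply_trigger(images: np.ndarray) -> np.ndarray:
    """Stamp a hypothetical trigger: a small white square in one corner.

    Assumes grayscale images shaped (n, height, width) with values in [0, 1].
    """
    patched = images.copy()
    patched[:, -4:, -4:] = 1.0
    return patched

def backdoor_flip_rate(model, images: np.ndarray) -> float:
    """Fraction of predictions that change when the trigger is added."""
    clean_labels = model.predict(images)
    triggered_labels = model.predict(apply_trigger(images))
    return float(np.mean(clean_labels != triggered_labels))

# A high flip rate, concentrated on a single target class, is the classic
# signature of a backdoored model.
```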

Thus far, our tests suggest that AI Psychiatry can effectively solve the digital mystery behind a failure such as an autonomous car crash that previously would have left more questions than answers. And if it does not find a vulnerability in the car’s AI system, AI Psychiatry allows investigators to rule out the AI and look for other causes such as a faulty camera.

Not just for autonomous vehicles

AI Psychiatry’s main algorithm is generic: It focuses on the universal components that all AI models must have to make decisions. This makes our approach readily extendable to any AI models that use popular AI development frameworks. Anyone working to investigate a possible AI failure can use our system to assess a model without prior knowledge of its exact architecture.

Whether the AI is a bot that makes product recommendations or a system that guides autonomous drone fleets, AI Psychiatry can recover and rehost the AI for analysis. AI Psychiatry is entirely open source for any investigator to use.

AI Psychiatry can also serve as a valuable tool for conducting audits on AI systems before problems arise. With government agencies from law enforcement to child protective services integrating AI systems into their workflows, AI audits are becoming an increasingly common oversight requirement at the state level. With a tool like AI Psychiatry in hand, auditors can apply a consistent forensic methodology across diverse AI platforms and deployments.

In the long run, this will pay meaningful dividends both for the creators of AI systems and everyone affected by the tasks they perform.

David Oygenblik, Ph.D. Student in Electrical and Computer Engineering, Georgia Institute of Technology and Brendan Saltaformaggio, Associate Professor of Cybersecurity and Privacy, and Electrical and Computer Engineering, Georgia Institute of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.






Young bats learn to be discriminating when listening for their next meal


A frog-eating bat approaches a túngara frog, one of its preferred foods.
Grant Maslowski

Logan S. James, The University of Texas at Austin; Rachel Page, Smithsonian Institution, and Ximena Bernal, Purdue University

It is late at night, and we are silently watching a bat in a roost through a night-vision camera. From a nearby speaker comes a long, rattling trill.

Audio: Cane toad’s rattling trill call.

The bat briefly perks up and wiggles its ears as it listens to the sound before dropping its head back down, uninterested.

Next from the speaker comes a higher-pitched “whine” followed by a “chuck.”

Audio: Túngara frog’s ‘whine chuck’ call.

The bat vigorously shakes its ears and then spreads its wings as it launches from the roost and dives down to attack the speaker.

Bats show tremendous variation in the foods they eat to survive. Some species specialize on fruits, others on insects, others on flower nectar. There are even species that catch fish with their feet.

The calls male frogs use to attract mates also attract eavesdropping predators. Here, a frog-eating bat consumes an unlucky male túngara frog.
Marcos Guerra, Smithsonian Tropical Research Institute

At the Smithsonian Tropical Research Institute in Panama, we’ve been studying one species, the fringe-lipped bat (Trachops cirrhosus), for decades. This bat is a carnivore that specializes in feeding on frogs.

Male frogs from many species call to attract female frogs. Frog-eating bats eavesdrop on those calls to find their next meal. But how do the bats come to associate sounds and prey?

We were interested in understanding how predators that eavesdrop on their prey acquire the ability to discriminate between tasty and dangerous meals. We combined our expertise on animal behavior, bat cognition and frog communication to investigate.

How do bats know the sound of a tasty meal?

There are nearly 8,000 frog and toad species in the world, and each one has a unique call. For instance, the first rattling call that we played from our speaker came from a large and toxic cane toad. The second “whine chuck” came from the túngara frog, a preferred prey species for these bats. Just as herpetologists can tell a frog species by its call, frog-eating bats can use these calls to identify the best meal.

Over the years, our research team has learned a great deal from frog-eating bats about how sound and echolocation are used to find prey, as well as the role of learning and memory in foraging success. In our newly published study, we focused on how associations between the sounds a bat hears and the prey quality it expects arise within the lifespan of an individual bat.

Adult bats like the one pictured have extensive acoustic repertoires and remember specific frog calls year after year. Young bats must learn which calls to respond to – and, critically, which to ignore – over time through experience.
Grant Maslowski

We considered whether the associations between sound and a delicious meal are an evolved specialty that bats are born with. But this possibility seemed unlikely because the bat species we study has a large geographic distribution across Central and South America, and the species of frogs found across this range vary tremendously.

Instead, we hypothesized that bats learn to associate different sounds with food as they grow up. But we had to test this idea.

First, we and our collaborators spent time in the forest and at ponds to record the mating calls from 15 of the most common frog and toad species in our study area in Panama.

Rachel Page, one of the lead authors on the study, takes a bat out of a mist net in Panama.
Jorge Alemán, Smithsonian Tropical Research Institute

Then, we set up mist nets along streams in Soberanía National Park to capture wild bats for the study.

Frog call, bat response

For the testing, each bat was housed individually in a large, outdoor flight chamber. From a speaker on the ground in the center, we played calls from one frog species on loop for 30 seconds and measured the behavior of the bat, which was hanging from a cloth roost. As we expected, adult bats were generally uninterested in the sounds of species that were unpalatable, such as those with toxins or those that are too large for the bat to carry.

But it was a different story for young bats. Juveniles showed significantly more predatory behavior in response to the calls of toxic toads than adults did. They also responded more weakly than adults to the sounds of túngara frogs, a palatable, abundant prey that adult bats prefer.

Thus it seems that juvenile bats must learn the associations between sounds and food over the course of their lives. As they grow up, we believe they learn to ignore the calls of frogs that aren’t worth the trouble and zero in on the calls of frogs that will be a good meal.

To better understand how sounds drive prey associations, we measured the acoustic properties of the different calls. We found that some of the most noticeable features of the calls correlated with body size: Larger frogs produce lower-frequency calls – that is, their voices are deeper. Both the adult and juvenile bats responded more strongly to larger species, which would provide larger meals.
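Measuring an acoustic property such as a call’s dominant frequency is straightforward in principle: transform the recording into a spectrum and find the peak. The sketch below uses a synthetic tone as a stand-in for a real frog recording:

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate: int) -> float:
    """Return the frequency (Hz) with the most energy in the recording."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

sample_rate = 44_100
t = np.arange(sample_rate) / sample_rate  # one second of audio
call = np.sin(2 * np.pi * 400 * t)        # a deep 400 Hz 'large frog' tone
print(f"dominant frequency: {dominant_frequency(call, sample_rate):.0f} Hz")
```

Repeating such a measurement across species’ calls is what lets call frequency be compared against body size.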

However, there was a clear exception in the responses of adults, where the toxic toads and very large frogs elicited much weaker responses than expected for their body size. This finding led us to hypothesize that bats have early biases to pay attention to sounds associated with larger body size. Then they must learn through experience that meal quality is not only about size. Some large meals are toxic or impossible to carry, making them unpalatable.

Video: Once the researchers have studied each frog-eating bat for a few days, they safely release it where it was originally captured. Footage courtesy of Léna de Framond-Bénard and Eric de Framond-Bénard, compiled by Caroline Rogan.

After the bats spent a few days with us, we released each one back at its original site of capture. The bats departed, taking with them a small RFID tag, just like the ones pet owners use to identify their dogs and cats, in case we meet again as part of a future study.

As the bats go on with their lives in the wild, we continue our quest to deepen our understanding of the subtleties of information discrimination. How do individuals weed through information overload to make choices that make sense and benefit them? That’s the same challenge we all face each day.

Logan S. James, Research Associate in Animal Behavior, The University of Texas at Austin; Rachel Page, Staff Scientist, Smithsonian Tropical Research Institute, Smithsonian Institution, and Ximena Bernal, Professor of Biological Sciences, Purdue University

This article is republished from The Conversation under a Creative Commons license. Read the original article.




