The Conversation

Human brains and fruit fly brains are built similarly – visualizing how helps researchers better understand how both work



theconversation.com – Kristin Scaplen, Assistant Professor of Neuroscience, Bryant University – 2024-04-15 07:28:06
Stepping through the brain reveals essential information about its structure and function.
Scaplen et al. 2021/eLife, CC BY

Kristin Scaplen, Bryant University

The human brain contains approximately 86 billion neurons. On average, each of these cells makes thousands of different connections to facilitate communication across the brain. Neural communication is thought to underlie all brain functions – from experiencing and interpreting the world around you to remembering those experiences and controlling how your body responds.

But in this vast network of neural communication, precisely who is talking to whom, and what is the consequence of those individual conversations?

Understanding the details surrounding neural communication and how it's shaped by experience is one of the many focuses of neuroscience. However, this is complicated by the sheer number of microscopic connections there are to study in the human brain, many of which are often in flux, and by the fact that available tools lack adequate resolution to capture them.


As a consequence, many scientists like me have turned to simpler organisms, such as the fruit fly.

Figure of 15 microscopy images of a fruit fly brain, labeled blue, magenta, green.
This figure shows connections between different mushroom body neurons.
Scaplen et al. 2021/eLife, CC BY

Fruit flies, though pesky in the kitchen, are invaluable in the laboratory. Their brains are built in remarkably similar ways to those of humans. Importantly, scientists have developed tools that make fly brains significantly easier to study with a resolution that hasn't been achieved in other organisms.

My colleague Gilad Barnea, a neuroscientist at Brown University, and his team spent over 20 years developing a tool to visualize all of the microscopic connections between neurons within the brain.

Neurons communicate with each other by releasing molecules called neurotransmitters, which bind to receptor proteins on the surface of neighboring cells. Barnea's tool, trans-Tango, translates the activation of specific receptor proteins into gene expression that ultimately allows for visualization.

My team and I used trans-Tango to visualize all the neural connections of a learning and memory center, called the mushroom body, in the fruit fly brain.

GIF of black square that gradually reveals flickering green and red swatches surrounded then swallowed by dark blue in a roughly oblong shape
The animation starts close to the face of the fly and moves back, using genetics to express different proteins within neurons to visualize them. Green indicates the neuron of interest, red indicates the neuron it talks to and blue indicates all other brain cells.
Kristin Scaplen, CC BY-SA

Here, a cluster of approximately four neurons, labeled green, receives messages from the mushroom body, which is the L-shaped structure labeled blue in the center of the fly brain. You can step through the brain and see all the other neurons they likely communicate with, labeled red. The cell bodies of the neurons reside on the edges of the brain, and the locations where they receive messages from the mushroom body appear as green tangles invading a small oval compartment. The places where these weblike green extensions mingle with red are thought to be where the neurons communicate their processed message to other downstream neurons.

Stepping further into the brain, you can see the downstream neurons navigating to a single layer of a fan-shaped structure within the brain. This fan-shaped body is thought to modulate many functions, including arousal, memory storage, locomotion and transforming sensory experiences into actions.

Not only did our images reveal previously unknown connections across the brain, but they also provide an opportunity to explore the consequences of those individual neural conversations. Fly brain connections were remarkably consistent but also varied slightly from one fly to another. These slight variations in connectivity are likely influenced by the fly's individual experiences, just as they are in people.

The beauty of trans-Tango lies in its flexibility. In addition to visualizing connections, scientists can use genes to manipulate neural activity and better understand how neural communication affects behavior. Because fly brains are similarly built to those of humans, researchers can use them to study how brain connections function and how they might be disrupted in disease. Ultimately, this will improve our understanding of our own brains.

Kristin Scaplen, Assistant Professor of Neuroscience, Bryant University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Animals self-medicate with plants − behavior people have observed and emulated for millennia



theconversation.com – Adrienne Mayor, Research Scholar, Classics and History and Philosophy of Science, Stanford University – 2024-05-24 07:29:22

A goat with an arrow wound nibbles the medicinal herb dittany.

O. Dapper, CC BY

Adrienne Mayor, Stanford University

When a wild orangutan in Sumatra recently suffered a facial wound, apparently after fighting with another male, he did something that caught the attention of the scientists observing him.


The animal chewed the leaves of a liana vine – a plant not normally eaten by apes. Over several days, the orangutan carefully applied the juice to its wound, then covered it with a paste of chewed-up liana. The wound healed with only a faint scar. The tropical plant he selected has antibacterial and antioxidant properties and is known to alleviate pain, fever, bleeding and inflammation.

The striking story was picked up by media worldwide. In interviews and in their research paper, the scientists stated that this is “the first systematically documented case of active wound treatment by a wild animal” with a biologically active plant. The discovery, they added, offers “new insights into the origins of human wound care.”

left: four leaves next to a ruler. right: an orangutan in a treetop

Fibraurea tinctoria leaves and the orangutan chomping on some of the leaves.

Laumer et al, Sci Rep 14, 8932 (2024), CC BY

To me, the behavior of the orangutan sounded familiar. As a historian of ancient science who investigates what Greeks and Romans knew about plants and animals, I was reminded of similar cases reported by Aristotle, Pliny the Elder, Aelian and other naturalists from antiquity. A remarkable body of accounts from ancient to medieval times describes self-medication by many different animals. The animals used plants to treat illness, repel parasites, neutralize poisons and heal wounds.


The term zoopharmacognosy – “animal medicine knowledge” – was invented in 1987. But as the Roman natural historian Pliny pointed out 2,000 years ago, many animals have made medical discoveries useful for humans. Indeed, a large number of medicinal plants used in modern medicine were first discovered by Indigenous peoples and past cultures who observed animals employing plants and emulated them.

What you can learn by watching animals

Some of the earliest written examples of animal self-medication appear in Aristotle's “History of Animals” from the fourth century BCE, such as the well-known habit of dogs to eat grass when ill, probably for purging and deworming.

Aristotle also noted that after hibernation, bears seek wild garlic as their first food. It is rich in vitamin C, iron and magnesium, healthful nutrients after a long winter's nap. The Latin name reflects this folk belief: Allium ursinum translates to “bear lily,” and the common name in many other languages refers to bears.

medieval image of a stag wounded by a hunter's arrow, while a doe is also wounded, but eats the herb dittany, causing the arrow to come out

As a hunter lands several arrows in his quarry, a wounded doe nibbles some growing dittany.

British Library, Harley MS 4751 (Harley Bestiary), folio 14v, CC BY


Pliny explained how the use of dittany, also known as wild oregano, to treat arrow wounds arose from watching wounded stags grazing on the herb. Aristotle and Dioscorides credited wild goats with the discovery. Vergil, Cicero, Plutarch, Solinus, Celsus and Galen claimed that dittany has the ability to expel an arrowhead and close the wound. Among dittany's many known phytochemical properties are antiseptic, anti-inflammatory and coagulating effects.

According to Pliny, deer also knew an antidote for toxic plants: wild artichokes. The leaves relieve nausea and stomach cramps and protect the liver. To cure themselves of spider bites, Pliny wrote, deer ate crabs washed up on the beach, and sick goats did the same. Notably, crab shells contain chitosan, which boosts the immune system.

When elephants accidentally swallowed chameleons hidden on green foliage, they ate olive leaves, a natural antibiotic to combat salmonella harbored by lizards. Pliny said ravens eat chameleons, but then ingest bay leaves to counter the lizards' toxicity. Antibacterial bay leaves relieve diarrhea and gastrointestinal distress. Pliny noted that blackbirds, partridges, jays and pigeons also eat bay leaves for digestive problems.

17th century etching of a weasel and a basilisk in conflict

A weasel wears a belt of rue as it attacks a basilisk in an illustration from a 1600s bestiary.

Wenceslaus Hollar/Wikimedia Commons, CC BY


Weasels were said to roll in the evergreen plant rue to counter wounds and snakebites. Fresh rue is toxic. Its medical value is unclear, but the dried plant is included in many traditional folk medicines. Swallows collect another toxic plant, celandine, to make a poultice for their chicks' eyes. Snakes emerging from hibernation rub their eyes on fennel. Fennel bulbs contain compounds that promote tissue repair and immunity.

According to the naturalist Aelian, who lived in the third century CE, the Egyptians traced much of their medical knowledge to the wisdom of animals. Aelian described elephants treating spear wounds with olive flowers and oil. He also mentioned storks, partridges and turtledoves crushing oregano leaves and applying the paste to wounds.

The study of animals' remedies continued in the Middle Ages. An example from the 12th-century English compendium of animal lore, the Aberdeen Bestiary, tells of bears coating sores with mullein. Folk medicine prescribes this flowering plant to soothe pain and heal burns and wounds, thanks to its anti-inflammatory chemicals.

Ibn al-Durayhim's 14th-century manuscript “The Usefulness of Animals” reported that swallows healed nestlings' eyes with turmeric, another anti-inflammatory. He also noted that wild goats chew and apply sphagnum moss to wounds, just as the Sumatran orangutan did with liana. Sphagnum moss dressings neutralize bacteria and combat infection.


Nature's pharmacopoeia

Of course, these premodern observations were folk knowledge, not formal science. But the stories reveal long-term observation and imitation of diverse animal species self-doctoring with bioactive plants. Just as traditional Indigenous ethnobotany is leading to lifesaving drugs today, scientific testing of the ancient and medieval claims could lead to discoveries of new therapeutic plants.

Animal self-medication has become a rapidly growing scientific discipline. Researchers have documented animals, from birds and rats to porcupines and chimpanzees, deliberately employing an impressive repertoire of medicinal substances. One surprising observation is that finches and sparrows collect cigarette butts: the nicotine kills mites in bird nests. Some veterinarians even allow ailing dogs, horses and other domestic animals to choose their own prescriptions by sniffing various botanical compounds.

Mysteries remain. No one knows how animals sense which plants cure sickness, heal wounds, repel parasites or otherwise promote health. Are they intentionally responding to particular health crises? And how is their knowledge transmitted? What we do know is that we humans have been learning healing secrets by watching animals self-medicate for millennia.

Adrienne Mayor, Research Scholar, Classics and History and Philosophy of Science, Stanford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

What Philadelphians need to know about the city’s 7,000-camera surveillance system



theconversation.com – Albert Fox Cahn, Practitioner-in-Residence, Information Law Institute, New York University – 2024-05-24 07:28:38

Surveillance cameras are getting cheaper, more powerful and more ubiquitous.

Denniro/iStock via Getty Images Plus

Albert Fox Cahn, New York University

The Philadelphia Inquirer recently investigated Philadelphia's use of what it described as a “little-scrutinized, 7,000-camera system that is exposing residents across the city to heightened surveillance with few rules or safeguards against abuse.” The article detailed how Philadelphia narcotics cops not only allegedly failed to disclose their use of video surveillance in arrest reports or to prosecutors, but also that the video footage at times proved officers were lying when they testified.


The Conversation U.S. talked to Albert Fox Cahn, founder and executive director of the nonprofit Surveillance Technology Oversight Project and a practitioner-in-residence at NYU School of Law, about what these new video networks can do and the privacy and other issues they raise.

What can these cameras do?

The closed-circuit television, or CCTV, cameras most Americans pass each day may look interchangeable, but a lot has changed behind the lens in recent years. As video surveillance cameras have become cheaper and more ubiquitous, they have also grown more powerful – featuring increasingly high-definition images and the ability to pan, tilt and zoom. But the most significant change to cameras like those used in Philadelphia is the networks that police departments set up to aggregate these countless images of city residents' lives.

A variety of AI tools can also harvest this data in new ways that some may find alarming.

Automated license plate reader software can both track drivers across the city in real time and create a long-term log of their cars' movements. Want to know where a driver is now or was parked two years ago? Just check the database.
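The "just check the database" step really is that simple once plate reads are logged. Here is a minimal sketch in Python using an entirely hypothetical table layout – the `plate_reads` schema, plate numbers and camera IDs below are illustrative, not drawn from any real ALPR product:

```python
import sqlite3

# Hypothetical schema for an automated license plate reader (ALPR) log.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE plate_reads (
        plate     TEXT,
        camera_id TEXT,
        seen_at   TEXT  -- ISO 8601 timestamp
    )
""")
conn.executemany(
    "INSERT INTO plate_reads VALUES (?, ?, ?)",
    [
        ("ABC1234", "cam_017", "2022-05-01T09:15:00"),
        ("ABC1234", "cam_042", "2024-05-20T18:02:00"),
        ("XYZ9876", "cam_017", "2024-05-20T18:05:00"),
    ],
)

# "Where was this driver parked two years ago?" becomes a one-line query:
# every sighting of one plate, in chronological order.
rows = conn.execute(
    "SELECT camera_id, seen_at FROM plate_reads "
    "WHERE plate = ? ORDER BY seen_at",
    ("ABC1234",),
).fetchall()
print(rows)
```

The privacy concern is visible in the code itself: the query requires no warrant check, only an index on the `plate` column.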


And pedestrians are no less prone to surveillance. Facial recognition software can scan images to automatically identify individuals and track them across the city.

How widespread is this technology?

According to the Inquirer's investigation, Philadelphia's camera network grew at an astounding pace. In the past decade, the city has gone from 216 cameras to a network of more than 7,000 cameras operated by police and transportation agencies.

But those are just the cameras that city officials directly control and can access in real time.

In addition, police routinely turn to the images captured by private surveillance cameras. This includes everything from multimillion-dollar, internet-enabled camera systems at large stores, offices and universities to the individual cameras that homeowners or small-business owners screw into their door frames or exteriors. The public simply has no idea how many of these private cameras are in operation or how often their data is requested.


How is this different from traditional police video surveillance?

Traditional cameras offered a narrow, grainy perspective on a single fixed place. These systems not only collected much less data than contemporary cameras, but they also retained far less.

A single CCTV camera at a bank might help police identify a suspect in a robbery, but it poses no privacy threat beyond that. It is confined to a small space where privacy concerns are minimal and security concerns are high. But mass camera deployments create a fundamentally different model, collecting far more information on all of us and creating far greater potential for misuse.

Police have attempted these techniques for decades, but the technology simply wasn't up to the task. When the City of London Police deployed its so-called “ring of steel” security system in the 1990s, fewer than two dozen cameras tried to track the cars entering a tiny portion of the British capital, surveilling roughly a square kilometer of the city's financial core. Officers manually jotted down vehicle plate numbers and surveilled drivers' profile photos.

The labor-intensive exercise was impossible to scale.


To deploy such a system across an entire city would likely have taken every police officer in the city and then some. Through automation, technology enables this mass surveillance by reducing the marginal cost of tracking, allowing police to expand monitoring far more broadly than would have been financially or pragmatically possible before.

People walk past a police van in street underneath elevated train

Security cameras hang from the elevated train tracks at Kensington and Allegheny avenues in North Philadelphia.

Spencer Platt/Getty Images

What privacy concerns does it raise?

A single camera can capture our image; a citywide camera system can reconstruct our lives. Networked camera systems like those in Philadelphia, when combined with smartphones and other internet-enabled devices, allow officers to reconstruct an individual's movements for days or weeks at a time, all without any court oversight.

While it would take a warrant to install a GPS tracker on a resident's car, police can recreate GPS-like location tracking without a warrant, all thanks to mass camera systems. And facial recognition in municipal cameras threatens the First Amendment, which protects freedom of speech, religion and peaceful assembly. The police are armed with a way to track nearly every person at a political protest, clinic or house of worship. Such surveillance melts away the anonymity that is indispensable to an open society.


Are there other risks or unintended consequences?

I believe giving thousands of city employees the keys to a small surveillance state is a recipe for disaster.

The Philadelphia Inquirer found that the city has policies that forbid zooming in on residents for amusement, spying on someone by zooming in through their window, or blatant racial profiling. But what it didn't find was evidence that these safeguards were being enforced.

When thousands of employees can spy on their neighbors, romantic partners and business rivals on a whim, it raises the question: Who watches the watchers?

At least for now, the grim answer appears to be no one.

Albert Fox Cahn, Practitioner-in-Residence, Information Law Institute, New York University


This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Phone cameras can take in more light than the human eye − that’s why low-light events like the northern lights often look better through your phone camera



theconversation.com – Douglas Goodwin, Visiting Assistant Professor in Media Studies, Scripps College – 2024-05-23 07:29:41

A May 2024 solar storm made the northern lights visible across parts of the northern U.S.

AP Photo/Lindsey Wasson

Douglas Goodwin, Scripps College

Smartphone cameras have significantly improved in recent years. Computational photography and AI allow these devices to capture stunning images that can surpass what we see with the naked eye. Photos of the northern lights, or aurora borealis, are one particularly striking example.


If you saw the northern lights during the geomagnetic storms in May 2024, you might have noticed that your smartphone made the photos look even more vivid than reality.

Auroras, known as the northern lights (aurora borealis) or southern lights (aurora australis), occur when the solar wind disturbs Earth's magnetic field. They appear as streaks of color across the sky.

Two images of the northern lights, the left labeled 'eye' and the right labeled 'camera.' The 'eye' image is darker with the colors more muted.

The left side shows the aurora as seen with the naked eye. The right side reveals how a smartphone camera can capture brighter and more colorful lights.

Douglas Goodwin

What makes photos of these events even more striking than they appear to the eye? As a professor of computational photography, I've seen how the latest smartphone features overcome the limitations of human vision.


Your eyes in the dark

Human eyes are remarkable. They allow you to see footprints in a sun-soaked desert and pilot vehicles at high speeds. However, your eyes perform less impressively in low light.

Human eyes contain two types of cells that respond to light – rods and cones. Rods are numerous and much more sensitive to light. Cones handle color but need more light to function. As a result, at night our vision relies heavily on rods and misses color.

A diagram of a human eye, with a zoomed panel showing rod and cone receptors. The rods are cylindrical, while the cones are conical.

Rods and cones in your eyes are photoreceptors that handle black-and-white vision as well as color.

Blume, C., Garbazza, C. & Spitschan, M., CC BY-SA

The result is like watching the world through dark sunglasses: at night, colors appear washed out and muted. Similarly, under a night sky, the vibrant hues of the aurora are present but often too dim for your eyes to see clearly.


In low light, your brain prioritizes motion detection and shape recognition to help you navigate. This trade-off means the ethereal colors of the aurora are often invisible to the naked eye. Technology is the only way to increase their brightness.

Taking the perfect picture

Smartphones have revolutionized how people capture the world. These compact devices use multiple cameras and advanced sensors to gather more light than the human eye can, even in low-light conditions. They achieve this through longer exposure times – how long the camera takes in light – larger apertures, which let in more light at once, and higher ISO settings, which increase the sensor's sensitivity to light.
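The trade-off between aperture and exposure time can be made concrete with the standard exposure value formula, EV = log2(N²/t), where N is the f-number and t is the shutter time in seconds: a lower EV means the camera is set up to gather light from a dimmer scene. A quick sketch, with f-numbers and shutter times chosen purely for illustration:

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """Standard exposure value: EV = log2(N^2 / t).

    A lower EV corresponds to camera settings that gather
    light from a dimmer scene.
    """
    return math.log2(f_number ** 2 / shutter_seconds)

# A typical smartphone night mode: wide aperture (f/1.8), 3-second exposure.
night = exposure_value(1.8, 3.0)

# A daylight handheld shot: f/8 at 1/250 s.
day = exposure_value(8.0, 1 / 250)

print(round(night, 2), round(day, 2))  # night EV is far lower than day EV
```

The roughly 14-stop gap between the two settings is why a phone held still for a few seconds can record aurora colors the eye misses.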

But smartphones do more than adjust these settings. They also leverage computational photography to enhance your images using digital techniques and algorithms. Image stabilization reduces the camera's shakiness, and exposure settings optimize the amount of light the camera captures.

Multi-image processing creates a cleaner final picture by stacking multiple images together. A setting called night mode can balance colors in low light, while LiDAR capabilities in some phones keep your images in precise focus.


A diagram showing a stack of grainy images flattened down to one clear image.

Image stacking involves aligning and combining several noisy photos to enhance the final image's quality. Averaging these images together suppresses random sensor noise. This results in a clearer and more detailed picture than any of the photos alone.

Douglas Goodwin
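The averaging step in the figure above is simple to demonstrate. In this toy NumPy simulation – the synthetic gradient "scene" and noise level are assumptions for illustration – averaging 16 aligned noisy frames suppresses random sensor noise by roughly a factor of √16 = 4:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "true" scene: a smooth gradient with values in [0, 1].
scene = np.linspace(0, 1, 64).reshape(8, 8)

# Simulate 16 noisy low-light frames of the same scene.
frames = [scene + rng.normal(0, 0.2, scene.shape) for _ in range(16)]

# Stacking: average the aligned frames. Independent random noise
# partially cancels, so its standard deviation shrinks by ~sqrt(16).
stacked = np.mean(frames, axis=0)

noise_single = np.std(frames[0] - scene)
noise_stacked = np.std(stacked - scene)
print(noise_single / noise_stacked)  # roughly 4
```

This is the core idea behind burst photography and night mode: trade many short, noisy exposures for one clean result, with alignment handling any hand shake between frames.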

LiDAR stands for light detection and ranging, and phones with this setting emit laser pulses to calculate the distances to objects in the scene quickly in any kind of light. LiDAR generates a depth map of the scene to improve focus and make objects in your photos stand out.

Two images, the left labeled 'optical' and the right labeled 'depth' of a person dancing. The 'optical' image shows how the person would look normally in the photo, while the 'depth' image shows their silhouette in white against a black background.

Smartphone cameras don't just capture flat images – they collect depth information too. The left side shows a regular photo, while the right side illustrates the depth map, with lighter pixels closer to the camera and darker ones farther away. Normally hidden, this depth data enables smartphones to apply effects such as artificial background blur.

Douglas Goodwin
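The background-blur effect this depth data enables can be sketched as a depth-weighted blend: keep the original pixels where the depth map marks the subject, and substitute blurred pixels everywhere else. A toy example with NumPy and SciPy, where the random "photo" and square "subject" mask stand in for real camera data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Toy "photo" and a matching depth map
# (1.0 = near subject, 0.0 = far background).
photo = rng.random((32, 32))
depth = np.zeros((32, 32))
depth[8:24, 8:24] = 1.0  # the subject occupies the center of the frame

# Blur the whole image, then blend: sharp pixels where the depth map
# says "near", blurred pixels where it says "far".
blurred = gaussian_filter(photo, sigma=3)
portrait = depth * photo + (1 - depth) * blurred
```

Real portrait modes use continuous depth maps and vary the blur radius with distance, but the blend itself works just like this.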

Artificial intelligence tools in your smartphone camera can further enhance your photos by optimizing the settings, applying bursts of light and using super-resolution techniques to get really fine detail. They can even identify faces in your photos.


AI processing in your smartphone's camera

While there's plenty you can do with a smartphone camera, regular cameras do have larger sensors and superior optics, providing more control over the images you take. Camera manufacturers like Nikon, Sony and Canon typically avoid tampering with the image, instead letting the photographer take creative control.

These cameras offer photographers the flexibility of shooting in raw format, which allows you to keep more of each image's data for editing and often produces higher-quality results.

Unlike dedicated cameras, modern smartphone cameras use AI while and after you snap a picture to enhance your photos' quality. While you're taking a photo, AI tools will analyze the scene you're pointing the camera at and adjust settings such as exposure, white balance and ISO, while recognizing the subject you're shooting and stabilizing the image. These adjustments make sure you get a great photo when you hit the button.

You can often find features that use AI such as high dynamic range, night mode and portrait mode, enabled by default or accessible within your camera settings.


AI algorithms further enhance your photos by refining details, reducing blur and applying effects such as color correction after you take the photo.

All these features help your camera take photos in low-light conditions, and they contributed to the stunning aurora photos you may have captured with your phone camera.

While the human eye struggles to fully appreciate the northern lights' otherworldly hues at night, modern smartphone cameras overcome this limitation. By leveraging AI and computational photography techniques, your devices allow you to see the bold colors of solar storms in the atmosphere, boosting color and capturing otherwise invisible details that even the keenest eye will miss.

Douglas Goodwin, Visiting Assistant Professor in Media Studies, Scripps College

This article is republished from The Conversation under a Creative Commons license. Read the original article.
