AI could improve your life by removing bottlenecks between what you want and what you get


Want to turn many experiences from the equivalent of ordering from a menu to getting a personalized meal? AI is poised to help.
Julia Garan/iStock via Getty Images

Bruce Schneier, Harvard Kennedy School

Artificial intelligence is poised to upend much of society, removing human limitations inherent in many systems. One such limitation is information and logistical bottlenecks in decision-making.

Traditionally, people have been forced to reduce complex choices to a small handful of options that don’t do justice to their true desires. Artificial intelligence has the potential to remove that limitation. And it has the potential to drastically change how democracy functions.

AI researcher Tantum Collins and I, a public-interest technology scholar, call this “AI overcoming lossy bottlenecks.” Lossy is a term from information theory that refers to imperfect communications channels – that is, channels that lose information.

Multiple-choice practicality

Imagine being able to have a long conversation with a chef about your next sit-down dinner. You could end up with a bespoke meal based on your desires, the chef’s abilities and the available ingredients. This is possible if you are cooking at home or are hosted by accommodating friends.

But it is infeasible at your average restaurant: The limitations of the kitchen, the way supplies have to be ordered and the realities of restaurant cooking make this kind of rich interaction between diner and chef impossible. You get a menu of a few dozen standardized options, with the possibility of some modifications around the edges.

That’s a lossy bottleneck. Your wants and desires are rich and multifaceted. The array of possible culinary outcomes is equally rich and multifaceted. But there’s no scalable way to connect the two. People are forced to use multiple-choice systems like menus to simplify decision-making, and they lose so much information in the process.

People are so used to these bottlenecks that we don’t even notice them. And when we do, we tend to assume they are the inevitable cost of scale and efficiency. And they are. Or, at least, they were.

The possibilities

Artificial intelligence has the potential to overcome this limitation. By storing rich representations of people’s preferences and histories on the demand side, along with equally rich representations of capabilities, costs and creative possibilities on the supply side, AI systems enable complex customization at scale and low cost. Imagine walking into a restaurant and knowing that the kitchen has already started work on a meal optimized for your tastes, or being presented with a personalized list of choices.
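To make the idea concrete, here is a minimal Python sketch of that demand-side/supply-side matching. Every feature axis, dish and ingredient below is invented for illustration, and the four-number taste vectors stand in for the far richer learned representations a real system would use.

import numpy as np

# Demand side: one diner's tastes on hypothetical axes
# [spiciness, richness, sweetness, novelty].
diner = np.array([0.9, 0.4, 0.1, 0.8])

# Supply side: what the kitchen could make tonight. Each candidate
# dish has a taste profile and the ingredients it requires.
pantry = {"chili", "noodles", "cod", "butter", "miso"}
dishes = [
    ("smoked chili noodles", np.array([0.95, 0.5, 0.05, 0.7]), {"chili", "noodles"}),
    ("butter-poached cod",   np.array([0.10, 0.8, 0.10, 0.4]), {"cod", "butter"}),
    ("miso caramel tart",    np.array([0.00, 0.6, 0.90, 0.6]), {"miso", "sugar"}),
]

def fit(preferences, profile):
    # Cosine similarity: how well a dish profile matches the diner.
    return float(preferences @ profile /
                 (np.linalg.norm(preferences) * np.linalg.norm(profile)))

# Keep only dishes the kitchen can actually produce, then rank by fit.
feasible = [(name, vec) for name, vec, needs in dishes if needs <= pantry]
for name, vec in sorted(feasible, key=lambda d: fit(diner, d[1]), reverse=True):
    print(f"{name}: fit {fit(diner, vec):.2f}")

Run as written, this filters out the tart (no sugar on hand) and ranks the chili noodles first for this diner – a toy version of connecting two rich representations without a menu in between.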

There have been some early attempts at this. People have used ChatGPT to design meals based on dietary restrictions and what they have in the fridge. It’s still early days for these technologies, but once they get working, the possibilities are nearly endless. Lossy bottlenecks are everywhere.

Imagine a future AI that knows your dietary wants and needs so well that you wouldn’t need to write detailed prompts for meal plans, let alone iterate on them as the nutrition coach in this video does with ChatGPT.

Take labor markets. Employers look to grades, diplomas and certifications to gauge candidates’ suitability for roles. These are very coarse representations of a job candidate’s abilities. An AI system with access to, for example, a student’s coursework, exams and teacher feedback, as well as detailed information about possible jobs, could provide much richer assessments of which employment matches do and don’t make sense.

Or apparel. People with money for tailors and time for fittings can get clothes made from scratch, but most of us are limited to mass-produced options. AI could hugely reduce the cost of customization by learning your style, taking measurements based on photos, generating designs that match your taste and using available materials. It would then convert your selections into a series of production instructions and place an order with an AI-enabled robotic production line.

Or software. Today’s computer programs typically use one-size-fits-all interfaces, with only minor room for modification, but individuals have widely varying needs and working styles. AI systems that observe each user’s interaction styles and know what that person wants out of a given piece of software could take this personalization far deeper, completely redesigning interfaces to suit individual needs.

Removing democracy’s bottleneck

These examples are all transformative, but the lossy bottleneck with the largest effect on society is in politics. It’s the same problem as the restaurant menu. You are a complicated citizen: your policy positions are probably nuanced, trading off different options and their effects. You care about some issues more than others and about some implementations more than others.

If you had the knowledge and time, you could engage in the deliberative process and help create better laws than exist today. But you don’t. And, anyway, society can’t hold policy debates involving hundreds of millions of people. So you go to the ballot box and choose between two – or if you are lucky, four or five – individual representatives or political parties.

Imagine a system where AI removes this lossy bottleneck. Instead of trying to cram your preferences to fit into the available options, imagine conveying your political preferences in detail to an AI system that would directly advocate for specific policies on your behalf. This could revolutionize democracy.

Ballots are bottlenecks that funnel a voter’s diverse views into a few options. AI representations of individual voters’ desires overcome this bottleneck, promising enacted policies that better align with voters’ wishes.
Tantum Collins, CC BY-ND

One way is by enhancing voter representation. By capturing the nuances of each individual’s political preferences in a way that traditional voting systems can’t, this system could lead to policies that better reflect the desires of the electorate. For example, you could have an AI device in your pocket – your future phone, for instance – that knows your views and wishes and continually votes in your name on an otherwise overwhelming number of issues large and small.

Combined with AI systems that personalize political education, it could encourage more people to participate in the democratic process and increase political engagement. And it could eliminate the problems stemming from elected representatives who reflect only the views of the majority that elected them – and sometimes not even them.

On the other hand, the privacy concerns resulting from allowing an AI such intimate access to personal data are considerable. And it’s important to avoid the pitfall of just allowing the AIs to figure out what to do: Human deliberation is crucial to a functioning democracy.

Also, there is no clear transition path from the representative democracies of today to these AI-enhanced direct democracies of tomorrow. And, of course, this is still science fiction.

First steps

These technologies are likely to be used first in other, less politically charged, domains. Recommendation systems for digital media have steadily reduced their reliance on traditional intermediaries. Radio stations are like menu items: Regardless of how nuanced your taste in music is, you have to pick from a handful of options. Early digital platforms were only a little better: “This person likes jazz, so we’ll suggest more jazz.”

Today’s streaming platforms use listener histories and a broad set of features describing each track to provide each user with personalized music recommendations. Similar systems suggest academic papers with far greater granularity than a subscription to a given journal, and movies based on more nuanced analysis than simply deferring to genres.
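The underlying matching logic is simple enough to sketch. Below is a minimal, hypothetical Python version of content-based music recommendation: a listener’s taste vector is averaged from the features of tracks they have played, and unheard tracks are ranked by similarity. The track names and feature axes are invented; real platforms use far richer learned features.

import numpy as np

# Hypothetical per-track features: [tempo, acousticness, energy, vocals].
catalog = {
    "track_a": np.array([0.8, 0.1, 0.9, 0.3]),
    "track_b": np.array([0.3, 0.9, 0.2, 0.7]),
    "track_c": np.array([0.7, 0.2, 0.8, 0.4]),
    "track_d": np.array([0.2, 0.8, 0.3, 0.9]),
}

history = ["track_a", "track_c"]  # tracks this listener has played

# Build a taste vector from the listener's history.
taste = np.mean([catalog[t] for t in history], axis=0)

def similarity(a, b):
    # Cosine similarity between taste vector and track features.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank the unheard tracks, best match first.
unheard = [t for t in catalog if t not in history]
for track in sorted(unheard, key=lambda t: similarity(taste, catalog[t]), reverse=True):
    print(f"{track}: {similarity(taste, catalog[track]):.2f}")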

A world without artificial bottlenecks comes with risks – loss of jobs in the bottlenecks, for example – but it also has the potential to free people from the straitjackets that have long constrained large-scale human decision-making. In some cases – restaurants, for example – the impact on most people might be minor. But in others, like politics and hiring, the effects could be profound.

Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy School

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Cyberattacks shake voters’ trust in elections, regardless of party

Published on theconversation.com – 2025-06-27 07:29:00


American democracy faces a crisis of trust, with nearly half of Americans doubting election fairness. This mistrust stems not only from polarization and misinformation but also from unease about the digital infrastructure behind voting. While over 95% of ballots are now counted electronically, this complexity fuels skepticism, especially amid foreign disinformation campaigns that amplify doubts about election security. A study during the 2024 election showed that exposure to cyberattack reports, even unrelated to elections, significantly undermines voter confidence, particularly among those using digital voting machines. To protect democracy, it’s vital to pair secure technology with public education and treat trust as a national asset.

An election worker installs a touchscreen voting machine.
Ethan Miller/Getty Images

Ryan Shandler, Georgia Institute of Technology; Anthony J. DeMattee, Emory University, and Bruce Schneier, Harvard Kennedy School

American democracy runs on trust, and that trust is cracking.

Nearly half of Americans, both Democrats and Republicans, question whether elections are conducted fairly. Some voters accept election results only when their side wins. The problem isn’t just political polarization – it’s a creeping erosion of trust in the machinery of democracy itself.

Commentators blame ideological tribalism, misinformation campaigns and partisan echo chambers for this crisis of trust. But these explanations miss a critical piece of the puzzle: a growing unease with the digital infrastructure that now underpins nearly every aspect of how Americans vote.

The digital transformation of American elections has been swift and sweeping. Just two decades ago, most people voted using mechanical levers or punch cards. Today, over 95% of ballots are counted electronically. Digital systems have replaced poll books, taken over voter identity verification processes and are integrated into registration, counting, auditing and voting systems.

This technological leap has made voting more accessible and efficient, and sometimes more secure. But these new systems are also more complex. And that complexity plays into the hands of those looking to undermine democracy.

In recent years, authoritarian regimes have refined a chillingly effective strategy to chip away at Americans’ faith in democracy by relentlessly sowing doubt about the tools U.S. states use to conduct elections. It’s a sustained campaign to fracture civic faith and make Americans believe that democracy is rigged, especially when their side loses.

This is not cyberwar in the traditional sense. There’s no evidence that anyone has managed to break into voting machines and alter votes. But cyberattacks on election systems don’t need to succeed to have an effect. Even a single failed intrusion, magnified by sensational headlines and political echo chambers, is enough to shake public trust. By feeding into existing anxiety about the complexity and opacity of digital systems, adversaries create fertile ground for disinformation and conspiracy theories.

Just before the 2024 presidential election, Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, explains how foreign influence campaigns erode trust in U.S. elections.

Testing cyber fears

To test this dynamic, we launched a study to uncover precisely how cyberattacks corroded trust in the vote during the 2024 U.S. presidential race. We surveyed more than 3,000 voters before and after election day, testing them using a series of fictional but highly realistic breaking news reports depicting cyberattacks against critical infrastructure. We randomly assigned participants to watch different types of news reports: some depicting cyberattacks on election systems, others on unrelated infrastructure such as the power grid, and a third, neutral control group.
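The logic of that design can be sketched in a few lines of Python. Everything below – group sizes, trust scores, effect sizes – is invented to illustrate the comparison; it is not the study’s data.

import random
import statistics

random.seed(0)

# Hypothetical experiment: each respondent is randomly assigned to view
# one type of news report, then rates trust in the election (0-100).
conditions = ["election_hack", "grid_hack", "control"]
assumed_effect = {"election_hack": -12.0, "grid_hack": -7.0, "control": 0.0}

respondents = []
for _ in range(3000):
    condition = random.choice(conditions)          # random assignment
    trust = 70 + assumed_effect[condition] + random.gauss(0, 10)
    respondents.append((condition, trust))

# Compare each treatment group's mean trust with the control group's.
mean_trust = {
    c: statistics.mean(t for cond, t in respondents if cond == c)
    for c in conditions
}
for c in ("election_hack", "grid_hack"):
    print(f"{c}: {mean_trust[c] - mean_trust['control']:+.1f} points vs. control")

Because assignment is random, any systematic gap between a treatment group and the control group can be attributed to the news report itself rather than to who happened to see it.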

The results, which are under peer review, were both striking and sobering. Mere exposure to reports of cyberattacks undermined trust in the electoral process – regardless of partisanship. Voters who supported the losing candidate experienced the greatest drop in trust, with two-thirds of Democratic voters showing heightened skepticism toward the election results.

But winners too showed diminished confidence. Even though most Republican voters, buoyed by their victory, accepted the overall security of the election, the majority of those who viewed news reports about cyberattacks remained suspicious.

The attacks didn’t even have to be related to the election. Even cyberattacks against critical infrastructure such as utilities had spillover effects. Voters seemed to extrapolate: “If the power grid can be hacked, why should I believe that voting machines are secure?”

Strikingly, voters who used digital machines to cast their ballots were the most rattled. Their belief in the accuracy of the vote count fell by nearly twice as much as that of voters who cast their ballots by mail and didn’t interact with any voting technology. Firsthand experience with the sorts of systems being portrayed as vulnerable personalized the threat.

It’s not hard to see why. When you’ve just used a touchscreen to vote, and then you see a news report about a digital system being breached, the leap in logic isn’t far.

Our data suggests that in a digital society, perceptions of trust – and distrust – are fluid, contagious and easily activated. The cyber domain isn’t just about networks and code. It’s also about emotions: fear, vulnerability and uncertainty.

Firewall of trust

Does this mean we should scrap electronic voting machines? Not necessarily.

Every election system, digital or analog, has flaws. And in many respects, today’s high-tech systems – with their voter-verifiable paper ballots – have solved the problems of the past. Modern voting machines reduce human error, increase accessibility and speed up the vote count. No one misses the hanging chads of 2000.

But technology, no matter how advanced, cannot confer legitimacy on its own. It must be paired with something harder to code: public trust. In an environment where foreign adversaries amplify every flaw, cyberattacks can trigger spirals of suspicion. It is no longer enough for elections to be secure – voters must also perceive them to be secure.

That’s why public education surrounding elections is now as vital to election security as firewalls and encrypted networks. It’s vital that voters understand how elections are run, how they’re protected and how failures are caught and corrected. Election officials, civil society groups and researchers can teach how audits work, host open-source verification demonstrations and ensure that high-tech electoral processes are comprehensible to voters.

We believe this is an essential investment in democratic resilience. But it needs to be proactive, not reactive. By the time doubt takes hold, it’s already too late.

Just as crucially, we are convinced that it’s time to rethink the very nature of cyber threats. People often imagine them in military terms. But that framework misses the true power of these threats. The danger of cyberattacks is not only that they can destroy infrastructure or steal classified secrets, but that they chip away at societal cohesion, sow anxiety and fray citizens’ confidence in democratic institutions. These attacks erode the very idea of truth itself by making people doubt that anything can be trusted.

If trust is the target, then we believe that elected officials should start to treat trust as a national asset: something to be built, renewed and defended. Because in the end, elections aren’t just about votes being counted – they’re about people believing that those votes count.

And in that belief lies the true firewall of democracy.

Ryan Shandler, Professor of Cybersecurity and International Relations, Georgia Institute of Technology; Anthony J. DeMattee, Data Scientist and Adjunct Instructor, Emory University, and Bruce Schneier, Adjunct Lecturer in Public Policy, Harvard Kennedy School

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hope that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Centrist

This article presents a balanced and fact-focused analysis of trust issues surrounding American elections, emphasizing concerns shared across the political spectrum. It highlights the complexity of digital voting infrastructure and the external threats posed by misinformation and foreign influence without promoting partisan viewpoints. The tone is neutral, grounded in data and research, avoiding ideological framing or advocacy. The piece calls for bipartisan solutions like public education and institutional trust-building, reflecting a centrist perspective that prioritizes democratic resilience over partisan blame.


Toxic algae blooms are lasting longer than before in Lake Erie − why that’s a worry for people and pets

Published on theconversation.com – 2025-06-26 14:38:00


Federal scientists forecast a mild to moderate harmful algal bloom season in Lake Erie for 2025, though even moderate blooms pose health risks. Harmful algal blooms, mainly caused by excess phosphorus and nitrogen runoff from agriculture, produce toxins harmful to humans, pets, and ecosystems. Recent DNA research revealed new toxins, including microcystins and saxitoxins, raising emerging concerns. Climate change exacerbates blooms by increasing water temperatures and heavy rainfall. Blooms now start earlier and last longer. Reducing nutrient runoff through improved farming practices and wetland restoration, like Ohio’s H2Ohio program, is essential to mitigating future blooms and protecting water quality.

A satellite image from Aug. 13, 2024, shows an algal bloom covering approximately 320 square miles (830 square km) of Lake Erie. By Aug. 22, it had nearly doubled in size.
NASA Earth Observatory

Gregory J. Dick, University of Michigan

Federal scientists released their annual forecast for Lake Erie’s harmful algal blooms on June 26, 2025, and they expect a mild to moderate season. However, anyone who comes in contact with toxic algae can face health risks. And 2014, when toxins from algae blooms contaminated the water supply in Toledo, Ohio, was a moderate year, too.

We asked Gregory J. Dick, who leads the Cooperative Institute for Great Lakes Research, a federally funded center at the University of Michigan that studies harmful algal blooms among other Great Lakes issues, why they’re such a concern.

The National Oceanic and Atmospheric Administration’s prediction for harmful algal bloom severity in Lake Erie compared with past years: 2025’s forecast is more severe than 2023’s bloom but less severe than 2024’s.
NOAA

1. What causes harmful algal blooms?

Harmful algal blooms are dense patches of excessive algae growth that can occur in any type of water body, including ponds, reservoirs, rivers, lakes and oceans. When you see them in freshwater, you’re typically seeing cyanobacteria, also known as blue-green algae.

These photosynthetic bacteria have inhabited our planet for billions of years. In fact, they were responsible for oxygenating Earth’s atmosphere, which enabled plant and animal life as we know it.

The leading source of harmful algal blooms today is nutrient runoff from fertilized farm fields.
Michigan Sea Grant

Algae are natural components of ecosystems, but they cause trouble when they proliferate to high densities, creating what we call blooms.

Harmful algal blooms form scums at the water surface and produce toxins that can harm ecosystems, water quality and human health. They have been reported in all 50 U.S. states, all five Great Lakes and nearly every country around the world. Blue-green algae blooms are becoming more common in inland waters.

The main sources of harmful algal blooms are excess nutrients in the water, typically phosphorus and nitrogen.

Historically, these excess nutrients mainly came from sewage and phosphorus-based detergents used in laundry machines and dishwashers that ended up in waterways. U.S. environmental laws in the early 1970s addressed this by requiring sewage treatment and banning phosphorus detergents, with spectacular success.

How pollution affected Lake Erie in the 1960s, before clean water regulations.

Today, agriculture is the main source of excess nutrients from chemical fertilizer or manure applied to farm fields to grow crops. Rainstorms wash these nutrients into streams and rivers that deliver them to lakes and coastal areas, where they fertilize algal blooms. In the U.S., most of these nutrients come from industrial-scale corn production, which is largely used as animal feed or to produce ethanol for gasoline.

Climate change also exacerbates the problem in two ways. First, cyanobacteria grow faster at higher temperatures. Second, climate-driven increases in precipitation, especially large storms, cause more nutrient runoff that has led to record-setting blooms.

2. What does your team’s DNA testing tell us about Lake Erie’s harmful algal blooms?

Harmful algal blooms contain a mixture of cyanobacterial species that can produce an array of different toxins, many of which are still being discovered.

When my colleagues and I recently sequenced DNA from Lake Erie water, we found new types of microcystins, the notorious toxins that were responsible for contaminating Toledo’s drinking water supply in 2014.

These novel molecules cannot be detected with traditional methods and show some signs of causing toxicity, though further studies are needed to confirm their human health effects.

Blue-green algae blooms in freshwater, like this one near Toledo in 2014, can be harmful to humans, causing gastrointestinal symptoms, headache, fever and skin irritation. They can be lethal for pets.
Ty Wright for The Washington Post via Getty Images

We also found organisms responsible for producing saxitoxin, a potent neurotoxin that is well known for causing paralytic shellfish poisoning on the Pacific Coast of North America and elsewhere.

Saxitoxins have been detected at low concentrations in the Great Lakes for some time, but the recent discovery of hot spots of genes that make the toxin makes them an emerging concern.

Our research suggests warmer water temperatures could boost its production, which raises concerns that saxitoxin will become more prevalent with climate change. However, the controls on toxin production are complex, and more research is needed to test this hypothesis. Federal monitoring programs are essential for tracking and understanding emerging threats.

3. Should people worry about these blooms?

Harmful algal blooms are unsightly and smelly, making them a concern for recreation, property values and businesses. They can disrupt food webs and harm aquatic life, though a recent study suggested that their effects on the Lake Erie food web so far are not substantial.

But the biggest impact is from the toxins these algae produce that are harmful to humans and lethal to pets.

The toxins can cause acute health problems such as gastrointestinal symptoms, headache, fever and skin irritation. Dogs can die from ingesting lake water with harmful algal blooms. Emerging science suggests that long-term exposure to harmful algal blooms, for example over months or years, can cause or exacerbate chronic respiratory, cardiovascular and gastrointestinal problems and may be linked to liver cancers, kidney disease and neurological issues.

The water intake system for the city of Toledo, Ohio, is surrounded by an algae bloom in 2014. Toxic algae got into the water system, resulting in residents being warned not to touch or drink their tap water for three days.
AP Photo/Haraz N. Ghanbari

In addition to exposure through direct ingestion or skin contact, recent research also indicates that inhaling toxins that get into the air may harm health, raising concerns for coastal residents and boaters, but more research is needed to understand the risks.

The Toledo drinking water crisis of 2014 illustrated the vast potential for algal blooms to cause harm in the Great Lakes. Toxins infiltrated the drinking water system and were detected in processed municipal water, resulting in a three-day “do not drink” advisory. The episode affected residents, hospitals and businesses, and it ultimately cost the city an estimated US$65 million.

4. Blooms seem to be starting earlier in the year and lasting longer – why is that happening?

Warmer waters are extending the duration of the blooms.

In 2025, NOAA detected bloom toxins in Lake Erie on April 28, earlier in the year than ever before. The 2022 bloom in Lake Erie persisted into November, which is rare if not unprecedented.

Scientific studies of western Lake Erie show that the potential cyanobacterial growth rate has increased by up to 30% and the length of the bloom season has expanded by up to a month from 1995 to 2022, especially in warmer, shallow waters. These results are consistent with our understanding of cyanobacterial physiology: Blooms like it hot – cyanobacteria grow faster at higher temperatures.

5. What can be done to reduce the likelihood of algal blooms in the future?

The best and perhaps only hope of reducing the size and occurrence of harmful algal blooms is to reduce the amount of nutrients reaching the Great Lakes.

In Lake Erie, where nutrients come primarily from agriculture, that means improving agricultural practices and restoring wetlands to reduce the amount of nutrients flowing off of farm fields and into the lake. Early indications suggest that Ohio’s H2Ohio program, which works with farmers to reduce runoff, is making some gains in this regard, but future funding for H2Ohio is uncertain.

In places like Lake Superior, where harmful algal blooms appear to be driven by climate change, the solution likely requires halting and reversing the rapid human-driven increase in greenhouse gases in the atmosphere.

Gregory J. Dick, Professor of Biology, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hope that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Centrist

This article presents a neutral and factual overview of the harmful algal blooms in Lake Erie, relying on scientific data and expert analysis without promoting a political agenda. It references federal and academic research, explains causes like agricultural runoff and climate change, and discusses practical mitigation efforts such as agricultural practice improvements and wetland restoration. The tone is informative and balanced, avoiding partisan framing or ideological language. While it touches on environmental issues that can be politically charged, the article remains focused on evidence-based explanations and policy-neutral recommendations.


The Vera C. Rubin Observatory will help astronomers investigate dark matter, continuing the legacy of its pioneering namesake

Published on theconversation.com – 2025-06-24 07:35:00


Everything visible in space, including stars and planets, accounts for only about 15% of the universe’s matter; the rest is dark matter, which is invisible but detectable through its gravitational effects. The Vera C. Rubin Observatory, starting its 10-year mission with the largest digital camera ever built, will capture detailed images of billions of galaxies to study dark matter’s role in the universe’s structure. Vera Rubin’s pioneering work in the 1960s revealed stars in galaxies move faster than visible matter predicts, suggesting unseen mass. Her legacy continues as astronomers use data to explore dark matter’s mysteries.

The Rubin Observatory is scheduled to release its first images in 2025.
RubinObs/NOIRLab/SLAC/NSF/DOE/AURA/B. Quint

Samantha Thompson, Smithsonian Institution

Everything astronomers can see in space – from the Earth and Sun to black holes – accounts for just 15% of all matter in the universe. The rest of the cosmos seems to be made of an invisible material astronomers call dark matter.

Astronomers know dark matter exists because its gravity affects other things, such as light. But understanding what dark matter is remains an active area of research.

With the release of its first images this month, the Vera C. Rubin Observatory has begun a 10-year mission to help unravel the mystery of dark matter. The observatory will continue the legacy of its namesake, a trailblazing astronomer who advanced our understanding of the other 85% of the universe.

As a historian of astronomy, I’ve studied how Vera Rubin’s contributions have shaped astrophysics. The observatory’s name is fitting, given that its data will soon provide scientists with a way to build on her work and shed more light on dark matter.

Wide view of the universe

From its vantage point in the Chilean Andes mountains, the Rubin Observatory will document everything visible in the southern sky. Every three nights, the observatory and its 3,200 megapixel camera will make a record of the sky.

This camera, about the size of a small car, is the largest digital camera ever built. Its images will capture an area of the sky roughly 45 times the size of the full Moon. With such a big camera and such a wide field of view, Rubin will produce about five petabytes of data every year. That’s roughly 5,000 years’ worth of MP3 songs.
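That comparison holds up as back-of-the-envelope arithmetic if you assume a fairly high-bitrate MP3 of roughly 2 megabytes per minute of audio – the exact figure shifts with the bitrate you pick:

\[ \frac{5\times 10^{15}\ \text{bytes}}{2\times 10^{6}\ \text{bytes/min}} = 2.5\times 10^{9}\ \text{min}, \qquad \frac{2.5\times 10^{9}\ \text{min}}{525{,}600\ \text{min/yr}} \approx 4{,}800\ \text{years}. \]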

After weeks, months and years of observations, astronomers will have a time-lapse record revealing anything that explodes, flashes or moves – such as supernovas, variable stars or asteroids. They’ll also have the largest survey of galaxies ever made. These galactic views are key to investigating dark matter.

Galaxies are the key

Deep field images from the Hubble Space Telescope, the James Webb Space Telescope and others have visually revealed the abundance of galaxies in the universe. These images are taken with a long exposure time to collect the most light, so that even very faint objects show up.

Researchers now know that those galaxies aren’t randomly distributed. Gravity and dark matter pull and guide them into a structure that resembles a spider’s web or a tub of bubbles. The Rubin Observatory will expand upon these previous galactic surveys, increasing the precision of the data and capturing billions more galaxies.

In addition to helping structure galaxies throughout the universe, dark matter also distorts the appearance of galaxies through an effect referred to as gravitational lensing.

Light travels through space in a straight line – unless it gets close to something massive. Gravity bends light’s path, which distorts the way we see it. This gravitational lensing effect provides clues that could help astronomers locate dark matter. The stronger the gravity, the bigger the bend in light’s path.
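For the simplest case – light grazing past a single compact mass – general relativity gives a deflection angle of

\[ \alpha = \frac{4GM}{c^{2}b}, \]

where M is the lensing mass and b is the light ray’s closest approach. Doubling the mass doubles the bend, which is why astronomers can run the logic in reverse: measure how background galaxies are distorted, and infer how much mass – dark matter included – must be doing the bending.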

The white galaxies seen here are bound in a cluster. The gravity from the galaxies and the dark matter bends the light from the more distant galaxies, creating contorted and magnified images of them.
NASA, ESA, CSA and STScI

Discovering dark matter

For centuries, astronomers tracked and measured the motion of planets in the solar system. They found that all the planets followed the path predicted by Newton’s laws of motion, except for Uranus. Astronomers and mathematicians reasoned that if Newton’s laws are true, there must be some missing matter – another massive object – out there tugging on Uranus. From this hypothesis, they discovered Neptune, confirming Newton’s laws.

With the ability to see fainter objects in the 1930s, astronomers began tracking the motions of galaxies.

California Institute of Technology astronomer Fritz Zwicky coined the term dark matter in 1933, after observing galaxies in the Coma Cluster. He calculated the mass of the galaxies based on their speeds, and it did not match the mass implied by the number of stars he observed.

He suspected that the cluster could contain an invisible, missing matter that kept the galaxies from flying apart. But for several decades he lacked enough observational evidence to support his theory.

Vera Rubin operates the Carnegie spectrograph at Kitt Peak National Observatory in Tucson.
Carnegie Institution for Science, CC BY

Enter Vera Rubin

In 1965, Vera Rubin became the first woman hired onto the scientific staff at the Carnegie Institution’s Department of Terrestrial Magnetism in Washington, D.C.

She worked with Kent Ford, who had built an extremely sensitive spectrograph and was looking to apply it to a scientific research project. Rubin and Ford used the spectrograph to measure how fast stars orbit around the center of their galaxies.

In the solar system, where most of the mass is within the Sun at the center, the closest planet, Mercury, moves faster than the farthest planet, Neptune.

“We had expected that as stars got farther and farther from the center of their galaxy, they would orbit slower and slower,” Rubin said in 1992.

What they found in galaxies surprised them. Stars far from the galaxy’s center were moving just as fast as stars closer in.

“And that really leads to only two possibilities,” Rubin explained. “Either Newton’s laws don’t hold, and physicists and astronomers are woefully afraid of that … (or) stars are responding to the gravitational field of matter which we don’t see.”
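Her reasoning can be made precise with a standard textbook relation, not anything unique to Rubin’s data. For a star orbiting at speed v at radius r, with M(r) the mass enclosed within its orbit, gravity supplies the circular acceleration:

\[ \frac{v^{2}}{r} = \frac{G\,M(r)}{r^{2}} \quad\Longrightarrow\quad v(r) = \sqrt{\frac{G\,M(r)}{r}}. \]

If nearly all the mass sat at the galaxy’s center, M(r) would be constant far out and v would fall off as 1/\sqrt{r}, just as Mercury outpaces Neptune. A flat rotation curve – v roughly constant – instead requires M(r) to grow in proportion to r: mass keeps accumulating with radius even where almost no light is seen.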

Data piled up as Rubin created plot after plot. Her colleagues didn’t doubt her observations, but the interpretation remained a debate. Many people were reluctant to accept that dark matter was necessary to account for the findings in Rubin’s data.

Rubin continued studying galaxies, measuring how fast stars moved within them. She wasn’t interested in investigating dark matter itself, but she carried on with documenting its effects on the motion of galaxies.

A U.S. quarter honors Vera Rubin’s contributions to our understanding of dark matter.
United States Mint, CC BY

Vera Rubin’s legacy

Today, more people are aware of Rubin’s observations and contributions to our understanding of dark matter. In 2019, a congressional bill was introduced to rename the former Large Synoptic Survey Telescope to the Vera C. Rubin Observatory. In June 2025, the U.S. Mint released a quarter featuring Vera Rubin.

Rubin continued to accumulate data about the motions of galaxies throughout her career. Others picked up where she left off and have helped advance dark matter research over the past 50 years.

In the 1970s, physicist James Peebles and astronomers Jeremiah Ostriker and Amos Yahil created computer simulations of individual galaxies. They concluded, similarly to Zwicky, that there was not enough visible matter in galaxies to keep them from flying apart.

They suggested that whatever dark matter is – be it cold stars, black holes or some unknown particle – there could be as much as 10 times more dark matter than ordinary matter in galaxies.

Throughout its 10-year run, the Rubin Observatory should give even more researchers the opportunity to add to our understanding of dark matter.

Samantha Thompson, Astronomy Curator, National Air and Space Museum, Smithsonian Institution

This article is republished from The Conversation under a Creative Commons license. Read the original article.




Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hope that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Centrist

The content is focused entirely on scientific topics related to astronomy, dark matter, and the legacy of astronomer Vera Rubin without engaging in political rhetoric or ideological framing. Its tone is neutral, educational, and fact-based, presenting information grounded in scientific research and historical context. As such, it does not lean toward any particular political bias but maintains an objective, centrist stance typical of purely scientific communication.
