
The Conversation

How AI could take over elections – and undermine democracy

An AI-driven political campaign could be all things to all people. Eric Smalley, TCUS; Biodiversity Heritage Library/Flickr; Taymaz Valley/Flickr, CC BY-ND

Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways?

Sen. Josh Hawley asked OpenAI CEO Sam Altman this question in a May 16, 2023, U.S. Senate hearing on artificial intelligence. Altman replied that he was indeed concerned that some people might use language models to manipulate, persuade and engage in one-on-one interactions with voters.

Altman did not elaborate, but he might have had something like this scenario in mind. Imagine that soon, political technologists develop a machine called Clogger – a political campaign in a black box. Clogger relentlessly pursues just one objective: to maximize the chances that its candidate – the campaign that buys the services of Clogger Inc. – prevails in an election.

While platforms like Facebook, Twitter and YouTube use forms of AI to get users to spend more time on their sites, Clogger’s AI would have a different objective: to change people’s voting behavior.

How Clogger would work

As a political scientist and a legal scholar who study the intersection of technology and democracy, we believe that something like Clogger could use automation to dramatically increase the scale and potentially the effectiveness of behavior manipulation and microtargeting techniques that political campaigns have used since the early 2000s. Just as advertisers use your browsing and social media history to individually target commercial and political ads now, Clogger would pay attention to you – and hundreds of millions of other voters – individually.

It would offer three advances over the current state of the art in algorithmic behavior manipulation. First, its language model would generate messages — texts, social media and email, perhaps including images and videos — tailored to you personally. Whereas advertisers strategically place a relatively small number of ads, language models such as ChatGPT can generate countless unique messages for you personally – and millions for others – over the course of a campaign.
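To make the scale point concrete, here is a minimal, purely hypothetical sketch of what per-voter message generation might look like. Everything in it is an illustrative stand-in – the `Voter` fields, the `generate_message` helper and the candidate name – not any real campaign tool or language-model API; a real system would send the assembled prompt to a model and dispatch the completion.

```python
# Hypothetical sketch: one tailored prompt per voter, generated in a loop.
# The Voter fields and generate_message helper are illustrative stand-ins,
# not a real campaign system or LLM API.

from dataclasses import dataclass

@dataclass
class Voter:
    name: str
    top_issue: str          # e.g., "healthcare costs"
    preferred_channel: str  # e.g., "email", "sms", "social"

def generate_message(voter: Voter, candidate: str) -> str:
    """Builds the prompt a language model would turn into a tailored message.

    Here we simply return the prompt text; a real system would pass it to a
    model and send the completion to the voter over their preferred channel.
    """
    return (
        f"Draft a short {voter.preferred_channel} message for {voter.name}, "
        f"who cares most about {voter.top_issue}, nudging support for {candidate}."
    )

# Unlike a fixed ad buy, the marginal cost of one more unique message is
# near zero, so the same loop scales from two voters to millions.
voters = [
    Voter("Alex", "healthcare costs", "email"),
    Voter("Sam", "local schools", "sms"),
]
for v in voters:
    print(generate_message(v, "Candidate X"))
```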

Second, Clogger would use a technique called reinforcement learning to generate a succession of messages that become increasingly likely to change your vote. Reinforcement learning is a machine-learning, trial-and-error approach in which the computer takes actions and gets feedback about which work better in order to learn how to accomplish an objective. Machines that can play Go, chess and many video games better than any human have used reinforcement learning.
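For readers unfamiliar with the technique, here is a minimal sketch of that trial-and-error loop, using a toy multi-armed bandit rather than anything resembling Clogger. The learner tries actions (say, three message styles), observes which ones succeed, and gradually favors the ones that work; the "true" success rates below are invented for illustration.

```python
# Toy epsilon-greedy bandit: the simplest form of reinforcement learning.
# The learner never sees true_success_rates; it estimates them from feedback.

import random

n_actions = 3                          # e.g., three candidate message styles
value_estimates = [0.0] * n_actions    # learned estimate of each action's payoff
counts = [0] * n_actions
true_success_rates = [0.2, 0.5, 0.8]   # hidden from the learner; illustrative only
epsilon = 0.1                          # fraction of the time to explore at random

for step in range(10_000):
    # Explore occasionally; otherwise exploit the best estimate so far.
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: value_estimates[a])

    # Feedback: 1 if the action "worked" this time, 0 otherwise.
    reward = 1.0 if random.random() < true_success_rates[action] else 0.0

    # Incremental average: nudge the estimate toward the observed reward.
    counts[action] += 1
    value_estimates[action] += (reward - value_estimates[action]) / counts[action]

print(value_estimates)  # converges toward the true success rates
```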

How reinforcement learning works.

Third, over the course of a campaign, Clogger’s messages could evolve in order to take into account your responses to the machine’s prior dispatches and what it has learned about changing others’ minds. Clogger would be able to carry on dynamic “conversations” with you – and millions of other people – over time. Clogger’s messages would be similar to ads that follow you across different websites and social media.

The nature of AI

Three more features – or bugs – are worth noting.

First, the messages that Clogger sends may or may not be political in content. The machine’s only goal is to maximize vote share, and it would likely devise strategies for achieving this goal that no human campaigner would have thought of.

One possibility is sending voters who lean toward the opponent information about their nonpolitical passions – in sports or entertainment – to bury the political messaging they receive. Another possibility is sending off-putting messages – for example, incontinence advertisements – timed to coincide with opponents’ messaging. And another is manipulating voters’ social media friend groups to give the sense that their social circles support its candidate.

Second, Clogger has no regard for truth. Indeed, it has no way of knowing what is true or false. Language model “hallucinations” are not a problem for this machine because its objective is to change your vote, not to provide accurate information.

Third, because it is a black box type of artificial intelligence, people would have no way to know what strategies it uses.

The field of explainable AI aims to open the black box of many machine-learning models so people can understand how they work.

Clogocracy

If the Republican presidential campaign were to deploy Clogger in 2024, the Democratic campaign would likely be compelled to respond in kind, perhaps with a similar machine. Call it Dogger. If the campaign managers thought that these machines were effective, the presidential contest might well come down to Clogger vs. Dogger, and the winner would be the client of the more effective machine.

Political scientists and pundits would have much to say about why one or the other AI prevailed, but likely no one would really know. The president will have been elected not because his or her policy proposals or political ideas persuaded more Americans, but because he or she had the more effective AI. The content that won the day would have come from an AI focused solely on victory, with no political ideas of its own, rather than from candidates or parties.

In this very important sense, a machine would have won the election rather than a person. The election would no longer be democratic, even though all of the ordinary activities of democracy – the speeches, the ads, the messages, the voting and the counting of votes – will have occurred.

The AI-elected president could then go one of two ways. He or she could use the mantle of election to pursue Republican or Democratic party policies. But because the party ideas may have had little to do with why people voted the way that they did – Clogger and Dogger don’t care about policy views – the president’s actions would not necessarily reflect the will of the voters. Voters would have been manipulated by the AI rather than freely choosing their political leaders and policies.

Another path is for the president to pursue the messages, behaviors and policies that the machine predicts will maximize the chances of reelection. On this path, the president would have no particular platform or agenda beyond maintaining power. The president’s actions, guided by Clogger, would be those most likely to manipulate voters rather than serve their genuine interests or even the president’s own ideology.

Avoiding Clogocracy

It would be possible to avoid AI election manipulation if candidates, campaigns and consultants all forswore the use of such political AI. We believe that is unlikely. If politically effective black boxes were developed, the temptation to use them would be almost irresistible. Indeed, political consultants might well see using these tools as required by their professional responsibility to help their candidates win. And once one candidate uses such an effective tool, the opponents could hardly be expected to resist by disarming unilaterally.

Enhanced privacy protection would help. Clogger would depend on access to vast amounts of personal data in order to target individuals, craft messages tailored to persuade or manipulate them, and track and retarget them over the course of a campaign. Every bit of that information that companies or policymakers deny the machine would make it less effective.

Strong data privacy laws could help steer AI away from being manipulative.

Another solution lies with election commissions. They could try to ban or severely regulate these machines. There’s a fierce debate about whether such “replicant” speech, even if it’s political in nature, can be regulated. The U.S.’s extreme free speech tradition leads many prominent academics to say it cannot.

But there is no reason to automatically extend the First Amendment’s protection to the product of these machines. The nation might well choose to give machines rights, but that should be a decision grounded in the challenges of today, not the misplaced assumption that James Madison’s views in 1789 were intended to apply to AI.

European Union regulators are moving in this direction. Policymakers revised the European Parliament’s draft of its Artificial Intelligence Act to designate “AI systems to influence voters in campaigns” as “high risk” and subject to regulatory scrutiny.

One constitutionally safer, if smaller, step, already adopted in part by European internet regulators and in California, is to prohibit bots from passing themselves off as people. For example, regulation might require that campaign messages come with disclaimers when the content they contain is generated by machines rather than humans.

This would be like the advertising disclaimer requirements – “Paid for by the Sam Jones for Congress Committee” – but modified to reflect its AI origin: “This AI-generated ad was paid for by the Sam Jones for Congress Committee.” A stronger version could require: “This AI-generated message is being sent to you by the Sam Jones for Congress Committee because Clogger has predicted that doing so will increase your chances of voting for Sam Jones by 0.0002%.” At the very least, we believe voters deserve to know when it is a bot speaking to them, and they should know why, as well.

The possibility of a system like Clogger shows that the path toward human collective disempowerment may not require some superhuman artificial general intelligence. It might just require overeager campaigners and consultants who have powerful new tools that can effectively push millions of people’s many buttons.


Archon Fung consults for Apple University.

Lawrence Lessig does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

———-

By: Archon Fung, Professor of Citizenship and Self-Government, Harvard Kennedy School
Title: How AI could take over elections – and undermine democracy
Sourced From: theconversation.com/how-ai-could-take-over-elections-and-undermine-democracy-206051
Published Date: Fri, 02 Jun 2023 13:42:24 +0000

The Conversation

When you lose your health insurance, you may also lose your primary doctor – and that hurts your health


theconversation.com – Jane Tavares, Senior Research Fellow and Lecturer of Gerontology, UMass Boston – 2025-06-17 07:36:00


Losing health insurance or switching to plans with limited preventive care disrupts the critical bond with primary care providers, leading to missed checkups, late diagnoses, worsening health, and higher medical expenses. Research shows that consistent care improves health, lowers costs, and ensures timely preventive services. Millions risk losing Medicaid coverage amid congressional budget debates, threatening these vital connections for low-income and disabled Americans. Uninsured individuals postpone care, leading to emergencies that raise costs across the health system. Medicaid acts as a health lifeline, enabling ongoing care and preventing crisis-driven treatment. Cutting funding could fracture care relationships, harming health outcomes and increasing system-wide costs.

Seeing the same doctor on a regular basis is good for your health.
Morsa Images/DigitalVision via Getty Images

Jane Tavares, UMass Boston and Marc Cohen, UMass Boston

When you lose your health insurance or switch to a plan that skimps on preventive care, something critical breaks.

The connection to your primary care provider, usually a doctor, gets severed. You stop getting routine checkups. Warning signs get missed. Medical problems that could have been caught early become emergencies. And because emergencies are both dangerous and expensive, your health gets worse while your medical bills climb.

As gerontology researchers who study health and financial well-being in later life, we’ve analyzed how someone’s ties to the health care system strengthen or unravel depending on whether they have insurance coverage. What we’ve found is simple: Staying connected to a trusted doctor keeps you healthier and saves the system money. Breaking that link does just the opposite.

And that’s exactly what has us worried right now. Members of Congress are debating whether to make major cuts to Medicaid and other social safety net programs. If the Senate passes its own version of the tax-and-spending package that the House approved in May 2025, millions of Americans will soon face exactly this kind of disruption – with big consequences for their health and well-being.

How people end up uninsured

Someone can lose their health insurance for a number of reasons. For many Americans, coverage is tied to employment. Being fired, retiring before you turn 65 and become eligible to enroll in the Medicare program, or even getting a new job can mean losing insurance. Others wind up uninsured because of other life changes: moving to a different state, getting divorced or aging out of a parent’s plan after their 26th birthday.

And those who buy their own coverage may find that they can no longer afford the premiums. In 2024, average premiums on the individual market exceeded US$600 per month for many adults, even with subsidies.

Government-sponsored insurance programs can also leave you vulnerable to this predicament. The Senate is currently considering its own version of a tax-and-spending bill the House of Representatives passed in May that would make cuts and changes to Medicaid. If the provisions in the House bill are enacted, millions of Americans who get health insurance through Medicaid – a health insurance program jointly run by the federal government and the states that is mainly for people who have low incomes or disabilities – would lose their coverage, according to the nonpartisan Congressional Budget Office.

Medicaid was established in the 1960s, explains a scholar of the program’s history.

Consequences of becoming uninsured

Health insurance is more than a way to pay medical bills; it’s a doorway into the health care system itself. It connects people to health care providers who come to know their medical history, their medications and their personal circumstances.

When that door closes, the effects are immediate. Uninsured people are much less likely to have a usual source of care – typically a doctor or another primary care provider or clinic you know and trust. That relationship acts as a foundation for managing chronic conditions, staying current with preventive screenings and getting guidance when new symptoms arise.

Researchers have found that adults who go uninsured for even six months become significantly more likely to postpone care or forgo it altogether to save money. In practical terms, this means they’re less likely to be examined by someone who knows their medical history and can spot red flags early.

The Affordable Care Act, the landmark health care law enacted during the Obama administration, made the number of Americans without insurance plummet. The share of people without insurance fell from 16% in 2010 to 7.7% in 2023.

The people who got insurance coverage, particularly those who were middle age, saw big improvements in their health.

Researching the results

In research that looked at data collected from 2014 to 2020, we followed what happened to 12,000 adults who were 50 or older and lived across the nation.

Our research team analyzed how their experiences changed when they lost, and sometimes later regained, a regular source of care during those six years.

Many of the participants in this study had multiple chronic conditions like diabetes, hypertension and heart disease.

The results were striking.

Those who didn’t see the same provider on a regular basis were far less likely to feel heard or respected by health care professionals. They had fewer medical appointments, filled fewer prescriptions and were less likely to follow through with recommended treatments.

Their health also deteriorated considerably over the six years. Their blood pressure and blood sugar levels rose, and they had more elevated indicators of kidney impairment compared with their counterparts who had regular care providers.

The longer they went without consistent health care, the worse these clinical markers became.

Warning signs

Preventive care is one of the best tools that both patients and their health care providers have to head off major health problems. This care includes screenings like cholesterol and blood pressure checks, mammograms, Pap smears and prostate exams, as well as routine vaccinations. But most people only get preventive care when they stay engaged with the health care system.

And that’s far more likely when you have stable and comprehensive health insurance coverage.

Our research team also examined what happened to preventive care based on whether the participants had a regular doctor. We found that those who kept seeing the same providers were almost three times more likely to get basic preventive services than those who did not.

Over time, these missed preventive care opportunities can add up to a big problem. They can turn what could have been a manageable issue into an emergency room visit or a long, expensive hospital stay.

For example, imagine a man in his 50s who no longer gets cholesterol screenings after losing insurance coverage. Over several years, his undiagnosed high cholesterol leads to a heart attack that could have been prevented with early medication. Or a woman who skips mammograms because of out-of-pocket costs, only to face a late-stage cancer diagnosis that might have been caught years earlier.

People in scrubs work and mill about in a hospital emergency room.
Waiting too long to deal with a health condition can mean you make a trip to the emergency room, increasing the cost of care for you and others.
FS Productions/Tetra images via Getty Images

Shifting the costs

Patients whose conditions take too long to be diagnosed aren’t the only ones who pay the price.

We also studied how stable care relationships affect health care spending. To do this, we linked Medicare claims cost data to our original study and tracked the medical costs of the same adults age 50 and older from 2014 to 2020. One of our key findings is that people with regular care providers were 38% less likely to incur above-average health care costs.

These savings aren’t just for patients – they ripple through the entire health care system. Primary care stability lowers costs for both public and private health insurers and, ultimately, for taxpayers.

But when people lose their health care coverage, those savings disappear.

Emergency rooms see more uninsured patients seeking care that could have been handled earlier and more cheaply in a clinic or doctor’s office. While hospitals are legally required to provide emergency care regardless of a patient’s ability to pay, much of the resulting cost goes unreimbursed.

Hospitals foot the bill for about two-thirds of those losses. They pass the other third along to private insurance companies through higher hospital fees. Those insurers, in turn, raise their customers’ premiums. Larger taxpayer subsidies can then be required to keep hospitals open.

Seeing Medicaid as a lifeline

For the nearly 80 million Americans enrolled in Medicaid, the program provides more than coverage.

It contributes to the health care stability our research shows is critical for good health. Medicaid makes it possible for many Americans with serious medical conditions to have a regular doctor, get routine preventive services and have someone to turn to when symptoms arise – even when they have low incomes. It helps prevent health care from becoming purely crisis-driven.

As Congress considers cutting Medicaid funding by hundreds of billions of dollars, we believe that lawmakers should realize that scaling back coverage would break the fragile links between millions of patients and the providers who know them best.

Jane Tavares, Senior Research Fellow and Lecturer of Gerontology, UMass Boston and Marc Cohen, Professor of Gerontology, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.





Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. – Staff Editor

Political Bias Rating: Left-Leaning

This article presents a strong case in favor of maintaining or expanding government-funded health insurance programs like Medicaid, using empirical research to emphasize the benefits of continuous coverage and the harms of potential cuts. While it relies on data and expert opinion, the framing consistently warns against proposed Republican-led cuts to Medicaid, characterizing them as harmful and disruptive. The language portrays these policy shifts in a negative light, without presenting counterarguments or alternative fiscal perspectives, which contributes to a left-leaning tone in support of social safety nets and expansive health care coverage.


The Conversation

Making facsimiles of the dead raises ethical quandaries


theconversation.com – Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston – 2025-06-17 07:36:00


AI reanimations of the dead—used in courtrooms, concerts, and classrooms—are raising serious ethical concerns. These deepfakes may lack the consent of the deceased and risk distorting their legacy. While some argue they can educate or inspire, critics say such re-creations manipulate emotions and potentially exploit the dead for political, legal, or commercial gain. Unlike griefbots, which help loved ones cope, reanimations present a curated illusion that may conflict with the person’s real beliefs. By making the dead “speak” again, we risk cheapening their memory and overlooking our own ability to reflect, imagine, and interpret their lives with integrity.

This screenshot of an AI-generated video depicts Christopher Pelkey, who was killed in 2021.
Screenshot: Stacey Wales/YouTube

Nir Eisikovits, UMass Boston and Daniel J. Feldman, UMass Boston

Christopher Pelkey was shot and killed in a road rage incident in 2021. On May 8, 2025, at the sentencing hearing for his killer, an AI video reconstruction of Pelkey delivered a victim impact statement. The trial judge reported being deeply moved by this performance and issued the maximum sentence for manslaughter.

As part of the ceremonies to mark Israel’s 77th year of independence on April 30, 2025, officials had planned to host a concert featuring four iconic Israeli singers. All four had died years earlier. The plan was to conjure them using AI-generated sound and video. The dead performers were supposed to sing alongside Yardena Arazi, a famous and still very much alive artist. In the end Arazi pulled out, citing the political atmosphere, and the event didn’t happen.

In April, the BBC created a deepfake version of the famous mystery writer Agatha Christie to teach a “maestro course on writing.” Fake Agatha would instruct aspiring murder mystery authors and “inspire” their “writing journey.”

The use of artificial intelligence to “reanimate” the dead for a variety of purposes is quickly gaining traction. Over the past few years, we’ve been studying the moral implications of AI at the Center for Applied Ethics at the University of Massachusetts, Boston, and we find these AI reanimations to be morally problematic.

Before we address the moral challenges the technology raises, it’s important to distinguish AI reanimations, or deepfakes, from so-called griefbots. Griefbots are chatbots trained on large swaths of data the dead leave behind – social media posts, texts, emails, videos. These chatbots mimic how the departed used to communicate and are meant to make life easier for surviving relations. The deepfakes we are discussing here have other aims; they are meant to promote legal, political and educational causes.

Chris Pelkey was shot and killed in 2021. This AI ‘reanimation’ of him was presented in court as a victim impact statement.

Moral quandaries

The first moral quandary the technology raises has to do with consent: Would the deceased have agreed to do what their likeness is doing? Would the dead Israeli singers have wanted to sing at an Independence ceremony organized by the nation’s current government? Would Pelkey, the road-rage victim, be comfortable with the script his family wrote for his avatar to recite? What would Christie think about her AI double teaching that class?

The answers to these questions can only be deduced circumstantially – from examining the kinds of things the dead did and the views they expressed when alive. And one could ask if the answers even matter. If those in charge of the estates agree to the reanimations, isn’t the question settled? After all, such trustees are the legal representatives of the departed.

But putting aside the question of consent, a more fundamental question remains.

What do these reanimations do to the legacy and reputation of the dead? Doesn’t their reputation depend, to some extent, on the scarcity of appearance, on the fact that the dead can’t show up anymore? Dying can have a salutary effect on the reputation of prominent people; it was good for John F. Kennedy, and it was good for Israeli Prime Minister Yitzhak Rabin.

The fifth-century B.C. Athenian leader Pericles understood this well. In his famous Funeral Oration, delivered at the end of the first year of the Peloponnesian War, he asserts that a noble death can elevate one’s reputation and wash away their petty misdeeds. That is because the dead are beyond reach and their mystique grows postmortem. “Even extreme virtue will scarcely win you a reputation equal to” that of the dead, he insists.

Do AI reanimations devalue the currency of the dead by forcing them to keep popping up? Do they cheapen and destabilize their reputation by having them comment on events that happened long after their demise?

In addition, these AI representations can be a powerful tool to influence audiences for political or legal purposes. Bringing back a popular dead singer to legitimize a political event and reanimating a dead victim to offer testimony are acts intended to sway an audience’s judgment.

It’s one thing to channel a Churchill or a Roosevelt during a political speech by quoting them or even trying to sound like them. It’s another thing to have “them” speak alongside you. The potential of harnessing nostalgia is supercharged by this technology. Imagine, for example, what the Soviets, who literally worshipped Lenin’s dead body, would have done with a deepfake of their old icon.

Good intentions

You could argue that because these reanimations are uniquely engaging, they can be used for virtuous purposes. Consider a reanimated Martin Luther King Jr., speaking to our currently polarized and divided nation, urging moderation and unity. Wouldn’t that be grand? Or what about a reanimated Mordechai Anielewicz, the commander of the Warsaw Ghetto uprising, speaking at the trial of a Holocaust denier like David Irving?

But do we know what MLK would have thought about our current political divisions? Do we know what Anielewicz would have thought about restrictions on pernicious speech? Does bravely campaigning for civil rights mean we should call upon the digital ghost of King to comment on the impact of populism? Does fearlessly fighting the Nazis mean we should dredge up the AI shadow of an old hero to comment on free speech in the digital age?

a man in a suit and tie stands in front of a microphone
No one can know with certainty what Martin Luther King Jr. would say about today’s society.
AP Photo/Chick Harrity

Even if the political projects these AI avatars served were consistent with the deceased’s views, the problem of manipulation – of using the psychological power of deepfakes to appeal to emotions – remains.

But what about enlisting AI Agatha Christie to teach a writing class? Deepfakes may indeed have salutary uses in educational settings. The likeness of Christie could make students more enthusiastic about writing. Fake Aristotle could improve the chances that students engage with his austere Nicomachean Ethics. AI Einstein could help those who want to study physics get their heads around general relativity.

But producing these fakes comes with a great deal of responsibility. After all, given how engaging they can be, it’s possible that the interactions with these representations will be all that students pay attention to, rather than serving as a gateway to exploring the subject further.

Living on in the living

In a poem written in memory of W.B. Yeats, W.H. Auden tells us that, after the poet’s death, Yeats “became his admirers.” His memory was now “scattered among a hundred cities,” and his work subject to endless interpretation: “the words of a dead man are modified in the guts of the living.”

The dead live on in the many ways we reinterpret their words and works. Auden did that to Yeats, and we’re doing it to Auden right here. That’s how people stay in touch with those who are gone. In the end, we believe that using technological prowess to concretely bring them back disrespects them and, perhaps more importantly, is an act of disrespect to ourselves – to our capacity to abstract, think and imagine.

Nir Eisikovits, Professor of Philosophy and Director, Applied Ethics Center, UMass Boston and Daniel J. Feldman, Senior Research Fellow, Applied Ethics Center, UMass Boston

This article is republished from The Conversation under a Creative Commons license. Read the original article.





Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. – Staff Editor

Political Bias Rating: Centrist

This content presents a balanced and thoughtful discussion on the ethical implications of using AI to “reanimate” deceased individuals’ likenesses. It raises concerns about consent, legacy, manipulation, and the moral responsibilities involved, without advocating strongly for or against a particular political ideology. The examples and references span different contexts and perspectives, from legal and political uses to educational applications, addressing both potential risks and benefits. The tone is analytical, focused on ethical considerations rather than partisan viewpoints, aligning it with a centrist approach to the topic.


The Conversation

Robots run out of energy long before they run out of work to do – feeding them could change that


theconversation.com – James Pikul, Associate Professor of Mechanical Engineering, University of Wisconsin-Madison – 2025-06-02 07:45:00


Earlier this year, a robot completed a half-marathon in just under 2 hours 40 minutes, showcasing impressive agility but limited endurance. Unlike animals that store energy in dense fat, robots rely on lithium-ion batteries, which offer far less energy density and require frequent recharging, limiting operational time. Current robots like Boston Dynamics’ Spot function for around 90 minutes per charge, far less than biological endurance. New battery chemistries and fast-charging technologies may help, but challenges remain. Researchers are exploring bioinspired “robotic metabolism” systems, where robots “digest” fuels and circulate energy like blood, promising enhanced endurance, adaptability, and resilience beyond current limitations.

Robots can run, but they can’t go the distance.
AP Photo/Ng Han Guan

James Pikul, University of Wisconsin-Madison

Earlier this year, a robot completed a half-marathon in Beijing in just under 2 hours and 40 minutes. That’s slower than the human winner, who clocked in at just over an hour – but it’s still a remarkable feat. Many recreational runners would be proud of that time. The robot kept its pace for more than 13 miles (21 kilometers).

But it didn’t do so on a single charge. Along the way, the robot had to stop and have its batteries swapped three times. That detail, while easy to overlook, speaks volumes about a deeper challenge in robotics: energy.

Modern robots can move with incredible agility, mimicking animal locomotion and executing complex tasks with mechanical precision. In many ways, they rival biology in coordination and efficiency. But when it comes to endurance, robots still fall short. They don’t tire from exertion – they simply run out of power.

As a robotics researcher focused on energy systems, I study this challenge closely. How can researchers give robots the staying power of living creatures – and why are we still so far from that goal? Though most robotics research into the energy problem has focused on better batteries, there is another possibility: Build robots that eat.

Robots move well but run out of steam

Modern robots are remarkably good at moving. Thanks to decades of research in biomechanics, motor control and actuation, machines such as Boston Dynamics’ Spot and Atlas can walk, run and climb with an agility that once seemed out of reach. In some cases, their motors are even more efficient than animal muscles.

But endurance is another matter. Spot, for example, can operate for just 90 minutes on a full charge. After that, it needs nearly an hour to recharge. These runtimes are a far cry from the eight- to 12-hour shifts expected of human workers – or the multiday endurance of sled dogs.

The issue isn’t how robots move – it’s how they store energy. Most mobile robots today use lithium-ion batteries, the same type found in smartphones and electric cars. These batteries are reliable and widely available, but their performance improves at a slow pace: Each year new lithium-ion batteries are about 7% better than the previous generation. At that rate, it would take a full decade to merely double a robot’s runtime.
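A quick compounding check bears out the decade figure, using only the roughly 7% rate cited above.

```python
import math

annual_improvement = 0.07  # the roughly 7% yearly gain cited above
years_to_double = math.log(2) / math.log(1 + annual_improvement)
print(f"{years_to_double:.1f} years")  # ~10.2 years, i.e., roughly a decade
```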

Robots such as Boston Dynamics’ Atlas are remarkably capable – for relatively short amounts of time.

Animals store energy in fat, which is extraordinarily energy dense: nearly 9 kilowatt-hours per kilogram. That’s about 68 kWh total in a sled dog, similar to the energy in a fully charged Tesla Model 3. Lithium-ion batteries, by contrast, store just a fraction of that, about 0.25 kilowatt-hours per kilogram. Even with highly efficient motors, a robot like Spot would need a battery dozens of times more powerful than today’s to match the endurance of a sled dog.
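Running the article’s own figures makes the gap explicit; this short calculation uses only the numbers quoted in the paragraph above.

```python
fat_kwh_per_kg = 9.0        # energy density of animal fat, per the article
liion_kwh_per_kg = 0.25     # energy density of lithium-ion cells, per the article
sled_dog_energy_kwh = 68.0  # total energy in a sled dog's fat, per the article

print(fat_kwh_per_kg / liion_kwh_per_kg)       # 36.0: fat is dozens of times denser
print(sled_dog_energy_kwh / liion_kwh_per_kg)  # 272.0: kg of cells to match a sled dog
```

Hauling hundreds of kilograms of cells is not an option for a legged robot, which is the payload trade-off described below.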

And recharging isn’t always an option. In disaster zones, remote fields or on long-duration missions, a wall outlet or a spare battery might be nowhere in sight.

In some cases, robot designers can add more batteries. But more batteries mean more weight, which increases the energy required to move. In highly mobile robots, there’s a careful balance between payload, performance and endurance. For Spot, for example, the battery already makes up 16% of its weight.

Some robots have used solar panels, and in theory these could extend runtime, especially for low-power tasks or in bright, sunny environments. But in practice, solar power delivers very little power relative to what mobile robots need to walk, run or fly at practical speeds. That’s why energy harvesting with solar panels remains a niche solution today, better suited for stationary or ultra-low-power robots.

Why it matters

These aren’t just technical limitations. They define what robots can do.

A rescue robot with a 45-minute battery might not last long enough to complete a search. A farm robot that pauses to recharge every hour can’t harvest crops in time. Even in warehouses or hospitals, short runtimes add complexity and cost.

If robots are to play meaningful roles in society – assisting the elderly, exploring hazardous environments and working alongside humans – they need the endurance to stay active for hours, not minutes.

New battery chemistries such as lithium-sulfur and metal-air offer a more promising path forward. These systems have much higher theoretical energy densities than today’s lithium-ion cells. Some approach levels seen in animal fat. When paired with actuators that efficiently convert electrical energy from the battery to mechanical work, they could enable robots to match or even exceed the endurance of animals with low body fat. But even these next-generation batteries have limitations. Many are difficult to recharge, degrade over time or face engineering hurdles in real-world systems.

Fast charging can help reduce downtime. Some emerging batteries can recharge in minutes rather than hours. But there are trade-offs. Fast charging strains battery life, increases heat and often requires heavy, high-power charging infrastructure. Even with improvements, a fast-charging robot still needs to stop frequently. In environments without access to grid power, this doesn’t solve the core problem of limited onboard energy. That’s why researchers are exploring alternatives such as “refueling” robots with metal or chemical fuels – much like animals eat – to bypass the limits of electrical charging altogether.

Illustration of a humanoid robot putting a metal nut into its mouth.
Robots could one day harvest energy from high-energy-density materials such as aluminum through synthetic digestive and vascular systems.
Yichao Shi and James Pikul

An alternative: Robotic metabolism

In nature, animals don’t recharge; they eat. Food is converted into energy through digestion, circulation and respiration. Fat stores that energy, blood moves it and muscles use it. Future robots could follow a similar blueprint with synthetic metabolisms.

Some researchers are building systems that let robots “digest” metal or chemical fuels and breathe oxygen. For example, synthetic, stomachlike chemical reactors could convert high-energy materials such as aluminum into electricity.

This builds on the many advances in robot autonomy, where robots can sense objects in a room and navigate to pick them up, but here they would be picking up energy sources.

Other researchers are developing fluid-based energy systems that circulate like blood. One early example, a robotic fish, tripled its energy density by using a multifunctional fluid instead of a standard lithium-ion battery. That single design shift delivered the equivalent of 16 years of battery improvements, not through new chemistry but through a more bioinspired approach. These systems could allow robots to operate for much longer stretches of time, drawing energy from materials that store far more energy than today’s batteries.
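The 16-year equivalence is consistent with the roughly 7%-per-year improvement rate cited earlier, as a quick check shows.

```python
import math

density_multiplier = 3.0   # the robotic fish tripled its energy density
annual_improvement = 0.07  # the yearly lithium-ion gain cited earlier

equivalent_years = math.log(density_multiplier) / math.log(1 + annual_improvement)
print(f"{equivalent_years:.1f} years")  # ~16.2, matching the article's figure
```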

In animals, the energy system does more than just provide energy. Blood helps regulate temperature, deliver hormones, fight infections and repair wounds. Synthetic metabolisms could do the same. Future robots might manage heat using circulating fluids or heal themselves using stored or digested materials. Instead of a central battery pack, energy could be stored throughout the body in limbs, joints and soft, tissuelike components.

This approach could lead to machines that aren’t just longer-lasting but more adaptable, resilient and lifelike.

The bottom line

Today’s robots can leap and sprint like animals, but they can’t go the distance.

Their bodies are fast, their minds are improving, but their energy systems haven’t caught up. If robots are going to work alongside humans in meaningful ways, we’ll need to give them more than intelligence and agility. We’ll need to give them endurance.

James Pikul, Associate Professor of Mechanical Engineering, University of Wisconsin-Madison

This article is republished from The Conversation under a Creative Commons license. Read the original article.





Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hopes that it will promote greater media literacy and critical thinking, by making any potential bias more visible to the reader. – Staff Editor

Political Bias Rating: Centrist

This article presents a factual, science- and technology-focused discussion about the challenges of energy storage in robotics. It reports on current limitations and future research directions without advocating any political ideology or policy stance. The tone is neutral and informative, emphasizing technical innovation and potential benefits without framing the topic in a partisan context. There is no language or framing that suggests a left- or right-leaning bias; instead, it adheres to objective reporting of scientific progress and challenges.

