ChatGPT-powered Wall Street: The benefits and perils of using artificial intelligence to trade stocks and other financial instruments

Markets are increasingly driven by decisions made by AI.
PhonlamaiPhoto/iStock via Getty Images

Pawan Jain, West Virginia University

Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.

And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives.

I’ve been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street’s past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.

Program trading fuels Black Monday

In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.

Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index – like the S&P 500 – and that of the stocks it’s composed of.
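
To make the mechanics concrete, here is a minimal sketch of an index-arbitrage check in Python. Every price, weight, ticker and threshold in it is hypothetical, invented purely for illustration:

```python
# Index arbitrage sketch: compare the quoted price of an index product
# with the fair value implied by its constituent stocks.
# All prices, weights and the threshold below are hypothetical.

constituents = {"AAA": 150.00, "BBB": 98.50, "CCC": 47.25}  # stock -> price
weights = {"AAA": 0.5, "BBB": 0.3, "CCC": 0.2}              # index weights

index_quote = 112.80  # price at which the index product is trading

# Fair value implied by the underlying stocks.
fair_value = sum(weights[s] * constituents[s] for s in constituents)

threshold = 0.25  # minimum gap worth trading, after costs
gap = index_quote - fair_value

if gap > threshold:
    print(f"Index rich by {gap:.2f}: sell the index, buy the basket")
elif gap < -threshold:
    print(f"Index cheap by {-gap:.2f}: buy the index, sell the basket")
else:
    print("No profitable discrepancy")
```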

As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways – on which over a trillion dollars' worth of assets change hands every day – causing market volatility to increase dramatically.

Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.

In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings and other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.

This is how papers across the country headlined the stock market plunge on Black Monday, Oct. 19, 1987.
AP Photo

HFT: Program trading on steroids

Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to a more sophisticated form of automation built on much more advanced technology: high-frequency trading.

HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders, who bought and sold baskets of securities over time to take advantage of an arbitrage opportunity – a difference in the prices of similar securities that can be exploited for profit – high-frequency traders use powerful computers and high-speed networks to do the same at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.

These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.

Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.
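
As a rough illustration of the idea – not any real trading system – here is a toy lexicon-based sentiment scorer. The word lists, headlines and trading rule are all invented, and production systems use trained language models rather than hand-made lists:

```python
# Toy illustration of lexicon-based sentiment scoring on headlines.
# Word lists and headlines are invented for illustration only.

POSITIVE = {"beats", "growth", "upgrade", "record", "surge"}
NEGATIVE = {"misses", "lawsuit", "downgrade", "recall", "plunge"}

def sentiment(headline: str) -> int:
    """Count positive minus negative words in a headline."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Acme beats earnings estimates, raises guidance",
    "Regulator opens lawsuit over Acme product recall",
]

score = sum(sentiment(h) for h in headlines)
signal = "buy" if score > 0 else "sell" if score < 0 else "hold"
print(score, signal)  # -1 sell: net sentiment across headlines is negative
```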

Benefits of AI trading

These AI-based, high-frequency traders operate very differently than people do.

The human brain is slow, inaccurate and forgetful. It is incapable of the quick, high-precision floating-point arithmetic needed to analyze huge volumes of data and identify trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and a limitless capacity for analyzing large volumes of data in fractions of a millisecond.

And, so, just like most technologies, HFT provides several benefits to stock markets.

These traders typically stand ready to buy and sell assets at prices very close to the prevailing market price, which keeps the effective cost of trading low for investors. Their constant presence helps ensure that there are always buyers and sellers in the market, which in turn helps stabilize prices and reduce the potential for sudden price swings.

High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.

Stock exchanges used to be packed with traders buying and selling securities, as in this scene from 1983. Today’s trading floors are increasingly empty as AI-powered computers handle more and more of the work.
AP Photo/Richard Drew

The downsides

But speed and efficiency can also cause harm.

HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.

Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes – erasing and then restoring about $1 trillion in market value.

Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility – a measure of how rapidly and unpredictably prices move up and down – increased significantly after the introduction of HFT.

The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.

In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That’s because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.

This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
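
A toy simulation makes the danger visible. The trader count, noise levels and news shock below are arbitrary assumptions, chosen only to contrast diverse strategies with near-identical algorithms:

```python
# Toy simulation of the crowding risk described above: when traders run
# similar algorithms, one piece of news pushes them all to the same side
# of the market. All parameters are invented for illustration.
import random

random.seed(0)
N_TRADERS = 100
news = -1.0  # a negative news shock, seen by everyone

def decision(noise_scale: float) -> str:
    """Each trader trades on the shared signal plus private noise."""
    signal = news + random.gauss(0, noise_scale)
    return "sell" if signal < 0 else "buy"

for label, noise in [("diverse strategies", 2.0), ("similar algorithms", 0.1)]:
    sells = sum(decision(noise) == "sell" for _ in range(N_TRADERS))
    print(f"{label}: {sells} sellers, {N_TRADERS - sells} buyers")
# With similar algorithms nearly everyone sells at once,
# leaving almost no one to take the other side of the trade.
```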

Enter ChatGPT

That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.

In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone’s deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.

Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.

Since the decisions made by a generative AI-powered chatbot are based on past training data, the recommendations it gives different users would be similar. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes.

This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when systems are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.

In addition, market crashes are relatively rare, so there isn't much data on them. Since generative AIs depend on training data to learn, this gap in their knowledge could make crashes more likely.

For now, at least, it seems most banks won’t be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.

But I strongly believe banks will eventually embrace generative AI, once they resolve concerns they have with it. The potential gains are too significant to pass up – and there’s a risk of being left behind by rivals.

But the risks to financial markets, the global economy and everyone else are also great, so I hope they tread carefully.

Pawan Jain, Assistant Professor of Finance, West Virginia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Robots run out of energy long before they run out of work to do − feeding them could change that

theconversation.com – James Pikul, Associate Professor of Mechanical Engineering, University of Wisconsin-Madison – 2025-06-02 07:45:00


Earlier this year, a robot completed a half-marathon in just under 2 hours 40 minutes, showcasing impressive agility but limited endurance. Unlike animals that store energy in dense fat, robots rely on lithium-ion batteries, which offer far less energy density and require frequent recharging, limiting operational time. Current robots like Boston Dynamics' Spot run for around 90 minutes per charge, far less than biological endurance. New battery chemistries and fast-charging technologies may help, but challenges remain. Researchers are exploring bioinspired "robotic metabolism" systems, where robots "digest" fuels and circulate energy like blood, promising enhanced endurance, adaptability, and resilience beyond current limitations.

Robots can run, but they can’t go the distance.
AP Photo/Ng Han Guan

James Pikul, University of Wisconsin-Madison

Earlier this year, a robot completed a half-marathon in Beijing in just under 2 hours and 40 minutes. That’s slower than the human winner, who clocked in at just over an hour – but it’s still a remarkable feat. Many recreational runners would be proud of that time. The robot kept its pace for more than 13 miles (21 kilometers).

But it didn’t do so on a single charge. Along the way, the robot had to stop and have its batteries swapped three times. That detail, while easy to overlook, speaks volumes about a deeper challenge in robotics: energy.

Modern robots can move with incredible agility, mimicking animal locomotion and executing complex tasks with mechanical precision. In many ways, they rival biology in coordination and efficiency. But when it comes to endurance, robots still fall short. They don’t tire from exertion – they simply run out of power.

As a robotics researcher focused on energy systems, I study this challenge closely. How can researchers give robots the staying power of living creatures – and why are we still so far from that goal? Though most robotics research into the energy problem has focused on better batteries, there is another possibility: Build robots that eat.

Robots move well but run out of steam

Modern robots are remarkably good at moving. Thanks to decades of research in biomechanics, motor control and actuation, machines such as Boston Dynamics’ Spot and Atlas can walk, run and climb with an agility that once seemed out of reach. In some cases, their motors are even more efficient than animal muscles.

But endurance is another matter. Spot, for example, can operate for just 90 minutes on a full charge. After that, it needs nearly an hour to recharge. These runtimes are a far cry from the eight- to 12-hour shifts expected of human workers – or the multiday endurance of sled dogs.

The issue isn’t how robots move – it’s how they store energy. Most mobile robots today use lithium-ion batteries, the same type found in smartphones and electric cars. These batteries are reliable and widely available, but their performance improves at a slow pace: Each year new lithium-ion batteries are about 7% better than the previous generation. At that rate, it would take a full decade to merely double a robot’s runtime.

Robots such as Boston Dynamics' Atlas are remarkably capable – for relatively short amounts of time.

Animals store energy in fat, which is extraordinarily energy dense: nearly 9 kilowatt-hours per kilogram. That's about 68 kWh total in a sled dog, similar to the energy in a fully charged Tesla Model 3. Lithium-ion batteries, by contrast, store just a fraction of that, about 0.25 kilowatt-hours per kilogram. Even with highly efficient motors, a robot like Spot would need a battery dozens of times more energy dense than today's to match the endurance of a sled dog.
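
A back-of-the-envelope calculation using the article's own figures shows the size of the gap:

```python
# Energy-density comparison using the figures in the article:
# fat stores ~9 kWh/kg, lithium-ion batteries ~0.25 kWh/kg.
fat_kwh_per_kg = 9.0
battery_kwh_per_kg = 0.25

ratio = fat_kwh_per_kg / battery_kwh_per_kg
print(f"Fat is {ratio:.0f}x more energy dense than lithium-ion")  # 36x

# A sled dog's ~68 kWh of stored energy, expressed as battery mass:
sled_dog_kwh = 68.0
print(f"{sled_dog_kwh / battery_kwh_per_kg:.0f} kg of batteries")  # 272 kg
```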

And recharging isn’t always an option. In disaster zones, remote fields or on long-duration missions, a wall outlet or a spare battery might be nowhere in sight.

In some cases, robot designers can add more batteries. But more batteries mean more weight, which increases the energy required to move. In highly mobile robots, there’s a careful balance between payload, performance and endurance. For Spot, for example, the battery already makes up 16% of its weight.

Some robots have used solar panels, and in theory these could extend runtime, especially for low-power tasks or in bright, sunny environments. But in practice, solar power delivers very little power relative to what mobile robots need to walk, run or fly at practical speeds. That’s why energy harvesting like solar panels remains a niche solution today, better suited for stationary or ultra-low-power robots.

Why it matters

These aren’t just technical limitations. They define what robots can do.

A rescue robot with a 45-minute battery might not last long enough to complete a search. A farm robot that pauses to recharge every hour can’t harvest crops in time. Even in warehouses or hospitals, short runtimes add complexity and cost.

If robots are to play meaningful roles in society assisting the elderly, exploring hazardous environments and working alongside humans, they need the endurance to stay active for hours, not minutes.

New battery chemistries such as lithium-sulfur and metal-air offer a more promising path forward. These systems have much higher theoretical energy densities than today’s lithium-ion cells. Some approach levels seen in animal fat. When paired with actuators that efficiently convert electrical energy from the battery to mechanical work, they could enable robots to match or even exceed the endurance of animals with low body fat. But even these next-generation batteries have limitations. Many are difficult to recharge, degrade over time or face engineering hurdles in real-world systems.

Fast charging can help reduce downtime. Some emerging batteries can recharge in minutes rather than hours. But there are trade-offs. Fast charging strains battery life, increases heat and often requires heavy, high-power charging infrastructure. Even with improvements, a fast-charging robot still needs to stop frequently. In environments without access to grid power, this doesn’t solve the core problem of limited onboard energy. That’s why researchers are exploring alternatives such as “refueling” robots with metal or chemical fuels – much like animals eat – to bypass the limits of electrical charging altogether.

Robots could one day harvest energy from high-energy-density materials such as aluminum through synthetic digestive and vascular systems.
Yichao Shi and James Pikul

An alternative: Robotic metabolism

In nature, animals don’t recharge, they eat. Food is converted into energy through digestion, circulation and respiration. Fat stores that energy, blood moves it and muscles use it. Future robots could follow a similar blueprint with synthetic metabolisms.

Some researchers are building systems that let robots “digest” metal or chemical fuels and breathe oxygen. For example, synthetic, stomachlike chemical reactors could convert high-energy materials such as aluminum into electricity.

This builds on the many advances in robot autonomy, where robots can sense objects in a room and navigate to pick them up, but here they would be picking up energy sources.

Other researchers are developing fluid-based energy systems that circulate like blood. One early example, a robotic fish, tripled its energy density by using a multifunctional fluid instead of a standard lithium-ion battery. That single design shift delivered the equivalent of 16 years of battery improvements, not through new chemistry but through a more bioinspired approach. These systems could allow robots to operate for much longer stretches of time, drawing energy from materials that store far more energy than today’s batteries.

In animals, the energy system does more than just provide energy. Blood helps regulate temperature, deliver hormones, fight infections and repair wounds. Synthetic metabolisms could do the same. Future robots might manage heat using circulating fluids or heal themselves using stored or digested materials. Instead of a central battery pack, energy could be stored throughout the body in limbs, joints and soft, tissuelike components.

This approach could lead to machines that aren’t just longer-lasting but more adaptable, resilient and lifelike.

The bottom line

Today’s robots can leap and sprint like animals, but they can’t go the distance.

Their bodies are fast, their minds are improving, but their energy systems haven't caught up. If robots are going to work alongside humans in meaningful ways, we'll need to give them more than intelligence and agility. We'll need to give them endurance.

James Pikul, Associate Professor of Mechanical Engineering, University of Wisconsin-Madison

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hope that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Centrist

This article presents a factual, science- and technology-focused discussion about the challenges of energy storage in robotics. It reports on current limitations and future research directions without advocating any political ideology or policy stance. The tone is neutral and informative, emphasizing technical innovation and potential benefits without framing the topic in a partisan context. There is no language or framing that suggests a left- or right-leaning bias; instead, it adheres to objective reporting of scientific progress and challenges.

Our trans health study was terminated by the government – the effects of abrupt NIH grant cuts ripple across science and society

theconversation.com – Jae A. Puckett, Associate Professor of Psychology, Michigan State University – 2025-06-02 07:44:00


The Trump administration abruptly terminated federally funded research on transgender and nonbinary health, including a four-year NIH-supported study on resilience in these communities. This termination, based on ideological grounds, undermines decades of scientific progress, dismissing valid research and harming both the scientific workforce and community trust. The project had collected extensive data and developed new resilience measures, but funding cuts jeopardize the careers of researchers and reduce future training opportunities. The loss wastes millions of taxpayer dollars and halts valuable insights into improving trans health, while government reports contradict established science on gender-affirming care, promoting misinformation instead.

Funding cuts to trans health research are part of the Trump administration’s broader efforts to medically and legally restrict trans rights.
AP Photo/Lindsey Wasson

Jae A. Puckett, Michigan State University and Paz Galupo, Washington University in St. Louis

Given the Trump administration’s systematic attempts to medically and legally disenfranchise trans people, and its abrupt termination of grants focused on LGBTQ+ health, we can’t say that the notice of termination we received regarding our federally funded research on transgender and nonbinary people’s health was unexpected.

As researchers who study the experiences of trans and nonbinary people, we have collectively dedicated nearly 50 years of our scientific careers to developing ways to address the health disparities negatively affecting these communities. The National Institutes of Health had placed a call for projects on this topic, and we had successfully applied for their support for our four-year study on resilience in trans communities.

However, our project on trans health became one of the hundreds of grants that have been terminated on ideological grounds. The termination notice stated that the grant no longer fit agency priorities and claimed that this work was not based on scientific research.

Termination notice sent to the authors from the National Institutes of Health.
Jae A. Puckett and Paz Galupo, CC BY-ND

These grant terminations undermine decades of science on gender diversity by dismissing research findings and purging data. During Trump’s current term, the NIH’s Sexual and Gender Minority Research Office was dismantled, references to LGBTQ+ people were removed from health-related websites, and datasets were removed from public access.

The effects of ending research on trans health ripple throughout the scientific community, the communities served by this work and the U.S. economy.

Studying resilience

Research focused on the mental health of trans and nonbinary people has grown substantially in recent years. Over time, this work has expanded beyond understanding the hardships these communities face to also study their resilience and positive life experiences.

Resilience is often understood as an ability to bounce back from challenges. For trans and nonbinary people experiencing gender-based stigma and discrimination, resilience can take several forms. This might look like simply continuing to survive in a transphobic climate, or it might take the form of being a role model for other trans and nonbinary people.

As a result of gender-based stigma and discrimination, trans and nonbinary people experience a range of health disparities, from elevated rates of psychological distress to heightened risk for chronic health conditions and poor physical health. In the face of these challenges and growing anti-trans legislation in the U.S., we believe that studying resilience in these communities can provide insights into how to offset the harms of these stresses.

Studies show anti-trans legislation is harming the mental health of LGBTQ+ youth.

With the support of the NIH, we began our work in earnest in 2022. The project was built on many years of research from our teams preceding the grant. From the beginning, we collaborated with trans and nonbinary community members to ensure our research would be attuned to the needs of the community.

At the time our grant was terminated, we were nearing completion of Year 3 of our four-year project. We had collected data from over 600 trans and nonbinary participants across the U.S. and started to follow their progress over time. We had developed a new way to measure resilience among trans and nonbinary people and were about to publish a second measure specifically tailored to people of color.

The termination of our grant and others like it harms our immediate research team, the communities we worked with and the field more broadly.

Loss of scientific workforce

For many researchers in trans health, the losses from these cuts go beyond employment.

Our project had served as a training opportunity for the students and early career professionals involved in the study, providing them with the research experience and mentorship necessary to advance their careers. But with the termination of our funding, two full-time researchers and at least three students will lose their positions. The three lead scientists have lost parts of their salaries and dedicated research time.

These NIH cuts will likely result in the loss of much of the next generation of trans researchers and the contributions they would have made to science and society. Our team and other labs in similar situations will be less likely to work with graduate students due to a lack of available funding to pay and support them. This changes the landscape for future scientists, as it means there will be fewer opportunities for individuals interested in these areas of research to enter graduate training programs.

The Trump administration has directly penalized universities across the country for ‘ideological overreach.’
Zhu Ziyu/VCG via Getty Images

As universities struggle to address federal funding cuts, junior academics will be less likely to gain tenure, and faculty in grant-funded positions may lose their jobs. Universities may also become hesitant to hire people who work in these areas because their research has essentially been banned from federal funding options.

Loss of community trust

Trans and nonbinary people have often been studied under opportunistic and demeaning circumstances. This includes when researchers collect data for their own gains but return little to the communities they work with, or when they do research that perpetuates theories that pathologize those communities. As a result, many are often reluctant to participate in research.

To overcome this reluctance, we grounded our study on community input. We involved an advisory board composed of local trans and nonbinary community members who helped to inform how we conducted our study and measured our findings.

Our work on resilience has been inspired by feedback we received from previous research participants who said that “[trans people] matter even when not in pain.”

Abruptly terminating projects like these can break down trust between researchers and the populations they study.

Loss of scientific knowledge

Research that focuses on the strengths of trans and nonbinary communities is in its infancy. The termination of our grant has led to the loss of the insights our study would have provided on ways to improve health among trans and nonbinary people and future work that would have built off our findings. Resilience is a process that takes time to unfold, and we had not finished the longitudinal data collection in our study – nor will we have the protected time to publish and share other findings from this work.

Meanwhile, the Department of Health and Human Services released a May 2025 report stating that there is not enough evidence to support gender-affirming care for young people, contradicting decades of scientific research. Scientists, researchers and medical professional organizations have widely criticized the report as misrepresenting study findings, dismissing research showing benefits to gender-affirming care, and promoting misinformation rejected by major medical associations. Instead, the report recommends “exploratory therapy,” which experts have likened to discredited conversion therapy.

Transgender and nonbinary people continue to exist, regardless of legislation.
Kayla Bartkowski/Getty Images

Despite claims that there is insufficient research on gender-affirming care and more data is needed on the health of trans and nonbinary people, the government has chosen to divest from actual scientific research about trans and nonbinary people’s lives.

Loss of taxpayer dollars

The termination of our grant means we are no longer able to achieve the aims of the project, which depended on the collection and analysis of data over time. This wastes the three years of NIH funding already spent on the project.

Scientists and experts who participated in the review of our NIH grant proposal rated our project more highly than 96% of the projects we competed against. Even so, the government made the unscientific choice to override these decisions and terminate our work.

Millions of taxpayer dollars have already been invested in these grants to improve the health of not only trans and nonbinary people, but also American society as a whole. With the termination of these grants, few will get to see the benefits of this investment.

Jae A. Puckett, Associate Professor of Psychology, Michigan State University and Paz Galupo, Professor of Sexual Health and Education, Washington University in St. Louis

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hope that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Left-Leaning

This content strongly critiques actions taken by the Trump administration and associated federal agencies, particularly regarding the termination of funding for transgender and nonbinary health research. It emphasizes harm caused to LGBTQ+ communities, highlights scientific consensus supporting gender-affirming care, and portrays the policy decisions as ideologically driven and detrimental to both communities and scientific progress. The language and framing align with perspectives commonly found on the political left, especially those advocating for LGBTQ+ rights and inclusion, while opposing conservative policies perceived as hostile to these groups.

Prime numbers, the building blocks of mathematics, have fascinated for centuries − now technology is revolutionizing the search for them

theconversation.com – Jeremiah Bartz, Associate Professor of Mathematics, University of North Dakota – 2025-05-30 07:47:00


Prime numbers, numbers greater than one divisible only by one and themselves, have fascinated humanity for millennia, evidenced by artifacts like the 20,000-year-old Ishango bone and the Babylonian Plimpton 322 tablet. Greek mathematicians around 500 B.C.E. first understood primes, and Euclid proved their infinitude circa 300 B.C.E. Arab scholars advanced prime theory, including the fundamental theorem of arithmetic. Mersenne primes, of the form (2^p – 1), offer a key to finding large primes. The Lucas-Lehmer test enables efficient identification, enhanced by computers since the 1950s. Collaborative efforts like GIMPS have discovered many large primes, with the current largest found in 2024 – primes that are critical for encryption and cybersecurity.

Prime numbers are numbers that are not products of smaller whole numbers.
Jeremiah Bartz

Jeremiah Bartz, University of North Dakota

A shard of smooth bone etched with irregular marks dating back 20,000 years puzzled archaeologists until they noticed something unique – the etchings, lines like tally marks, may have represented prime numbers. Similarly, a clay tablet from 1800 B.C.E. inscribed with Babylonian numbers describes a number system built on prime numbers.

As the Ishango bone, the Plimpton 322 tablet and other artifacts display, prime numbers have fascinated and captivated people throughout history. Today, prime numbers and their properties are studied in number theory, a branch of mathematics and an active area of research.

A history of prime numbers

Some scientists guess that the markings on the Ishango bone represent prime numbers.
Joeykentin/Wikimedia Commons, CC BY-SA

Informally, a positive counting number larger than one is prime if that number of dots can be arranged only into a rectangular array with one column or one row. For example, 11 is a prime number since 11 dots form only rectangular arrays of sizes 1 by 11 and 11 by 1. Conversely, 12 is not prime since you can use 12 dots to make an array of 3 by 4 dots, with multiple rows and multiple columns. Math textbooks define a prime number as a whole number greater than one whose only positive divisors are 1 and itself.
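
That "rectangular array" intuition translates directly into a trial-division check. The sketch below is one standard way to write it, not code from any particular textbook:

```python
# n is prime exactly when no divisor between 2 and sqrt(n) splits
# n dots into a rectangle with multiple rows and multiple columns.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:  # n dots form a d-by-(n//d) rectangle
            return False
        d += 1
    return True

print(is_prime(11), is_prime(12))  # True False: 12 = 3 x 4
```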

Math historian Peter S. Rudman suggests that Greek mathematicians were likely the first to understand the concept of prime numbers, around 500 B.C.E.

Around 300 B.C.E., the Greek mathematician and logician Euclid proved that there are infinitely many prime numbers. Euclid began by assuming that there are only finitely many primes. Multiplying them all together and adding one produces a number that leaves a remainder of 1 when divided by any prime on the list, so it must have a prime factor that was not on the original list – a contradiction. Since a fundamental principle of mathematics is being logically consistent with no contradictions, Euclid concluded that his original assumption must be false. So, there are infinitely many primes.
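
Euclid's construction can be made concrete in a few lines. The helper below is illustrative, showing how multiplying a finite list of primes and adding one always exposes a prime missing from the list:

```python
# Euclid's construction, made concrete: multiply any finite list of
# primes and add one; the result has a prime factor not on the list.
from math import prod

def new_prime_factor(primes: list[int]) -> int:
    n = prod(primes) + 1
    # n leaves remainder 1 when divided by any prime on the list,
    # so its smallest divisor > 1 is a prime missing from the list.
    d = 2
    while n % d != 0:
        d += 1
    return d

print(new_prime_factor([2, 3, 5, 7, 11, 13]))  # 59, since 30031 = 59 x 509
```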

The argument established the existence of infinitely many primes, but it was not particularly constructive: Euclid had no efficient method for listing the primes in ascending order.

Prime numbers, when expressed as that number of dots, can be arranged only in a single row or column, rather than a square or rectangle.
David Eppstein/Wikimedia Commons

In the Middle Ages, Arab mathematicians advanced the Greeks' theory of prime numbers, which were referred to as hasam numbers during this time. The Persian mathematician Kamal al-Din al-Farisi formulated the fundamental theorem of arithmetic, which states that any positive integer larger than one can be expressed uniquely as a product of primes.

From this view, prime numbers are the basic building blocks for constructing any positive whole number using multiplication – akin to atoms combining to make molecules in chemistry.
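
A short factoring routine shows the theorem in action; this is a standard trial-division sketch, not tied to any historical method:

```python
# The fundamental theorem of arithmetic in action: factor a number
# into its unique multiset of prime "building blocks".
def prime_factors(n: int) -> list[int]:
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5] -- and no other decomposition
```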

Prime numbers can be sorted into different types. In 1202, Leonardo Fibonacci introduced in his book "Liber Abaci: Book of Calculation" prime numbers of the form (2^p – 1), where p is also prime.

Today, primes in this form are called Mersenne primes after the French monk Marin Mersenne. Many of the largest known primes follow this format.

Several early mathematicians believed that a number of the form (2^p – 1) is prime whenever p is prime. But in 1536, the mathematician Hudalricus Regius noticed that 11 is prime while (2^11 – 1), which equals 2047, is not: 2047 can be expressed as 23 times 89, disproving the conjecture.

Though the shortcut does not always produce a prime, number theorists realized that the form (2^p – 1) often does, and that it gives a systematic way to search for large primes.

The search for large primes

The number (2^p – 1) is enormous relative to the value of p, which provides opportunities to identify very large primes.

When the number (2^p – 1) becomes sufficiently large, it is much harder to check whether it is prime – that is, whether (2^p – 1) dots can be arranged only into a rectangular array with one column or one row.

Fortunately, Édouard Lucas developed a prime number test in 1878, later proved by Derrick Henry Lehmer in 1930. Their work resulted in an efficient algorithm for evaluating potential Mersenne primes. Using this algorithm with hand computations on paper, Lucas showed in 1876 that the 39-digit number (2^127 – 1) equals 170,141,183,460,469,231,731,687,303,715,884,105,727, and that value is prime.
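
The test itself is short enough to state in code. The sketch below is a standard textbook rendering of the Lucas-Lehmer test, which checks a Mersenne number M = 2^p – 1 by repeatedly replacing s with s² – 2 modulo M:

```python
# Lucas-Lehmer test for M = 2**p - 1, where p is an odd prime:
# start with s = 4 and square-and-subtract-two p - 2 times mod M;
# M is prime exactly when the final s is 0.
def lucas_lehmer(p: int) -> bool:
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

print(lucas_lehmer(11))   # False: 2**11 - 1 = 2047 = 23 x 89
print(lucas_lehmer(127))  # True: Lucas's 39-digit prime M127
```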

Also known as M127, this number remains the largest prime verified by hand computations. It held the record for largest known prime for 75 years.

Researchers began using computers in the 1950s, and the pace of discovering new large primes increased. In 1952, Raphael M. Robinson identified five new Mersenne primes using the Standards Western Automatic Computer to carry out the Lucas-Lehmer prime number tests.

As computers improved, the list of Mersenne primes grew, especially after the 1964 arrival of the Seymour Cray-designed CDC 6600 supercomputer. Although there are infinitely many primes, researchers are unsure how many fit the form (2^p – 1) and are Mersenne primes.

By the early 1980s, researchers had accumulated enough data to confidently believe that infinitely many Mersenne primes exist. They could even guess how often these prime numbers appear, on average. Mathematicians have not found proof so far, but new data continues to support these guesses.

George Woltman, a computer scientist, founded the Great Internet Mersenne Prime Search, or GIMPS, in 1996. Through this collaborative program, anyone can download freely available software from the GIMPS website to search for Mersenne prime numbers on their personal computers. The website contains specific instructions on how to participate.

GIMPS has now identified 18 Mersenne primes, primarily on personal computers using Intel chips. The program averages a new discovery about every one to two years.

The largest known prime

Luke Durant, a retired programmer, discovered the current record for the largest known prime, (2^136,279,841 – 1), in October 2024.

Referred to as M136279841, this 41,024,320-digit number was the 52nd Mersenne prime identified and was found by running GIMPS on a publicly available cloud-based computing network.

This network used Nvidia chips and ran across 17 countries and 24 data centers. These advanced chips provide faster computing by handling thousands of calculations simultaneously. The result is shorter run times for algorithms such as prime number testing.

New and increasingly powerful computer chips have allowed prime-number hunters to find increasingly larger primes.
Fritzchens Fritz/Flickr

The Electronic Frontier Foundation is a civil liberties group that offers cash prizes for identifying large primes. It awarded prizes in 2000 and 2009 for the first verified 1 million-digit and 10 million-digit prime numbers.

Large prime number enthusiasts’ next two challenges are to identify the first 100 million-digit and 1 billion-digit primes. EFF prizes of US$150,000 and $250,000, respectively, await the first successful individual or group.

Eight of the 10 largest known prime numbers are Mersenne primes, so GIMPS and cloud computing are poised to play a prominent role in the search for record-breaking large prime numbers.

Large prime numbers have a vital role in many encryption methods in cybersecurity, so every internet user stands to benefit from the search for large prime numbers. These searches help keep digital communications and sensitive information safe.
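
One widely used example is RSA, whose security rests on the difficulty of factoring a product of two large primes. The toy key below uses tiny primes purely for illustration – real keys use primes hundreds of digits long, which is what makes factoring the public modulus infeasible:

```python
# Why large primes matter for encryption: a toy RSA key built from the
# tiny primes 61 and 53. Real keys use primes hundreds of digits long.
p, q = 61, 53
n = p * q                 # 3233, the public modulus
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # 2753, the private exponent

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key
decrypted = pow(ciphertext, d, n)  # decrypt with the private key
print(ciphertext, decrypted)       # 2790 65
```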

This story was updated on May 30, 2025, to correct the name of the Greek mathematician Euclid and to correct the factors of 2047.

Jeremiah Bartz, Associate Professor of Mathematics, University of North Dakota

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Note: The following A.I.-based commentary is not part of the original article, reproduced above, but is offered in the hope that it will promote greater media literacy and critical thinking by making any potential bias more visible to the reader. –Staff Editor

Political Bias Rating: Centrist

The article presents a factual, educational overview of the history and significance of prime numbers, focusing on mathematics and technological advancements without promoting any political or ideological stance. Its tone is neutral and informative, aimed at explaining mathematical concepts and recent developments in prime number research. The content does not include partisan language or viewpoints and remains centered on scientific progress and historical context, making it a balanced, non-partisan piece.
