
ChatGPT-powered Wall Street: The benefits and perils of using artificial intelligence to trade stocks and other financial instruments


Markets are increasingly driven by decisions made by AI.
PhonlamaiPhoto/iStock via Getty Images

Pawan Jain, West Virginia University

Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.

And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives.

I've been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street's past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.

Program trading fuels Black Monday

In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.


Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index – like the S&P 500 – and that of the stocks it's composed of.
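To make the idea concrete, here is a minimal sketch of the comparison at the heart of index arbitrage: checking an index quote against the value implied by its constituent stocks. The prices, weights and threshold are invented for illustration; this shows the logic of the strategy, not a production trading system.

```python
# A minimal, illustrative index-arbitrage check.
# All prices, weights and the threshold below are invented.

constituent_prices = {"AAA": 150.00, "BBB": 98.50, "CCC": 210.25}
index_weights = {"AAA": 0.50, "BBB": 0.30, "CCC": 0.20}

quoted_index_level = 148.90  # hypothetical quote for the index itself

# Fair value implied by the basket of underlying stocks
implied_level = sum(price * index_weights[symbol]
                    for symbol, price in constituent_prices.items())

spread = quoted_index_level - implied_level
THRESHOLD = 0.25  # ignore gaps too small to cover trading costs

if spread > THRESHOLD:
    print(f"Index rich by {spread:.2f}: sell the index, buy the basket")
elif spread < -THRESHOLD:
    print(f"Index cheap by {-spread:.2f}: buy the index, sell the basket")
else:
    print("No exploitable discrepancy")
```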

As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways – on which over a trillion dollars' worth of assets change hands every day – causing market volatility to increase dramatically.

Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.

In response, regulatory authorities implemented a number of measures to restrict program trading, including circuit breakers that halt trading during significant market swings, along with other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.

This is how papers across the country headlined the stock market plunge on Black Monday, Oct. 19, 1987.
AP Photo

HFT: Program trading on steroids

Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to more sophisticated automation built on much more advanced technology: high-frequency trading.

HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders that bought and sold baskets of securities over time to take advantage of an arbitrage opportunity – a difference in price of similar securities that can be exploited for profit – high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.

These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.

Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.
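As a rough illustration of the concept, here is a toy lexicon-based sentiment scorer. The word lists and headlines are invented, and real trading systems use far more sophisticated language models than this sketch.

```python
# A toy lexicon-based sentiment scorer for news headlines.
# Illustrative only; the word lists are invented and real HFT
# systems use far more sophisticated NLP models.
POSITIVE = {"beats", "surges", "record", "upgrade", "growth"}
NEGATIVE = {"misses", "plunges", "lawsuit", "downgrade", "recall"}

def headline_sentiment(headline: str) -> int:
    """Return +1 (bullish), -1 (bearish) or 0 (neutral)."""
    words = set(headline.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)

print(headline_sentiment("Chipmaker beats estimates as revenue surges"))  # 1
print(headline_sentiment("Automaker plunges after safety recall"))        # -1
```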

Benefits of AI trading

These AI-based, high-frequency traders operate very differently than people do.


The human brain is slow, inaccurate and forgetful. It is incapable of quick, high-precision, floating-point arithmetic needed for analyzing huge volumes of data for identifying trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and limitless capability for analyzing large volumes of data in split milliseconds.

And so, just like most technologies, HFT provides several benefits to stock markets.

These traders typically buy and sell assets at prices very close to the market price, which means they don't charge investors high fees. This helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.

High-frequency trading can also help reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.

Stock exchanges used to be packed with traders buying and selling securities, as in this scene from 1983. Today's trading floors are increasingly empty as AI-powered computers handle more and more of the work.
AP Photo/Richard Drew

The downsides

But speed and efficiency can also cause harm.

HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.

Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes – erasing and then restoring about $1 trillion in market value.

Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility – a measure of how rapidly and unpredictably prices move up and down – increased significantly after the introduction of HFT.

The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.


In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That's because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.

This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
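A toy simulation, my own illustration rather than a result from the research above, shows how algorithm similarity pushes traders onto the same side. Each simulated trader turns a shared news signal plus private noise into a buy-or-sell decision; as the algorithms become more alike, the private noise shrinks and nearly everyone ends up selling at once.

```python
# Toy simulation of algorithmic herding (illustrative only).
# Each trader reacts to a shared news signal plus private noise;
# similar algorithms mean small noise, so decisions cluster.
import random

def fraction_selling(noise_level: float, n_traders: int = 1000) -> float:
    news = -1.0  # negative news seen by every algorithm
    sells = sum(1 for _ in range(n_traders)
                if news + random.gauss(0, noise_level) < 0)
    return sells / n_traders

random.seed(42)
print(f"diverse algorithms (high noise): {fraction_selling(2.0):.0%} sellers")
print(f"similar algorithms (low noise):  {fraction_selling(0.1):.0%} sellers")
```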

Enter ChatGPT

That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.

In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone's deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.


Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.

Since decisions made by the generative AI-powered chatbot are based on past training data, there would be a similarity in the decisions suggested by the chatbot. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes.

This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when they are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.

In addition, since market crashes are relatively rare, there isn't much data on them. Since generative AIs depend on training data to learn, their lack of knowledge about crashes could make such crashes more likely to happen.


For now, at least, it seems most banks won't be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.

But I strongly believe banks will eventually embrace generative AI, once they resolve concerns they have with it. The potential gains are too significant to pass up – and there's a risk of being left behind by rivals.

But the risks to financial markets, the global economy and everyone are also great, so I hope they tread carefully.

Pawan Jain, Assistant Professor of Finance, West Virginia University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Engineering cells to broadcast their behavior can help scientists study their inner workings

theconversation.com – Scott Coyle, Assistant Professor of Biochemistry, University of Wisconsin-Madison – 2024-05-31 07:16:17

Protein wave oscillations open a window into living cells.

Scott Coyle and Rohith Rajasekaran, CC BY-ND

Scott Coyle, University of Wisconsin-Madison

Waves are ubiquitous in nature and technology. Whether it's the rise and fall of ocean tides or the swinging of a clock's pendulum, the predictable rhythms of waves create a signal that is easy to track and distinguish from other types of signals.


Electronic devices use radio waves to send and receive data, like your laptop and Wi-Fi router or cellphone and cell tower. Similarly, scientists can use a different type of wave to transmit a different type of data: signals from the invisible processes and dynamics underlying how cells make decisions.

I am a synthetic biologist, and my research group developed a technology that sends a wave of engineered proteins traveling through a human cell to open a window into the hidden activities that power cells when they're healthy and harm cells when they go haywire.

Waves are a powerful engineering tool

The oscillating behavior of waves is one reason they're powerful patterns in engineering.

For example, controlled and predictable changes to wave oscillations can be used to encode data, such as voice or music. In the case of radio, each station is assigned a unique electromagnetic wave that oscillates at its own frequency. These are the numbers you see on the radio dial.
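The snippet below gives a simplified numerical sketch of the two schemes, textbook-style rather than broadcast-grade signal processing: a slow message wave modulates either the amplitude (AM) or the instantaneous frequency (FM) of a fast carrier wave.

```python
# Simplified AM and FM modulation of a carrier by a message wave.
# Textbook form for illustration, not broadcast-grade processing.
import numpy as np

t = np.linspace(0, 1, 10_000)          # one second of samples
message = np.sin(2 * np.pi * 5 * t)    # a slow 5 Hz message wave

carrier_freq = 100.0                   # fast carrier, in Hz

# AM: the message varies the carrier's amplitude
am = (1 + 0.5 * message) * np.sin(2 * np.pi * carrier_freq * t)

# FM: the message varies the carrier's instantaneous frequency;
# integrating frequency over time (cumsum) gives the phase
freq_deviation = 20.0                  # Hz of swing around the carrier
phase = 2 * np.pi * np.cumsum(carrier_freq + freq_deviation * message) / len(t)
fm = np.sin(phase)

print(am[:3], fm[:3])                  # first few samples of each wave
```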



Waves can be modulated to carry different types of information, such as FM and AM radio.

Berserkerus/Wikimedia Commons, CC BY-SA

Scientists can extend this strategy to living cells. My team used waves of proteins to turn a cell into a microscopic radio station, broadcasting data about its activity in real time to study its behavior.

Turning cells into radio stations

Studying the inside of cells requires a kind of wave that can specifically connect to and interact with the machinery and components of a cell.


Bacterial proteins MinD (cyan) and MinE (magenta) can self-organize into spiral patterns.

CellfOrganized/Wikimedia Commons, CC BY-SA


While electronic devices are built from wires and transistors, cells are built from and controlled by a diverse collection of chemical building blocks called proteins. Proteins perform an array of functions within the cell, from extracting energy from sugar to deciding whether the cell should grow.

Protein waves are generally rare in nature, but some bacteria naturally generate waves of two proteins called MinD and MinE – typically referred to together as MinDE – to help them divide. My team discovered that putting MinDE into human cells causes the proteins to reorganize themselves into a stunning array of waves and patterns.

On their own, MinDE protein waves do not interact with other proteins in human cells. However, we found that MinDE could be readily engineered to react to the activity of specific human proteins responsible for making decisions about whether to grow, send signals to neighboring cells, move around and divide.


Putting MinDE into human cells produces visual patterns that can signal changes to protein activity in the cell.

Scott Coyle and Chih-Chia Chang, CC BY-ND


The protein dynamics driving these cellular functions are typically difficult to detect and study in living cells because the activity of proteins is generally invisible to even high-power microscopes. The disruption of these protein patterns is at the core of many cancers and developmental disorders.

We engineered connections between MinDE protein waves and the activity of proteins responsible for key cellular processes. Now, the activity of these proteins triggers changes in the frequency or amplitude of the protein wave, just like an AM/FM radio. Using microscopes, we can detect and record the unique signals individual cells are broadcasting and then decode them to recover the dynamics of these cellular processes.
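Conceptually, decoding such a broadcast means reading the frequency and amplitude off a recorded oscillation, which is what a Fourier transform does. Here is a minimal sketch on synthetic data; the frame rate, frequency and amplitude are invented, and this is not the actual analysis pipeline used for MinDE recordings.

```python
# Minimal sketch: recover frequency and amplitude from an oscillating
# signal, as one might from a per-cell microscope time series.
# Synthetic data; not the actual MinDE analysis pipeline.
import numpy as np

fs = 1.0                                   # one frame per second
t = np.arange(0, 600, 1 / fs)              # a 10-minute recording
true_freq, true_amp = 0.02, 3.0            # a slow 0.02 Hz protein wave
rng = np.random.default_rng(0)
signal = true_amp * np.sin(2 * np.pi * true_freq * t) + rng.normal(0, 0.3, t.size)

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

peak = np.argmax(np.abs(spectrum[1:])) + 1     # skip the zero-frequency bin
print(f"frequency: {freqs[peak]:.3f} Hz")                       # ~0.020
print(f"amplitude: {2 * np.abs(spectrum[peak]) / t.size:.2f}")  # ~3.0
```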

We have only begun to scratch the surface of how scientists can use protein waves to study cells. If the history of waves in technology is any indicator, their potential is vast.

Scott Coyle, Assistant Professor of Biochemistry, University of Wisconsin-Madison

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The rush to return humans to the Moon and build lunar bases could threaten opportunities for astronomy


theconversation.com – Martin Elvis, Senior Astrophysicist, Smithsonian Institution – 2024-05-30 07:16:31

A lunar base on the Moon would include solar panels for power generation, and equipment for keeping astronauts alive on the surface.

ESA – P. Carril

Martin Elvis, Smithsonian Institution

The 2020s have already seen many lunar landing attempts, although several of them have crashed or toppled over. With all the excitement surrounding the prospect of humans returning to the Moon, both commercial interests and scientists stand to gain.


The Moon is uniquely suitable for researchers to build telescopes they can't put on Earth because it doesn't have as much satellite interference as Earth, nor an ionosphere blocking out radio waves. But only recently have astronomers like me started thinking about potential conflicts between the desire to expand knowledge of the universe on one side and geopolitical rivalries and commercial gain on the other, and how to balance those interests.

As an astronomer and the co-chair of the International Astronomical Union's working group Astronomy from the Moon, I'm on the hook to investigate this question.

Everyone to the south pole

By 2035 – just 10 or so years away – American and Chinese rockets could be carrying humans to long-term lunar bases.

Both bases are planned for the same small area near the south pole because of the near-constant solar power available in this region and the rich source of water that scientists believe could be found in the Moon's darkest regions nearby.


Unlike the Earth, the Moon is not tilted relative to its path around the Sun. As a result, the Sun circles the horizon near the poles, almost never setting on some crater rims. There, the never-setting Sun casts long shadows over nearby craters, hiding their floors from direct sunlight for the past 4 billion years, 90% of the age of the solar system.

These craters are basically pits of eternal darkness. And it's not just dark down there, it's also cold: below -418 degrees Fahrenheit (-250 degrees Celsius). It's so cold that scientists predict that water in the form of ice at the bottom of these craters – likely brought by ancient asteroids colliding with the Moon's surface – will not melt or evaporate away for a very long time.


Dark craters on the Moon, parts of which are indicated here in blue, never get sunlight. Scientists think some of these permanently shadowed regions could contain water ice.

NASA's Goddard Space Flight Center

Surveys from lunar orbit suggest that these craters, called permanently shadowed regions, could hold half a billion tons of water.


The constant sunlight for solar power and the proximity to frozen water make the Moon's poles attractive for human bases. The bases will also need water to drink, wash up and grow crops to feed hungry astronauts. It is hopelessly expensive to bring long-term water supplies from Earth, so a local watering hole is a big deal.

Telescopes on the Moon

For decades, astronomers had ignored the Moon as a potential site for telescopes because it was simply infeasible to build them there. But human bases open up new opportunities.

The radio-sheltered far side of the Moon, the part we never see from Earth, makes recording very low frequency radio waves accessible. These are likely to contain signatures of the universe's "Dark Ages," a time before any stars or galaxies formed.

Astronomers could also put gravitational wave detectors at the poles, since these detectors are extraordinarily sensitive, and the Moon's polar regions don't have the earthquakes that disturb such detectors on Earth.


A lunar gravitational wave detector could let scientists collect data from pairs of black holes orbiting each other very closely right before they merge. Predicting where and when they will merge tells astronomers where and when to look for a flash of light that they would otherwise miss. With those extra clues, scientists could learn how these black holes are born and how they evolve.

The cold at the lunar poles also makes infrared telescopes vastly more sensitive by shifting the telescopes' black body radiation to longer wavelengths. These telescopes could give astronomers new tools to look for signs of life on Earth-like planets beyond the solar system.
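The effect can be estimated with Wien's displacement law, which says a body at temperature T glows most brightly at a wavelength of b/T, with b roughly 2,898 micrometer-kelvins. A quick back-of-the-envelope comparison, using an approximate crater temperature:

```python
# Wien's displacement law: peak wavelength of a body's thermal glow.
# Back-of-the-envelope numbers for illustration; the crater
# temperature is approximate.
WIEN_B = 2898.0  # Wien's constant, in micrometer-kelvins

def peak_wavelength_um(temp_kelvin: float) -> float:
    return WIEN_B / temp_kelvin

print(f"room-temperature telescope (300 K): peak near {peak_wavelength_um(300):.0f} um")
print(f"lunar polar crater (~25 K):         peak near {peak_wavelength_um(25):.0f} um")
```

A telescope at 25 kelvin glows mainly at wavelengths more than 10 times longer than one at room temperature, moving its own thermal emission out of many of the infrared bands astronomers want to use.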

And more ideas keep coming. The first radio antennae are scheduled to land on the far side next year.

Conflicting interests

But the rush to build bases on the Moon could interfere with the very conditions that make the Moon so attractive for research in the first place. Although the Moon's surface area is greater than Africa's, human explorers and astronomers want to visit the same few kilometer-sized locations.


But activities that will sustain a human presence on the Moon, such as mining for water, will create vibrations that could ruin a gravitational wave telescope.

Also, many elements found on the Moon are extremely valuable back on Earth. Liquid hydrogen and oxygen make precious rocket propellant, and helium-3 is a rare substance used to improve quantum computers.

But one of the few places rich in helium-3 on the Moon is found in one of the most likely places to put a far-side, Dark Ages radio telescope.

Finally, there are at least two internet and GPS satellite constellations planned to orbit the Moon a few years from now. Unintentional radio emissions from these satellites could render a Dark Ages telescope useless.


The time is now

But compromise isn't out of the question. There might be a few alternative spots to place each telescope.

In 2024, the International Astronomical Union put together the working group Astronomy from the Moon to start defining which sites astronomers want to preserve for their work. This entails ranking the sites by their importance for each type of telescope and beginning to engage with a key United Nations committee. These steps may help astronomers, astronauts from multiple countries and private interests share the Moon.

Martin Elvis, Senior Astrophysicist, Smithsonian Institution

This article is republished from The Conversation under a Creative Commons license. Read the original article.


AI is cracking a hard problem – giving computers a sense of smell


theconversation.com – Ambuj Tewari, Professor of Statistics, University of Michigan – 2024-05-30 07:15:55

A rose by any other name would not smell as sweet to a robot.

estt/iStock via Getty Images

Ambuj Tewari, University of Michigan

Over 100 years ago, Alexander Graham Bell asked the readers of National Geographic to do something bold and fresh – “to found a new science.” He pointed out that sciences based on the measurements of sound and light already existed. But there was no science of odor. Bell asked his readers to “measure a smell.”


Today, smartphones in most people's pockets offer impressive built-in capabilities based on the sciences of sound and light: voice assistants, facial recognition and photo enhancement. The science of odor does not offer anything comparable. But that situation is changing, as advances in machine olfaction, also called "digitized smell," are finally answering Bell's call to action.

Research on machine olfaction faces a formidable challenge due to the complexity of the human sense of smell. Whereas human vision mainly relies on receptor cells in the retina – rods and three types of cones – smell is experienced through about 400 types of receptor cells in the nose.

Machine olfaction starts with sensors that detect and identify molecules in the air. These sensors serve the same purpose as the receptors in your nose.

But to be useful to people, machine olfaction needs to go a step further. The system needs to know what a certain molecule or a set of molecules smells like to a human. For that, machine olfaction needs machine learning.


Applying machine learning to smells

Machine learning, and particularly a kind of machine learning called deep learning, is at the core of remarkable advances such as voice assistants and facial recognition apps.

Machine learning is also key to digitizing smells because it can learn to map the molecular structure of an odor-causing compound to textual odor descriptors. The machine learning model learns the words humans tend to use – for example, “sweet” and “dessert” – to describe what they experience when they encounter specific odor-causing compounds, such as vanillin.


A university research prototype artificial nose can distinguish between coffee and whiskey.

Marcus Brandt/picture alliance via Getty Images

However, machine learning needs large datasets. The web has an unimaginably huge amount of audio, image and video content that can be used to train artificial intelligence systems that recognize sounds and pictures. But machine olfaction has long faced a data shortage problem, partly because most people cannot verbally describe smells as effortlessly and recognizably as they can describe sights and sounds. Without access to web-scale datasets, researchers weren't able to train really powerful machine learning models.


However, things started to change in 2015 when researchers launched the DREAM Olfaction Prediction Challenge. The competition released data collected by Andreas Keller and Leslie Vosshall, biologists who study olfaction, and invited teams from around the world to submit their machine learning models. The models had to predict odor labels like “sweet,” “flower” or “fruit” for odor-causing compounds based on their molecular structure.

The top performing models were published in a paper in the journal Science in 2017. A classic machine learning technique called random forest, which combines the output of multiple decision tree flow charts, turned out to be the winner.
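To give a flavor of the approach, here is a minimal multi-label random forest sketch in the spirit of those models. The molecular features, numbers and odor labels are invented for demonstration; this is not the DREAM challenge data or the winning pipeline.

```python
# Minimal sketch of predicting odor descriptors with a random forest.
# Features stand in for molecular descriptors; all values and labels
# are invented, not DREAM challenge data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier

# Each row: [molecular weight, logP, oxygen count] (hypothetical values)
X = np.array([
    [152.1, 1.2, 3],
    [136.2, 2.8, 0],
    [88.1,  0.7, 2],
    [150.2, 1.6, 2],
    [120.1, 2.1, 1],
    [94.1,  1.4, 1],
])
# Each row: binary labels for the descriptors ["sweet", "fruit", "flower"]
y = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 0],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 1],
])

# One random forest per descriptor, wrapped for multi-label output
model = MultiOutputClassifier(
    RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(X, y)

new_molecule = np.array([[148.0, 1.3, 3]])  # a hypothetical compound
print(model.predict(new_molecule))          # e.g. [[1 0 0]] -> "sweet"
```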

I am a machine learning researcher with a longstanding interest in applying machine learning to chemistry and psychiatry. The DREAM challenge piqued my interest. I also felt a personal connection to olfaction. My family traces its roots to the small town of Kannauj in northern India, which is India's perfume capital. Moreover, my father is a chemist who spent most of his career analyzing geological samples. Machine olfaction thus offered an irresistible opportunity at the intersection of perfumery, culture, chemistry and machine learning.

Progress in machine olfaction started picking up steam after the DREAM challenge concluded. During the pandemic, many cases of smell blindness, or anosmia, were reported. The sense of smell, which usually takes a back seat, rose in public consciousness. Additionally, a research project, the Pyrfume Project, made more and larger datasets publicly available.


Smelling deeply

By 2019, the largest datasets had grown from less than 500 molecules in the DREAM challenge to about 5,000 molecules. A Google Research team led by Alexander Wiltschko was finally able to bring the deep learning revolution to machine olfaction. Their model, based on a type of deep learning called graph neural networks, established state-of-the-art results in machine olfaction. Wiltschko is now the founder and CEO of Osmo, whose mission is "giving computers a sense of smell."

Recently, Wiltschko and his team used a graph neural network to create a "principal odor map," where perceptually similar odors are placed closer to each other than dissimilar ones. This was not easy: Small changes in molecular structure can lead to large changes in olfactory perception. Conversely, two molecules with very different molecular structures can nonetheless smell almost the same.

Such progress in cracking the code of smell is not only intellectually exciting but also has highly promising applications, including personalized perfumes and fragrances, better insect repellents, novel chemical sensors, early detection of disease, and more realistic augmented reality experiences. The future of machine olfaction looks bright. It also promises to smell good.

Ambuj Tewari, Professor of Statistics, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.
