Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.
This is as true in financial markets as it is in health care, manufacturing and pretty much every other aspect of our lives.
I've been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street's past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.
Program trading fuels Black Monday
In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.
Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index – like the S&P 500 – and that of the stocks it's composed of.
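The core of index arbitrage can be sketched in a few lines of Python. Everything here is illustrative: the prices, weights and threshold are invented, and a real strategy would also account for futures carry, dividends and transaction costs. The sketch only shows the central price comparison.

```python
# A minimal index-arbitrage sketch. All numbers are invented; this
# shows only the core comparison between a quoted index level and
# the level implied by the index's component stocks.

def index_arbitrage_signal(index_price, component_prices, weights, threshold=0.5):
    """Compare a quoted index level with the level implied by its
    components; signal a trade when the gap exceeds `threshold`."""
    implied = sum(p * w for p, w in zip(component_prices, weights))
    spread = index_price - implied
    if spread > threshold:
        return "sell index, buy components"  # index looks expensive
    if spread < -threshold:
        return "buy index, sell components"  # index looks cheap
    return "no trade"

# Hypothetical two-stock "index" whose components imply a level of 4000.0.
print(index_arbitrage_signal(4001.2, [150.0, 250.0], [10, 10]))
# "sell index, buy components"
```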
As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways – on which over a trillion dollars' worth of assets change hands every day – causing market volatility to increase dramatically.
Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.
In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings and other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.
HFT: Program trading on steroids
Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to a new breed of far more sophisticated automation backed by much more advanced technology: high-frequency trading, or HFT.
HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders that bought and sold baskets of securities over time to take advantage of an arbitrage opportunity – a difference in price of similar securities that can be exploited for profit – high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.
These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.
Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.
Benefits of AI trading
These AI-based, high-frequency traders operate very differently than people do.
The human brain is slow, inaccurate and forgetful. It is incapable of the quick, high-precision, floating-point arithmetic needed to analyze huge volumes of data and identify trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and limitless capability for analyzing large volumes of data in fractions of a millisecond.
And, so, just like most technologies, HFT provides several benefits to stock markets.
These traders typically buy and sell assets at prices very close to the market price, which keeps the bid-ask spread – the implicit fee investors pay to trade – low. Their constant activity helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.
High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.
The downsides
But speed and efficiency can also cause harm.
HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.
Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes – erasing and then restoring about $1 trillion in market value.
The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.
In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That's because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.
This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
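This failure mode can be illustrated with a toy simulation. It is not a market model – the shared decision rule and the sentiment number are invented – but it shows why identical algorithms leave no one on the other side of the trade.

```python
# Toy illustration of same-side risk (not a market model; the shared
# rule and sentiment value are invented). When every trader runs the
# same algorithm, all orders land on one side of the book and there
# is no counterparty left to trade with.

def shared_algorithm(news_sentiment):
    """The one decision rule every trader happens to use."""
    return "sell" if news_sentiment < 0 else "buy"

def market_can_clear(orders):
    """A trade requires at least one buyer and one seller."""
    return "buy" in orders and "sell" in orders

traders = [shared_algorithm] * 100           # 100 traders, identical logic
orders = [trade(-0.8) for trade in traders]  # negative news hits the wire

print(market_can_clear(orders))  # False: everyone wants to sell
```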
Enter ChatGPT
That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.
In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone's deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.
Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.
Since the decisions made by a generative AI-powered chatbot are based on past training data, there would be a sameness to the decisions it suggests. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages of certain products and services as well as severe price spikes.
This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when systems are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.
In addition, since market crashes are relatively rare, there isn't much data on them. Since generative AIs depend on training data to learn, their lack of knowledge about crashes could make crashes more likely to happen.
For now, at least, it seems most banks won't be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.
But I strongly believe banks will eventually embrace generative AI, once they resolve concerns they have with it. The potential gains are too significant to pass up – and there's a risk of being left behind by rivals.
But the risks to financial markets, the global economy and everyone are also great, so I hope they tread carefully.
Waves are ubiquitous in nature and technology. Whether it's the rise and fall of ocean tides or the swinging of a clock's pendulum, the predictable rhythms of waves create a signal that is easy to track and distinguish from other types of signals.
Electronic devices use radio waves to send and receive data, like your laptop and Wi-Fi router or cellphone and cell tower. Similarly, scientists can use a different type of wave to transmit a different type of data: signals from the invisible processes and dynamics underlying how cells make decisions.
The oscillating behavior of waves is one reason they're powerful patterns in engineering.
For example, controlled and predictable changes to wave oscillations can be used to encode data, such as voice or video information. In the case of radio, each station is assigned a unique electromagnetic wave that oscillates at its own frequency. These are the numbers you see on the radio dial.
Scientists can extend this strategy to living cells. My team used waves of proteins to turn a cell into a microscopic radio station, broadcasting data about its activity in real time to study its behavior.
Turning cells into radio stations
Studying the inside of cells requires a kind of wave that can specifically connect to and interact with the machinery and components of a cell.
While electronic devices are built from wires and transistors, cells are built from and controlled by a diverse collection of chemical building blocks called proteins. Proteins perform an array of functions within the cell, from extracting energy from sugar to deciding whether the cell should grow.
Protein waves are generally rare in nature, but some bacteria naturally generate waves of two proteins called MinD and MinE – typically referred to together as MinDE – to help them divide. My team discovered that putting MinDE into human cells causes the proteins to reorganize themselves into a stunning array of waves and patterns.
On their own, MinDE protein waves do not interact with other proteins in human cells. However, we found that MinDE could be readily engineered to react to the activity of specific human proteins responsible for making decisions about whether to grow, send signals to neighboring cells, move around and divide.
The protein dynamics driving these cellular functions are typically difficult to detect and study in living cells because the activity of proteins is generally invisible to even high-power microscopes. The disruption of these protein patterns is at the core of many cancers and developmental disorders.
We engineered connections between MinDE protein waves and the activity of proteins responsible for key cellular processes. Now the activity of these proteins triggers changes in the frequency or amplitude of the protein wave, just like an AM/FM radio. Using microscopes, we can detect and record the unique signals individual cells are broadcasting and then decode them to recover the dynamics of these cellular processes.
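The frequency-modulation half of that AM/FM analogy can be sketched in code. In this hedged toy, a hypothetical "activity level" sets the frequency of a simulated wave, and a decoder recovers the activity by counting oscillation cycles; all numbers and the activity-to-frequency mapping are invented for illustration.

```python
import math

# Hedged FM sketch: a hypothetical "activity level" sets the frequency
# of a simulated protein wave; the decoder counts oscillation cycles
# and inverts the mapping. All constants here are invented.

BASE_FREQ = 1.0      # cycles per minute at zero activity (assumed)
FREQ_PER_UNIT = 2.0  # extra cycles per minute per activity unit (assumed)

def broadcast(activity, minutes=1.0, samples=6000):
    """Simulate a recorded wave whose frequency encodes `activity`."""
    freq = BASE_FREQ + FREQ_PER_UNIT * activity
    dt = minutes / samples
    wave = [math.sin(2 * math.pi * freq * i * dt) for i in range(samples)]
    return wave, minutes

def decode(wave, minutes):
    """Estimate frequency from upward zero crossings, then invert the map."""
    cycles = sum(1 for a, b in zip(wave, wave[1:]) if a <= 0 < b)
    freq = cycles / minutes
    return (freq - BASE_FREQ) / FREQ_PER_UNIT

wave, minutes = broadcast(activity=1.5)
print(decode(wave, minutes))  # recovers roughly 1.5
```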
We have only begun to scratch the surface of how scientists can use protein waves to study cells. If the history of waves in technology is any indicator, their potential is vast.
The Moon is uniquely suitable for building telescopes that can't be put on Earth: it suffers far less satellite interference, and it has no ionosphere to block incoming radio waves. But only recently have astronomers like me started thinking about the potential conflicts between the desire to expand knowledge of the universe on one side and geopolitical rivalries and commercial gain on the other – and how to balance those interests.
Both planned bases are targeted at the same small areas near the lunar south pole because of the near-constant solar power available in this region and the rich source of water scientists believe could be found in the Moon's darkest regions nearby.
Unlike Earth, the Moon is barely tilted relative to its path around the Sun. As a result, near the poles the Sun circles the horizon, almost never setting on some crater rims. There, the never-setting Sun casts long shadows over nearby craters, hiding their floors from direct sunlight for the past 4 billion years – about 90% of the age of the solar system.
Surveys from lunar orbit suggest that these craters, called permanently shadowed regions, could hold half a billion tons of water.
The constant sunlight for solar power and proximity to frozen water makes the Moon's poles attractive for human bases. The bases will also need water to drink, wash up and grow crops to feed hungry astronauts. It is hopelessly expensive to bring long-term water supplies from Earth, so a local watering hole is a big deal.
Telescopes on the Moon
For decades, astronomers had ignored the Moon as a potential site for telescopes because it was simply infeasible to build them there. But human bases open up new opportunities.
Astronomers could also put gravitational wave detectors at the poles: these detectors are extraordinarily sensitive, and the Moon's polar regions lack the constant seismic noise from earthquakes that disturbs such detectors on Earth.
A lunar gravitational wave detector could let scientists collect data from pairs of black holes orbiting each other very closely right before they merge. Predicting where and when they will merge tells astronomers where and when to look for a flash of light that they would otherwise miss. With those extra clues, scientists could learn how these black holes are born and how they evolve.
But the rush to build bases on the Moon could interfere with the very conditions that make the Moon so attractive for research in the first place. Although the Moon's surface area is greater than Africa's, human explorers and astronomers want to visit the same few kilometer-sized locations.
But one of the few places on the Moon rich in helium-3 – an isotope prized as a potential fusion fuel – sits in one of the most likely locations for a far-side, Dark Ages radio telescope.
Finally, there are at least two internet and GPS satellite constellations planned to orbit the Moon a few years from now. Unintentional radio emissions from these satellites could render a Dark Ages telescope useless.
The time is now
But compromise isn't out of the question. There might be a few alternative spots to place each telescope.
In 2024, the International Astronomical Union put together the working group Astronomy from the Moon to start defining which sites astronomers want to preserve for their work. This entails ranking the sites by their importance for each type of telescope and beginning to talk with a key United Nations committee. These steps may help astronomers, astronauts from multiple countries and private interests share the Moon.
Over 100 years ago, Alexander Graham Bell asked the readers of National Geographic to do something bold and fresh – “to found a new science.” He pointed out that sciences based on the measurements of sound and light already existed. But there was no science of odor. Bell asked his readers to “measure a smell.”
Today, smartphones in most people's pockets provide impressive built-in capabilities based on the sciences of sound and light: voice assistants, facial recognition and photo enhancement. The science of odor does not offer anything comparable. But that situation is changing, as advances in machine olfaction, also called “digitized smell,” are finally answering Bell's call to action.
Research on machine olfaction faces a formidable challenge due to the complexity of the human sense of smell. Whereas human vision mainly relies on receptor cells in the retina – rods and three types of cones – smell is experienced through about 400 types of receptor cells in the nose.
Machine olfaction starts with sensors that detect and identify molecules in the air. These sensors serve the same purpose as the receptors in your nose.
But to be useful to people, machine olfaction needs to go a step further. The system needs to know what a certain molecule or a set of molecules smells like to a human. For that, machine olfaction needs machine learning.
Applying machine learning to smells
Machine learning, and particularly a kind of machine learning called deep learning, is at the core of remarkable advances such as voice assistants and facial recognition apps.
Machine learning is also key to digitizing smells because it can learn to map the molecular structure of an odor-causing compound to textual odor descriptors. The machine learning model learns the words humans tend to use – for example, “sweet” and “dessert” – to describe what they experience when they encounter specific odor-causing compounds, such as vanillin.
However, machine learning needs large datasets. The web has an unimaginably huge amount of audio, image and video content that can be used to train artificial intelligence systems that recognize sounds and pictures. But machine olfaction has long faced a data shortage problem, partly because most people cannot verbally describe smells as effortlessly and recognizably as they can describe sights and sounds. Without access to web-scale datasets, researchers weren't able to train really powerful machine learning models.
However, things started to change in 2015 when researchers launched the DREAM Olfaction Prediction Challenge. The competition released data collected by Andreas Keller and Leslie Vosshall, biologists who study olfaction, and invited teams from around the world to submit their machine learning models. The models had to predict odor labels like “sweet,” “flower” or “fruit” for odor-causing compounds based on their molecular structure.
The top-performing models were published in a paper in the journal Science in 2017. A classic machine learning technique called random forest, which combines the output of multiple decision tree flow charts, turned out to be the winner.
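The "combine the output of multiple decision trees" idea can be sketched with toy rules. In this hedged example, three hand-written "trees" vote on an odor label from invented molecular features; real random forests learn hundreds of trees from data, and these rules are purely illustrative.

```python
from collections import Counter

# Toy random-forest sketch: three hand-written decision "trees" over
# invented molecular features each vote for an odor label, and the
# forest reports the majority. Real models learn their trees from data.

def tree_1(mol):
    return "sweet" if mol["oxygen_atoms"] >= 2 and mol["weight"] < 200 else "pungent"

def tree_2(mol):
    return "sweet" if mol["aromatic_rings"] >= 1 else "pungent"

def tree_3(mol):
    return "sweet" if mol["weight"] < 180 else "pungent"

def forest_predict(mol, trees=(tree_1, tree_2, tree_3)):
    """Majority vote over the individual trees' predictions."""
    votes = Counter(tree(mol) for tree in trees)
    return votes.most_common(1)[0][0]

# Feature vector loosely based on vanillin (C8H8O3, molecular weight ~152).
vanillin = {"oxygen_atoms": 3, "weight": 152, "aromatic_rings": 1}
print(forest_predict(vanillin))  # "sweet"
```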
I am a machine learning researcher with a longstanding interest in applying machine learning to chemistry and psychiatry. The DREAM challenge piqued my interest. I also felt a personal connection to olfaction. My family traces its roots to the small town of Kannauj in northern India, which is India's perfume capital. Moreover, my father is a chemist who spent most of his career analyzing geological samples. Machine olfaction thus offered an irresistible opportunity at the intersection of perfumery, culture, chemistry and machine learning.
Progress in machine olfaction started picking up steam after the DREAM challenge concluded. During the COVID-19 pandemic, many cases of smell blindness, or anosmia, were reported. The sense of smell, which usually takes a back seat, rose in public consciousness. Additionally, a research project, the Pyrfume Project, made more and larger datasets publicly available.
Smelling deeply
By 2019, the largest datasets had grown from less than 500 molecules in the DREAM challenge to about 5,000 molecules. A Google Research team led by Alexander Wiltschko was finally able to bring the deep learning revolution to machine olfaction. Their model, based on a type of deep learning called graph neural networks, established state-of-the-art results in machine olfaction. Wiltschko is now the founder and CEO of Osmo, whose mission is “giving computers a sense of smell.”
Recently, Wiltschko and his team used a graph neural network to create a “principal odor map,” where perceptually similar odors are placed closer to each other than dissimilar ones. This was not easy: Small changes in molecular structure can lead to large changes in olfactory perception. Conversely, two molecules with very different molecular structures can nonetheless smell almost the same.
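The odor-map idea itself is simple to sketch: each molecule gets a learned coordinate, and perceptual similarity is read off as distance. The 2-D embeddings below are invented for illustration; only the perceptual facts (ethyl vanillin smells much like vanillin, skatole smells nothing like it) are real.

```python
import math

# Hedged sketch of an "odor map": each molecule has a learned
# coordinate, and perceptually similar molecules sit close together.
# The 2-D embedding values below are invented for illustration.

odor_map = {
    "vanillin":       (0.90, 0.10),
    "ethyl_vanillin": (0.88, 0.12),   # perceptually close to vanillin
    "skatole":        (-0.80, 0.70),  # perceptually very different
}

def odor_distance(a, b):
    """Euclidean distance between two molecules in the odor map."""
    return math.dist(odor_map[a], odor_map[b])

print(odor_distance("vanillin", "ethyl_vanillin") <
      odor_distance("vanillin", "skatole"))  # True: map mirrors perception
```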
Such progress in cracking the code of smell is not only intellectually exciting but also has highly promising applications, including personalized perfumes and fragrances, better insect repellents, novel chemical sensors, early detection of disease, and more realistic augmented reality experiences. The future of machine olfaction looks bright. It also promises to smell good.