What is quantum advantage? A quantum computing scientist explains an approaching milestone marking the arrival of extremely powerful computers

IBM's quantum computer got President Joe Biden's attention.
Mandel Ngan/AFP via Getty Images

Daniel Lidar, University of Southern California

Quantum advantage is the milestone the field of quantum computing is fervently working toward, where a quantum computer can solve problems that are beyond the reach of the most powerful non-quantum, or classical, computers.

Quantum refers to the scale of atoms and molecules where the laws of physics as we experience them break down and a different, counterintuitive set of laws apply. Quantum computers take advantage of these strange behaviors to solve problems.

There are some types of problems that are impractical for classical computers to solve, such as cracking state-of-the-art encryption algorithms. Research in recent decades has shown that quantum computers have the potential to solve some of these problems. If a quantum computer can be built that actually does solve one of these problems, it will have demonstrated quantum advantage.

I am a physicist who studies quantum information processing and the control of quantum systems. I believe that this frontier of scientific and technological innovation not only promises groundbreaking advances in computation but also represents a broader surge in quantum technology, including significant advancements in quantum cryptography and quantum sensing.

The source of quantum computing's power

Central to quantum computing is the quantum bit, or qubit. Unlike classical bits, which can only be in states of 0 or 1, a qubit can be in any state that is some combination of 0 and 1. This state of neither just 1 nor just 0 is known as a quantum superposition. With every additional qubit, the number of states that can be represented by the qubits doubles.
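
To make that doubling concrete, here is a minimal Python sketch (a classical simulation for illustration, not how quantum hardware actually works) of the bookkeeping a classical computer would need:

```python
import numpy as np

# A classical computer must track one complex amplitude per possible
# bit string, so an n-qubit state needs a vector of 2**n numbers,
# and each added qubit doubles that count.
n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                              # all qubits in the 0 state

# An equal superposition spreads amplitude over all 2**n bit strings.
superposition = np.full(2**n, 1 / np.sqrt(2**n), dtype=complex)

for qubits in (1, 2, 3, 10, 50):
    print(f"{qubits} qubit(s): {2**qubits} amplitudes to store")
```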

This property is often mistaken for the source of the power of quantum computing. Instead, it comes down to an intricate interplay of superposition, interference and entanglement.

Interference involves manipulating qubits so that their states combine constructively during computations to amplify correct solutions and destructively to suppress the wrong answers. Constructive interference is what happens when the peaks of two waves – like sound waves or ocean waves – combine to create a higher peak. Destructive interference is what happens when a wave peak and a wave trough combine and cancel each other out. Quantum algorithms, which are few and difficult to devise, set up a sequence of interference patterns that yield the correct answer to a problem.
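
A toy simulation can illustrate the cancellation. The Hadamard gate, a standard single-qubit operation, puts a qubit into an equal superposition; applying it a second time makes the two paths to 1 cancel destructively while the paths to 0 reinforce, returning the qubit to 0:

```python
import numpy as np

# The Hadamard gate H sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

qubit = np.array([1.0, 0.0])   # amplitudes for |0> and |1>
once = H @ qubit               # [0.707, 0.707]: equal superposition
twice = H @ once               # interference: back to [1, 0], i.e. |0>

print("after one H: ", once)
print("after two Hs:", twice)  # the |1> amplitude has cancelled to ~0
```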

Entanglement establishes a uniquely quantum correlation between qubits: The state of one cannot be described independently of the others, no matter how far apart the qubits are. This is what Albert Einstein famously dismissed as “spooky action at a distance.” Entanglement's collective behavior, orchestrated through a quantum computer, enables computational speed-ups that are beyond the reach of classical computers.
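
The simplest entangled state, the Bell state, is an equal superposition of both qubits reading 00 and both reading 11. A quick classical simulation shows the hallmark correlation: the two qubits always agree, no matter how many times you sample.

```python
import numpy as np

# The Bell state (|00> + |11>) / sqrt(2): amplitudes over 00, 01, 10, 11.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
probs = np.abs(bell) ** 2          # measurement probabilities

rng = np.random.default_rng(0)
samples = rng.choice(["00", "01", "10", "11"], size=12, p=probs)
print(samples)                     # only "00" and "11" ever appear
```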

The ones and zeros – and everything in between – of quantum computing.

Applications of quantum computing

Quantum computing has a range of potential uses where it can outperform classical computers. In cryptography, quantum computers pose both an opportunity and a challenge. Most famously, they have the potential to decipher current encryption algorithms, such as the widely used RSA scheme.

One consequence of this is that today's encryption protocols need to be reengineered to be resistant to future quantum attacks. This recognition has led to the burgeoning field of post-quantum cryptography. After a lengthy selection process, the National Institute of Standards and Technology recently selected four quantum-resistant algorithms and has begun the process of readying them so that organizations around the world can use them in their encryption technology.
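
A deliberately insecure toy example (tiny numbers, for illustration only) shows why RSA falls to a machine that can factor: recovering the two secret primes from the public modulus is all it takes to rebuild the private key, and efficient factoring is exactly what Shor's algorithm on a large quantum computer would provide.

```python
# Toy RSA with absurdly small primes; real keys use primes of ~1,000+ bits.
p, q = 61, 53                       # the secret primes
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

message = 42
cipher = pow(message, e, n)
assert pow(cipher, d, n) == message          # legitimate decryption works

# The "attack": factor n (instant here, infeasible classically at scale),
# then reconstruct the private key from the recovered primes.
p2 = next(f for f in range(2, n) if n % f == 0)
q2 = n // p2
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(cipher, d2, n))                    # prints 42: secret recovered
```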

In addition, quantum computing can dramatically speed up quantum simulation: the ability to predict the outcome of experiments operating in the quantum realm. Famed physicist Richard Feynman envisioned this possibility more than 40 years ago. Quantum simulation offers the potential for considerable advancements in chemistry and materials science, aiding in tasks such as the intricate modeling of molecular structures for drug discovery and enabling the discovery or creation of materials with novel properties.

Another use of quantum information technology is quantum sensing: detecting and measuring physical properties like electromagnetic energy, gravity, pressure and temperature with greater sensitivity and precision than non-quantum instruments. Quantum sensing has myriad applications in fields such as environmental monitoring, geological exploration, medical imaging and surveillance.

Initiatives such as the development of a quantum internet that interconnects quantum computers are crucial steps toward bridging the quantum and classical computing worlds. This network could be secured using quantum cryptographic protocols such as quantum key distribution, which enables ultra-secure communication channels that are protected against computational attacks – including those using quantum computers.
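
A simplified sketch of the "sifting" step of BB84, the best-known quantum key distribution protocol, gives the flavor: sender and receiver choose random measurement bases and keep only the bits where their choices happened to match. (The eavesdropper detection that makes the protocol secure is not modeled here.)

```python
import random

random.seed(7)
n = 16

# Alice encodes random bits in randomly chosen bases ('+' or 'x');
# Bob measures each photon in his own randomly chosen basis.
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
bob_bases = [random.choice("+x") for _ in range(n)]

# Sifting: where the bases match, Bob's result equals Alice's bit,
# so those positions become the shared secret key.
key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
print("sifted key:", key)
```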

Despite a growing application suite for quantum computing, developing new algorithms that make full use of the quantum advantage – in particular in machine learning – remains a critical area of ongoing research.

A prototype quantum sensor developed by MIT researchers can detect any frequency of electromagnetic waves.
Guoqing Wang, CC BY-NC-ND

Staying coherent and overcoming errors

The quantum computing field faces significant hurdles in hardware and software development. Quantum computers are highly sensitive to any unintentional interactions with their environments. This leads to the phenomenon of decoherence, where qubits rapidly degrade to the 0 or 1 states of classical bits.

Building large-scale quantum computing systems capable of delivering on the promise of quantum speed-ups requires overcoming decoherence. The key is developing effective methods of suppressing and correcting quantum errors, an area my own research is focused on.

In navigating these challenges, numerous quantum hardware and software startups have emerged alongside well-established technology industry players like Google and IBM. This industry interest, combined with significant investment from governments worldwide, underscores a collective recognition of quantum technology's transformative potential. These initiatives foster a rich ecosystem where academia and industry collaborate, accelerating progress in the field.

Quantum advantage coming into view

Quantum computing may one day be as disruptive as the arrival of generative AI. Currently, the development of quantum computing technology is at a crucial juncture. On the one hand, the field has already shown early signs of having achieved a narrowly specialized quantum advantage. Researchers at Google and later a team of researchers in China demonstrated quantum advantage for generating a list of random numbers with certain properties. My research team demonstrated a quantum speed-up for a random number guessing game.

On the other hand, there is a tangible risk of entering a “quantum winter,” a period of reduced investment if practical results fail to materialize in the near term.

While the technology industry is working to deliver quantum advantage in products and services in the near term, academic research remains focused on investigating the fundamental principles underpinning this new science and technology. This ongoing basic research, fueled by enthusiastic cadres of new and bright minds of the type I encounter almost every day, ensures that the field will continue to progress.

Daniel Lidar, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.


3 ways AI can help farmers tackle the challenges of modern agriculture

Farming is as much about data as hardware.
AP Photo/Nati Harnik

Joe Hollis, Iowa State University

For all the attention on flashy new artificial intelligence tools like ChatGPT, the challenges of regulating AI, and doomsday scenarios of superintelligent machines, AI is a useful tool in many fields. In fact, it has enormous potential to benefit humanity.

In agriculture, farmers are increasingly using AI-powered tools to tackle challenges that threaten human health, the environment and food security. Researchers expect the market for these tools to reach US$12 billion by 2032.

As a researcher studying agricultural and rural policy, I see three promising developments in agricultural AI: federated learning, pest and disease detection, and price forecasting.

Pooling data without sharing it

Robotics, sensors and information technology are increasingly used in agriculture. These tools aim to help farmers improve efficiency and reduce chemical use. In addition, data collected by these tools can be used in software that uses machine learning to improve management and decision-making. However, these applications typically require data sharing among stakeholders.

A survey of U.S. farmers found that more than half of respondents said they do not trust federal agencies or private companies with their data. This lack of trust is linked to concerns about sensitive information becoming compromised or being used to manipulate markets and regulations. Machine learning could reduce these concerns.

Federated learning is a technique that trains a machine learning algorithm on data from multiple parties without the parties having to reveal their data to each other. With federated learning, a farmer puts data on a local computer that the algorithm can access rather than sharing the data on a central server. This method increases privacy and reduces the risk of compromise.

If farmers can be persuaded to share their data this way, they can contribute to a collaborative system that helps them make better decisions and meet their sustainability goals. For example, farmers could pool data about conditions for their chickpea crops, and a model trained on all of their data could give each of them better forecasts for their chickpea yields than models trained only on their own data.
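
A bare-bones sketch of the idea, with invented numbers and a deliberately simple model: each farm fits a small rainfall-to-yield line on data that never leaves its own computer, and only the fitted coefficients are shared and averaged into a pooled model.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_fit(rainfall, yields):
    # Each farm fits yield = slope * rainfall + intercept on its own data.
    return np.polyfit(rainfall, yields, 1)

local_models = []
for _ in range(3):                                   # three farms
    rain = rng.uniform(200, 600, size=50)            # mm per season (synthetic)
    yld = 0.004 * rain + rng.normal(1.5, 0.2, 50)    # tons/ha (synthetic)
    local_models.append(local_fit(rain, yld))        # only coefficients leave

# The central server averages coefficients, never seeing raw farm data.
global_model = np.mean(local_models, axis=0)
print("pooled (slope, intercept):", global_model)
```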

An AI-driven giant robot armed with lasers is a major threat – to weeds.

Detecting pests and disease

Farmer livelihoods and global food security are increasingly at risk from plant disease and pests. The Food and Agriculture Organization estimates that worldwide annual losses from disease and pests total $290 billion, with 40% of global crop production affected.

Farmers typically spray crops with chemicals to preempt outbreaks. However, the overuse of these chemicals is linked to harmful effects on human health, soil and water quality and biodiversity. Worryingly, many pathogens are becoming resistant to existing treatments, and developing new ones is proving to be difficult.

Reducing the amount of chemicals used is therefore paramount, and AI may be part of a solution.

The Consortium of International Agricultural Research Centers has created a mobile phone app that identifies pests and disease. The app, “Tumaini,” allows users to upload a photo of a suspected pest or disease, which the AI compares with a database of 50,000 images. The app also provides analysis and can recommend treatment programs.

If used with farm management tools, apps like this can improve farmers' ability to target their spraying and improve accuracy in deciding how much chemical to use. Ultimately, these efficiencies may reduce pesticide use, lessen the risk of resistance and prevent spillovers that cause harm to both humans and the environment.

Crystal ball for prices

Market volatility and fluctuating prices affect how farmers invest and decide what to grow. This uncertainty can also prevent farmers from taking risks on new developments.

AI can help reduce this uncertainty by forecasting prices. For example, companies such as Agtools, Agremo and GeoPard offer AI-powered farm decision tools. These tools allow for real-time analysis of price points and market data and present farmers with data on long-term trends that can help optimize production.
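
The commercial tools are proprietary, but a bare-bones trend forecast conveys the basic mechanic: fit a line to recent prices and extrapolate one step ahead. The prices below are invented, and real systems use far richer models and many more signals than price alone.

```python
import numpy as np

# Ten weeks of (invented) crop prices in dollars per ton.
prices = np.array([212, 218, 225, 221, 230, 236, 241, 239, 247, 252])
weeks = np.arange(len(prices))

# Fit a linear trend and extrapolate it to next week.
slope, intercept = np.polyfit(weeks, prices, 1)
forecast = slope * len(prices) + intercept
print(f"trend: {slope:+.1f} $/ton per week, next-week forecast: {forecast:.0f} $/ton")
```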

This data allows farmers to react to price changes and allows them to plan more strategically. If farmers' economic resilience improves, it increases the likelihood that they can invest in new opportunities and technologies that benefit both farms and the larger food system.

AI for good

Human innovation has always produced winners and losers. The dangers of AI are apparent: biased algorithms, data privacy violations and the manipulation of human behavior. However, it is also a technology that has the potential to solve many problems.

These uses for AI in agriculture are a cause for optimism among farmers. If the agriculture industry can promote the utility of these inventions while developing strong and sensible frameworks to minimize harms, AI can help reduce modern agriculture's impact on human health and the environment while helping improve global food security in the 21st century.

Joe Hollis, PhD student in Rural Sociology and Sustainable Agriculture, Iowa State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

ChatGPT turns 1: AI chatbot’s success says as much about humans as technology

The drama surrounding OpenAI CEO Sam Altman, left – joined on stage here by Microsoft CEO Satya Nadella – has overshadowed the first anniversary of the company's ChatGPT.
AP Photo/Barbara Ortutay

Tim Gorichanaz, Drexel University

ChatGPT was launched on Nov. 30, 2022, ushering in what many have called artificial intelligence's breakout year. Within days of its release, ChatGPT went viral. Screenshots of conversations snowballed across social media, and the use of ChatGPT skyrocketed to an extent that seems to have surprised even its maker, OpenAI. By January, ChatGPT was seeing 13 million unique visitors each day, setting a record for the fastest-growing user base of a consumer application.

Throughout this breakout year, ChatGPT has revealed the power of a good interface and the perils of hype, and it has sown the seeds of a new set of human behaviors. As a researcher who studies technology and human information behavior, I find that ChatGPT's influence in society stems as much from how people view and use it as from the technology itself.

Generative AI tools like ChatGPT are becoming pervasive. Since ChatGPT's release, some mention of AI has seemed obligatory in presentations, conversations and articles. Today, OpenAI claims 100 million people use ChatGPT every week.

Besides people interacting with ChatGPT at home, employees at all levels up to the C-suite in businesses are using the AI chatbot. In tech, generative AI is being called the biggest platform since the iPhone, which debuted in 2007. All the major players are making AI bets, and venture funding in AI startups is booming.

Along the way, ChatGPT has raised numerous concerns, such as its implications for disinformation, fraud, intellectual property issues and discrimination. In my world of higher education, much of the discussion has surrounded cheating, which has become a focus of my own research this year.

Lessons from ChatGPT's first year

The success of ChatGPT speaks foremost to the power of a good interface. AI has already been part of countless everyday products for well over a decade, from Spotify and Netflix to Facebook and Google Maps. The first version of GPT, the AI model that powers ChatGPT, dates back to 2018. And even OpenAI's other products, such as DALL-E, did not make the waves that ChatGPT did immediately upon its release. It was the chat-based interface that set off AI's breakout year.

There is something uniquely beguiling about chat. Humans are endowed with language, and conversation is a primary way people interact with each other and infer intelligence. A chat-based interface is a natural mode for interaction and a way for people to experience the “intelligence” of an AI system. The phenomenal success of ChatGPT shows again that user interfaces drive widespread adoption of technology, from the Macintosh to web browsers and the iPhone. Design makes the difference.

The chat in ChatGPT is just as important as the AI under the hood.
Nicolas Maeterlinck/Belga Mag/AFP via Getty Images

At the same time, one of the technology's principal strengths – generating convincing language – makes it well suited for producing false or misleading information. ChatGPT and other generative AI systems make it easier for criminals and propagandists to prey on human vulnerabilities. The potential of the technology to boost fraud and misinformation is one of the key rationales for regulating AI.

Amid the real promises and perils of generative AI, the technology has also provided another case study in the power of hype. This year has brought no shortage of articles on how AI is going to transform every aspect of society and how the proliferation of the technology is inevitable.

ChatGPT is not the first technology to be hyped as “the next big thing,” but it is perhaps unique in simultaneously being hyped as an existential risk. Numerous tech titans and even some AI researchers have warned about the risk of superintelligent AI systems emerging and wiping out humanity, though I believe that these fears are far-fetched.

The media environment favors hype, and the current venture funding climate further fuels AI hype in particular. Playing to people's hopes and fears is a recipe for anxiety with none of the ingredients for wise decision-making.

What the future may hold

The AI floodgates opened in 2023, but the next year may bring a slowdown. AI is likely to meet technical limitations and encounter infrastructural hurdles such as chip manufacturing and server capacity. Simultaneously, AI regulation is likely to be on the way.

This slowdown should give time for norms in human behavior to form, both in terms of etiquette, as in when and where using ChatGPT is socially acceptable, and effectiveness, like when and where ChatGPT is most useful.

ChatGPT and other generative AI systems will settle into people's workflows, allowing workers to accomplish some tasks faster and with fewer errors. In the same way that people learned “to google” for information, humans will need to learn new practices for working with generative AI tools.

But the outlook for 2024 isn't completely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will almost certainly be used to influence public opinion and stoke division. Meta may have banned the use of generative AI in political advertising, but this isn't likely to stop ChatGPT and similar tools from being used to create and spread false or misleading content.

Political misinformation spread across social media in 2016 as well as in 2020, and it is virtually certain that generative AI will be used to continue those efforts in 2024. Even outside social media, conversations with ChatGPT and similar products can be sources of misinformation on their own.

As a result, another lesson that everyone – users of ChatGPT or not – will have to learn in the blockbuster technology's second year is to be vigilant when it comes to digital media of all kinds.

Tim Gorichanaz, Assistant Teaching Professor of Information Science, Drexel University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

MicroRNA is the master regulator of the genome – researchers are learning how to treat disease by harnessing the way it controls genes

RNA is more than just a transitional molecule between DNA and protein.
Kateryna Kon/Science Photo Library via Getty Images

Andrea Kasinski, Purdue University

The Earth formed 4.5 billion years ago, and life emerged less than a billion years after that. Although life as we know it is dependent on four major macromolecules – DNA, RNA, proteins and lipids – only one is thought to have been present at the beginning of life: RNA.

It is no surprise that RNA likely came first. It is the only one of those major macromolecules that can both replicate itself and catalyze chemical reactions, both of which are essential for life. Like DNA, RNA is made from individual nucleotides linked into chains. Scientists initially understood that genetic information flows in one direction: DNA is transcribed into RNA, and RNA is translated into proteins. That principle is called the central dogma of molecular biology. But there are many deviations.

One major example of an exception to the central dogma is that some RNAs are never translated or coded into proteins. This fascinating diversion from the central dogma is what led me to dedicate my scientific career to understanding how it works. Indeed, research on RNA has lagged behind the other macromolecules. Although there are multiple classes of these so-called noncoding RNAs, researchers like myself have started to focus a great deal of attention on short stretches of genetic material called microRNAs and their potential to treat various diseases, including cancer.

MicroRNAs play a key role in regulating gene expression.

MicroRNAs and disease

Scientists regard microRNAs as master regulators of the genome due to their ability to bind to and alter the expression of many protein-coding RNAs. Indeed, a single microRNA can regulate anywhere from 10 to 100 protein-coding RNAs. Rather than being translated into proteins themselves, they bind to protein-coding RNAs to silence genes.

The reason microRNAs can regulate such a diverse pool of RNAs stems from their ability to bind to target RNAs they don't perfectly match up with. This means a single microRNA can often regulate a pool of targets that are all involved in similar processes in the cell, leading to an enhanced response.
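
A simplified sketch of how such partial matching is found computationally: prediction tools search messenger RNAs for short sites complementary to the microRNA's "seed" region, roughly bases 2 through 8. The sequences below are illustrative, not real annotations.

```python
def reverse_complement(rna):
    # Base-pair an RNA sequence (A-U, G-C) and reverse it.
    return rna.translate(str.maketrans("AUGC", "UACG"))[::-1]

mirna = "UGGAGUGUGACAAUGGUGUUUG"       # a miR-122-like sequence (illustrative)
seed = mirna[1:8]                      # bases 2-8, the "seed" region
site = reverse_complement(seed)        # what a matching mRNA site looks like

# Invented 3' UTR fragments; a real search scans whole transcriptomes.
utrs = {
    "geneA": "CAGUGCAAUAGCACACUCCAGCUGGG",
    "geneB": "GGGACACUCCAUUUACGGAUACGAUC",
    "geneC": "AUGCAUGCAUGCAUGCAUGCAUGCAU",
}
targets = [gene for gene, utr in utrs.items() if site in utr]
print(f"seed {seed} -> site {site} -> predicted targets: {targets}")
```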

Because a single microRNA can regulate multiple genes, many microRNAs can contribute to disease when they become dysfunctional.

In 2002, researchers first identified the role dysfunctional microRNAs play in disease through patients with a type of blood and bone marrow cancer called chronic lymphocytic leukemia. This cancer results from the loss of two microRNAs normally involved in blocking tumor cell growth. Since then, scientists have identified over 2,000 microRNAs in people, many of which are altered in various diseases.

The field has also developed a fairly solid understanding of how microRNA dysfunction contributes to disease. Changing one microRNA can change several other genes, resulting in a plethora of alterations that can collectively reshape the cell's physiology. For example, over half of all cancers have significantly reduced activity in a microRNA called miR-34a. Because miR-34a regulates many genes involved in preventing the growth and migration of cancer cells, losing miR-34a can increase the risk of developing cancer.

Researchers are looking into using microRNAs as treatments for cancer, heart disease, neurodegenerative diseases and other conditions. While results in the laboratory have been promising, bringing microRNA treatments into the clinic has met multiple challenges. Many are related to inefficient delivery into target cells and poor stability, which limit their effectiveness.

MicroRNA can silence genes by binding to mRNA.
Kajsa Mollersen/Wikimedia Commons, CC BY-SA

Delivering microRNA to cells

One reason why delivering microRNA treatments into cells is difficult is because microRNA treatments need to be delivered specifically to diseased cells while avoiding healthy cells. Unlike mRNA COVID-19 vaccines that are taken up by scavenging immune cells whose job is to detect foreign materials, microRNA treatments need to fool the body into thinking they aren't foreign in order to avoid immune attack and get to their intended cells.

Scientists are studying various ways to deliver microRNA treatments to their specific target cells. One method garnering a great deal of attention relies on directly linking the microRNA to a ligand, a kind of small molecule that binds to specific proteins on the surface of cells. Compared with healthy cells, diseased cells can have a disproportionate number of some surface proteins, or receptors. So, ligands can help microRNAs home specifically to diseased cells while avoiding healthy cells. The first ligand approved by the U.S. Food and Drug Administration to deliver small RNAs like microRNAs, N-acetylgalactosamine, or GalNAc, preferentially delivers RNAs to liver cells.

Identifying ligands that can deliver small RNAs to other cells requires finding receptors expressed at high enough levels on the surface of target cells. Typically, over one million copies per cell are needed in order to achieve sufficient delivery of the drug.

One ligand that stands out is folate, also referred to as vitamin B9, a small molecule critical during periods of rapid cell growth such as fetal development. Because some tumor cells have over one million folate receptors, this ligand makes it possible to deliver enough of a therapeutic RNA to target different types of cancer. For example, my laboratory developed a new molecule called FolamiR-34a – folate linked to miR-34a – that reduced the size of breast and lung cancer tumors in mice.

Tumors can exploit healthy cells to grow blood vessels that supply them with nutrients, as seen in the endothelial cells to the left sprouting extensions. Exposing these cells to certain microRNAs, however, can disable that growth, as seen in the cell to the right.
Dudley Lab, University of Virginia School of Medicine/NIH via Flickr, CC BY-NC

Making microRNAs more stable

One of the other challenges with using small RNAs is their poor stability, which leads to their rapid degradation. As such, RNA-based treatments are generally short-lived in the body and require frequent doses to maintain a therapeutic effect.

To overcome this challenge, researchers are modifying small RNAs in various ways. While each RNA requires a specific modification pattern, successful changes can significantly increase their stability. This reduces the need for frequent dosing, subsequently decreasing treatment burden and cost.

For example, modified GalNAc-siRNAs, another form of small RNAs, reduce dosing from every few days to once every six months in nondividing cells. My team developed folate ligands linked to modified microRNAs for cancer treatment that reduced dosing from once every other day to once a week. For diseases like cancer where cells are rapidly dividing and quickly diluting the delivered microRNA, this increase in activity is a significant advancement in the field. We anticipate this accomplishment will facilitate further development of this folate-linked microRNA as a cancer treatment in the years to come.

While there is still considerable work to be done to overcome the hurdles associated with microRNA treatments, it's clear that RNA shows promise as a therapeutic for many diseases.

Andrea Kasinski, Associate Professor of Biological Sciences, Purdue University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
