
Bridges can be protected from ship collisions – an expert on structures in disasters explains how


A cargo ship hit the Sunshine Skyway Bridge over Florida's Tampa Bay in 1980, collapsing one span and killing 35 people.
AP Photo/Jackie Green

Sherif El-Tawil, University of Michigan

The MV Dali, a 984-foot, 100,000-ton cargo ship, rammed into the Francis Scott Key Bridge when leaving Baltimore harbor on March 26, 2024, causing a portion of the bridge to collapse.

In an interview, University of Michigan civil engineer Sherif El-Tawil explained how often ships collide with bridges, what can be done to protect bridges from collisions, and how a similar disaster in Florida in 1980 – just three years after the Key Bridge opened – changed the way bridges are built.

This is not the first time a ship has taken out a bridge. What's the history of ship-bridge collisions?

This is an extremely rare event. To my knowledge, there are about 40 or so recorded events in the past 65 years that involved a similar type of damage to a bridge caused by a ship. So they seem to occur on average about once every one and a half to two years around the world. When you consider that there are millions of bridges around the world – and most of them cross waterways – you can imagine how rare this is.


The most influential case was the 1980 Sunshine Skyway Bridge collision in Florida, which prompted the federal government to take action by developing guidelines for designing bridges for ship collision. By the early 1990s the provisions were developed and incorporated into the bridge design code, the AASHTO specifications. The American Association of State Highway and Transportation Officials produces the design code that every bridge in the United States must conform to.

What was different about the Sunshine Skyway Bridge disaster from previous bridge collisions?

There were casualties. The fact that a crash could bring down a bridge, just like in the Baltimore situation, prompted the concern: Can we do something about it? And that something was those specifications that came out and eventually became incorporated in the national design document.

What those specifications say is that you either design the bridge for the impact force that a ship can deliver or you must protect the bridge against that impact force. So you must have a protective system. That's why I was surprised that this bridge did not have a protective system, some type of barrier, around it. I have not examined the structural plans of this bridge. All I could see is the pictures that were published online, but protective systems would be very visible and recognizable if they were there.

The Sunshine Skyway Bridge disaster in 1980 prompted improvements in bridge safety.

What is currently mandated for new bridge construction, and is it sufficient to handle today's massive cargo ships?

I estimate, based on the published speed and weight of the MV Dali, that the impact force was in the range of 30 million pounds. This is a massive force, and you need a massive structure to withstand that kind of force. But it is doable if you have a huge pier. That might dictate the design of the bridge and what it could look like. Most likely it could not be a truss bridge. It may be a cable-stayed bridge that has a very large tower that is capable of taking that load.
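For a sense of where a number like that comes from, here is a minimal back-of-the-envelope sketch – not El-Tawil's own calculation. It assumes the empirical head-on impact formula from the AASHTO guide specifications, P_s = 8.15 · V · √DWT (force in kips, speed in feet per second, deadweight tonnage in tonnes), along with the roughly 100,000-ton figure quoted above and an assumed impact speed of about 8 knots.

```python
import math

def aashto_ship_impact_force_kips(speed_ft_per_s: float, dwt_tonnes: float) -> float:
    """Equivalent static head-on ship impact force, in kips (1 kip = 1,000 lb),
    from the empirical AASHTO formula P_s = 8.15 * V * sqrt(DWT)."""
    return 8.15 * speed_ft_per_s * math.sqrt(dwt_tonnes)

# Rough, assumed inputs for the MV Dali (not official figures):
speed_fps = 8 * 1.688          # about 8 knots, converted to feet per second
dwt_tonnes = 100_000           # the approximate tonnage cited in this article

force_kips = aashto_ship_impact_force_kips(speed_fps, dwt_tonnes)
print(f"Estimated impact force: {force_kips * 1000:,.0f} pounds")  # roughly 35 million
```

Even with these rough inputs, the estimate lands in the same tens-of-millions-of-pounds range El-Tawil describes, which is why only a very large pier or a dedicated protective system can absorb such a blow.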

If you cannot design for that load, then you have to consider other alternatives. And that's what the specifications say. They're very clear about this. And those alternatives could be to build an island around the pier or a rock wall, or put dolphins – standalone structures set in the riverbed – adjacent to it, or put on fenders that absorb the energy so the ship doesn't come in so fast. All of these are ways you can mitigate the impact.

Engineers design structures – and bridges are no exception – for a certain probability of failure, because if we didn't, the cost would be prohibitive. Theoretically, you could build a structure that would never fail, but you'd have to put infinite money into it. For a critical bridge of this type, we would consider an acceptable chance for failure to be 1 in 10,000 years.


Based on published information, I tried to compute what the probability of this event would be, and it turns out to be 1 in 100,000 years or so. The ship made a beeline directly to the pier that was vulnerable. It was just shocking to see such a rare event unfold.

The authority that operates the bridge must have considered protecting it, and the low probability of this occurring must have played a role in whether or not they would invest in protective measures, because any type of construction in or on water is very expensive.

Is it feasible to protect older bridges?

I think so. For some of them it might be something lower tech, like the island idea, using rocks or concrete components that would prevent the ship from reaching the pier at all.


It was a massive ship with a flared bow. The lower part of the ship, which extends beyond the bow, I believe struck the foundation system, but the bow reached the pier. The pier was like an A shape, so the bow snapped one side of the A. The remaining side could not carry the weight of the bridge, and so the whole thing collapsed. If somebody kicks your feet from underneath you, you're just going to fall. That's exactly what happened.

Video captured the moment the Dali hit a pier of the Francis Scott Key Bridge.

How many bridges are vulnerable to ship collisions?

I don't know the number, but I know that bridges in this category – long-span, major bridges like this one – are probably less than 0.1% of the bridges in the U.S. And some of them do not necessarily cross waterways, so the subset at risk is an even lower percentage. So it's a rare event occurring to a rare kind of bridge.

Are cargo ships getting larger, and is that a consideration for protecting bridges?


I expect so because there is an economy of scale. Bigger ships would be cheaper for transporting goods. But I cannot envision that the designer of this bridge, 50 years ago or so, would have thought that a ship this size could impact the bridge. If they had, I'm sure they would have taken steps to address that. It just didn't cross their mind.

If this bridge had been designed to the current specifications, I believe it would have survived. There are two reasons a ship would deliver this kind of force: It's moving too fast or it's too heavy. And those two factors are taken into consideration in the impact force for which we design. So if we are taking those explicitly into consideration, then a bigger ship, yes, it's a bigger force, and we would design for that.

But let's go forward another 50 years and imagine you have a much larger ship that comes into being. At that time, bridges will have been designed for smaller ships, and you have the same problem all over again. It's hard to predict how big these ships will get. You can design for current ships, but as they evolve, it's hard to predict many years into the future.

Are there other takeaways from this disaster?


The loss of this bridge, beyond the tragic loss of life, is going to be felt for many months if not years. It's not straightforward to replace a bridge of this magnitude, of this span distance. It's something that will require a lot of planning and a lot of resources to come back again to where we were before.

Sherif El-Tawil, Professor of Civil and Environmental Engineering, University of Michigan

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Viruses are doing mysterious things everywhere – AI can help researchers understand what they’re up to in the oceans and in your gut


theconversation.com – Libusha Kelly, Associate Professor of Systems and Computational Biology, Microbiology and Immunology, Albert Einstein College of Medicine – 2024-05-15 07:16:41

Many viral genetic sequences code for proteins that researchers haven't seen before.

KTSDesign/Science Photo Library via Getty Images

Libusha Kelly, Albert Einstein College of Medicine

Viruses are a mysterious and poorly understood force in microbial ecosystems. Researchers know they can infect, kill and manipulate human and bacterial cells in nearly every environment, from the oceans to your gut. But scientists don't yet have a full picture of how viruses affect their surrounding environments in large part because of their extraordinary diversity and ability to rapidly evolve.


Communities of microbes are difficult to study in a laboratory setting. Many microbes are challenging to cultivate, and their natural environment has many more features influencing their success or failure than scientists can replicate in a lab.

So systems biologists like me often sequence all the DNA present in a sample – for example, a fecal sample from a patient – separate out the viral DNA sequences, then annotate the sections of the viral genome that code for proteins. These notes on the location, structure and other features of genes help researchers understand the functions viruses might carry out in the environment and help identify different kinds of viruses. Researchers annotate viruses by matching viral sequences in a sample to previously annotated sequences available in public databases of viral genetic sequences.

However, scientists are identifying viral sequences in DNA collected from the environment at a rate that far outpaces our ability to annotate those genes. This means researchers are publishing findings about viruses in microbial ecosystems using unacceptably small fractions of available data.

To improve researchers' ability to study viruses around the globe, my team and I have developed a novel approach to annotate viral sequences using artificial intelligence. Through protein language models akin to large language models like ChatGPT but specific to proteins, we were able to classify previously unseen viral sequences. This opens the door for researchers to not only learn more about viruses, but also to address biological questions that are difficult to answer with current techniques.


Annotating viruses with AI

Large language models use relationships between words in large datasets of text to infer potential answers to questions they are not explicitly “taught” the answer to. When you ask a chatbot “What is the capital of France?” for example, the model is not looking up the answer in a table of capital cities. Rather, it is using its training on huge datasets of documents and information to infer the answer: “The capital of France is Paris.”

Similarly, protein language models are AI algorithms that are trained to recognize relationships between billions of protein sequences from environments around the world. Through this training, they may be able to infer something about the essence of viral proteins and their functions.

We wondered whether protein language models could answer this question: “Given all annotated viral genetic sequences, what is this new sequence's function?”

In our proof of concept, we trained neural networks on previously annotated viral protein sequences in pre-trained protein language models and then used them to predict the annotation of new viral protein sequences. Our approach allows us to probe what the model is “seeing” in a particular viral sequence that leads it to a particular annotation. This helps identify candidate proteins of interest, either based on their specific functions or on how their genome is arranged, winnowing down the search of vast datasets.
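As a rough illustration of the general idea – a sketch, not the team's actual pipeline – the snippet below trains a small neural network on embeddings that a pre-trained protein language model is assumed to have already produced, then uses it to predict annotations for new viral proteins. The embeddings, annotation labels and model settings are all made up so the example runs on its own.

```python
# Sketch: predicting viral protein annotations from protein-language-model embeddings.
# Assumption: a pre-trained protein language model has already turned each protein
# sequence into a fixed-length vector; random data stands in for those vectors here.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for real data: 1,000 proteins, 320-dimensional embeddings, each with
# one of three hypothetical functional annotations.
embeddings = rng.normal(size=(1000, 320))
annotations = rng.choice(["capsid", "integrase", "unknown_function"], size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, annotations, test_size=0.2, random_state=0
)

# A small neural network on top of the embeddings; the heavy lifting is done by
# the protein language model that produced them.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))

# Predict annotations for "new" viral protein embeddings.
new_embeddings = rng.normal(size=(5, 320))
print(clf.predict(new_embeddings))
```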



Prochlorococcus is one of the many species of marine bacteria with proteins that researchers haven't seen before.

Anne Thompson/Chisholm Lab, MIT via Flickr

By identifying more distantly related viral gene functions, protein language models can complement current methods to provide new insights into microbiology. For example, my team and I were able to use our model to discover a previously unrecognized integrase – a type of protein that can move genetic information in and out of cells – in the globally abundant marine picocyanobacteria Prochlorococcus and Synechococcus. Notably, this integrase may be able to move genes in and out of these populations of bacteria in the oceans and enable these microbes to better adapt to changing environments.

Our language model also identified a novel viral capsid protein that is widespread in the global oceans. We produced the first picture of how its genes are arranged, showing it can contain different sets of genes, which we believe indicates this virus serves different functions in its environment.

These preliminary findings represent only two of thousands of annotations our approach has provided.


Analyzing the unknown

Most of the hundreds of thousands of newly discovered viruses remain unclassified. Many viral genetic sequences match protein families with no known function or have never been seen before. Our work shows that similar protein language models could help study the threat and promise of our planet's many uncharacterized viruses.

While our study focused on viruses in the global oceans, improved annotation of viral proteins is critical for better understanding the role viruses play in health and disease in the human body. We and other researchers have hypothesized that viral activity in the human gut microbiome might be altered when you're sick. This means that viruses may help identify stress in microbial communities.

However, our approach is also limited because it requires high-quality annotations. Researchers are developing newer protein language models that incorporate other “tasks” as part of their training, particularly predicting protein structures to detect similar proteins, to make them more powerful.

Making all AI tools available via FAIR Data Principles – data that is findable, accessible, interoperable and reusable – can help researchers at large realize the potential of these new ways of annotating protein sequences, leading to discoveries that benefit human health.

Libusha Kelly, Associate Professor of Systems and Computational Biology, Microbiology and Immunology, Albert Einstein College of Medicine


This article is republished from The Conversation under a Creative Commons license. Read the original article.


Human differences in judgment lead to problems for AI


theconversation.com – Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California – 2024-05-14 07:14:06

Bias isn't the only human imperfection turning up in AI.

Emrah Turudu/Photodisc via Getty Images

Mayank Kejriwal, University of Southern California

Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.


If society could somehow eliminate bias, would all problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Statistical noise

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than initially meets the eye. A seminal work, dating back all the way to the Great Depression, has found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.


Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Behavioral economist Daniel Kahneman explains the concept of noise in human judgment.

Noise in the data

On the surface, it doesn't seem likely that noise could affect the performance of AI systems. After all, machines aren't affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: “If I place a heavy rock on a paper table, will it collapse? Yes or No.” If there is high agreement between the two – in the best case, perfect agreement – the machine is approaching human-level common sense, according to the test.


So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: “Is the sentence plausible or implausible? My dog plays volleyball.” In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don't account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than if the answers diverge – in other words, where there is noise. Researchers still don't know whether or how to weigh AI's answers in that situation, but a first step is acknowledging that the problem exists.

Tracking down noise in the machine

Theory aside, the question still remains whether all of the above is hypothetical or if in real tests of common sense there is noise. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers and get multiple people to independently label them, meaning provide answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.
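As a toy illustration of what such a measurement could look like – again a sketch, not the study's methodology – the snippet below takes several independent human labels per question and reports how often the labelers agree with the majority answer. The questions, labels and the simple majority-agreement measure are invented for the example.

```python
# Toy measurement of labeling "noise": how much do independent annotators
# disagree when answering the same commonsense questions?
from collections import Counter

# Hypothetical yes/no labels from five annotators per question.
labels_per_question = {
    "A heavy rock is placed on a paper table. Will it collapse?": ["yes", "yes", "yes", "yes", "yes"],
    "Is 'My dog plays volleyball' plausible?": ["no", "no", "yes", "no", "yes"],
    "Can you carry water in a sieve?": ["no", "yes", "no", "no", "no"],
}

for question, labels in labels_per_question.items():
    counts = Counter(labels)
    majority_label, majority_count = counts.most_common(1)[0]
    agreement = majority_count / len(labels)  # share of annotators agreeing with the majority
    print(f"{agreement:.0%} agreement on {majority_label!r} – {question}")
```

Low agreement on a question is a direct sign of noise in the very labels that AI systems are later scored against.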

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven't been any studies of possible noise in AI tests.


To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was how inevitable noise is, and whether it is just a matter of quality control.

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high – even universal – agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4% and 10% of a system's performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85% on a test, and you built an AI system that achieved 91%. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we're not sure anymore that the 6% improvement means much. For all we know, there may be no real improvement.
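To see how that uncertainty arises, here is a toy simulation – not the analysis from our paper – in which two systems with true accuracies of 85% and 91% are both scored against an answer key where a made-up 8% of the human labels are wrong. The measured gap between the systems bounces around from trial to trial.

```python
# Toy simulation: label noise blurs the comparison between two AI systems.
# Assumptions (all invented for illustration): binary answers, 500 questions,
# true accuracies of 85% and 91%, and 8% of scoring labels flipped by noise.
import random

random.seed(0)
N_QUESTIONS = 500
NOISE_RATE = 0.08

def measured_accuracy(true_accuracy: float) -> float:
    correct = 0
    for _ in range(N_QUESTIONS):
        truth = random.choice([0, 1])
        answer = truth if random.random() < true_accuracy else 1 - truth  # system's answer
        key = truth if random.random() > NOISE_RATE else 1 - truth        # noisy human label
        correct += answer == key
    return correct / N_QUESTIONS

for trial in range(5):
    a, b = measured_accuracy(0.85), measured_accuracy(0.91)
    print(f"trial {trial}: system A {a:.1%}, system B {b:.1%}, measured gap {b - a:+.1%}")
```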


On AI leaderboards, where large language models like the one that powers ChatGPT are ranked, performance differences between rival systems are far narrower, typically less than 1%. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.

Noise audits

What is the way forward? Returning to Kahneman's book, he proposed the concept of a “noise audit” for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, will lead to their adoption.

Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Iron fuels immune cells – and it could make asthma worse


theconversation.com – Benjamin Hurrell, Assistant Professor of Research in Molecular Microbiology and Immunology, University of Southern California – 2024-05-14 07:13:50

Iron carries oxygen throughout the body, but ironically, it can also make it harder to breathe for people with asthma.

Hiroshi Watanabe/Stone via Getty Images

Benjamin Hurrell, University of Southern California and Omid Akbari, University of Southern California

You've likely heard that you can get iron from eating spinach and steak. You might also know that it's an essential trace element that is a major component of hemoglobin, a protein in red blood cells that carries oxygen from your lungs to all parts of the body.


A lesser-known but important function of iron is its involvement in generating energy for certain immune cells.

In our lab's newly published research, we found that blocking or limiting iron uptake in immune cells could potentially ease up the symptoms of an asthma attack caused by allergens.

Immune cells that need iron

During an asthma attack, harmless allergens activate immune cells in your lungs called ILC2s. This causes them to multiply and release large amounts of cytokines – messengers that immune cells use to communicate – and to trigger unwanted inflammation. The result is symptoms such as coughing and wheezing that make it feel like someone is squeezing your airways.

To assess the role iron plays in how ILC2s function in the lungs, we conducted a series of experiments with ILC2s in the lab. We then confirmed our findings in mice with allergic asthma and in people with different severities of asthma.


First, we found that ILC2s use a protein called transferrin receptor 1, or TfR1, to take up iron. When we blocked this protein as the ILC2s were undergoing activation, the cells were unable to use iron and could no longer multiply and cause inflammation as well as they did before.

We then used a chemical called an iron chelator to prevent ILC2s from using any iron at all. Iron chelators are like superpowered magnets for iron and are used in medical treatments to manage conditions where there's too much iron in the body.

When we deprived ILC2s of iron with an iron chelator, the cells had to change their metabolism and switch to a different way of getting energy, like trading in a car for a bicycle. The cells weren't as effective at causing inflammation in the lungs anymore.


An asthma attack can feel like someone is squeezing your airways.

Mariia Siurtukova/Moment via Getty Images


Next, we limited cellular iron in mice with sensitive airways due to ILC2s. We did this in three different ways: by inhibiting TfR1, adding an iron chelator or inducing low overall iron levels using a synthetic protein called mini-hepcidin. Each of these methods helped reduce the mice's airway hyperreactivity – basically reducing the severity of their asthma symptoms.

Lastly, we looked at cells from patients with asthma. We noticed something interesting: the more TfR1 protein on their ILC2 cells, the worse their asthma symptoms. In other words, iron was playing a big role in how bad their asthma got. Blocking TfR1 and administering iron chelators both reduced ILC2 proliferation and cytokine production, suggesting that our findings in mice apply to human cells. This means we can move these findings from the lab to clinical trials as quickly as possible.

Iron therapy for asthma

Iron is like the conductor of an orchestra, instructing immune cells such as ILC2s how to behave during an asthma attack. Without enough iron, these cells can't cause as much trouble, which could mean fewer asthma symptoms.

Next, we're working on targeting a patient's immune cells during an asthma attack. If we can lower the amount of iron available to ILC2s without depleting overall iron levels in the body, this could mean a new therapy for asthma that tackles the root cause of the disease, not just the symptoms. Available treatments can control symptoms to keep patients alive, but they are not curing the disease. Iron-related therapies may offer a better solution for patients with asthma.


Our discovery applies to more than just asthma. It could be a game-changer for other diseases where ILC2s are involved, such as eczema and type 2 diabetes. Who knew iron could be such a big deal to your immune system?

Benjamin Hurrell, Assistant Professor of Research in Molecular Microbiology and Immunology, University of Southern California and Omid Akbari, Professor of Molecular Microbiology and Immunology, University of Southern California

This article is republished from The Conversation under a Creative Commons license. Read the original article.
