
The Conversation

DOJ funding pipeline subsidizes questionable big data surveillance technologies


Predictive policing aimed to identify crime hot spots and ‘chronic' offenders but missed the mark.
Patrick T. Fallon for The Washington Post via Getty Images

Andrew Guthrie Ferguson, American University

Predictive policing has been shown to be an ineffective and biased policing tool. Yet the Department of Justice has been funding this crime surveillance and analysis technology for years and continues to do so despite criticism from researchers, privacy advocates and members of Congress.

Sen. Ron Wyden, D-Ore., and U.S. Rep. Yvette Clarke, D-N.Y., joined by five Democratic senators, called on Attorney General Merrick Garland to halt funding for predictive policing technologies in a letter issued Jan. 29, 2024. Predictive policing involves analyzing crime data in an attempt to identify where and when crimes are likely to occur and who is likely to commit them.

The request came months after the Department of Justice failed to answer basic questions about how predictive policing funds were being used and who was being harmed by arguably racially discriminatory algorithms that have never been proven to work as intended. The Department of Justice did not have answers to who was using the technology, how it was being evaluated and which communities were affected.

While focused on predictive policing, the senators' demand raises what I, a law professor who studies big data surveillance, see as a bigger issue: What is the Department of Justice's role in funding new surveillance technologies? The answer is surprising and reveals an entire ecosystem of how technology companies, police departments and academics benefit from the flow of federal dollars.


The money pipeline

The National Institute of Justice, the DOJ's research, development and evaluation arm, regularly provides seed money for grants and pilot projects to test out ideas like predictive policing. It was a National Institute of Justice grant that funded the first predictive policing conference in 2009, which launched the idea that past crime data could be run through an algorithm to predict future criminal risk. The institute has given US$10 million to predictive policing projects since 2009.

Because there was grant money available to test out new theories, academics and startup companies could afford to invest in new ideas. Predictive policing was just an academic theory until there was cash to start testing it in various police departments. Suddenly, companies launched with the financial security that federal grants could pay their early bills.

National Institute of Justice-funded research often turns into for-profit companies. Police departments also benefit from getting money to buy the new technology without having to dip into their local budgets. This dynamic is one of the hidden drivers of police technology.

How predictive policing works – and the harm it can cause.

Once a new technology gets big enough, another DOJ entity, the Bureau of Justice Assistance, funds projects with direct financial grants. The bureau funded police departments to test one of the biggest place-based predictive policing technologies – PredPol – in its early years. The bureau has also funded the purchase of other predictive technologies.


The Bureau of Justice Assistance funded one of the most infamous person-based predictive policing pilots in Los Angeles, Operation LASER, which targeted “chronic offenders.” Both experiments – PredPol and LASER – failed to work as intended. The Los Angeles Office of the Inspector General identified the negative impact of the programs on the community – and the fact that the predictive theories did not work to reduce crime in any significant way.

As these DOJ entities' practices indicate, federal money not only seeds but feeds the growth of new policing technologies. Since 2005, the Bureau of Justice Assistance has given over $7.6 billion of federal money to state, local and tribal law enforcement agencies for a host of projects. Some of that money has gone directly to new surveillance technologies. A quick skim through the public grants shows approximately $3 million directed to facial recognition, $8 million for ShotSpotter and $13 million to build and grow real-time crime centers. ShotSpotter (now rebranded as SoundThinking) is the leading brand of gunshot detection technology. Real-time crime centers combine security camera feeds and other data to provide surveillance for a city.

The questions not asked

None of this is necessarily nefarious. The Department of Justice is in the business of prosecution, so it is not surprising for it to fund prosecution tools. The National Institute of Justice exists as a research body inside the Office of Justice Programs, so its role in helping to promote data-driven policing strategies is not inherently problematic. The Bureau of Justice Assistance exists to assist local law enforcement through financial grants. The DOJ is feeding police surveillance power because doing so serves law enforcement interests.

The problem, as indicated by Sen. Wyden's letter, is that in subsidizing experimental surveillance technologies, the Department of Justice did not do basic risk assessment or racial justice evaluations before investing money in a new technological solution. As someone who has studied predictive policing for over a decade, I can say that the questions asked by the senators were not asked in the pilot projects.


Basic questions of who would be affected, whether there could be a racially discriminatory impact, how it would change policing and whether it worked were not raised in any serious way. Worse, the focus was on deploying something new, not double-checking whether it worked. If you are going to seed and feed a potentially dangerous technology, you also have an obligation to weed it out once it turns out to be harming people.

Only now, after activists have protested, after scholars have critiqued and after the original predictive policing companies have shut down or been bought by bigger companies, is the DOJ starting to ask the hard questions. In January 2024, the DOJ and the Department of Homeland Security asked for public comment to be included in a report on law enforcement agencies' use of facial recognition technology, other technologies using biometric information and predictive algorithms.

Arising from a mandate under executive order 14074 on advancing effective, accountable policing and criminal justice practices to enhance public trust and public safety, the DOJ Office of Legal Policy is going to evaluate how predictive policing affects civil rights and civil liberties. I believe that this is a good step – although a decade too late.

Lessons not learned?

The bigger problem is that the same thing is happening again with other technologies. As one example, real-time crime centers are being built across America. Thousands of security cameras stream to a single command center that is linked to automated license plate readers, gunshot detection sensors and 911 calls. The centers also use video analytics technology to identify and track people and objects across a city. And they tap into data about past crime.

Real-time crime centers like this one in Albuquerque, N.M., enable police surveillance of entire cities.
AP Photo/Susan Montoya Bryan

Millions of federal dollars from the American Rescue Plan Act are going to cities with the specific designation to address crime, and some of those dollars have been diverted to build real-time crime centers. They're also being funded by the Bureau of Justice Assistance.

Real-time crime centers can do predictive analytics akin to predictive policing simply as a byproduct of all the data they collect in the ordinary course of a day. The centers can also scan entire cities with powerful computer vision-enabled cameras and react in real time. The capabilities of these advanced technologies make the civil liberties and racial justice fears around predictive policing pale in comparison.

So while the American public waits for answers about a technology, predictive policing, that had its heyday 10 years ago, the DOJ is seeding and feeding a far more invasive surveillance system with few questions asked. Perhaps things will go differently this time. Maybe the DOJ/DHS report on predictive algorithms will look inward at the department's own culpability in seeding the surveillance problems of tomorrow.

Andrew Guthrie Ferguson, Professor of Law, American University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Early COVID-19 research is riddled with poor methods and low-quality results − a problem for science the pandemic worsened but didn’t create


The pandemic spurred an increase in research, much of it with methodological holes.
Andriy Onufriyenko/Moment via Getty Images

Dennis M. Gorman, Texas A&M University

Early in the COVID-19 pandemic, researchers flooded journals with studies about the then-novel coronavirus. Many publications streamlined the peer-review process for COVID-19 papers while keeping acceptance rates relatively high. The assumption was that policymakers and the public would be able to identify valid and useful research among a very large volume of rapidly disseminated information.

However, in my review of 74 COVID-19 papers published in 2020 in the top 15 generalist public health journals listed in Google Scholar, I found that many of these studies used poor-quality methods. Several other reviews of studies published in medical journals have also shown that much early COVID-19 research used poor research methods.

Some of these papers have been cited many times. For example, the most highly cited public health publication listed on Google Scholar used data from a sample of 1,120 people, primarily well-educated young women, mostly recruited from social media over three days. Findings based on a small, self-selected convenience sample cannot be generalized to a broader population. And since the researchers ran more than 500 analyses of the data, many of the statistically significant results are likely chance occurrences. However, this study has been cited over 11,000 times.
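The arithmetic behind that concern is worth spelling out: at the conventional 0.05 significance threshold, pure noise yields a “significant” result in about 5% of tests, so roughly 25 of 500 analyses would be expected to come up positive even if no real effects existed. The short Python simulation below is a minimal illustration of this multiple-comparisons problem, not a reanalysis of the study itself; the sample sizes and test choice are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_tests = 500      # roughly the number of analyses the study ran
alpha = 0.05       # conventional significance threshold
n_per_group = 100  # illustrative sample size per group

false_positives = 0
for _ in range(n_tests):
    # Both groups are drawn from the SAME distribution, so there is
    # no real effect: any "significant" difference is pure chance.
    a = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    b = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < alpha:
        false_positives += 1

print(f"Chance 'findings': {false_positives} of {n_tests} "
      f"(expected about {n_tests * alpha:.0f})")
```

Running this typically prints a count in the mid-20s, which is why unplanned, high-volume analyses demand corrections for multiple comparisons before any result is treated as a finding.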

A highly cited paper means a lot of people have mentioned it in their own work. But a high number of citations is not strongly linked to research quality, since researchers and journals can game and manipulate these metrics. High citation of low-quality research increases the chance that poor evidence is being used to inform policies, further eroding public confidence in science.


Methodology matters

I am a public health researcher with a long-standing interest in research quality and integrity. This interest lies in a belief that science has helped solve important social and public health problems. Unlike the anti-science movement spreading misinformation about such successful public health measures as vaccines, I believe rational criticism is fundamental to science.

The quality and integrity of research depend to a considerable extent on its methods. Each type of study design needs to have certain features in order for it to produce valid and useful information.

For example, researchers have known for decades that for studies evaluating the effectiveness of an intervention, a control group is needed to know whether any observed effects can be attributed to the intervention.

Systematic reviews pulling together data from existing studies should describe how the researchers identified which studies to include, assessed their quality, extracted the data and preregistered their protocols. These features are necessary to ensure the review will cover all the available evidence and tell a reader which is worth attending to and which is not.


Certain types of studies, such as one-time surveys of convenience samples that aren't representative of the target population, collect and analyze data in a way that does not allow researchers to determine whether one variable caused a particular outcome.

Systematic reviews involve thoroughly identifying and extracting information from existing research.

All study designs have standards that researchers can consult. But adhering to standards slows research down. Having a control group doubles the amount of data that needs to be collected, and identifying and thoroughly reviewing every study on a topic takes more time than superficially reviewing some. Representative samples are harder to generate than convenience samples, and collecting data at two points in time is more work than collecting them all at the same time.

Studies comparing COVID-19 papers with non-COVID-19 papers published in the same journals found that COVID-19 papers tended to have lower quality methods and were less likely to adhere to standards than non-COVID-19 papers. COVID-19 papers rarely had predetermined hypotheses and plans for how they would report their findings or analyze their data. This meant there were no safeguards against dredging the data to find “statistically significant” results that could be selectively reported.

Such methodological problems were likely overlooked in the considerably shortened peer-review process for COVID-19 papers. One study estimated the average time from submission to acceptance of 686 papers on COVID-19 to be 13 days, compared with 110 days in 539 pre-pandemic papers from the same journals. In my study, I found that two online journals that published a very high volume of methodologically weak COVID-19 papers had a peer-review process of about three weeks.


Publish-or-perish culture

These quality control issues were present before the COVID-19 pandemic. The pandemic simply pushed them into overdrive.

Journals tend to favor positive, “novel” findings: that is, results that show a statistical association between variables and supposedly identify something previously unknown. Since the pandemic was in many ways novel, it provided an opportunity for some researchers to make bold claims about how COVID-19 would spread, what its effects on mental health would be, how it could be prevented and how it might be treated.

Many researchers feel pressure to publish papers in order to advance their careers.
South_agency/E+ via Getty Images

Academics have worked in a publish-or-perish incentive system for decades, where the number of papers they publish is part of the metrics used to evaluate employment, promotion and tenure. The flood of mixed-quality COVID-19 information afforded an opportunity to increase their publication counts and boost citation metrics as journals sought and rapidly reviewed COVID-19 papers, which were more likely to be cited than non-COVID papers.

Online publishing has also contributed to the deterioration in research quality. Traditional academic publishing was limited in the quantity of articles it could generate because journals were packaged in a printed, physical document usually produced only once a month. In contrast, some of today's online mega-journals publish thousands of papers a month. Low-quality studies rejected by reputable journals can still find an outlet happy to publish them for a fee.

Healthy criticism

Criticizing the quality of published research is fraught with risk. It can be misinterpreted as throwing fuel on the raging fire of anti-science. My response is that a critical and rational approach to the production of knowledge is, in fact, fundamental to the very practice of science and to the functioning of an open society capable of solving complex problems such as a worldwide pandemic.


Publishing a large volume of misinformation disguised as science during a pandemic obscures true and useful knowledge. At worst, this can lead to bad public health practice and policy.

Science done properly produces information that allows researchers and policymakers to better understand the world and test ideas about how to improve it. This involves critically examining the quality of a study's designs, statistical methods, reproducibility and transparency, not the number of times it has been cited or tweeted about.

Science depends on a slow, thoughtful and meticulous approach to data collection, analysis and presentation, especially if it intends to provide information to enact effective public health policies. Likewise, thoughtful and meticulous peer review is unlikely with papers that appear in print only three weeks after they were first submitted for review. Disciplines that reward quantity of research over quality are also less likely to protect scientific integrity during crises.

Rigorous science requires careful deliberation and attention, not haste.
Assembly/Stone via Getty Images

Public health draws heavily upon disciplines that are experiencing replication crises, such as psychology, biomedical science and biology. It is similar to these disciplines in terms of its incentive structure, study designs and analytic methods, and its inattention to transparent methods and replication. Much of the public health research on COVID-19 shows that the field suffers from similarly poor-quality methods.

Reexamining how the discipline rewards its scholars and assesses their scholarship can help it better prepare for the next public health crisis.

Dennis M. Gorman, Professor of Epidemiology and Biostatistics, Texas A&M University


This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

Making the moral of the story stick − a media psychologist explains the research behind ‘Sesame Street,’ ‘Arthur’ and other children’s TV


Children's TV shows are typically designed to improve their viewers' cognitive, social and moral development.
U.S. Air Force photo by Staff Sgt. Scott Saldukas/Released via Flickr

Drew Cingel, University of California, Davis; Allyson Snyder, University of California, Davis; Jane Shawcroft, University of California, Davis, and Samantha Vigil, University of California, Davis

To adult viewers, educational media content for children, such as “Sesame Street” or “Daniel Tiger's Neighborhood,” may seem rather simplistic. The pacing is slow, key themes are often repeated and the visual aspects tend to be plain.

However, many people might be surprised to learn about the sheer amount of research that goes into the design choices many contemporary programs use.

For more than a decade, I have studied just that: how to design media to support children's learning, particularly their moral development. My research, along with the work of many others, shows that children can learn important developmental and social skills through media.

History of research on children's media

Research on how to design children's media to support learning is not new.


When “Sesame Street” debuted in November 1969, it began a decadeslong practice of testing its content before airing it to ensure children learned the intended messages of each episode and enjoyed watching it. Some episodes included messages notoriously difficult to teach to young children, such as lessons about death, divorce and racism.

Researchers at the Sesame Workshop hold focus groups at local preschools where participating children watch or interact with Sesame content. They test the children on whether they are engaged with, pay attention to and learn the intended message of the content. If the episode passes the test, then it moves on to the next stage of production.

The Sesame Workshop uses muppets to teach children about difficult topics.
AP Photo/Bebeto Matthews

If children do not learn the intended message, or are not engaged and attentive, then the episode goes back for editing. In some cases, such as a 1992 program designed to teach children about divorce, the entire episode is scrapped. In this case, children misunderstood some key information about divorce. “Sesame Street” did not include divorce in its content until 2012.

Designing children's media

With guidance from the pioneering research of “Sesame Street,” along with research from other children's television shows both in the industry and in academia, the past few decades have seen many new insights on how best to design media to promote children's learning. These strategies are still shaping children's shows today.

For example, you may have noticed that some children's television characters speak directly to the camera and pause for the child viewer at home to yell out an answer to their question. This design strategy, known as participatory cues, is famously used by the shows “Blue's Clues” and “Dora the Explorer.” Researchers found that participatory cues in TV are linked to increased vocabulary learning and content comprehension among young children. They also increase children's engagement with the educational content of the show over time, particularly as they learn the intended lesson and can give the character the correct answer.

Participatory cues are a prominent feature of children's shows like ‘Blue's Clues.'

You may have also noticed that children's media often features jokes that seem to be aimed more at adults. These are often commentary about popular culture that require context children might not be aware of or involve more complex language that children might not understand. This is because children are more likely to learn when a supportive adult or older sibling is watching the show alongside them and helping explain it or connect it to the child's life. This practice is known as active mediation, and research has shown that talking about the goals, emotions and behaviors of media characters can help children learn from them and even improve aspects of their own emotional and social development.

Programs have also incorporated concrete examples of desired behaviors, such as treating a neurodiverse character fairly, rather than discussing the behaviors more abstractly. This is because children younger than about age 7 struggle with abstract thinking and may have difficulty generalizing content they learned from media and applying it to their own lives.

Research on an episode of “Arthur” found that a concrete example of a main character experiencing life through the eyes of another character with Asperger's syndrome improved the ability of child viewers to take another person's perspective. It also increased the nuance of their moral judgments and moral reasoning. Just a single viewing of that one episode can positively influence several aspects of a child's cognitive and moral development.

Teaching inclusion through media

One skill that has proven difficult to teach children through media is inclusivity. Multiple studies have shown that children are more likely to exclude others from their social group after viewing an episode explicitly designed to promote inclusion.


For example, an episode of “Clifford the Big Red Dog” involved Clifford and his owner moving to a new town. The townspeople initially did not want to include Clifford because he was too big, but they eventually learned the importance of getting to know others before making judgments about them. However, watching this episode did not make children more likely to play with disabled or overweight children or to view them favorably.

Based on my own work, I argue that one reason inclusivity can be difficult to teach in children's TV may be due to how narratives are structured. For example, many shows actually model antisocial behaviors during the first three-quarters of the episode before finally modeling prosocial behaviors at the end. This may inadvertently teach the wrong message, because children tend to focus on the behaviors modeled for the majority of the program.

My team and I conducted a recent study showing that including a 30-second clip prior to the episode that explains the inclusive message to children before they view the content can help increase prosocial behaviors and decrease stigmatization. Although this practice might not be common in children's TV at the moment, adult viewers can also fill this role by explaining the intended message of inclusivity to children before watching the episode.

Adult viewers watching TV alongside children can help kids apply the lessons the shows teach to their own lives.
miniseries/E+ via Getty Images

Parenting with media

Children's media is more complex than many people think. Although there is certainly a lot of media out there that may not use study-informed design practices, many shows do use research to ensure children have the best to learn from what they watch.

It can be difficult to be a parent or a child in a media-saturated world, particularly in deciding when children should begin to watch media and which media they should watch. But there are relatively simple strategies parents and other supportive adults can use to leverage media to support their child's healthy development and future.


Parents and other adults can help children learn from media by watching alongside them and answering their questions. They can also read reviews of media to determine its quality and age appropriateness. Doing so can help children consume media in a healthy way.

We live in a media-saturated world, and restricting young children's media use is difficult for most families. With just a little effort, parents can model healthy ways to use media for their children and select research-informed media that promotes healthy development and well-being among the next generation.

Drew Cingel, Associate Professor of Communication, University of California, Davis; Allyson Snyder, Ph.D. Candidate in Communication, University of California, Davis; Jane Shawcroft, Ph.D. Candidate in Communication, University of California, Davis, and Samantha Vigil, Ph.D. Candidate in Communication, University of California, Davis

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Conversation

War in Ukraine at 2 years: Destruction seen from space – via radar


Satellite radar data shows the complete destruction of the Ukrainian city of Bakhmut.
Xu et al. (2024), CC BY-NC-ND

Sylvain Barbot, USC Dornsife College of Letters, Arts and Sciences

As soldiers and citizens share information from the front lines and affected areas of the war in Ukraine – two years old as of Feb. 24, 2024 – in quasi-real time, an active open-source intelligence community has formed to keep track of troop activity, destruction and other aspects of the war.

Remote sensing complements this approach, offering a safe means to study inaccessible or dangerous areas. For example, seismologists have documented the high pace of bombardments and firing of artillery around Kyiv during the first few months of the war.

Previously, Teng Wang, a professor at Peking University in China, and I – both Earth scientists – studied illegal nuclear tests in North Korea with satellite data.

Putting our skills to good use once again, we, with graduate student Hang Xu, have analyzed the development of the war from space. We exclusively used open-source, freely accessible data to ensure that all our findings could be reproduced, guaranteeing transparency and neutrality.


View from above

Sensors on satellites record electromagnetic waves radiated or reflected from Earth's surface with wavelengths ranging from hundreds of nanometers to tens of centimeters, enabling semi-continuous monitoring on a global scale, unimpeded by political boundaries and natural obstacles.

Optical images, the equivalent of photographs taken from space, help governments, researchers and journalists monitor troop movements on the front and the destruction of equipment and facilities. Although optical images are easily interpreted, they are blocked by cloud cover and can be captured only during daylight.

To counter these issues, we used radars onboard satellites. Space-borne radar systems beam long-wavelength electromagnetic waves toward the Earth and then record the returning echoes. These waves – about 0.4 to 4 inches (1 to 10 centimeters) long – can penetrate clouds and smoke. Radar interferometry has already proved to be an invaluable tool for monitoring widespread damage caused by natural disasters.

Satellite photography like these ‘before' and ‘after' images can provide a visceral sense of the destruction in the war in Ukraine.
Satellite image (c) 2023 Maxar Technologies via Getty Images

Radar from space

Freely and publicly available radar data for civilian applications is rare – the United States is set to launch its first such civilian radar satellite in March 2024 – but the European Space Agency has made such data available since the early 1990s. Data from the European Space Agency's Sentinel-1 satellite radar is freely accessible via their data hub.

Two radar images formed over the same area can be used to detect changes to structures and other surfaces. Interferometry measures the difference in travel time between two radar signals, which is a measure of change in the shape or position of surfaces. Another measure of surface change is the coherence of the reflected signals – that is, the degree of similarity between two different images when comparing neighboring pixels at the same position in the two images. A large coherence implies little change and thus the preservation of a building or other structure. On the other hand, a loss of coherence in the context of a battlefield implies damage or destruction of a building or structure.
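For readers curious how coherence is estimated in practice, the standard estimator averages the complex cross product of the two images over a small moving window and normalizes by each image's local power. Below is a minimal NumPy sketch under that definition; it assumes two co-registered single-look complex radar images, and the function name and window size are illustrative choices, not the authors' actual processing code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def _local_mean(x: np.ndarray, win: int) -> np.ndarray:
    """(win x win) moving average; handles complex arrays by
    filtering real and imaginary parts separately."""
    if np.iscomplexobj(x):
        return uniform_filter(x.real, win) + 1j * uniform_filter(x.imag, win)
    return uniform_filter(x, win)

def coherence(img1: np.ndarray, img2: np.ndarray, win: int = 5) -> np.ndarray:
    """Estimate interferometric coherence between two co-registered
    complex SAR images over a (win x win) moving window.

    Returns values in [0, 1]: near 1 means the surface is unchanged
    between passes; near 0 suggests the scatterers were disturbed,
    as when a building collapses.
    """
    # Local average of the complex cross product of the two images
    num = _local_mean(img1 * np.conj(img2), win)
    # Local average power of each image, for normalization
    p1 = _local_mean(np.abs(img1) ** 2, win)
    p2 = _local_mean(np.abs(img2) ** 2, win)
    return np.abs(num) / np.sqrt(p1 * p2 + 1e-12)
```

The windowed average is what makes the measure robust: a single noisy pixel won't register as change, but a building whose walls and roof have collapsed scrambles the echoes across the whole neighborhood of pixels.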


Sentinel-1 radar's spatial resolution of 66 feet (20  meters) over a swath of 255 miles (410  kilometers) combined with 12-day updates makes its radar data ideal for monitoring urban warfare. Previous research efforts have used satellite radar data to assess damage in Kyiv and Mariupol. We used the data to analyze the evolution of damage to over time during several lengthy battles.

Changes in radar data during the battle of Bahkmut show increasing amounts of destruction. Red pixels imply damaged or destroyed buildings.
Xu et al. (2024), CC BY-NC-ND

Measure of destruction

We flagged highly damaged areas by comparing radar coherence before and after the war, within the areas classified as artificial surfaces by the European Space Agency's WorldCover 2021 dataset. Using this approach, we first analyzed the battle of Bakhmut, one of the longest and bloodiest of the war, which began on Oct. 8, 2022, and ended with a Russian victory on May 20, 2023.
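A minimal sketch of that flagging step, building on the coherence function sketched earlier, might look like the following. The WorldCover class code for built-up surfaces (50) matches the published WorldCover 2021 legend, but the threshold and array names here are assumptions for illustration, not the authors' calibrated pipeline.

```python
import numpy as np

BUILT_UP = 50         # ESA WorldCover 2021 class code for built-up surfaces
DROP_THRESHOLD = 0.3  # assumed cutoff for a "sharp" coherence loss

def flag_damage(coh_prewar: np.ndarray,
                coh_wartime: np.ndarray,
                worldcover: np.ndarray) -> np.ndarray:
    """Flag probable building damage: pixels classified as artificial
    surfaces whose radar coherence dropped sharply once fighting began."""
    built_up = worldcover == BUILT_UP
    coherence_loss = coh_prewar - coh_wartime
    return built_up & (coherence_loss > DROP_THRESHOLD)
```

On Sentinel-1 data, each flagged pixel would correspond to roughly a 66-foot (20-meter) cell of probable damage, which is what produces the red-speckled maps shown here.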

When Hang Xu showed Teng Wang and me the data he had processed, we were puzzled. We saw a checkerboard pattern all over the city. We quickly realized the horror of the situation. The only thing that survived after the yearlong battle was the network of roads in the city. All buildings had partially or completely collapsed due to the continuous bombardment.

We then took a look at the battles of Rubizhne, Sievierodonetsk and Lysychansk that started in April 2022 and ended with a Russian victory on July 2, 2022. The comparatively lower destruction of Lysychansk is explained by the rapid encirclement of the city from the south instead of continued frontal assaults, as was the case in Bakhmut. The radar data reveals destruction away from the front line within cities, showing the whole extent of the devastation.

Changes in radar data during the battles of Rubizhne, Sievierodonetsk and Lysychansk show increasing amounts of destruction. Urban areas are shown in gray with damage in red.
Xu et al. (2024), CC BY-NC-ND

Devastation in focus

Remote sensing images offer the means to safely monitor the impact of armed conflicts, particularly as high-intensity wars in urban environments proliferate. Open-access satellite instruments complement other forms of open-source intelligence by offering unimpeded access to high-resolution, unbiased information, which can help people grasp the true impact of war on the ground.

The picture is clear: The real story of war is destruction.

Sylvain Barbot, Associate Professor of Earth Sciences, USC Dornsife College of Letters, Arts and Sciences


This article is republished from The Conversation under a Creative Commons license. Read the original article.
