A critical look at the arguments and evidence used to justify new tech for policing
by Dan Fitch
Technology startups are infiltrating every part of our lives, sometimes in secret and sometimes in plain view. It’s time to look at two technologies whose impacts on public safety are not widely understood.
Body-worn cameras, or BWCs, are currently a hot topic in Madison. UWPD already uses them, as do SWAT teams; funding has been allocated for MPD to potentially pilot cameras in the northern district; and a committee is weighing, for the second time, whether BWCs should go city-wide, with a final report due in January 2021.
Facial recognition technology is harder to see, but is used behind the scenes by police. These algorithms can be used in many ways, but the most common in the context of policing is to search a database of faces for matches. Alder Max Prestigiacomo introduced a facial recognition technology ban which has since been taken up and passed by the Public Safety Committee and is headed to Madison’s Common Council for a vote on Tuesday, December 1. The proposal has led to heated public debate over the use of the technology in Madison.
The primary question asked by police proponents in these debates is, “why not use the technology?”, as if everyone against it were a Luddite. Maybe the skeptics are Luddites, in the original sense: they would like technology to be applied to our lives ethically. We should think carefully about power and oppression, and large-scale surveillance can tip the scales in an incredibly powerful, largely invisible way. We must ask: who is helped and who is harmed by these technologies? Who is asking for them? Should we keep listening to those people?
Part of the problem is that most of the evidence is not in. Studies of body cameras’ effects on use of force and other metrics have conflicting results, and most studies of facial recognition’s efficacy are just as conflicting and unconvincing. The experts do not agree, and if anything the academic consensus is shifting from “body cams are probably good” to “body cams are likely bad,” while the consensus on facial recognition has always pointed to its biases. The only evidence heavily in favor of facial recognition seems to come from its manufacturers, trying to sell their software.
The second major emphasis of pro-police council speakers, at least on the facial recognition ban, has been to “think of the children.” They believe that facial recognition is the only way to fight sex trafficking and the like. But this is propaganda with little real-world evidence behind it. For every AI-fueled child-safety startup that gets announced, how many police investigations are actually helped? And in what ways can assembling an AI database of children backfire? There’s a lot of nuance here, and the burden of proof falls on those pushing to use the technology.
Neither side should succumb to fear, so let’s take a look at the actual evidence about body cameras and facial recognition software.
Body Cameras
Surveys of assistant district attorneys (ADAs) and public defenders (PDs) have shown that 66% of PDs thought BWCs increased the likelihood of acquittals, while 61% of ADAs thought they increased the likelihood of convictions. Both sides are confident the footage favors them, and they cannot both be right across the board. So there’s something we’re missing here.
Part of the root issue may be that civilians in most jurisdictions do not have any power, so adding body cameras to that imbalance does nothing — or possibly worse than nothing, by empowering the justice system to charge more crimes. What will Madison’s own independent monitor and civilian oversight board be able to do with footage? Do the monitor and board have any teeth?
In 2016, in jurisdictions that deploy body cams, video was used by 93% of prosecutors in cases against civilian suspects. Only 8% of those same prosecutors used body camera video that year as evidence to prosecute police. As Texas police detective Nick Selby goes on to discuss, the real problem is that police and prosecutors “control the button.” Body cameras can lead to more confident prosecutors who go on, as in Los Angeles, to file misdemeanor charges against 2.4 times as many people as before, people who previously would not have been charged.
Did body cameras make LA safer? No. Randomized studies have shown no meaningful, measurable effects of introducing body cameras; a recent meta-analysis of 30 studies and 116 effects of police use of BWCs found that they produce “few clear or consistent impacts on police or citizen behaviors.” It had been hoped that body cameras would reduce police use of force, but it appears that they don’t (in some trials, use of force actually increased significantly when officers wore bodycams). It was hoped that they would improve police behavior, leading to citizens feeling that they were better treated, but no such effect occurred. It was hoped that they would reduce lawsuits and settlements, but that didn’t happen either. There is currently no good evidence that body cameras reliably do anything in particular, except cost jurisdictions big cash. But there is substantial evidence that they consistently sweep up more people with misdemeanor charges for minor offenses, contributing to overcriminalization. After being charged, most people agree to plea deals to get out of jail or to try to move on with their lives; but they remain entangled in the arbitrary dictates of the probation system.
Who gets misdemeanor charges in Dane County? Disproportionately, Black people. Misdemeanors are the bulk of our justice system: possibly as much as 80% of Wisconsin dockets. In my sample of Dane County data on WCCA for criminal cases active in the last year, 60% of the cases were misdemeanor charges. Black people, an extremely small proportion of the county (5.2% in the last census), account for fully 55% of those misdemeanor cases. And our misdemeanor bail in Dane County is skewed heavily against Black people: 9% of non-Black defendants with only misdemeanor charges get cash bail, while 28% of Black defendants with only misdemeanor charges do.
That means you are three times as likely to get cash bail and be stuck in jail if you are Black in Dane County. (A full article on why is forthcoming, but spoiler alert: it’s more bad technology, this time racist AI making “safety” decisions.) And this doesn’t even touch on wealth disparity, how much more likely Black people are to be poor and unable to pay, and how being stuck in jail awaiting trial will ruin their lives (and their families’ lives).
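For readers who want to check the arithmetic, here is a minimal sketch using the figures above. The inputs are just the WCCA sample and census numbers already cited; the variable names and the rounding are mine.

```python
# Disparity arithmetic from the Dane County figures cited above.
# Inputs are the article's WCCA sample and census numbers.

black_pop_share = 0.052      # Black share of Dane County population (census)
black_misd_share = 0.55      # Black share of misdemeanor cases (WCCA sample)

cash_bail_black = 0.28       # Black misdemeanor-only defendants given cash bail
cash_bail_non_black = 0.09   # non-Black misdemeanor-only defendants given cash bail

overrepresentation = black_misd_share / black_pop_share   # ~10.6x
bail_disparity = cash_bail_black / cash_bail_non_black    # ~3.1x

print(f"Misdemeanor overrepresentation: {overrepresentation:.1f}x")
print(f"Cash bail disparity: {bail_disparity:.1f}x")
```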
We have one of the worst misdemeanor disparities in our very racist country. Meanwhile, Alderman Paul Skidmore is on record saying of body cameras, “It’s just a tool.” Do we really want to pay for tools that prosecutors will likely use to charge more misdemeanors and make systemic injustice worse?
When it comes to the actual body camera footage, you might think that a camera worn directly on the body produces an impartial recording, a record that serves true justice. Of course, that depends on whether officers are allowed to change the record by editing footage, turning their cameras off, or “recreating” events, and on whether the system using the resulting footage is fair. But let’s grant the impossible scenario that body cameras get applied in a completely unbiased way. Even then, the recording is not impartial: body camera video has been shown to bias viewers far more than dash cam video (or a written report), leaving in one’s mind a “diminished sense of blame or responsibility” for the wearer of the camera, who is invisible in the video. That sort of bias would bear directly on, for example, juries deciding between first-degree murder and manslaughter, as well as on many other actions taken by “bad apples.”
So, knowing all this, why do communities rally behind body cameras? When BWCs first rolled out, folks had hope. It was hype: new technology for technology’s sake. There wasn’t evidence. Nobody knew any better.
Now, with more information in hand, it would be foolish to give our unaccountable, detached-from-the-community police more tools to add charges and inoculate themselves against major, real justice reform. What are the net harms of body cameras, and who is actually helped by them? It sounds like body cameras might widen our already-biased misdemeanor arrest race gap, and the evidence does not show that the community benefits in any other way. And let’s not forget: body cameras cost money to purchase and maintain.
Facial Recognition Technology
As with body cameras, people tend to assume facial recognition systems are correct. This is often called “automation bias.” But facial recognition systems that work fine on their predominantly white, male programmers can fail in interesting ways. The American Civil Liberties Union (ACLU) tested Amazon’s Rekognition software on all the members of Congress. The algorithm incorrectly matched twenty-eight congresspersons to criminal mugshots; eleven of those false matches misidentified representatives of color, including the late civil rights pioneer John Lewis.
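Rekognition’s internals aren’t public, but one-to-many face search generally follows the pattern in this sketch: every database entry whose similarity to the probe face clears a confidence threshold is reported as a “match.” Nothing below is Amazon’s actual code; the embeddings are synthetic, and the threshold is the 80% default the ACLU used.

```python
import numpy as np

# A minimal sketch of one-to-many face search, assuming the common
# embedding-plus-threshold design. All data here is synthetic.

rng = np.random.default_rng(0)

# Hypothetical 128-dimensional face embeddings for a mugshot database,
# normalized so a dot product is a cosine similarity.
mugshot_db = rng.normal(size=(25_000, 128))
mugshot_db /= np.linalg.norm(mugshot_db, axis=1, keepdims=True)

def search(probe, db, threshold):
    """Return indices of every database face scoring above threshold."""
    probe = probe / np.linalg.norm(probe)
    scores = db @ probe
    return np.flatnonzero(scores >= threshold)

probe_face = rng.normal(size=128)   # the face being searched for
hits = search(probe_face, mugshot_db, threshold=0.80)

# Every index in `hits` is reported as a match, whether or not the person
# is actually in the database at all. Lower the threshold, or search with
# embeddings that cluster poorly for some demographic, and the list fills
# with false matches.
print(f"{hits.size} claimed matches")
```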
Jake Parker, a lobbyist from the security industry, recently gave testimony to the City of Madison Public Safety Review Committee (PSRC) on the topic of facial recognition. He referred to a huge National Institute of Standards and Technology (NIST) report and claimed there were “undetectable differences across race.” But what did Patrick Grother, the NIST computer scientist who is the report’s primary author, say? “While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied.” The report shows quite clearly that the accuracy of these algorithms can vary widely based on a person’s age, gender, or race. Accuracy gets worse for the very elderly and the very young, and especially for darker-skinned and non-male faces. The NIST study found that for children, even using ideal photos, the typical vendor algorithm incorrectly labeled 1% of all photos in a database as a match to the test subject, rendering the technology useless for all practical purposes.
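The reason a 1% false-positive rate is useless in practice is scale: false matches grow linearly with the size of the database being searched. A back-of-the-envelope calculation (the database sizes here are hypothetical):

```python
# Back-of-the-envelope: what a 1% false-positive rate means for
# one-to-many search. The 1% rate is NIST's figure for typical vendor
# algorithms on children's photos; the database sizes are hypothetical.

FALSE_POSITIVE_RATE = 0.01

for db_size in (10_000, 100_000, 1_000_000):
    false_matches = FALSE_POSITIVE_RATE * db_size
    print(f"{db_size:>9,} photos -> ~{false_matches:>6,.0f} false matches per search")
```

Even a single search against a modest database buries any true match under hundreds or thousands of false ones.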
Facial recognition has resulted in at least one high-profile false arrest, that of Robert Williams in Detroit. Williams was arrested in front of his wife and children and held for 30 hours until finally released. He later wrote an article for the Washington Post, saying: “Federal studies have shown that facial-recognition systems misidentify Asian and Black people up to 100 times more often than white people. Why is law enforcement even allowed to use such technology when it obviously doesn’t work? I get angry when I hear companies, politicians and police talk about how this technology isn’t dangerous or flawed. What’s worse is that, before this happened to me, I actually believed them. I thought, what’s so terrible if they’re not invading our privacy and all they’re doing is using this technology to narrow in on a group of suspects?”
Ubiquitous facial detection algorithms raise serious Fourth Amendment issues and may chill First Amendment rights as well. (For more on the privacy issues, which deserve an entire separate article to examine, please see Facial Recognition Is the Perfect Tool for Oppression and Facing the Future of Surveillance.)
Even if we try to limit facial recognition to something specific, as those who support using the technology to fight child sexual exploitation propose, the harms are likely to outweigh any help we can get. The ban currently under discussion by the Common Council allows specific uses aimed at finding victims. But as Lindsey Barrett (Georgetown University Law Center) writes in Ban Facial Recognition Technologies for Children – and Everyone Else: “Clearview AI, which has been heavily criticized for its privacy violative services, has been quick to tout the use of its product in cases involving children, including investigations into child sexual exploitation. The horrendous nature of those crimes may seem to reduce the need for scruples when it comes to the harms of these technologies, when in fact the sensitivity of the circumstances makes their problems even more concerning. The false identification of a victim could be deeply traumatic, and the false identification of an ostensible perpetrator could lead to tremendously damaging consequences for the wrongly identified.”
Multiple observers have raised alarms about the fact that Clearview’s secretive algorithm, and its strange, giant database of children, have never been tested for accuracy by an independent agency. This puts us back at the “hope and hype” stage we discussed earlier with body cameras. In Madison’s case, MPD has not yet released information on how effective its tools actually are. In how many child-victim cases that used facial recognition was the technology responsible, in whole or in part, for finding the child? The public is in the dark.
Vic Wahl, the interim chief of MPD, argued before the PSRC on November 18 that we should continue using facial recognition technology based on a Georgetown Law Center on Privacy & Technology report from 2016. But that very same center has since updated its recommendations, saying: “[W]e now believe that state, local, and federal government should place a moratorium on police use of face recognition. We also believe that jurisdictions that move to ban the technology outright are amply justified to do so.” When presented with this newer guidance, Wahl said we should rely on our own community standards instead.
As a final question about facial recognition, some ask, “Why ban it entirely? What about other uses, like security?” We should throw the supposed security uses out as hype as well. Facial recognition is a bad model for securing things because you can’t signal your intent: someone can threaten you, or simply point the device at you, and unlock something with your face without your consent. How well the algorithms work at one-to-one face matching simply does not matter. And the first major breach of face data from one of these databases is going to be bad, because you can’t easily change your face the way you can change a password. The result: nobody should want face-based locking for anything serious.
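That last point, revocability, is easy to see if you compare a face template with how password systems already handle breaches. A minimal sketch, with helper names that are mine rather than any real system’s API:

```python
import hashlib
import secrets

# Why a face-data breach is worse than a password breach: passwords are
# revocable secrets, faces are not. The helper names here are mine.

users = {}

def set_password(user, password):
    """Store a fresh salted hash; any previously stolen hash becomes worthless."""
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    users[user] = (salt, digest)

set_password("alice", "correct horse battery staple")

# After a breach, recovery is one call: rotate the secret.
set_password("alice", "a brand new passphrase")

# A face template has no equivalent of set_password(). The "secret" is
# the user's face, so a stolen template can be replayed against any
# similarity-based matcher for the rest of that person's life.
```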
Conclusion
As we consider technologies like body cameras and facial recognition, let’s think really hard about harm and help. Who do these technologies harm, and who do they help? If they don’t help our marginalized communities, the people who need help the most, why are we considering them? If body cameras increase misdemeanor charges and don’t actually help communities stay safe, we should not use them, as they only widen the power differential. If facial recognition algorithms can actively harm our communities and do nothing about entrenched systemic biases, we should actively fight against such “innovations.”
Many thanks to Greg Gelembiuk for sending his evolving opinions and an archive of information and references from the Body-Worn Camera Feasibility Review Committee.