A Discussion with Deborah Raji
February 14, 2024
Proponents of law enforcement use of facial recognition technology highlight its potential for accuracy and efficiency, enabling police departments to scan billions of images to find a potential suspect in a criminal investigation or to locate a missing person. Accordingly, dozens of policing agencies and federal law enforcement entities are using facial recognition technology in their work. But are these algorithms reliable and accurate? And do these proprietary products sufficiently protect people's privacy? Researchers, including our guest Deborah Raji, have repeatedly shown that such algorithms misidentify darker-skinned women at far higher rates than lighter-skinned men, suggesting that racism may be baked into the development of these AI algorithms, the applications that use them, or the data they are fed. Some of the most prominent companies behind these algorithms announced moratoria on sales to law enforcement in 2020, at the height of racial justice protests across the US. And already there are multiple examples of Black men falsely arrested after being misidentified by law enforcement using facial recognition software, resulting in civil rights lawsuits against multiple police departments. We discussed this technology and the tradeoffs in whether and how it should be used by law enforcement, and in particular the role of accountability, transparency, and guardrails in both private development and public policy.
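As a rough illustration of the disaggregated evaluation behind findings like these, the sketch below (with purely hypothetical data and group labels, not drawn from any real audit) shows how misidentification rates can be computed separately for each demographic subgroup rather than reported as a single overall accuracy number.

```python
# Minimal illustrative sketch: disaggregating a face recognition system's
# error rates by demographic group, in the spirit of audits like Gender Shades.
# All records and group labels below are hypothetical.
from collections import defaultdict

# Each record is a hypothetical audit observation:
# (group label, whether the system's identification was correct)
records = [
    ("darker-skinned women", False),
    ("darker-skinned women", True),
    ("lighter-skinned men", True),
    ("lighter-skinned men", True),
    # ... a real audit would use many labeled trials per group
]

counts = defaultdict(lambda: {"errors": 0, "total": 0})
for group, correct in records:
    counts[group]["total"] += 1
    if not correct:
        counts[group]["errors"] += 1

# Report the misidentification rate per group, not just an overall average.
for group, c in counts.items():
    rate = c["errors"] / c["total"]
    print(f"{group}: misidentification rate {rate:.1%} over {c['total']} trials")
```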
Speaker
Deborah Raji is a Mozilla Fellow and a Computer Science PhD student at the University of California, Berkeley, interested in questions of algorithmic auditing and evaluation. In the past, she worked closely with the Algorithmic Justice League initiative to highlight bias in deployed AI products. She has also worked with Google's Ethical AI team and been a research fellow at the Partnership on AI and the AI Now Institute at New York University, working on various projects to operationalize ethical considerations in machine learning engineering practice. Recently, she was named to Forbes 30 Under 30, MIT Technology Review's 35 Innovators Under 35, and the TIME100 AI list.
Moderated by Katy Naples-Mitchell, Program Director of the Program in Criminal Justice Policy and Management.
LINKS TO RESOURCES MENTIONED DURING THE EVENT
- TED talk by Joy Buolamwini: How I'm fighting bias in algorithms
- Gender Shades project
- About Face: A Survey of Facial Recognition Evaluation by Deborah Raji and Genevieve Fried
- Your Face Belongs to Us by Kashmir Hill
- Statement on Amazon’s announcement to end program allowing law enforcement to request Ring doorbell camera footage from users through its Neighbors app
- But Amazon says DOJ disclosure doesn’t indicate violation of facial recognition moratorium
- Ban Facial Recognition website
- ACLU of Massachusetts' "Press Pause on Face Surveillance" campaign
- Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products
- GAO report -- Facial Recognition Services: Federal Law Enforcement Agencies Should Take Actions to Implement Training, and Policies for Civil Liberties
- Clearview AI used nearly 1 million times by US police, it tells the BBC
- "Facial recognition's 'dirty little secret': Millions of online photos scraped without consent"
- State Audit Finds Serious Lapses In CalGang Database
- 2015 California State Auditor report: The CalGang Criminal Intelligence System
- Office of the Privacy Commissioner of Canada: Clearview AI ordered to comply with recommendations to stop collecting, sharing images
- Chicago’s Inspector General Finds the City’s Gang Database Is Riddled With Errors
- Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing
- Sentencing algorithms like the one under development in Pa. are ‘deeply flawed,’ researchers say
- Acoustic gunshot detection systems: a quasi-experimental evaluation in St. Louis, MO
- The effect of gunshot detection technology on evidence collection and case clearance in Kansas City, Missouri
- Garbage In, Garbage Out: Face Recognition on Flawed Data
The Surveillance, Criminalization, and Punishment speaker series is organized by Katy Naples-Mitchell, Program Director of the Program in Criminal Justice Policy and Management, and Sandra Susan Smith, Guggenheim Professor of Criminal Justice; Faculty Director, Program in Criminal Justice Policy and Management; Director, Malcolm Wiener Center for Social Policy; Professor of Sociology; and Carol K. Pforzheimer Professor at the Radcliffe Institute.