How to Protect Black and Brown Lives in the Age of AI?

Earlier this year, facial recognition technology (FRT) misidentified Porcha Woodruff, a Black pregnant woman, as a robbery suspect, leading to her wrongful arrest. The incident highlighted the heartbreaking harms tied to high-risk AI systems and a series of law enforcement failures, including inadequate investigation and insufficient technological training. As the White House's AI Executive Order and draft policy on AI governance crystallize, the stakes are high, particularly for vulnerable communities.

While these recent developments mark a milestone for U.S. AI governance, they fall short of addressing the profound issues surrounding the misuse of FRT and the generational trauma it perpetuates.

FRT and Generational Trauma

The implications of FRT are far-reaching and demand urgent attention. Its surveillance capabilities, coupled with documented biases, hold the ominous potential to weave an irreversible web of algorithmic discrimination, raising profound concerns about fairness and privacy. The technology is notorious for substantial technical flaws, including well-documented lower accuracy for people with darker skin tones.

Reported instances of law enforcement using FRT in publicly accessible spaces to surveil innocent protesters exercising their rights serve as a grim example of biased algorithms inappropriately deployed in policing. This misuse not only jeopardizes people's well-being, safety, and due process, but also threatens the collective fabric of our society, undermining the principles of privacy and free speech.

The incidents involving Porcha Woodruff and Robert Williams, whose wrongful arrests unfolded in front of their children, reflect a deeper issue: they contribute not just to individual rights violations but also to collective trauma. Such searing experiences, echoing a painful history of injustices, stand as a stark reminder of the need for comprehensive regulation of rights-impacting surveillance technologies.

Milestones With Limitations

While the Biden administration's AI Executive Order ("executive order") is a notable milestone in U.S. AI governance, its approach to tackling FRT raises questions. The executive order's directive for a report on AI in the criminal justice system seems insufficient in light of an alarming Government Accountability Office (GAO) report revealing the reckless use of facial recognition software by law enforcement agencies without training, policies, or oversight. In fact, multiple GAO reports have examined federal agencies' use of facial recognition, finding a lack of tracking, training, and compliance.

Despite the executive order's attempts to address training needs for law enforcement using rights-impacting AI, it lacks specifics on FRT, redline prohibitions for particular use cases, and remedies for those already harmed. The accompanying draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of AI takes a slightly more detailed approach, defining covered AI systems and acknowledging the impact of FRT. Yet it, too, falls short of banning FRT in publicly accessible spaces or addressing the existing harms caused by law enforcement's reliance on mismatches and the wrongful arrests that follow.

Given that the GAO report unequivocally underscores the urgent need for comprehensive training and oversight, why does the executive order merely recommend that agencies consider offering guidance to state, local, tribal, and territorial law enforcement entities? Is the White House really committed to effectively addressing the critical issue of algorithmic discrimination?

A Path Forward—Can Congress Deliver?

The executive order rightly places privacy at its core and calls for legislative solutions, putting the onus for a binding resolution on Congress. To effectively address algorithmic discrimination, Congress must adopt robust, narrow, and targeted measures limiting law enforcement's use of FRT and banning specific applications. Such a legislative framework would protect personal privacy, enhance algorithmic transparency, and ensure the protection of civil and human rights.

The facial recognition mismatch cases serve as a wake-up call, signaling the need for swift and decisive congressional action to curb FRT misuse and abuse, ensure accountable policing, and protect the rights of all people in the U.S., regardless of their racial, ethnic, or religious backgrounds. Without such measures, wrongful arrests due to FRT and law enforcement's lack of training will persist, perpetuating generational trauma and undermining principles of fairness and justice. What will our society look like if these measures are in place? Safer.

If Congress were to enact human rights-centered data privacy legislation, along with guardrails and prohibitions on certain facial recognition technologies, a future with fair and equitable technology might be within reach. By prohibiting certain applications of FRT and implementing safeguards around its use in law enforcement, Congress can pave the way for greater freedom and safety for Black and brown communities, where the technology works for us, not against us. This vision is one where technology is an ally, not an adversary.

Willmary Escoto is a U.S. policy analyst for Access Now, where she works on issues around content governance, privacy, artificial intelligence, and data protection.

The views expressed in this article are the writer’s own.