By Joshua Angelo
Last month, Duke Law’s Center on Law, Race & Policy hosted scholars and experts for its AI & Marginalized Groups Symposium. I had the pleasure of attending both the Symposium’s Lunch Keynote and its Criminal Justice panel. In the Lunch Keynote, Dr. Charlton McIlwain discussed his concerns about the impact of artificial intelligence on marginalized communities. In the Criminal Justice panel, a group of experts, including Duke’s own Professor Brandon Garrett, explored AI’s often troubling implications for law enforcement and criminal justice.
Lunch Keynote:
Dr. Charlton McIlwain is the Vice Provost for Faculty Engagement and Development at New York University, a Professor of Media, Culture, and Communication, and the Founder of the Critical Race and Digital Studies Program. Dr. McIlwain began his presentation by noting that he approaches these issues as both a historian and a social scientist, with each perspective informing his view of technology.
The presentation then turned to Dr. McIlwain’s concerns about AI, beginning with the prospect of algorithmic discrimination. Dr. McIlwain first discussed the targeted advertising of predatory mortgage loans to Black and Hispanic individuals, a practice known as “reverse redlining,” and the role that digital advertising can play in facilitating it. He emphasized how AI differs from earlier digital advertising and targeting, particularly with respect to transparency. On websites like Facebook, assessing targeting is not terribly difficult: one need only look at the categories advertisers select. AI, however, can be a “black box” of sorts, opaque in how it functions and whom it targets.
Dr. McIlwain turned next to the question of participation and competition in the AI field. Framing his discussion around the CHIPS and Science Act, he described the various ways in which marginalized communities can participate in technological progress. Drawing a distinction between superficial involvement, like survey-taking, and deeper engagement in the innovation economy, Dr. McIlwain emphasized the importance of purposeful work with, for, and by marginalized communities. Such deep engagement, he stressed, is essential to advancing public welfare through technology and minimizing harm. In this light, he voiced concern about the representation of marginalized communities at every level of technological development as the field of AI moves forward.
Dr. McIlwain finished his presentation with a discussion of his book, Black Software. In accordance with the concerns noted above, he expressed fear that new technology will be used to surveil and police marginalized communities and that, absent discussions about race in the development of AI, the future will reproduce the racial inequities of the past.
In the Q & A following Dr. McIlwain’s presentation, participants engaged with areas such as trust, agenda-setting, and the role of private companies in making change. Dr. McIlwain emphasized that trust is difficult to build and easy to lose, underscoring the importance of avoiding harm in developing AI. In response to a question about incentivizing the involvement of marginalized communities in agenda-setting, he suggested conditioning certain National Science Foundation funds on deeper community involvement. He also fielded questions about the roles of companies in facilitating participation by marginalized communities, noting that corporate change has often come in fits and starts and suffered from an overall lack of sustainability, perhaps creating a need for independent innovation.
Dr. McIlwain’s presentation provided a unique perspective on the impacts of technological development across communities. In light of the increasing prominence of artificial intelligence, his discussion spoke to the broader importance of ensuring that novel technologies operate to the public benefit.
Criminal Justice Panel:
The Criminal Justice panel discussed the implications of artificial intelligence for constitutional rights, evidentiary standards, and marginalization. Moderated by Cornell Law professor Jessica Eaglin, the panel consisted of Chaz Arnett, a professor at the University of Maryland Francis King Carey School of Law, Sarah R. Olson, Forensic Resource Counsel at the Office of Indigent Defense Services, and Brandon L. Garrett, L. Neil Williams, Jr. Distinguished Professor of Law and Director of the Wilson Center for Science and Justice here at Duke.
Throughout the conversation, Professor Garrett spoke to the tension between AI on the one hand and constitutional rights and evidentiary standards on the other, citing the use of AI and complex algorithms in evaluating DNA mixtures, conducting risk assessments, and engaging in predictive policing. The “black box” nature of AI also featured heavily in the discussion. Specifically, Professor Garrett noted that AI technology’s proprietary character can prevent evaluation both of the data being relied upon and of the relative weights given to different data points. This complicates error identification and may implicate both the right to the disclosure of evidence and the ability to challenge it.
Professor Garrett also discussed the impact of AI and complex algorithms on evidentiary rules, noting the recent revisions to the Federal Rules of Evidence and emphasizing the need for expert witnesses to understand the software they use. He went on to discuss the role of experts and academia in evaluating software used in the criminal justice setting, describing the shortage of regulation in law enforcement procurement and the reliability issues that frequently emerge when researchers are able to examine the technology. Finally, he observed that AI models produce data that can be distilled into a form courts and judges can understand, allowing for judicial evaluation of evidence produced by AI.
Professor Arnett discussed the impact of AI on marginalized communities. He began by noting that discourse around AI often centers on utopian appeals and the notion of continued technological advances, a narrative that, in his view, obscures AI’s potential harms. Specifically, Professor Arnett discussed the implications of AI for what he termed the “sub-opticon” of dignitary harm and dehumanization for marginalized communities. He noted the phenomenon of “digital blackface”: the caricaturing of Black people on online platforms, and the impact that such depictions can have on policing and criminal justice.
Sarah Olson began by describing her work helping public defenders understand new technologies. In her view, more defense attorneys and more resources are needed to level the playing field between the prosecution and the defense. In the realm of technology specifically, prosecutors have access to technology experts, while public defenders generally do not. Olson argued that funding should be made available for public defenders to hire such experts.
The panel discussion was followed by a Q & A that largely centered on issues of accountability and error detection. Professor Garrett discussed the Wilson Center’s work examining data sets, noting the errors and gaps found in the underlying personal data itself. AI tools, he said, cannot improve decision-making if the data on which they rely is incorrect or missing. He added that, while existing evidentiary rules ask the questions appropriate to test the accuracy of AI systems, problems have arisen in applying those rules. Sarah Olson also noted problems with the application of new technologies, focusing on facial recognition. Because facial recognition technology is not yet a sufficient basis for making an arrest, officers are supposed to have a human examiner validate its results. She expressed doubt that this requirement is consistently met in practice.
In the Q & A, Professor Arnett argued that increased reliance on technology in the realms of criminal justice and social welfare creates problems for democracy and makes states less accountable. He cited Sam Altman’s threat to cease OpenAI’s operations in the European Union in response to proposed regulation as an example of these challenges. He also discussed his concern about a reemergence of eugenics, as applying AI to biometric data, among other areas, can appear to validate the social construction of race despite the absence of any biological justification.
The Criminal Justice panel provided many examples of the challenges posed by AI. It was a reminder that discussions of social costs and fundamental rights play a critical role in the broader discourse around technological development.