By: Keith Porcaro

Debates over economic protectionism or the technology flavor-of-the-month obscure a simple, urgent truth: people are going online to find help that they cannot get from legal and health professionals. They are being let down by products with festering trust and quality issues, by regulators slow to apply consumer protection standards to harmful offerings, and by professionals loath to acknowledge changes to how help is delivered. The status quo cannot continue. Waves of capital and code are empowering ever more organizations to build digital products that blur the line between self-help and professional advice. For good or ill, “gray advice” is changing how ordinary people get help with legal and healthcare issues, and even how they perceive professionals. This Article begins the work of articulating what makes a high-quality digital advice product, and how regulators and professionals can engage with the reality of how people seek and find help today.

Download Full Article (PDF)
Cite: 25 Duke L. & Tech. Rev. 48
AI & Marginalized Communities Symposium
By: Joshua Angelo

Last month, Duke Law’s Center on Law, Race & Policy hosted numerous scholars and experts for its AI & Marginalized Groups Symposium. I had the pleasure of attending both the Symposium’s Lunch Keynote and its Criminal Justice panel. In the Lunch Keynote, Dr. Charlton McIlwain discussed his concerns about the impact of artificial intelligence on marginalized communities. In the Criminal Justice panel, a group of experts, including Duke’s own Professor Brandon Garrett, explored AI’s often concerning implications for law enforcement and criminal justice.

Lunch Keynote: Dr. Charlton McIlwain is the Vice Provost for Faculty Engagement and Development at New York University, as well as a Professor of Media, Culture, and Communications, and Founder of the Critical Race and Digital Studies Program. Dr. McIlwain began his presentation by noting that he approaches matters both as a historian and as a social scientist, with each perspective informing his viewpoint regarding technology. The presentation then turned to Dr. McIlwain’s concerns about AI, beginning with the prospect of algorithmic discrimination. Dr. McIlwain first discussed the targeted advertising of predatory mortgage loans to Black and Hispanic individuals, a practice known as “reverse redlining.” He noted the role that digital advertising can play in facilitating
Tribes and AI: Possibilities for Tribal Sovereignty
By: Adam Crepelle

Artificial Intelligence (AI) has permeated every facet of modern existence. Governments across the globe are exploring its applications and attempting to establish regulatory frameworks. Numerous scholars have proffered recommendations for governing AI at the local, national, and international levels. However, as is often the case, Indian tribes have been neglected in AI policy discussions. This oversight is significant because the 574 federally recognized tribes are sovereigns with their own judicial, education, and healthcare systems. Due to their relatively small populations and geographic isolation, tribes stand to benefit significantly from the services AI can perform. Moreover, tribes are uniquely well-suited to implement AI. This is the first law review article dedicated to exploring how AI can enhance tribal sovereignty. This article begins with a history of tribal sovereignty and then provides an overview of AI. Subsequent sections delve into the ways AI can augment tribal legal systems, healthcare, education, cultural preservation endeavors, economic development, and administrative capacity. By illuminating the intersection of AI and tribal sovereignty, this article seeks to foster a more inclusive discussion of AI.

Download Full Article (PDF)
Cite: 25 Duke L. & Tech. Rev. 1
The GPTJudge: Justice in a Generative AI World
By: Maura R. Grossman, Paul W. Grimm, Daniel G. Brown, and Molly Xu

Generative AI (“GenAI”) systems such as ChatGPT recently have developed to the point where they can produce computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos, and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raise concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, whether juries will be able to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, whether vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases. This article discusses these issues, and offers a
Causation and Conception in American Inventorship
By: Dan L. Burk

Increasing use of machine learning or “artificial intelligence” (AI) software systems in technical innovation has led some to speculate that machines might be considered inventors under patent law. While U.S. patent doctrine decisively precludes such a bizarre and counterproductive result, the speculation invites a more fruitful inquiry about the role of causation in the law of inventorship. U.S. law has almost entirely disregarded causation in determining inventorship, with very few exceptions, some of which are surprising. In this essay, I examine those exceptions to inventive causality, the role they play in determining inventorship, and their effect in excluding consideration of mechanical inventors under current law.

Download Full Article (PDF)
Cite: 20 Duke L. & Tech. Rev. 116