Confidentiality of AI Conversations: Protecting Self-Represented Litigants Who Use ChatGPT for Legal Advice

By: Anoo Dinesh Vyas When a layperson uses ChatGPT to obtain feedback on a legal matter, attorney-client privilege may not apply, as ChatGPT is not a lawyer, nor even a human. Further, while lawyers are entitled to protection for their opinion work-product, it is not clear whether self-represented litigants are entitled to the same protection. Additionally, the broader duty of confidentiality binds only attorneys, not AI systems like ChatGPT. The public increasingly uses AI tools such as ChatGPT. If a layperson employs ChatGPT for legal advice, particularly in a civil matter, such communications may be discoverable and potentially admissible. This presents an access to justice issue because a self-represented litigant who seeks to understand the scope of their legal rights may not realize that their AI communications can be used against them. Alternatively, they may not be able to afford an attorney, and thus decide to take the risk of communicating with AI anyway. This Article argues that self-represented litigants should enjoy protection for opinion work-product, and further, that AI responses to self-represented litigants should also be permitted to count as opinion work-product. In addition, this Article proposes a discovery management protocol so courts may handle AI communications in a practicable manner.

Fossil-Fueled Failure: How Nonrenewable Energy Policy Will Cost the United States the AI Race

By: Kayla Landeros This Article examines the structure and regulation of the United States electricity industry in light of accelerating electricity demand driven by Artificial Intelligence (“AI”) and digitalization. It argues that the One Big Beautiful Bill Act and related executive actions have exacerbated existing structural weaknesses within the nation’s energy governance framework by repealing renewable energy incentives, privileging nonrenewable generation, and reinforcing the instability of an executive-dominated regulatory model. Through a comparative analysis, the Article contrasts the United States’ fragmented, market-based approach with China’s centralized, statute-based framework, which integrates renewable energy development, grid expansion, and AI infrastructure into a cohesive national strategy. While acknowledging the constitutional and institutional limits of United States energy federalism, the Article contends that meaningful reform remains possible through measures that promote reliability and long-term planning. It concludes that the United States’ economic and technological leadership in the AI era requires a durable, whole-of-government energy policy that supports all forms of energy generation and unites generation, transmission, and distribution within a coherent framework capable of supporting both innovation and sustainability. Download Full Article (PDF) Cite: 26 Duke L. & Tech. Rev. 127

Falling Flat: Why AI Cannot Free Melodies from Copyright Protection with “All the Music” as an Example

By: Hayley Huber As artificial intelligence (AI) continues to surprise us with its humanlike abilities, it raises the question of whether AI-created music can or should be afforded legal protection. Particularly, how should copyright law treat melodies produced by an AI designed to algorithmically generate every possible melody? This article seeks to answer that question, ultimately concluding that AI-produced melodies are not copyrightable and that melodies are not merely facts undeserving of copyright protection, but something valuable to mankind and worth protecting by law. The article explores Damien Riehl’s All the Music project (ATM) and his arguments for why ATM’s outputs should be protected, using the project as a case study to show that AI-produced music is uncopyrightable and that melodies are more than uncopyrightable facts. The article shows that U.S. copyright law does not recognize machines as “authors” for copyright purposes, that reducing melodies to “just math” conflicts with mainstream legal and musicological understandings of melody, and that even if ATM’s outputs were copyrightable, most of its “melodies” would fail for lack of originality. Projects like ATM neither free existing melodies from copyright protection nor meaningfully reduce the risk of infringement litigation for musicians. Download Full Article (PDF) Cite: 26 Duke L. & Tech.

Playing to Win: The Use of Export Controls to Address Non-military Strategic Competition

By: Joshua Angelo Technology and national security have been intimately related throughout American history. Over the past eighty-five years, the United States has increasingly made use of export controls to prevent adversarial countries from using the fruits of its technological advancement to strengthen their militaries and harm American interests. Today, strategic competition with the People’s Republic of China involves a myriad of technologies which present risks outside of, as well as within, the military context. Chinese exports of technologies, including artificial intelligence, quantum computing, and 5G communications, can promote the country’s authoritarian model abroad, enable economic coercion, and help to enrich the regime. The Belt and Road and “Made in China 2025” initiatives illustrate China’s use of novel technologies to further its global ambitions. Export controls may help to confront these threats. This note asks whether the Bureau of Industry and Security (a government body tasked with implementing export controls) can regulate exports of novel technologies for purposes of non-military strategic competition. Through examination of the statutes authorizing export controls and of the deference traditionally afforded to the Executive in administrative and foreign affairs matters, this note concludes that such export controls are authorized. Download Full Article (PDF) Cite: 25

Determinants of Socially Responsible AI Governance

By: Daryl Lim The signing of the first international AI treaty by the United States, European Union, and other nations marks a pivotal step in establishing a global framework for AI governance, ensuring that AI systems respect human rights, democracy, and the rule of law. This article advances the concepts of justice, equity, and the rule of law as yardsticks of socially responsible AI—from development through deployment—to ensure that AI technologies do not exacerbate existing inequalities but actively promote fairness and inclusivity. Part I explores AI’s potential to improve access to justice for marginalized communities and small and medium-sized law firms while scrutinizing the AI-related risks faced by judges, lawyers, and the communities they serve. Part II examines the structural biases in AI systems, focusing on how biased data and coding practices can entrench inequity and how intellectual property protections like trade secrets can limit transparency and undermine accountability in AI governance. Part III evaluates the normative impact of AI on traditional legal frameworks, offering a comparative analysis of governance models: the U.S. market-driven approach, the EU’s rights-based model, China’s command economy, and Singapore’s soft law framework. The analysis highlights how different systems balance innovation with safeguards, emphasizing that successful AI governance

Fine-Tuning LLMs: Structural Fluency and Augmentation for the Great and Powerful Wizard of AI

By: Verónica C. Gonzales The civil legal tradition carries assumptions, biases, and attitudes rooted in racism and ideologies intended to protect the (im)balance of power. This moment in history offers new versions of the same challenges with the integration of artificial intelligence (“AI”) and large language models (“LLMs”) into legal frameworks, and those critiques are being addressed in the legal discourse. Building on these perspectives, this moment also offers civil legal professionals a chance to engage in machine learning frameworks informed by social justice principles and accountable to measures of “equal justice for all.” The perception of fairness in the use of these tools is critical to protecting the integrity of and trust in the civil justice system. Although the features of LLMs may not be able to replace legal analysis just yet, developers anticipate that these tools are headed there sooner than one might think. Without intentional approaches to machine learning, LLMs will create a civil legal system twilight zone where machines propose new outcomes based on the ineffective patterns of the past, a never-ending feedback loop that traps litigants and stifles social progress. LLMs, and the AI tools which use them, offer a new reality in

Gray Advice

By: Keith Porcaro Debates over economic protectionism or the technology flavor-of-the-month obscure a simple, urgent truth: people are going online to find help that they cannot get from legal and health professionals. They are being let down, by products with festering trust and quality issues, by regulators slow to apply consumer protection standards to harmful offerings, and by professionals loath to acknowledge changes to how help is delivered. The status quo cannot continue. Waves of capital and code are empowering ever more organizations to build digital products that blur the line between self-help and professional advice. For good or ill, “gray advice” is changing how ordinary people get help with legal issues and healthcare issues, and even how they perceive professionals. This Article begins the work of articulating what makes a high-quality digital advice product, and how regulators and professionals can engage with the reality of how people seek and find help today. Download Full Article (PDF) Cite: 25 Duke L. & Tech. Rev. 48

AI & Marginalized Communities Symposium

By: Joshua Angelo Last month, Duke Law’s Center on Law, Race & Policy hosted numerous scholars and experts for its AI & Marginalized Groups Symposium. I had the pleasure of attending both the Symposium’s Lunch Keynote and its Criminal Justice panel. In the Lunch Keynote, Dr. Charlton McIlwain discussed his concerns about the impact of artificial intelligence on marginalized communities. In the Criminal Justice panel, numerous experts, including Duke’s own Professor Brandon Garrett, explored AI’s often concerning implications for law enforcement and criminal justice. Lunch Keynote: Dr. Charlton McIlwain is the Vice Provost for Faculty Engagement and Development at New York University, as well as a Professor of Media, Culture, and Communications, and Founder of the Critical Race and Digital Studies Program. Dr. McIlwain began his presentation by noting that he approaches matters both as a historian and as a social scientist, with each perspective informing his viewpoint regarding technology. The presentation then turned to Dr. McIlwain’s concerns about AI, beginning with the prospect of algorithmic discrimination. Dr. McIlwain first discussed the targeted advertising of predatory mortgage loans to Black and Hispanic individuals, a practice known as “reverse redlining.” He noted the role that digital advertising can play in facilitating

Tribes and AI: Possibilities for Tribal Sovereignty

By: Adam Crepelle Artificial Intelligence (AI) has permeated every facet of modern existence. Governments across the globe are exploring its applications and attempting to establish regulatory frameworks. Numerous scholars have proffered recommendations for governing AI at the local, national, and international levels. However, as is often the case, Indian tribes have been neglected in AI policy discussions. This oversight is significant because the 574 federally recognized tribes are sovereigns with their own judicial, education, and healthcare systems. Due to their relatively small populations and geographic isolation, tribes stand to benefit significantly from the services AI can perform. Moreover, tribes are uniquely well-suited to implement AI. This is the first law review article dedicated to exploring how AI can enhance tribal sovereignty. This article begins with a history of tribal sovereignty and then provides an overview of AI. Subsequent sections delve into the ways AI can augment tribal legal systems, healthcare, education, cultural preservation endeavors, economic development, and administrative capacity. By illuminating the intersection of AI and tribal sovereignty, this article seeks to foster a more inclusive discussion of AI. Download Full Article (PDF) Cite: 25 Duke L. & Tech. Rev. 1

The GPTJudge: Justice in a Generative AI World

By: Maura R. Grossman, Paul W. Grimm, Daniel G. Brown, and Molly Xu Generative AI (“GenAI”) systems such as ChatGPT have recently developed to the point where they can produce computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos, and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not AI-generated. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, whether juries will be able to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases. This article discusses these issues, and offers a