Determinants of Socially Responsible AI Governance

By: Daryl Lim

The signing of the first international AI treaty by the United States, the European Union, and other nations marks a pivotal step in establishing a global framework for AI governance, one that ensures AI systems respect human rights, democracy, and the rule of law. This article advances justice, equity, and the rule of law as yardsticks of socially responsible AI—from development through deployment—to ensure that AI technologies do not exacerbate existing inequalities but actively promote fairness and inclusivity. Part I explores AI’s potential to improve access to justice for marginalized communities and for small and medium-sized law firms, while scrutinizing the AI-related risks that judges, lawyers, and the communities they serve face. Part II examines structural biases in AI systems, focusing on how biased data and coding practices can entrench inequity, and on how intellectual property protections such as trade secrets can limit transparency and undermine accountability in AI governance. Part III evaluates the normative impact of AI on traditional legal frameworks, offering a comparative analysis of governance models: the U.S. market-driven approach, the EU’s rights-based model, China’s command economy, and Singapore’s soft law framework. The analysis highlights how different systems balance innovation with safeguards, emphasizing that successful AI governance

Fine-Tuning LLMs: Structural Fluency and Augmentation for the Great and Powerful Wizard of AI

By: Verónica C. Gonzales

The civil legal tradition carries assumptions, biases, and attitudes rooted in racism and in ideologies intended to protect the (im)balance of power. This moment in history offers new versions of the same challenges as artificial intelligence (“AI”) and large language models (“LLMs”) are integrated into legal frameworks, and those critiques are being addressed in the legal discourse. Building on these perspectives, this moment also offers civil legal professionals a chance to engage with machine learning frameworks informed by social justice principles and accountable to measures of “equal justice for all.” The perception of fairness in the use of these tools is critical to protecting the integrity of, and trust in, the civil justice system. Although the features of LLMs may not yet be able to replace legal analysis, developers anticipate that these tools are headed there sooner than one might think. Without intentional approaches to machine learning, LLMs will create a civil legal system twilight zone in which machines propose new outcomes based on the ineffective patterns of the past, a never-ending feedback loop that traps litigants and stifles social progress. LLMs, and the AI tools which use them, offer a new reality in

Something Is Rotten in the State of Healthcare: Accountability, Affordability, and the Court of Public Opinion

By: Isabelle Breier

Many countries worldwide recognize a right to health and provide legal recourse for securing that right. By contrast, for many Americans, there is no legal right or remedy that enables them to access healthcare if they cannot afford it. While there are some statutes and measures in place, such as the Emergency Medical Treatment and Active Labor Act (EMTALA) and Medicaid, coverage remains piecemeal and access to healthcare is not guaranteed. Conditioning people’s access to healthcare upon their ability to pay violates the foundational bioethical principle of justice. In the United States, individuals and advocates use social media to fill this vacuum, raising awareness and funds through platforms like GoFundMe and X.com (formerly Twitter). In the absence of government-provided healthcare or a right to health, the American public sometimes functions as a healthcare access mechanism via social media. However, this insufficient and inequitable stopgap cannot replace effective governance. The public is filling a vital and unmet need through social media advocacy, but there are many drawbacks to effectively delegating this authority to the public instead of the government, including the public’s capriciousness and lack of legal accountability. The government should take proactive steps to ensure that healthcare

Tribes and AI: Possibilities for Tribal Sovereignty

By: Adam Crepelle

Artificial intelligence (AI) has permeated every facet of modern existence. Governments across the globe are exploring its applications and attempting to establish regulatory frameworks. Numerous scholars have proffered recommendations for governing AI at the local, national, and international levels. However, as is often the case, Indian tribes have been neglected in AI policy discussions. This oversight is significant because the 574 federally recognized tribes are sovereigns with their own judicial, education, and healthcare systems. Due to their relatively small populations and geographic isolation, tribes stand to benefit significantly from the services AI can perform. Moreover, tribes are uniquely well-suited to implement AI. This is the first law review article dedicated to exploring how AI can enhance tribal sovereignty. The article begins with a history of tribal sovereignty and then provides an overview of AI. Subsequent sections delve into the ways AI can augment tribal legal systems, healthcare, education, cultural preservation endeavors, economic development, and administrative capacity. By illuminating the intersection of AI and tribal sovereignty, this article seeks to foster a more inclusive discussion of AI.

Cite: 25 Duke L. & Tech. Rev. 1

Unintentional Algorithmic Discrimination: How Artificial Intelligence Undermines Disparate Impact Jurisprudence

By: Vincent Calderon

Artificial intelligence holds the capacity to revolutionize the economy by capturing efficiencies. These benefits, ostensibly, should pass down to consumers, thereby benefiting the general public. But the immense complexity of AI systems is bound to introduce legal hurdles for plaintiffs and frustrate our disparate impact jurisprudence. Specifically, demonstrating causation and proffering a less discriminatory alternative are herculean tasks for a plaintiff seeking to prove a disparate impact upon which legal relief may be granted. The courts have already begun to wrestle with these issues, primarily in the housing and employment sectors. With the rapid surge of AI systems, courts should expect further inquiry into how these programs interfere with our established antidiscrimination framework. This Note outlines how each step of a plaintiff’s successful disparate impact analysis is hindered by the opaque ways in which AI operates. This Note then proposes several policy reforms to mitigate these consequences.

Cite: 24 Duke L. & Tech. Rev. 28

Can ChatGPT Keep a Secret? An Evaluation of the Applicability and Suitability of Trade Secrecy Protection for AI-Generated Inventions

By: Gina L. Campanelli

The rising popularity of generative artificial intelligence has sparked questions about whether AI-generated inventions and works can be protected under current intellectual property regimes, and if so, how. Guidance from the U.S. Copyright Office and recent court cases shed some light on the applicability of copyright and patent protection to AI-generated products: namely, that “authors” and “inventors” are limited to natural persons. But further developments in copyright and patent law still lag behind generative AI’s rapid growth. Trade secrecy emerges as the most viable path forward to protect AI-generated works and inventions because ownership of trade secrets is not limited to natural persons. But trade secrecy has its drawbacks too, primarily its inadequate protection outside of misappropriation. Further, trade secrecy precludes disclosure, which hinders greater scientific development and progress. This Note examines the suitability and applicability of copyright, patent, and trade secret protection for AI-generated works and inventions and posits alternative protection schemes.

Cite: 24 Duke L. & Tech. Rev. 1