Determinants of Socially Responsible AI Governance

By: Daryl Lim

The signing of the first international AI treaty by the United States, European Union, and other nations marks a pivotal step in establishing a global framework for AI governance, ensuring that AI systems respect human rights, democracy, and the rule of law. This article advances the concepts of justice, equity, and the rule of law as yardsticks of socially responsible AI—from development through deployment—to ensure that AI technologies do not exacerbate existing inequalities but actively promote fairness and inclusivity. Part I explores AI’s potential to improve access to justice for marginalized communities and small and medium-sized law firms while scrutinizing AI-related risks judges, lawyers, and the communities they serve face. Part II examines the structural biases in AI systems, focusing on how biased data and coding practices can entrench inequity and how intellectual property protections like trade secrets can limit transparency and undermine accountability in AI governance. Part III evaluates the normative impact of AI on traditional legal frameworks, offering a comparative analysis of governance models: the U.S. market-driven approach, the EU’s rights-based model, China’s command economy, and Singapore’s soft law framework. The analysis highlights how different systems balance innovation with safeguards, emphasizing that successful AI governance

Fine-Tuning LLMs: Structural Fluency and Augmentation for the Great and Powerful Wizard of AI

By: Verónica C. Gonzales

The civil legal tradition carries assumptions, biases, and attitudes rooted in racism and ideologies intended to protect the (im)balance of power. This moment in history offers new versions of the same challenges with the integration of artificial intelligence (“AI”) and large language models (“LLMs”) into legal frameworks, and those critiques are being addressed in the legal discourse. Building on these perspectives, this moment also offers civil legal professionals a chance to engage in machine learning frameworks informed by social justice principles and accountable to measures of “equal justice for all.” The perception of fairness in the use of these tools is critical to protecting the integrity of and trust in the civil justice system. Although the features of LLMs may not be able to replace legal analysis just yet, developers anticipate that these tools are headed there sooner than one might think. Without intentional approaches to machine learning, LLMs will create a civil legal system twilight zone where machines propose new outcomes based on the ineffective patterns of the past, a never-ending feedback loop that traps litigants and stifles social progress. LLMs, and the AI tools that use them, offer a new reality in

Something Is Rotten in the State of Healthcare: Accountability, Affordability, and the Court of Public Opinion

By: Isabelle Breier

Many countries recognize a right to health and provide legal recourse for securing that right. By contrast, for many Americans, there is no legal right or remedy that enables them to access healthcare if they cannot afford it. While there are some statutes and measures in place, such as the Emergency Medical Treatment and Active Labor Act (EMTALA) and Medicaid, coverage remains piecemeal and access to healthcare is not guaranteed. Conditioning people’s access to healthcare upon their ability to pay violates the foundational bioethical principle of justice. In the United States, individuals and advocates use social media to fill this vacuum, raising awareness and funds through platforms like GoFundMe and X.com (formerly Twitter). In the absence of government-provided healthcare or a right to health, the American public sometimes functions as a healthcare access mechanism via social media. However, this insufficient and inequitable stopgap cannot replace effective governance. The public is filling a vital and unmet need through social media advocacy, but there are many drawbacks to effectively delegating this authority to the public instead of the government, including the public’s capriciousness and lack of legal accountability. The government should take proactive steps to ensure that healthcare

Gray Advice

By: Keith Porcaro

Debates over economic protectionism or the technology flavor-of-the-month obscure a simple, urgent truth: people are going online to find help that they cannot get from legal and health professionals. They are being let down by products with festering trust and quality issues, by regulators slow to apply consumer protection standards to harmful offerings, and by professionals loath to acknowledge changes to how help is delivered. The status quo cannot continue. Waves of capital and code are empowering ever more organizations to build digital products that blur the line between self-help and professional advice. For good or ill, “gray advice” is changing how ordinary people get help with legal and healthcare issues, and even how they perceive professionals. This Article begins the work of articulating what makes a high-quality digital advice product, and how regulators and professionals can engage with the reality of how people seek and find help today.

Cite: 25 Duke L. & Tech. Rev. 48

Decoding Cryptocurrency Taxes: The Challenges for Estate Planners

By: Max Angel

In this article, Angel explores the unique challenges of estate planning with cryptocurrency, which include accurately valuing those assets, preserving their value, and addressing the complex tax implications of transferring cryptocurrency to heirs.

Cite: 23 Duke L. & Tech. Rev. 137

Beyond Patents: Incentive Strategies for Ocean Plastic Remediation Technologies

By: Jacob Stotser

With a garbage truck’s worth of plastic being dumped in the ocean each minute, there is a dire need for effective technological solutions aimed at mitigating the marine plastic pollution problem. However, the reliance of the U.S. patent system on market demand to incentivize this type of innovation has proven insufficient in light of the peculiarities of “green” technologies. To remedy this, this article proposes a multi-faceted incentivization approach that looks beyond the U.S. Patent and Trademark Office to stimulate the development of remediation technologies through comprehensive regulatory interventions, the establishment of prize funds and other alternative incentive mechanisms, and targeted reforms to patent procedures.

Cite: 23 Duke L. & Tech. Rev. 114

Barcoding Bodies: RFID Technology and the Perils of E-Carceration

By: Jackson Samples

Electronic surveillance now plays a central role in the criminal legal system. Every year, hundreds of thousands of people are tracked by ankle monitors and smartphone technology. And frighteningly, commentators and policymakers have now proposed implanting radio frequency identification (“RFID”) chips into people’s bodies for surveillance purposes. This Note examines the unique risks of these proposals—particularly with respect to people on probation and parole—and argues that RFID implants would constitute a systematic violation of individual privacy and bodily integrity. As a result, they would also violate the Fourth Amendment.

Cite: 23 Duke L. & Tech. Rev. 89

The Lack of Responsibility of Higher Education Institutions in Addressing Phishing Emails and Data Breaches

By: Muxuan (Muriel) Wang

Higher education institutions (HEIs) are highly susceptible to cyberattacks, particularly those facilitated through phishing, due to the substantial volume of confidential student and staff data and valuable research information they hold. Despite federal legislation focusing on bolstering cybersecurity for critical institutions handling medical and financial data, HEIs have not received similar attention. This Note examines the minimal obligations imposed on HEIs by existing federal and state statutes concerning data breaches, the absence of requirements for HEIs to educate employees and students about phishing attacks, and potential strategies to improve student protection against data breaches.

Cite: 23 Duke L. & Tech. Rev. 35

Next-Generation Data Governance

By: Kimberly A. Houser & John W. Bagby

The proliferation of sensors, electronic payments, click-stream data, location-tracking, biometric feeds, and smart home devices creates an incredibly profitable market for both personal and non-personal data. It is also leading to an amplification of harm to those from or about whom the data is collected. Because federal law provides inadequate protection for data subjects, there are growing calls for organizations to implement data governance solutions. Unfortunately, in the U.S., the concept of data governance has not progressed beyond the management and monetization of data. Many organizations operate under an outdated paradigm that fails to consider the impact of data use on data subjects, due in part to the proliferation of third-party service providers hawking their “check-the-box” data governance systems. As a result, American companies suffer from a lack of trust and are hindered in their international operations by the higher data protection requirements of foreign regulators. After discussing the pitfalls of the traditional view of data governance and the limitations of suggested models, we propose a set of ten principles based on the Medical Code of Ethics. This framework, first encompassed in the Hippocratic Oath, has been evolving for over one thousand years

The GPTJudge: Justice in a Generative AI World

By: Maura R. Grossman, Paul W. Grimm, Daniel G. Brown, and Molly Xu

Generative AI (“GenAI”) systems such as ChatGPT have recently developed to the point where they can produce computer-generated text and images that are difficult to differentiate from human-generated text and images. Similarly, evidentiary materials such as documents, videos, and audio recordings that are AI-generated are becoming increasingly difficult to differentiate from those that are not. These technological advancements present significant challenges to parties, their counsel, and the courts in determining whether evidence is authentic or fake. Moreover, the explosive proliferation and use of GenAI applications raises concerns about whether litigation costs will dramatically increase as parties are forced to hire forensic experts to address AI-generated evidence, whether juries will be able to discern authentic from fake evidence, and whether GenAI will overwhelm the courts with AI-generated lawsuits, vexatious or otherwise. GenAI systems have the potential to challenge existing substantive intellectual property (“IP”) law by producing content that is machine, not human, generated, but that also relies on human-generated content in potentially infringing ways. Finally, GenAI threatens to alter the way in which lawyers litigate and judges decide cases. This article discusses these issues, and offers a