Tribal Lending After Gingras

By: Max King

Online payday lenders pose serious risks for consumers. Yet, for years, these lending companies have skirted state regulation by pleading tribal sovereign immunity. Under this doctrine, entities that are so affiliated with tribal nations that they are “an arm of the tribe” are immune from suit. Without comprehensive federal regulation, tribal sovereign immunity has served as a trump card at the pleading stage for online payday lenders. This Note argues that change may be on the horizon. In the recent decision Gingras v. Think Finance, the Second Circuit held that the Supreme Court’s holding in Michigan v. Bay Mills Indian Community permitted injunctive suits against tribal affiliates, acting in their official capacity off reservation, based on state law. If other courts adopt the Second Circuit’s reasoning, states and consumers will be far better equipped to tackle online payday lenders.

Cite: 19 Duke L. & Tech. Rev. 122

ICTs, Social Media, & the Future of Human Rights

By: Nikita Mehandru and Alexa Koenig

As communication increasingly shifts to digital platforms, information derived from online open sources is becoming critical in creating an evidentiary basis for international crimes. While journalists have led the development of many newly emerging open source investigation methodologies, courts have heightened the requirements for verifying and preserving a chain of custody—information linking all of the individuals who possessed the content and indicating the duration of their custody—creating a need for standards that are just now beginning to be identified, articulated, and accepted by the international legal community. In this article, we discuss the impact of internet-based open source investigations on international criminal legal processes, as well as challenges related to their use. We also offer best practices for lawyers, activists, and other individuals seeking to admit open source information—including content derived from social media—into courts.

Cite: 17 Duke L. & Tech. Rev. 129

Deepfakes: False Pornography Is Here and the Law Cannot Protect You

By: Douglas Harris

It is now possible for anyone with rudimentary computer skills to create fake pornography portraying an individual engaging in a sex act that never actually occurred. These realistic videos, called “deepfakes,” use artificial intelligence software to impose a person’s face onto another person’s body. While pornographic deepfakes were first created to produce videos of celebrities, they are now being generated to feature other nonconsenting individuals—like a friend or a classmate. This Article argues that several tort doctrines and recent non-consensual pornography laws are unable to handle published deepfakes of non-celebrities. Instead, a federal criminal statute prohibiting these publications is necessary to deter this activity.

Cite: 17 Duke L. & Tech. Rev. 99

The Future of Freedom of Expression Online

By: Evelyn Mary Aswad

Should social media companies ban Holocaust denial from their platforms? What about conspiracy theorists who spew hate? Does good corporate citizenship mean platforms should remove offensive speech or tolerate it? The content moderation rules that companies develop to govern speech on their platforms will have significant implications for the future of freedom of expression. Given that the prospects for compelling platforms to respect users’ free speech rights are bleak within the U.S. system, what can be done to protect this important right? In June 2018, the United Nations’ top expert for freedom of expression called on companies to align their speech codes with standards embodied in international human rights law, particularly the International Covenant on Civil and Political Rights (ICCPR). After the controversy over de-platforming Alex Jones in August 2018, Twitter’s CEO agreed that his company should root its values in international human rights law, and Facebook referenced this body of law in discussing its content moderation policies. This is the first article to explore what companies would need to do to align the substantive restrictions in their speech codes with Article 19 of the ICCPR, which is the key international standard for protecting freedom of expression.

Dialing It Back: Why Courts Should Rethink Students’ Privacy and Speech Rights as Cell Phone Communications Erode the ‘Schoolhouse Gate’

By: Nicholas J. McGuire

The ubiquity of cell phones in today’s society has forced courts to change or abandon established but inapplicable analytical frameworks. Two such frameworks in the school setting are regulations of student speech and of student searches. This Article traces the constitutional jurisprudence of both First Amendment off-campus speech protection and Fourth Amendment search standards as applied to the school setting. It then analyzes how the Supreme Court’s ruling in Riley v. California complicates both areas. Finally, it proposes a pragmatic solution: by recognizing a categorical First Amendment exception for “substantial threats” against the school community, courts could accommodate students’ constitutional rights while upholding school administrators’ ability to maintain a safe environment.

Cite: 17 Duke L. & Tech. Rev. 1

Systemic Social Media Regulation

By: Frank Fagan

Social media platforms are motivated by profit, corporate image, long-term viability, good citizenship, and a desire for friendly legal environments. These managerial interests stand in contrast to the gubernatorial interests of the state, which include the promotion of free speech, the development of e-commerce, various counterterrorism initiatives, and the discouragement of hate speech. Inasmuch as managerial and gubernatorial interests overlap, a self-regulation model of platform governance should prevail. Inasmuch as they diverge, regulation is desirable when its benefits exceed its costs. An assessment of the benefits and costs of social media regulation should account for how social facts, norms, and falsehoods proliferate. This Article sketches a basic economic model. What emerges from the analysis is that the quality of discourse cannot be controlled through suppression of content, or even disclosure of source. A better approach is to modify, in a manner conducive to discursive excellence, the structure of the forum. Optimal platform architecture should aim to reduce the systemic externalities generated by the social interactions that platforms enable, including the social costs of unlawful interference in elections and the proliferation of hate speech. Simultaneously, a systemic approach to social media regulation implies fewer controls on user speech.

Big Brother is Listening to You: Digital Eavesdropping in the Advertising Industry

By: Dacia Green

In the Digital Age, information is more accessible than ever. Unfortunately, that accessibility has come at the expense of privacy. Now, more and more personal information is in the hands of corporations and governments, for uses not known to the average consumer. Although these entities have long been able to keep tabs on individuals, with the advent of virtual assistants and “always-listening” technologies, the ease with which a third party may extract information from a consumer has only increased. The stark reality is that lawmakers have left the American public behind. While other countries have enacted consumer privacy protections, the United States has no satisfactory legal framework in place to curb data collection by greedy businesses or to regulate how those companies may use and protect consumer data. This Article contemplates one use of that data: digital advertising. Inspired by stories of suspiciously well-targeted advertisements appearing on social media websites, this Article additionally questions whether companies have been honest about their collection of audio data. To address the potential harms consumers may suffer as a result of this deficient privacy protection, this Article proposes a framework wherein companies must acquire users’ consent and the government must ensure compliance.

Online Terrorist Speech, Direct Government Regulation, and the Communications Decency Act

By: Steven Beale

The Communications Decency Act (CDA) provides Internet platforms with complete protection from liability for user-generated content. This Article discusses the costs of this legal framework and several potential solutions. It proposes three modifications to the CDA that would use a carrot-and-stick approach to incentivize companies to take a more active role in addressing some of the most blatant downsides of user-generated content on the Internet. Despite the modest nature of these proposed changes, they would have a significant impact.

Cite: 16 Duke L. & Tech. Rev. 333

Peeling Back the Student Privacy Pledge

By: Alexi Pfeffer-Gillett

Education software is a rapidly growing, multi-billion-dollar industry. The federal government has encouraged this growth through a series of initiatives that reward schools for tracking and aggregating student data. Amid this increasingly digitized education landscape, parents and educators have begun to raise concerns about the scope and security of student data collection. Industry players, rather than policymakers, have so far led efforts to protect student data. Central to these efforts is the Student Privacy Pledge, a set of standards that providers of digital education services have voluntarily adopted. By many accounts, the Pledge has been a success. Since its introduction in 2014, over 300 companies have signed on, indicating widespread commitment to the Pledge’s seemingly broad protections for student privacy. This industry participation is encouraging, but the Pledge does not contain any meaningful oversight or enforcement provisions. This Article analyzes whether signatory companies are actually complying with the Pledge rather than just paying lip service to its goals. By looking to the privacy policies and terms of service of a sample of the Pledge’s signatories, I conclude that noncompliance may be a significant and prevalent issue. Consumers of education software have some power to hold noncompliant signatories accountable.

Increasing Copyright Protection for Social Media Users by Expanding Social Media Platforms’ Rights

By: Ryan Wichtowski

Social media platforms allow users to share their creative works with the world. Users take great advantage of this functionality, as Facebook, Instagram, Flickr, Snapchat, and WhatsApp users alone uploaded 1.8 billion photos per day in 2014. Under the terms of service and terms of use agreements of most U.S.-based social media platforms, users retain ownership of this content, since they grant social media platforms only nonexclusive licenses to it. While nonexclusive licenses protect users vis-à-vis the social media platforms, these licenses preclude social media platforms from bringing copyright infringement claims under the Copyright Act of 1976 on behalf of their users against infringers of user content. Because the average cost of litigating a copyright infringement case might be as high as two million dollars, the average social media user cannot protect his or her content against copyright infringers. To remedy this issue, Congress should amend 17 U.S.C. § 501 to allow social media platforms to bring copyright infringement claims against those who infringe their users’ content. Through this amendment, Congress would create a new protection for social media users while ensuring that users retain ownership of the content they create.