On Politico, Seamus Hughes, deputy director of George Washington University’s Program on Extremism, calls out PACER: “I’m here to tell you that PACER—Public Access to Court Electronic Records—is a judicially approved scam. The very name is misleading: Limiting the public’s access by charging hefty fees, it has been a scam since it was launched and, barring significant structural changes, will be a scam forever.” Read The Federal Courts Are Running An Online Scam (Mar. 20, 2019) here.

From the abstract for Vicenç Feliú, Moreau Lislet: The Man Behind the Digest of 1808:

The Louisiana legal system is unique in the United States, and legal scholars have long been interested in learning how this situation came to pass. Most assume that the origin of this system is the Code Napoleon, and even legal scholars steeped in Louisiana law have a hard time answering the question of the roots of the Louisiana legal system. This book solves the riddle through painstaking research into the life of Louis Moreau Lislet, the driving force behind the Digest of 1808.

From the abstract for Ronen Avraham, Database of State Tort Law Reforms (6.1):

This manuscript of the Database of State Tort Law Reforms (6th) (DSTLR) updates the DSTLR (5th) and contains the most detailed, complete and comprehensive legal dataset of the most prevalent tort reforms in the United States between 1980 and 2018. The DSTLR has been downloaded more than 2,700 times and has become the standard tool in empirical research on tort reform. The dataset records state laws in all fifty states and the District of Columbia over the last several decades. For each reform we record the effective date, a short description of the reform, whether the jury is allowed to know about the reform, whether the reform was upheld or struck down by the state's courts, and whether it was amended by the state legislature.

Scholarship studying the empirical effects of tort reforms relies on various datasets (tort-reform datasets and other legal compilations). Some of these datasets are created and published independently, and some are created ad hoc by researchers. Their usefulness frequently suffers from various defects: they are often incompatible and do not accurately record judicial invalidation of laws, and they frequently lack reforms adopted before 1986, amendments adopted after 1986, court-based reforms, and effective dates of legislation. It is possible that some of the persisting variation across empirical studies of the effects of tort reforms is due to variation in the legal datasets the studies use. This dataset builds upon and improves existing data sources through a careful review of original legislation and case law to determine the exact text and effective dates. This draft corrects errors found in the previous draft, focuses only on the most prevalent reforms, and standardizes the descriptions of the reforms. A link to an Excel file which codes ten reforms found in DSTLR (6th) can be found here.
It is hoped that creating one “canonized” dataset will increase our understanding of tort reform’s impacts on our lives.

From the blurb for Kendall Svengalis, A Layperson’s Guide to Legal Research and Self-Help Law Books (New England Press 2018):

This unique and revolutionary new reference book provides reviews of nearly 800 significant self-help law books in 85 subject areas, each of which is preceded by a concise and illuminating overview of the subject area, with links to online sources for further information. The appendices include the most complete directory of public law libraries in the United States. This is an essential reference work for any law, public, or academic library that fields legal questions or inquiries.

Highly recommended.

From the abstract for Neal Goldfarb's Corpus Linguistics in Legal Interpretation: When Is It (In)appropriate? (Feb. 2019):

Corpus linguistics can be a powerful tool in legal interpretation, but like all tools, it is suited for some uses but not for others. At a minimum, that means that there are likely to be cases in which corpus data doesn't yield any useful insights. More seriously, in some cases where the data seems useful, that appearance might prove on closer examination to be misleading. So it is important for people to be able to distinguish the issues on which corpus results are genuinely useful from those on which they are not. A big part of the motivation behind introducing corpus linguistics into legal interpretation is to increase the sophistication and quality of interpretive analysis. That purpose will be disserved if corpus data is cited in support of conclusions that the data doesn't really support.

This paper is an initial attempt to deal with the problem of distinguishing uses of corpus linguistics that can yield useful data from those that cannot. In particular, the paper addresses a criticism that has been made of the use of corpus linguistics in legal interpretation — namely, that the hypothesis underlying the legal-interpretive use of frequency data is flawed. That hypothesis, according to one of the critics, is that "where an ambiguous term retains two plausible meanings, the ordinary meaning of the term… is the more frequently used meaning[.]" (Although that description is not fully accurate, it will suffice for present purposes.)

The asserted flaw in this hypothesis is that differences in the frequencies of different senses of a word might be due to “reasons that have little to do with the ordinary meaning of that word.” Such differences, rather than reflecting the “sense of a word or phrase that is most likely implicated in a given linguistic context,” might instead reflect at least in part “the prevalence or newsworthiness of the underlying phenomenon that the term denotes.” That argument is referred to in this paper as the Purple-Car Argument, based on a skeptical comment about the use of corpus linguistics in legal interpretation: “If the word ‘car’ is ten times more likely to co-occur with the word ‘red’ than with the word ‘purple,’ it would be ludicrous to conclude from this data that a purple car is not a ‘car.’”
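The co-occurrence counting that the Purple-Car Argument targets can be illustrated with a few lines of Python. The mini-corpus below is invented for demonstration purposes; a real analysis would query a large corpus such as COCA or COFEA:

```python
# Toy illustration of the frequency analysis behind the Purple-Car Argument.
# The sentences are invented; real corpus work uses millions of texts.
from collections import Counter

corpus = [
    "she drove a red car to work",
    "the red car won the race",
    "a red car was parked outside",
    "he finally bought a purple car",
]

# Count which words immediately precede "car" across the corpus.
pairs = Counter()
for sentence in corpus:
    words = sentence.split()
    for i, word in enumerate(words):
        if word == "car" and i > 0:
            pairs[words[i - 1]] += 1

print(pairs.most_common())  # [('red', 3), ('purple', 1)]
```

Here "red" co-occurs with "car" three times as often as "purple" does, yet, as the skeptics note, it would be ludicrous to infer from that disparity that a purple car is not a "car."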

This paper deals with the Purple-Car Argument in two ways. First, it attempts to clarify the argument's scope by showing that there are ways of using corpus linguistics that do not involve frequency analysis and that are therefore not even arguably subject to the Purple-Car Argument. The paper offers several case studies illustrating such uses.

Second, the paper acknowledges that when frequency analysis is in fact used, there will be cases that do implicate the flaw that the Purple-Car Argument identifies. The problem, therefore, is to figure out how to distinguish these Purple-Car cases from cases in which the Purple-Car Argument does not apply. The paper discusses some possible methodologies that might be helpful in making that determination. It then presents three case studies, focusing on cases that are well known to those familiar with the law-and-corpus-linguistics literature: Muscarello v. United States, State v. Rasabout, and People v. Harris. The paper concludes that the Purple-Car Argument does not apply to Muscarello, that it does apply to Rasabout, and that a variant of the argument applies to the dissenting opinion in Harris.

From the abstract for Clark D. Cunningham & Jesse Egbert, Scientific Methods for Analyzing Original Meaning: Corpus Linguistics and the Emoluments Clauses, Fourth Annual Conference of Law & Corpus Linguistics (2019):

In interpreting the Constitution’s text, courts “are guided by the principle that ‘[t]he Constitution was written to be understood by the voters; its words and phrases were used in their normal and ordinary as distinguished from their technical meaning’.” District of Columbia v. Heller, 554 U.S. 570, 576 (2008). According to James Madison: “[W]hatever respect may be thought due to the intention of the Convention, which prepared and proposed the Constitution, as a presumptive evidence of the general understanding at the time of the language used, it must be kept in mind that the only authoritative intentions were those of the people of the States, as expressed through the Conventions which ratified the Constitution.”

In looking for "presumptive evidence of the general understanding at the time of the language used," courts have generally relied on dictionary definitions and selected quotations from texts dating from the period of ratification. This paper presents a completely different, scientifically grounded approach: applying the tools of linguistic analysis to "big data" about how written language was used at the time of ratification. This data became publicly available in Fall 2018 when the website of the Corpus of Founding Era American English (COFEA) was launched. COFEA contains in digital form over 95,000 texts created between 1760 and 1799, totaling more than 138,800,000 words.

The authors illustrate this scientific approach by analyzing the usage of the word emolument by writers in America during the period covered by COFEA, 1760-1799. The authors selected this project both because the interpretation of two clauses in the Constitution using emolument is of considerable current interest and because the meaning of emolument is a mystery to modern Americans.

The District of Columbia and State of Maryland are currently suing President Donald Trump alleging that his continued ownership of the Trump Hotel in Washington puts him in violation of Constitutional prohibitions on receiving or accepting “emoluments” from either foreign or state governments. The President’s primary line of defense is a narrow reading of emolument as “profit arising from an office or employ.”

The authors accessed every text in COFEA in which emolument appeared – over 2500 examples of actual usage – and analyzed all of these examples using three different computerized search methods. The authors found no evidence that emolument had a distinct narrow meaning of “profit arising from an office or employ.” All three analyses indicated just the opposite: emolument was consistently used and understood as a general and inclusive term.

According to the National Conference of State Legislatures survey, all 50 states, the District of Columbia, Guam, Puerto Rico and the Virgin Islands have enacted legislation requiring private or governmental entities to notify individuals of security breaches involving personally identifiable information. Security breach laws typically have provisions regarding who must comply with the law (e.g., businesses, data/information brokers, government entities); definitions of "personal information" (e.g., name combined with SSN, driver's license or state ID number, account numbers); what constitutes a breach (e.g., unauthorized acquisition of data); requirements for notice (e.g., timing or method of notice, who must be notified); and exemptions (e.g., for encrypted information).

Resolving Ambiguity: The Continued Relevance of Legislative History in an Era of Textualism (Feb. 11, 2019) by John Cannan "argues that Judge Brett Kavanaugh's decision in Allina Health Servs. v. Price, 863 F.3d 937 (D.C. Cir. 2017), currently before the U.S. Supreme Court, was the correct one, but only by chance. Kavanaugh based his ruling on subjective textualism. Congress' true intent for the provision at issue, 42 U.S.C. 1395hh(a)(2), can be found in legislative history that has gone largely overlooked. This paper examines this history and shows how legislative history, in general, should, at the very least, continue to be persuasive evidence of statutory meaning."

Prior to 2010, most states ended foster care services for youth at age 18. However, the Federal Fostering Connections to Success and Increasing Adoptions Act of 2008 allowed states to use federal funding to extend care up until age 21. More than 45 states extend foster care to serve youth who are over age 18. Here’s the Juvenile Law Center’s 50-state survey of extended foster care.

Back in 2017, Venture Beat reported that LexisNexis was testing chatbots for legal search. Bob Ambrogi now reports that implementation of a chatbot for Lexis Advance is coming sooner rather than later although no launch date has been announced.

The chatbot’s goal, LexisNexis said, is to give users the option to take more of a conversational approach to search, rather than the “typing keywords into a search bar” approach. A Lexis Advance chatbot could have two key uses. The bot can guide researchers unfamiliar with a topic to sources people typically look at for that topic. The second use is when revisiting prior research. The bot can present it back to searchers, pointing out that, three months ago, they did similar research, and offering to show it to them again. Also, it is claimed that the bot will get better over time at predicting a user’s intent as the user interacts with the system.

Wait ‘n see.

Bob Ambrogi is reporting that Thomson Reuters is rolling out Precedent Analytics for Westlaw Edge users today. “Precedent Analytics lets users see the citation patterns of individual judges, revealing the cases, courts, judges and citation language they rely on in deciding different legal issues. It also shows the frequency with which judges have dealt with different issues,” writes Bob. Details on LawSites. See also this Dewey B Strategic post.

A 2005 law review article by Lawrence Solan noted in passing that corpus linguistics had potential application to the interpretation of legal texts. 38 Loyola of L.A. Law Review 2027 (2005). But the first systematic exploration and advocacy of applying the tools and methodologies of corpus linguistics to legal interpretive questions came in the fall of 2010, when the BYU Law Review published a note by Stephen Mouritsen, entitled The Dictionary is Not a Fortress: Definitional Fallacies and a Corpus-Based Approach to Plain Meaning. 2010 Brigham Young University Law Review 1915. The note argued that dictionaries are the primary linguistic tool used by judges to determine the plain or ordinary meaning of words and phrases, and highlighted the deficiencies of such an approach. In its stead, the note proposed using corpus linguistics.

Here’s the abstract for Mouritsen’s The Dictionary is Not a Fortress: Definitional Fallacies and a Corpus-Based Approach to Plain Meaning:

“Plain meaning,” said Judge Frank Easterbrook, “as a way to understand language is silly. In interesting cases, meaning is not ‘plain’; it must be imputed; and the choice among meanings must have a footing more solid than a dictionary.”

This paper proposes an empirical method for determining the "ordinary meaning" of statutory terms: an approach grounded in a linguistic methodology known as Corpus Linguistics. I begin by addressing a number of commonly held, but ultimately erroneous, assumptions about the content and structure of dictionaries – assumptions that find their way into judicial reasoning with alarming frequency.

I then outline an approach to the resolution of lexical ambiguity in statutory interpretation – an approach based on Corpus Linguistics methods. Corpus Linguistics is an empirical methodology that analyzes language function and use by means of large electronic databases called corpora. A corpus is a principled collection of naturally occurring language data, typically tagged with grammatical content and searchable in such a way that the ordinary use of a given term in a given context may be ascertained.

Though Corpus Linguistics is not a panacea, the methodology has the potential to remove the determination of ordinary meaning from the black box of the judge’s mental impression and render the discussion of the ordinary meaning of statutory terms one of tangible and quantifiable reality.
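The basic corpus query behind this methodology is the keyword-in-context (KWIC) concordance: every occurrence of a term is displayed with its surrounding words so that its ordinary use can be inspected. A minimal sketch in Python follows; the sample sentence is invented for illustration (it echoes the "carry a firearm" question from Muscarello):

```python
# Minimal keyword-in-context (KWIC) concordance, the basic corpus-linguistics
# tool for inspecting how a term is actually used in context.
def kwic(text, keyword, window=3):
    """Return each occurrence of keyword with `window` words of context."""
    words = text.lower().split()
    results = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            results.append(f"{left} [{w}] {right}")
    return results

sample = ("the officer shall carry the firearm in the vehicle "
          "and may carry additional documents on his person")
for line in kwic(sample, "carry"):
    print(line)
# the officer shall [carry] the firearm in
# vehicle and may [carry] additional documents on
```

Real corpus tools run this kind of query over millions of texts, add part-of-speech tagging, and sort the concordance lines so that patterns of usage, rather than a single dictionary entry, inform the analysis.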

From the abstract for Doctrinal Sunsets (Jan. 16, 2019) by David H. Schraub:

Sunset provisions — timed expirations of an announced legal or policy rule — occupy a prominent place in the toolkit of legislative policymakers. In the judiciary, by contrast, their presence is far more obscure. This disjuncture is intriguing: The United States’ constitutional text contains several sunset provisions, and an apparent doctrinal sunset appeared in one of the most high-profile and hot-button Supreme Court decisions in recent memory: Grutter v. Bollinger’s famous declaration that while affirmative action programs in pursuit of diversity ends were currently constitutional, “25 years from now, the use of racial preferences will no longer be necessary to further the interest approved today.” Yet despite voluminous literature debating the merits of sunset clauses as a legislative practice, scholars have not systematically explored the utility of incorporating sunset clauses into judicial doctrine.

This article provides the first comprehensive analysis of the place of sunset provisions in judicial doctrine. It defends the conceptual legitimacy of doctrinal sunsets as valid across all theories of legal interpretation, including textualist or originalist accounts which might seem incompatible with admitting any change in legal outcomes without formally amending the underlying text. And it articulates the practical utility of doctrinal sunset clauses in scenarios where predictable changes in circumstances make it unlikely that an initial rule-decision will remain optimal over a long period of time. This can occur in mundane situations where a placeholder rule is necessary to govern until a more complex and tailored rule can be operationalized. It can also occur in sharply controversial scenarios where a decision is needed immediately under conditions that do not allow for optimal deliberation. Finally, sunsets can be beneficial as a means of prompting reassessment and tailored adjustment of prior decisions which — though perhaps products of the best judgment of their eras — are unlikely to continue tracking changing social circumstances.

Thirty-three states and the District of Columbia currently have passed laws broadly legalizing marijuana in some form. The District of Columbia and 10 states — Alaska, California, Colorado, Maine, Massachusetts, Michigan, Nevada, Oregon, Vermont and Washington — have adopted the most expansive laws legalizing marijuana for recreational use. The National Cannabis Industry Association has produced a 50-state survey of marijuana policies current to Nov. 7, 2018. View the interactive map here.

From Solum’s legal theory lexicon entry for corpus linguistics:

How do we determine the meaning of legal texts? One possibility is that judges could consult their linguistic intuitions. Another possibility is the use of dictionaries. Recently, however, lawyers, judges, and legal scholars have discovered a data-driven approach to ascertaining the semantic meaning of disputed language. This technique, called “corpus linguistics,” has already been used by courts and plays an increasingly prominent role in legal scholarship. This entry in the Legal Theory Lexicon provides a basic introduction to corpus linguistics.

The law firm of Bressler, Amery & Ross, P.C. published in 2018 a web-based 50 state survey of senior and vulnerable investor laws. The survey provides a summary of each state’s financial exploitation statute and includes links to key state agencies and to required forms where applicable. Bressler actively monitors legislative developments in this space and continuously updates the survey to reflect changes in the law.