Here’s the abstract for Daniel Tankersley’s Beyond the Dictionary: Why Sua Sponte Judicial Use of Corpus Linguistics Is Not Appropriate for Statutory Interpretation (Feb. 1, 2018):
In interpreting statutory language, judges have often turned to the “ordinary meaning” canon, which calls for statutory terms to be given their obvious meaning where the text is clear and unambiguous. To resolve perceived ambiguities in language, judges have historically turned to dictionaries. Because this approach has proven inconsistent, in both results and methodologies, many legal scholars and judges have looked to other methods of arriving at the elusive “ordinary meaning” of statutory terms.
One of these emerging methods, which has been endorsed by many legal scholars and a few judges, is corpus linguistics: the study of language through analysis of a database (corpus) of real-world texts. By analyzing statutory terms in databases of naturally occurring language (across multiple genres), many legal scholars and judges believe a term’s ordinary meaning can be objectively and empirically shown.
This article will argue that, while corpus linguistics may have its place in the legal field, judges should not raise corpus data sua sponte in judicial opinions. To make this argument, the article will describe the many inconsistent methodologies that come about when legal scholars and judges have utilized and analyzed corpus data, in order to illustrate how the deceptively empirical data generated by the corpus is a uniquely dangerous tool for statutory interpretation. The article will also lay out some procedural objections, grounded in judicial notice and the adversarial process.
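For readers curious what corpus evidence actually looks like in practice, here is a minimal sketch of the collocation counting that underlies many ordinary-meaning arguments. The five-sentence corpus is invented for illustration; real corpora such as COCA run to hundreds of millions of words, and actual court-cited analyses are far more involved:

```python
from collections import Counter

# Toy stand-in for a large corpus of naturally occurring text; real
# corpora (e.g., COCA) contain hundreds of millions of words.
corpus = [
    "he parked the vehicle in the garage overnight",
    "the stolen vehicle was found near the highway",
    "a bicycle is a vehicle under some local ordinances",
    "the armored vehicle rolled down the street",
    "she leased a new vehicle from the dealership",
]

def collocates(sentences, target, window=3):
    """Count words appearing within `window` tokens of `target`."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

# The most frequent neighbors of "vehicle" in this toy corpus.
top = collocates(corpus, "vehicle").most_common(3)
print(top)
```

The frequency table such a search produces is the “deceptively empirical data” the article worries about: the numbers are objective, but which corpus to search, which senses to count, and what the counts prove are all analyst judgment calls.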
Ask Google’s new semantic-search tool “Talk to Books” a question and the tool will return a list of books that include information on that specific question. How? An AI-powered tool will scan every sentence in 100,000 volumes in Google Books and generate a list of likely responses with the pertinent passage bolded. Give the new tool a test drive here. — Joe
At the federal level, see House Office of Legislative Counsel, Legislative Drafting Guide and Style Manual. For a 50-state survey, see this compilation of resources produced by the National Conference of State Legislatures. — Joe
What Happens When Five Supreme Court Justices Can’t Agree? (LSB10113, Apr. 5, 2018) begins by discussing the current doctrinal framework for determining what opinion should govern when no opinion commands a majority vote. The Sidebar then explores the facts and issues involved in Hughes before examining Hughes’s potential impact. — Joe
From the introduction to Statutory Interpretation: Theories, Tools, and Trends (R45153, Apr. 5, 2018):
The two main theories of statutory interpretation — purposivism and textualism — disagree about how judges can best adhere to this ideal of legislative supremacy. The problem is especially acute in instances where it is unlikely that Congress anticipated and legislated for the specific circumstances being disputed before the court. While purposivists argue that courts should prioritize interpretations that advance the statute’s purpose, textualists maintain that a judge’s focus should be confined primarily to the statute’s text.
Regardless of their interpretive theory, judges use many of the same tools to gather evidence of statutory meaning. First, judges often begin by looking to the ordinary meaning of the statutory text. Second, courts interpret specific provisions by looking to the broader statutory context. Third, judges may turn to the canons of construction, which are presumptions about how courts ordinarily read statutes. Fourth, courts may look to the legislative history of a provision. Finally, a judge might consider how a statute has been—or will be—implemented. Although both purposivists and textualists may use any of these tools, a judge’s theory of statutory interpretation may influence the order in which these tools are applied and how much weight is given to each tool.
This report begins by discussing the general goals of statutory interpretation, reviewing a variety of contemporary as well as historical approaches. The report then briefly describes the two primary theories of interpretation employed today, before examining the main types of tools that courts use to determine statutory meaning. The report concludes by exploring developing issues in statutory interpretation.
From James Cleith Phillips and Jesse Egbert’s Advancing Law and Corpus Linguistics: Importing Principles and Practices from Survey and Content-Analysis Methodologies to Improve Corpus Design and Analysis __ BYU Law Review __ (2017): “The nascent field of law and corpus linguistics has much to offer legal interpretation. But to do so it must more fully incorporate principles from survey and content analysis methodologies used in the social sciences. Importing such will provide greater rigor, transparency, reproducibility and accuracy in the important quest to determine the meaning of the law. This paper highlights some of those principles to provide a best-practices guide to those seeking to perform law and corpus linguistic analysis.” — Joe
A principal responsibility of House committees is to conduct markups: to select legislation to consider, debate it and vote on amendments to it (that is, to mark it up), and report recommendations on passage to the House. House Committee Markups: Manual of Procedures and Procedural Strategies (R41083, Mar. 27, 2018) examines procedures and strategy related to committee markups and provides sample procedural scripts. — Joe
Here’s the abstract for Stephen Mouritsen’s Corpus Linguistics in Legal Interpretation: An Evolving Interpretative Framework, Journal of Language and Law, 6 (2017), 67-89:
When called upon to interpret the undefined words in a legal text, U.S. judges will often invoke a rule (or canon) of interpretation called the “plain meaning rule,” which holds that if the language of the text is clear and unambiguous, courts cannot consider any extrinsic evidence to determine what the text means. But U.S. courts have no uniform definition of what “plain meaning” actually means and no systematic method for discovering and resolving ambiguities in legal texts. Faced with these challenges, some U.S. judges and academics have recently begun to consider the use of corpus linguistics to resolve uncertainties in the interpretation of legal texts. A corpus-based approach to legal interpretation promises to increase the objectivity and predictability of decisions about the meanings of legal texts. However, such an approach also presents a number of theoretical problems that must be addressed before corpus methods can be fully incorporated into a theory of legal interpretation. This article documents this recent turn to corpus linguistics in legal interpretation and outlines some of the challenges facing the corpus-based approach to legal interpretation.
From the abstract for The European Union Case Law Corpus (EUCLCORP): A Multilingual Parallel and Comparative Corpus of EU Court Judgments by Aleksandar Trklja and Karen McAuliffe:
The empirical approach to the study of legal language has recently undergone profound development. Corpus linguistics, in particular, has revealed previously unnoticed features of legal language at both the lexico-grammatical and discourse levels. Existing resources such as legal databases, however, do not contain functionalities that enable the application of corpus linguistics methodology. To address this gap in the context of EU law, we developed a multilingual corpus of judgments that allows scholars and practitioners to investigate in a systematic way a range of issues, such as the history of the meaning(s) of a legal term, the migration of terms between legal systems, the use of binominals, or the distribution of formulaic expressions in EU legal sub-languages. As well as being the first multilingual corpus of judgments, it is also the largest legal multilingual corpus ever created. Since it contains case law from two sources (the Court of Justice of the European Union and EU national courts), it is also the largest comparable corpus of legal texts. The aim of the corpus is to contribute to the further development of the emerging field of language and law.
In Corpus Linguistics as a Tool in Legal Interpretation, Brigham Young University Law Review, 2018 Forthcoming, Lawrence Solan and Tammy Gales “set out to explore conditions in which the use of large linguistic corpora can be optimally employed by judges and others tasked with construing authoritative legal documents. Linguistic corpora, sometimes containing billions of words, are a source of information about the distribution of language usage. Thus, corpora and the tools for using them are most likely to assist in addressing legal issues when the law considers the distribution of language usage to be legally relevant. As Thomas Lee and Stephen Mouritsen have so ably demonstrated in earlier work, corpus analysis is especially helpful when the legal standard for construction is the ordinary meaning of the document’s terms. We argue here that four issues should be addressed before determining that corpus analysis is likely to be maximally convincing. First, the legal issue before the court must be about the distribution of linguistic facts. Second, the court must decide what makes an interpretation “ordinary.” Third, if one wishes to search a corpus to glean the ordinary meaning of a term, one must decide in advance what to search. Fourth, there are different reasons as to why a particular meaning might present a weak showing in a corpus search and these need to be understood. Each of these issues is described and discussed.” — Joe
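Solan and Gales’ third point (deciding in advance what to search) is concrete: corpus analysts typically read keyword-in-context (KWIC) concordance lines for the chosen search term. A minimal illustrative sketch, with an invented three-sentence corpus (real work uses interfaces like COCA’s or tools like AntConc):

```python
# Invented three-sentence corpus; purely for illustration of the
# KWIC display format, not any particular court's analysis.
corpus = (
    "the defendant carried a firearm during the offense . "
    "she carried the day in the debate . "
    "he carried the boxes into the truck ."
)

def kwic(text, keyword, width=3):
    """Return each occurrence of `keyword` with `width` words of context."""
    tokens = text.split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - width):i])
            right = " ".join(tokens[i + 1:i + 1 + width])
            lines.append(f"{left} [{keyword}] {right}")
    return lines

for line in kwic(corpus, "carried"):
    print(line)
```

The analyst then reads and classifies each line (physical carrying versus the figurative “carried the day”), which is precisely where Solan and Gales’ questions about what counts as an “ordinary” interpretation come into play.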
Here’s the blurb for Victoria Nourse’s Misreading Law, Misreading Democracy (Harvard UP, 2016):
American law schools extol democracy but teach little about its most basic institution, the Congress. Interpreting statutes is lawyers’ most basic task, but law professors rarely focus on how statutes are made. This misguided pedagogy, says Victoria Nourse, undercuts the core of legal practice. It may even threaten the continued functioning of American democracy, as contempt for the legislature becomes entrenched in legal education and judicial opinions. Misreading Law, Misreading Democracy turns a spotlight on lawyers’ and judges’ pervasive ignorance about how Congress makes law.
Victoria Nourse not only offers a critique but proposes reforming the way lawyers learn how to interpret statutes by teaching legislative process. Statutes are legislative decisions, just as judicial opinions are decisions. Her approach, legislative decision theory, reverse-engineers the legislative process to simplify the task of finding Congress’s meanings when statutes are ambiguous. This theory revolutionizes how we understand legislative history—not as an attempt to produce some vague notion of legislative intent but as a surgical strike for the best evidence of democratic context.
Countering the academic view that the legislative process is irrational and unseemly, Nourse makes a forceful argument that lawyers must be educated about the basic procedures that define how Congress operates today. Lawmaking is a sequential process with political winners and losers. If lawyers and judges do not understand this, they may well embrace the meanings of those who opposed legislation rather than those who supported it, making legislative losers into judicial winners, and standing democracy on its head.
Computer-Assisted Legal Linguistics: Corpus Analysis as a New Tool for Legal Studies, ___ Law & Social Inquiry ___ (2017), by Friedemann Vogel, Hanjo Hamann and Isabelle Gauer introduces computer-assisted legal linguistics, an area of study ranging from computer-supported qualitative analysis of legal texts to legal semantics and legal sociosemiotics based on big data. From the article’s abstract:
Law exists solely in and through language. Nonetheless, systematic empirical analysis of legal language has been rare. Yet, the tides are turning: After judges at various courts (including the US Supreme Court) have championed a method of analysis called corpus linguistics, the Michigan Supreme Court held in June 2016 that this method “is consistent with how courts have understood statutory interpretation.” The court illustrated how corpus analysis can benefit legal casework, thus sanctifying twenty years of previous research into the matter. The present article synthesizes this research and introduces computer-assisted legal linguistics (CAL2) as a novel approach to legal studies. Computer-supported analysis of carefully preprocessed collections of legal texts lets lawyers analyze legal semantics, language, and sociosemiotics in different working contexts (judiciary, legislature, legal academia). The article introduces the interdisciplinary CAL2 research group (www.cal2.eu), its Corpus of German Law, and other related projects that make law more transparent.
From the blurb for Fastcase: The Definitive Guide (ABA, 2018) by Brian Huddleston:
The days when lawyers could run up hundreds or thousands of dollars in expenses from one of the “big three” legal research services and bill those amounts to their clients are long gone. And, as any lawyer knows, time is money. If you’re a lawyer using Fastcase, you already know how to use your legal research budget and your time efficiently. This book will help you put Fastcase and your valuable time to even better use. It will also show you some features you didn’t even know it had.
If you’re new to Fastcase, get ready to learn how to use this invaluable legal research tool and work with the variety of resources it gives you, from case law to statutes, regulations, and more. More than 25 state bar associations now provide Fastcase to their members; if yours is one of them (or if you have your own subscription) you can’t be without this helpful guide!
Recommended for all Fastcase users. — Joe
Legal Research, Legal Reasoning and Precedent in Canada in the Digital Age, 48 Advocates’ Quarterly 1 (2018), by Jonathan de Vries “summarizes the existing Anglo-American scholarship on the interaction between legal media, legal reasoning and substantive law, and applies it to the context of Canadian law and Canada’s unique experience of print-based legal information. While Canada adopted the intellectual methods of a print-based legal system, it lagged behind in the establishment of print-based sources of legal information, with the result that the intellectual and institutional structures that derive from print media were nowhere near as entrenched in Canadian law as compared with other common law jurisdictions. Therefore, to whatever degree the transition to digital legal information poses a threat of disruption to a common law legal system, this disruptive effect will be more acute in Canada than in the United States or England.” — Joe
Here’s the abstract for Adam Steinman’s Non-Majority Opinions and Biconditional Rules, Yale Law Journal Forum, Vol. 128 (Forthcoming):
In Hughes v. United States, the Supreme Court will revisit a thorny question: how to determine the precedential effect of decisions with no majority opinion. For four decades, the clearest instruction from the Court has been the rule from Marks v. United States: the Court’s holding is “the position taken by those Members who concurred in the judgments on the narrowest grounds.” The Marks rule raises particular concerns, however, when it is applied to biconditional rules. Biconditionals are distinctive in that they set a standard that dictates both success and failure for a given issue. More formulaically, they combine an if-then proposition (If A, then B) with its inverse (If Not-A, then Not-B).
Appellate courts on both sides of the circuit split that prompted the grant of certiorari in Hughes have overlooked the special features of biconditional rules. If the Supreme Court makes the same mistake, it could adopt a misguided approach that would unjustifiably create binding law without a sufficient consensus among the Justices involved in the precedent-setting case. This Essay identifies these concerns and proposes ways to coherently apply Marks to non-majority opinions that endorse biconditional rules.
Here’s the abstract for Aaron-Andrew P. Bruhl’s very interesting Statutory Interpretation and the Rest of the Iceberg: Divergences between the Lower Federal Courts and the Supreme Court, Duke Law Journal, Forthcoming:
This Article examines the methods of statutory interpretation used by the lower federal courts, especially the federal district courts, and compares those methods to the practices of the U.S. Supreme Court. This novel research reveals both similarities across courts and some striking differences. The research shows that some interpretive tools are highly overrepresented in the Supreme Court’s decisions while other tools are much more prevalent in the lower courts. Another finding, based on a study of forty years of cases, is that all federal courts have shifted toward more textualist tools in recent decades but that the shift was less pronounced as one moves down the judicial hierarchy.
The divergence between the interpretive practices of different federal courts has implications for both descriptive and normative accounts of statutory interpretation. On the descriptive side, most beliefs about statutory interpretation are based on the narrow and unrepresentative slice of judicial business conducted in the Supreme Court, and some of those beliefs turn out to be incorrect or incomplete as descriptions of statutory interpretation more generally. This research therefore substantially improves our understanding of the reality of judicial statutory interpretation. On the normative side, the results can advance scholarly and judicial debates over whether lower courts should do statutory interpretation differently than the Supreme Court and whether the Court’s interpretive methodology should be binding on lower courts. The Article’s findings also suggest that the teaching of statutory interpretation should take into account the distinctive practices of the lower courts, where the vast majority of legal work is done.
Recommended. — Joe
Here’s the abstract for Amy Semet’s very interesting article, An Empirical Examination of Agency Statutory Interpretation, ___ Minnesota Law Review ___ (Forthcoming 2018):
How do administrative agencies interpret statutes? Despite the theoretical treatment scholars offer on how agencies construe statutes, far less is known empirically about administrative statutory interpretation even though agencies play a critical role in interpreting statutes. This Article looks behind the black box of agency statutory interpretation to review how administrative agencies use canons and other tools of statutory interpretation to decide cases. Surveying over 7,000 cases heard by the National Labor Relations Board (“NLRB”) from 1993-2016, I analyze the statutory methodologies the Board uses in its decisions in order to uncover patterns of how the Board interprets statutes over time. Overall, I find no ideological coherence to statutory methodology. Board members switch between textualist or purposive methods depending upon the partisan outcome sought. Indeed, Board members often use statutory methodologies to dueling purposes, with majority and dissenting Board members using the same statutory methodology to support contrasting outcomes. The Board has also changed how it interprets statutes over time, relying in recent years more on vague pronouncements of policy and less on precedent or legislative history. Moreover, despite scholars arguing that agencies should interpret statutes differently than courts, in practice, this study indicates that the NLRB interprets its governing statute in similar fashion to how courts do. After analyzing the empirical data, I set forth policy recommendations for how agencies should interpret statutes. The balance required—between policy coherence, stability and democratic accountability—is fundamentally different in the context of agency statutory interpretation than for interpretation by a judicial body. Rather than acting like a court, adjudicative agencies like the NLRB should leverage their expertise to arrive at an interpretation that best effectuates the purpose of the statute. 
For an agency like the NLRB, which makes decisions almost exclusively through adjudication, this may necessitate that the agency reveal its statutory interpretation in a more transparent fashion through rulemaking.
A unanimous Supreme Court struck a blow for the plain reading of the law last week in Digital Realty Trust, Inc. v. Somers (No. 16-1276, Feb. 21, 2018), but a pair of dueling concurrences deserve broader attention for what they say about the different methods of legal interpretation on the High Court today. At issue: legislative history. On The Volokh Conspiracy, see Justices Thomas and Sotomayor Debate Legislative History. — Joe
Lawfare’s Litigation Documents & Resources Related to Trump Executive Order on Immigration is an archive of court filings and related resources surrounding Trump’s three travel ban executive orders. — Joe
SCOTUS Notes is the newest crowdsourcing project on the Zooniverse platform, originated at the University of Minnesota. “In this project, members of the public transcribe handwritten notes from U.S. Supreme Court justices. Unlike members of Congress, justices cast their votes in complete privacy during weekly conference meetings. Only justices are allowed in the Chief Justice’s conference room when they discuss, deliberate, and make initial decisions on cases that focus on some of the nation’s most pressing legal issues. The only record of what has been said, and by whom, is provided by the handwritten personal notes the justices themselves take during conference. These crucial documents detail the discussions and debates that took place in thousands of cases spanning multiple decades.”
The project is seeking volunteers. Interesting. H/T to beSpacific. — Joe