From the introduction to An Overview of State and Federal Authority to Impose Vaccination Requirements (LSB 10300, May 22, 2019):

“In addition to measles, for about every 5 years since 2006, outbreaks of other vaccine-preventable diseases, such as mumps, have also been reported in the United States. In light of these outbreaks and their association with unvaccinated individuals, this Sidebar provides an overview of the relevant state and federal authority to require vaccination for U.S. residents.”

With continued advances anticipated in AI, machine learning, and legal analytics, we can expect legal information platforms to be supplanted by legal intelligence platforms in the not-too-distant future. But what would a legal intelligence (or “smart law”) platform look like? I can’t describe a prototypical legal intelligence platform in any technical detail, but it would exist at the convergence of expert analysis and text- and data-driven features for core legal search across all market segments. I can, however, see what some “smart law” platform elements would be by looking at what Fastcase and Casetext are offering right now.

In my opinion, the best contemporary way to picture a legal intelligence platform is to imagine that Fastcase and Casetext were one company. The imagined vendor would offer, in integrated fashion: Fastcase’s and Casetext’s extensive collections of primary and secondary resources, including legal news and contemporary analysis from the law blogosphere; Fastcase’s search engine algorithms for keyword searching; Casetext’s CARA for contextual searching; Casetext’s SmartCite; Fastcase’s Docket Alarm; Fastcase BK; and Fastcase’s installed base of some 70-75% of U.S. attorneys, all in the context of the industry’s most transparent pricing model, which both Fastcase and Casetext have already adopted.

Obviously, pricing models are not an essential element of a legal intelligence platform. But wouldn’t most potential “smart law” customers prefer transparent pricing? That won’t happen if WEXIS deploys the first legal intelligence platforms. Neither Fastcase nor Casetext (nor Thomson Reuters, LexisNexis, BBNA, or WK) has a “smart law” platform right now. Who will be the first? Perhaps one possibility is hiding in plain sight.

Informed Comment reports that internal ExxonMobil documents show that the company’s scientists predicted in 1982 that atmospheric carbon dioxide would reach 410-420 parts per million (ppm) by 2020. This spring, atmospheric CO2 exceeded 415 ppm for the first time.

The memo says in part,

“Considerable uncertainty also surrounds the possible impact on society of such a warming trend, should it occur. At the low end of the predicted temperature range there could be some impact on agricultural growth and rainfall patterns which could be beneficial in some regions and detrimental in others.

“At the high end, some scientists suggest there could be considerable adverse impact including the flooding of some coastal land masses as a result of a rise in sea level due to melting of the Antarctic ice sheet.”

Read the CO2 “Greenhouse” Effect report here.

On May 16, 2019, the U.S. District Court for the District of Columbia unsealed sentencing documents for Michael Flynn. Included in those documents were descriptions of how Flynn assisted prosecutors, including both the cases he had been involved with, and the way in which his cooperation had assisted the investigation.

From the Daily Kos: “The documents show that Flynn’s cooperation was vital in three different cases: the criminal investigation into how Flynn’s former business partner acted as an unregistered agent for Turkey; the special counsel investigation into connections between the Trump campaign and Russia; and a third case, the description of which remains redacted. Some speculation has suggested that the third case may be related to the ongoing trial of Roger Stone, or other potential cases related to WikiLeaks. Others have suggested that it might represent Flynn’s knowledge of some financial matter related to the Trump Organization … which mostly shows just how little is known about the multiple cases still pending in various jurisdictions following the Mueller investigation.”

An excerpt from the executive summary of Building the Data-Driven Law Firm, ed. by Alex Davies (Ark Group, May 2019):

Despite the huge amount of data in the average law firm, data-driven decision-making is relatively new and uncharted. With the knowledge that some 90 percent of data in the world today was created in the last two years, this needs to change.

Building the data-driven law firm looks at how the use of data has become inextricably linked with the practice of law; how it can be utilized to the good and the safeguards that must be put in place to mitigate the bad; how Big Data will revolutionize the way lawyers work, and the cases they will work on; and how new uses for data (including blockchain and the Internet of Things) will influence the law firm of the future.

Table of Contents:

Chapter 1: The data-driven mindset – the culture and habits of data-centric organizations

By David Curle, Director, Enterprise Content – Technology and Innovation, Thomson Reuters Legal Executive Institute

Chapter 2: The richness – and bias – of legal data

By Thomas Hamilton, VP of strategy and operations at ROSS Intelligence

Chapter 3: How to use data to make your business better

By Holly Urban, CEO and co-founder, EffortlessLegal LLC

Chapter 4: Using machine learning and AI to improve legal practice and drive value to stakeholders and clients

By Aaron Crews, chief data analytics officer, Littler

Chapter 5: Unlocking contractual data

By Edward Chan, partner, Linklaters LLP

Chapter 6: Mining litigation data

By Josh Becker, founder, Lex Machina

Chapter 7: How data is transforming the relationship between lawyer and client

By Jennifer Roberts, manager, strategic research, InTapp

Chapter 8: Blockchain and the data-driven law firm

By Robert Millard, founder and partner, Cambridge Strategy Group

Chapter 9: Legal aspects of data

By Joanne Frears, solicitor, Lionshead Law

Chapter 10: How data will enable the shift towards the productization of legal services

By Simon Drane, managing director of earlsferry advisory, former executive director of business development at the Law Society and LexisNexis

A snip from Casetext’s blog post, Cite-checking the Smart Way: An Interview about SmartCite with Casetext Co-Founder and Chief Product Officer, Pablo Arredondo (May 15, 2019):

“SmartCite was developed through a combination of cutting-edge machine learning, natural language processing, and experienced editorial review. Let’s start with the technology.

“SmartCite looks for patterns in millions of cases and uses judges’ own words to determine whether a case is good law and how a case has been cited by other cases. There are three key data sources analyzed by SmartCite. First, SmartCite looks at “explanatory parentheticals.” You know how judges will summarize other cases using parentheses? By looking for these phrases in opinions, we were able to extract 4.3 million case summaries and explanations written by judges! These explanatory parentheticals provide what I call “artisanal citator entries”: they are insightful, reliable, judge-written summaries of cases.

“The second key data source leveraged by SmartCite are phrases in judicial opinions that indicate that a case has been negatively treated. For example, when a judicial decision cites to a case that is bad law, the judge will often explain why that case is bad law by saying “overruled by” or “reversed by” or “superseded by statute, as stated in…” The same is true with good law. Judicial opinions will often indicate that a case is “affirmed by” another case.

“The third data source we use are Bluebook signals that judges use to characterize and distinguish cases. Bluebook signals can actually tell us a lot about a case. For example, when a judge introduces a case using “but see” or “cf.” or “contra,” the judge is indicating that this case is contrary authority, or that it has treated a legal issue differently from other cases. These contrary signals are powerful indicators of tension in the case law.

“However, using machine learning to look for judicial phrases and Bluebook signals is only the starting point of SmartCite’s analysis. We also rely on experienced editors to manage that process, review the case law, and make decisions on the ‘edge cases.'”
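As a rough illustration of the phrase- and signal-matching the interview describes, the first pass of such a citator could start as simple pattern extraction over opinion text, with machine learning and editorial review layered on top. The patterns and function names below are my own invention for the sketch, not Casetext’s actual SmartCite implementation:

```python
import re

# Hypothetical sketch only -- illustrative patterns, not Casetext's code.

# Phrases signaling negative or positive treatment (from the interview).
NEGATIVE_PHRASES = ["overruled by", "reversed by", "superseded by statute"]
POSITIVE_PHRASES = ["affirmed by"]

# Bluebook signals that introduce contrary authority.
CONTRARY_SIGNALS = [r"\bbut see\b", r"\bcf\.", r"\bcontra\b"]

# Explanatory parentheticals: "(holding that ...)", "(noting that ...)", etc.
PARENTHETICAL_RE = re.compile(r"\((?:holding|finding|noting|explaining)\b[^)]*\)")

def classify_citing_sentence(sentence: str) -> str:
    """Crudely classify how a citing sentence treats the cited case."""
    lowered = sentence.lower()
    if any(p in lowered for p in NEGATIVE_PHRASES):
        return "negative"
    if any(re.search(s, lowered) for s in CONTRARY_SIGNALS):
        return "contrary"
    if any(p in lowered for p in POSITIVE_PHRASES):
        return "positive"
    return "neutral"

def extract_parentheticals(text: str) -> list[str]:
    """Pull judge-written explanatory parentheticals out of opinion text."""
    return PARENTHETICAL_RE.findall(text)
```

In practice, of course, bare keyword matching would misfire constantly (a sentence can quote or negate a phrase), which is presumably why the interview stresses machine learning plus editorial review of the edge cases.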

See also this page for SmartCite product information.

From the blurb for John Paul Stevens, The Making of a Justice: Reflections on My First 94 Years (May 14, 2019):

When Justice John Paul Stevens retired from the Supreme Court of the United States in 2010, he left a legacy of service unequaled in the history of the Court. During his thirty-four-year tenure, Justice Stevens was a prolific writer, authoring in total more than 1000 opinions. In THE MAKING OF A JUSTICE, John Paul Stevens recounts the first ninety-four years of his extraordinary life, offering an intimate and illuminating account of his service on the nation’s highest court.

Appointed by President Gerald Ford and eventually retiring during President Obama’s first term, Justice Stevens has been witness to, and an integral part of, landmark changes in American society.

With stories of growing up in Chicago, his work as a naval traffic analyst at Pearl Harbor during World War II, and his early days in private practice, as well as a behind-the-scenes look at some of the most important Supreme Court decisions over the last four decades, THE MAKING OF A JUSTICE offers a warm and fascinating account of Justice Stevens’ unique and transformative American life. This comprehensive memoir is a must-read for those trying to better understand our country and the Constitution.

From the abstract for Aziz Z. Huq, A Right to a Human Decision, Virginia Law Review, Vol. 105:

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision-makers. From prison sentences to loan approvals to college applications, corporate and state actors increasingly lean on machine learning tools (a subset of artificial intelligence) to allocate goods and to assign coercion. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that sacrifice important individual interests. An emerging legal response to such worries is a right to a human decision. European law has already embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is already moving in the same direction. But no jurisdiction has defined with precision what that right entails, or furnished a clear justification for its creation.

This Article investigates the legal possibilities of a right to a human decision. I first define the conditions of technological plausibility for that right as applied against state action. To understand its technological predicates, I specify the margins along which machine decisions are distinct from human ones. Such technological contextualization enables a nuanced exploration of why, or indeed whether, the gaps that do separate human and machine decisions might have normative import. Based on this technological accounting, I then analyze the normative stakes of a right to a human decision. I consider three potential normative justifications: (a) an appeal to individual interests to participation and reason-giving; (b) worries about the insufficiently reasoned or individuated quality of state action; and (c) arguments based on negative externalities. A careful analysis of these three grounds suggests that there is no general justification for adopting a right to a human decision by the state. Normative concerns about insufficiently reasoned or accurate decisions, which have a particularly powerful hold on the legal imagination, are best addressed in other ways. Similarly, concerns about the ways that algorithmic tools create asymmetries of social power are not parried by a right to a human decision. Indeed, rather than firmly supporting a right to a human decision, available evidence tentatively points toward a countervailing ‘right to a well-calibrated machine decision’ as ultimately more normatively well-grounded.

From the abstract for Bill Treanor, Framer’s Intent: Gouverneur Morris, the Committee of Style, and the Creation of the Federalist Constitution (2019):

At the end of the proceedings of the federal constitutional convention, the delegates appointed the Committee on Style and Arrangement to bring together the textual provisions that the convention had previously agreed to and to prepare a final constitution. Pennsylvania delegate Gouverneur Morris was assigned to draft the document for the committee, and, with few revisions and little debate, the convention subsequently adopted the Committee’s proposed constitution. For more than two hundred years, questions have been raised as to whether Morris as drafter covertly made changes in the text in order to advance his constitutional vision, but the legal scholars and historians studying the convention have concluded that Morris was an honest scrivener. No prior article, however, has systematically compared the Committee’s draft to the previously adopted resolutions or discussed the implications of those changes for constitutional law. This article reveals how many changes Morris made to the text delegates had previously agreed to and how important those changes were (and are). It shows that many of the central elements of the Constitution (including the Preamble; the basic Article I, Article II, and Article III structure; and the contract clause) were wholly or largely the product of the Committee’s work. In total, Morris made twelve significant changes to the Constitution, and these textual changes advanced his constitutional goals, including strengthening the national government, the executive, and the judiciary; protecting private property; and fighting the spread of slavery. Finally, it shows that, in central debates in the early republic, Federalists, and, notably, fellow committee member Alexander Hamilton repeatedly drew on language crafted by the Committee as they fought for their expansive vision of the Constitution. In revising the constitutional text, Morris created the basis for what was to become the Hamiltonian reading of the Constitution.

This history has significant implications for modern constitutional law. While the Supreme Court has never been presented with a case that reveals the extent of the Committee’s changes, in four cases it has confronted situations in which the Committee’s text arguably had a different meaning than the provision previously adopted by the convention, and the Court has consistently treated the Committee’s work as substantively meaningless and concluded that the prior resolutions were controlling. That approach should be rejected because it is at odds with the majoritarian premise of constitutional ratification by “the people.” The text that was ratified is controlling. At the same time, in most circumstances, Morris’s language was ambiguous. A modern public meaning originalist approach leads to the conclusion that Morris’s revisions made possible alternate readings of the Constitution: it supported what was to become the Federalist approach, but did not prevent Republican textualist readings. On important contemporary issues, focus on Morris’s text makes us aware of originalist understandings of the text that have been frequently dismissed or wholly forgotten; although it does not eliminate the originalist basis for narrower readings, that focus provides new originalist support for broad understandings of congressional, judicial, and presidential power and for protection of private property.

From the blurb for Andrew Coan, Prosecuting the President: How Special Prosecutors Hold Presidents Accountable and Protect the Rule of Law (Oxford UP, 2019):

In this exceptionally timely book, law professor Andrew Coan explains what every American needs to know about special prosecutors – perhaps the most important and misunderstood public officials of our time.

The first special prosecutor was appointed by President Ulysses S. Grant in 1875, to investigate a bribery scandal involving his close friends and associates. Ever since, presidents of both parties have appointed special prosecutors and empowered them to operate with unusual independence. Also called special counsels and independent counsels, such appointments became a standard method for neutralizing political scandals and demonstrating the President’s commitment to the rule of law. Special counsel Robert Mueller is the latest example.

In Prosecuting the President, Andrew Coan offers a highly engaging look at the long, mostly forgotten history of special prosecutors in American politics. For more than a century, special prosecutors have struck fear into the hearts of Presidents, who have the power to fire them at any time. How could this be, Coan asks? And how could the nation entrust such a high responsibility to such subordinate officials? With vivid storytelling and historical examples, Coan demonstrates that special prosecutors can do much to protect the rule of law under the right circumstances.

Many have been thwarted by the formidable challenges of investigating a sitting President and his close associates; a few have abused the powers entrusted to them. But at their best, special prosecutors function as catalysts of democracy, channeling an unfocused popular will to safeguard the rule of law. By raising the visibility of high-level misconduct, they enable the American people to hold the President accountable. Yet, if a President thinks he can fire a special prosecutor without incurring serious political damage, he has the power to do so. Ultimately, Coan concludes, only the American people can decide whether the President is above the law.

From the introduction to The National Popular Vote (NPV) Initiative: Direct Election of the President by Interstate Compact (R43823, updated May 9, 2019):

“The National Popular Vote (NPV) initiative proposes an agreement among the states, an interstate compact that would effectively achieve direct popular election of the President and Vice President without a constitutional amendment. It relies on the Constitution’s grant of authority to the states in Article II, Section 1 to appoint presidential electors “in such Manner as the Legislature thereof may direct ….” Any state that joins the NPV compact pledges that if the compact comes into effect, its legislature will award all the state’s electoral votes to the presidential ticket that wins the most popular votes nationwide, regardless of who wins in that particular state. The compact would, however, come into effect only if its success has been assured; that is, only if states controlling a majority of electoral votes (270 or more) join the compact.”

From Thomson Reuters’ Legal Current:

“In honor of the treatise’s anniversary, Thomson Reuters is pleased to present a podcast series throughout 2019 featuring Professor Arthur Miller, one of the founding authors of the treatise. Professor Miller will sit down with other scholars and thought leaders to discuss some of the challenges facing judges and practitioners in the federal court system today.

“In this first episode, we start off the series with a conversation between Professor Miller and Jean Maess, vice president of editorial operations at Thomson Reuters, about how the duo of Professors Wright & Miller developed Federal Practice & Procedure.”

H/T to Civil Procedure & Federal Courts Blog.

From Pew Research Center’s Fact Tank:

“Roughly three-in-ten adults with household incomes below $30,000 a year (29%) don’t own a smartphone. More than four-in-ten don’t have home broadband services (44%) or a traditional computer (46%). And a majority of lower-income Americans are not tablet owners. By comparison, each of these technologies are nearly ubiquitous among adults in households earning $100,000 or more a year.”

H/T to InfoDocket.

In addition to proposed revisions to Article VI., Nominations and Elections; Article IX., Committees, Other Organizations, and Representatives; and Article X., Special Interest Sections of the AALL Bylaws, the Executive Board approved extensive proposed revisions to our association’s Ethical Principles, which were last revised in 1999. The proposed revisions may be reviewed in mark-up format here. From the revised preamble:

“The following Ethical Principles are informed by longstanding best practices and in consideration of emerging ethical issues. They are not enforceable by AALL and they are not meant to govern specific situations. They are aspirational, constructive, and generally achievable principles meant to guide Association members in ethical reflection concerning their professional activities.”

According to AALL’s press release, all revisions to the AALL Bylaws and the AALL Ethical Principles will be voted on by the rank-and-file membership. Voting will open on Monday, July 29, 2019, and will remain open through Thursday, August 29, 2019. The results of the vote will be announced on Friday, August 30, 2019.

End note: For comparison’s sake, see ALA’s Code of Ethics and SLA’s Professional Ethics Guidelines.

This year’s inductees are Paul George, associate dean for curriculum development and the Biddle Law Library at the University of Pennsylvania Law School; Jolande Goldberg, senior cataloging policy specialist for law classification, policy & standards at the Library of Congress; and Robert Oaks, retired chief library and records officer at Latham & Watkins LLP. Read the press release here.

Kudos to all!

In his NYT opinion piece It’s Time to Break Up Facebook (May 9, 2019), Facebook co-founder Chris Hughes argues that Facebook should indeed be broken up. One snip:

“Facebook has earned the prize of domination. It is worth half a trillion dollars and commands, by my estimate, more than 80 percent of the world’s social networking revenue. It is a powerful monopoly, eclipsing all of its rivals and erasing competition from the social networking category.”

Recommended.