H/T to Legal Skills Prof Blog for calling attention to Nikos Harris, The Risks of Technology in the Law Classroom: Why the Next Great Development In Legal Education Might Be Going Low-Tech, 51 UBC L REV 773 (2018). Here’s the abstract:

It is often assumed that technology improves every facet of our lives, including learning in the university classroom. However, there is mounting evidence that traditional lecturing and note-taking techniques may provide the optimal learning environment. Student use of laptops, and professor use of electronic course slides, may actually impair learning in a manner which has particular significance for legal education. This emerging evidence suggests that law professors can make a justifiable decision to bring about a “low tech revolution” in their classrooms. Achieving that revolution is more complicated when it comes to student use of laptops, but there are a number of techniques which can be used to encourage students to consider dusting off a pen and pad of paper.

From the blurb for Proof of Collusion: How Trump Betrayed America (Simon & Schuster, Nov. 13, 2018) by Seth Abramson:

In Proof of Collusion, Seth Abramson “finally gives us a record of the unthinkable—a president compromising American foreign policy in exchange for financial gain and covert election assistance. The attorney, professor, and former criminal investigator has used his exacting legal mind and forensic acumen to compile, organize, and analyze every piece of the Trump-Russia story. His conclusion is clear: the case for collusion is staring us in the face. Drawing from American and European news outlets, he takes readers through the Trump-Russia scandal chronologically, putting the developments in context and showing how they connect.”

It would appear that SCOTUS clerk bonuses draw many more clerks to private practice than in past decades. Law.com’s Tony Mauro is reporting that the prevailing hiring bonus for Supreme Court clerks is $400,000—up from $300,000 in 2015. And that does not include salaries. If the trend continues, the clerk bonus will soon approach twice the annual salary of the justices they work for.

From the blurb for Jonathan Gienapp, The Second Creation: Fixing the American Constitution in the Founding Era (Belknap Press, Oct. 2018):

Americans widely believe that the United States Constitution was created when it was drafted in 1787 and ratified in 1788. But in a shrewd rereading of the founding era, Jonathan Gienapp upends this long-held assumption, recovering the unknown story of American constitutional creation in the decade after its adoption—a story with explosive implications for current debates over constitutional originalism and interpretation.

When the Constitution first appeared, it was shrouded in uncertainty. Not only was its meaning unclear, but so too was its essential nature. Was the American Constitution a written text, or something else? Was it a legal text? Was it finished or unfinished? What rules would guide its interpretation? Who would adjudicate competing readings? As political leaders put the Constitution to work, none of these questions had answers. Through vigorous debates they confronted the document’s uncertainty, and—over time—how these leaders imagined the Constitution radically changed. They had begun trying to fix, or resolve, an imperfect document, but they ended up fixing, or cementing, a very particular notion of the Constitution as a distinctively textual and historical artifact circumscribed in space and time. This means that some of the Constitution’s most definitive characteristics, ones which are often treated as innate, were only added later and were thus contingent and optional.

The Washington Post is reporting that “President Trump threw his support Wednesday behind legislation that would loosen some mandatory minimum sentencing laws — a measure backed by powerful Senate Republicans and Democrats, but which could run into opposition from some tough-on-crime conservatives.” The New York Times opines “And now that Mr. Sessions is gone, a bipartisan collection of senators is pushing a plan that addresses some of the core shortcomings of an earlier House version of the legislation that was supported by the White House. The hope is to move the bill during the lame duck session, before the chaos of the new Congress, with its newly Democratic House majority, takes hold in January.”

Jean-Marc Deltorn and Franck Macrez have posted Authorship in the Age of Machine Learning and Artificial Intelligence, to be published in Sean M. O’Connor (ed.), The Oxford Handbook of Music Law and Policy, Oxford University Press, 2019 (Forthcoming):

New generations of algorithmic tools have recently become available to artists. Based on the latest developments in the field of machine learning – the theoretical framework driving the current surge in artificial intelligence applications – and relying on access to unprecedented amounts of both computational power and data, these technological intermediaries are opening the way to unexpected forms of creation. Instead of depending on a set of man-made rules to produce novel artworks, generative processes can be automatically learnt from a corpus of training examples. Musical features can be extracted and encoded in a statistical model with no or minimal human input and be later used to produce original compositions, from baroque polyphony to jazz improvisations. The advent of such creative tools, and the corollary vanishing presence of the human in the creative pipeline, raises a number of fundamental questions in terms of copyright protection. Assuming AI-generated compositions are protected by copyright, who is the author when the machine contributes to the creative process? And what are the minimal requirements to be rewarded with authorship?

Published on LLRX, this guide by Marcus Zillman is a comprehensive listing of free privacy applications, tools and services that users may implement across multiple devices. These applications come from a range of sources that include small and large tech companies as well as subject-specific websites, consumer industry groups and organizations. The focus of this article is on leveraging the latest technology and information to help users: (1) identify privacy issues and (2) implement privacy protections specific to their requirements, spanning email, phone calls, chats, text messages, web browsing, computer drives and files, networks, collaboration spaces, and photos.

The Justice Department’s Office of Legal Counsel (OLC) has advised President Trump that it was within his authority to appoint an official such as former DOJ chief of staff Matthew Whitaker as acting attorney general following Jeff Sessions’s resignation. In its written opinion, the OLC argued that the Vacancies Reform Act (VRA) and the AG Succession Act present two possible legal avenues for choosing a temporary AG successor, and that neither supersedes the other. The VRA allows the president to choose a high-level agency official who has served at the agency for at least 90 days, regardless of whether they have received Senate confirmation. Here’s the text of the opinion.

Pew Charitable Trusts is entering the A2J field, with a plan to develop two applications: one for online dispute resolution, and the other to provide better access to legal information by way of legal navigator websites, using natural language processing to help people diagnose their legal issues and identify a path forward. For more, see the comments on Governing by Susan K. Urahn (executive vice president and chief program officer for the Pew Charitable Trusts), this Bob Ambrogi Above the Law post, and this Artificial Lawyer post.

Jean O’Grady reports on widespread layoffs at Thomson Reuters Legal: “Over the past few weeks multiple sources have confirmed to me that executives, managers and staff across TR have been ‘invited to find new employers.’ Some of the people impacted have been fixtures in the legal publishing and tech industry for decades. … Next year many familiar TR faces will be absent from the conference rooms and exhibit halls at the ILTA, Legal Tech and AALL conferences. I guess we can all understand the need to ‘rightsize’ an organization but the timing … right before the holidays is brutal.”

Unfortunately “right before the holidays” a/k/a just ahead of Q4 and year-end financial results is not unusual. For details, see this Dewey B Strategic post.

From the introduction to Lame Duck Sessions of Congress Following a Majority-Changing Election: In Brief (R45402, Nov. 13, 2018):

“Lame duck” sessions of Congress take place whenever one Congress meets after its successor is elected but before the term of the current Congress ends. Their primary purpose is to complete action on legislation. They have also been used to prevent recess appointments and pocket vetoes, to consider motions of censure or impeachment, to keep Congress assembled on a standby basis, or to approve nominations (Senate only). In recent years, most lame duck sessions have focused on program authorizations, trade-related measures, appropriations, and the budget.

Thirty-four civil rights, consumer, and privacy organizations have united to release this set of privacy legislation principles. The privacy principles outline four concepts that any meaningful data protection legislation should incorporate at a minimum:

  1. Privacy protections must be strong, meaningful, and comprehensive.
  2. Data practices must protect civil rights, prevent unlawful discrimination, and advance equal opportunity.
  3. Governments at all levels should play a role in protecting and enforcing privacy rights.
  4. Legislation should provide redress for privacy violations.

H/T InfoDocket.

Kevin P. Tobia has posted Testing Original Public Meaning (Nov. 6, 2018). Here’s the abstract:

Various interpretive theories recommend using dictionaries or corpus linguistics to provide evidence about the “original public meaning” of legal texts. Such an interpretive inquiry is typically understood as an empirical one, aiming to discover a fact about public meaning: How did people actually understand the text at the time it became law? When dictionaries or corpora are used for this project, they are empirical tools, which might be reliable or unreliable instruments. However, the central question about these tools’ reliability remains unanswered: Do dictionaries and corpus linguistics reliably reflect original public meaning?

This paper develops a novel method to assess this question. It begins by examining the public meaning of modern terms. It compares people’s judgments about meaning to the verdicts that modern dictionaries and corpus linguistics deliver about (modern) public meaning. Eight experimental studies (total N = 1,327) reveal systematic divergences among the verdicts delivered by ordinary concept use, dictionary use, and corpus linguistics use. For example, the way in which people today apply the concept of a vehicle is systematically different from the way in which people apply the modern dictionary definition of a “vehicle” or the modern corpus linguistics data concerning vehicles. Strikingly similar results arise across levels of legal expertise; participants included 999 ordinary people, 230 “elite-university” law students (e.g. at Harvard and Yale), and 98 United States judges. These findings provide evidence about the reliability of dictionaries and corpus linguistics in estimating modern public meaning. I argue that these studies also provide evidence about these tools’ reliability in estimating original public meaning, in historical times.

The paper develops both the positive and critical implications of these experimental findings. Positively, the results reveal systematic patterns of the use of dictionaries and corpora. Corpus linguistics tends to generate prototypical uses, while dictionaries tend to generate more extensive uses. This discovery grounds normative principles for improving the use of both tools in legal interpretation. Critically, the results identify five argumentative fallacies that arise in legal-interpretive arguments that rely on corpus linguistics or dictionaries. More broadly, the results suggest that two central methods of determining original public meaning are surprisingly unreliable. This shifts the argumentative burden to public meaning originalism and other theories that rely upon these tools; those theories must provide a non-arbitrary account of these tools’ use and a demonstration that such methods are, in fact, reliable.
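The prototypical-versus-extensive divergence the abstract describes can be illustrated with a toy sketch. Everything below is invented for illustration (the corpus sentences, the feature sets, and both verdict rules are assumptions), not the study's materials or methods: a crude corpus heuristic admits only items that actually co-occur with the term, while a crude dictionary-style rule admits anything satisfying the definition.

```python
# Toy corpus standing in for real corpus data; sentences are invented.
corpus = [
    "the vehicle was parked in the driveway",
    "a car is the most common vehicle on the road",
    "the truck is a large vehicle used for hauling",
    "he rode his bicycle to work",
    "the car stalled on the highway",
    "an ambulance is a vehicle used in emergencies",
]

def corpus_verdict(term, corpus):
    """Crude corpus-linguistics heuristic: does `term` ever co-occur in a
    sentence with the word 'vehicle'?  Prototypical members (car, truck)
    tend to pass; peripheral ones (bicycle) may not."""
    return any(term in s.split() and "vehicle" in s.split() for s in corpus)

# Dictionary-style rule modeled on "a means of carrying people or goods".
# These hand-coded feature sets are assumptions, not experimental data.
features = {
    "car": {"carries_people", "self_propelled"},
    "truck": {"carries_goods", "self_propelled"},
    "bicycle": {"carries_people"},
    "shoe": set(),
}

def dictionary_verdict(term):
    return bool(features.get(term, set()) & {"carries_people", "carries_goods"})

for term in ["car", "truck", "bicycle", "shoe"]:
    # bicycle: the corpus heuristic says no (non-prototypical), while the
    # dictionary rule says yes (it fits the definition) -- a divergence of
    # the kind the paper measures experimentally.
    print(term, corpus_verdict(term, corpus), dictionary_verdict(term))
```

The point of the sketch is only that two facially reasonable operationalizations of "public meaning" can return different verdicts on the same borderline item.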

Michael A. Livermore, et al. have posted Law Search as Prediction (Nov. 9, 2018). Here’s the abstract:

The process of searching for relevant legal materials is fundamental to legal reasoning. However, despite its enormous practical and theoretical importance, law search has been given inadequate attention by scholars. In this article, we define the problem of law search, examine its normative and empirical dimensions, and investigate one particularly promising computationally based approach. We implement a model of law search based on a notion of search space and search strategies and apply that model to the corpus of U.S. Supreme Court opinions. We test the success of the model against both citation information and hand-coded legal relevance determinations.
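As a loose illustration of the general idea (retrieving relevant opinions by similarity in a vector space), here is a minimal TF-IDF search over a handful of invented "opinions." This is a standard information-retrieval sketch, not the authors' model, which also involves a search space, search strategies, and validation against citations and hand-coded relevance; the opinion snippets and names are made up.

```python
import math
from collections import Counter

# Invented stand-ins for opinion texts.
opinions = {
    "Op. A": "search and seizure of a vehicle without a warrant",
    "Op. B": "freedom of speech in public schools",
    "Op. C": "warrantless search of an automobile under the fourth amendment",
}

def tf_idf_vectors(docs):
    """Map each doc name to a sparse {word: tf*idf} vector."""
    tokenized = {name: text.lower().split() for name, text in docs.items()}
    n = len(docs)
    df = Counter(w for toks in tokenized.values() for w in set(toks))
    vecs = {}
    for name, toks in tokenized.items():
        tf = Counter(toks)
        vecs[name] = {w: (c / len(toks)) * math.log(n / df[w])
                      for w, c in tf.items()}
    return vecs

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def search(query, opinions, k=1):
    """Rank opinions by cosine similarity to the query; return top k names."""
    vecs = tf_idf_vectors({**opinions, "_q": query})
    q = vecs.pop("_q")
    ranked = sorted(opinions, key=lambda name: cosine(q, vecs[name]),
                    reverse=True)
    return ranked[:k]

print(search("freedom of speech", opinions))  # ['Op. B']
```

Similarity search of this kind treats law search as prediction of which documents a searcher (or a citing court) would deem relevant; the article's contribution is in modeling the search process itself, not the bare retrieval step.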

Interesting.

Joshua Kastenberg has posted Safeguarding Judicial Integrity During the Trump Presidency: Richard Nixon’s Attempt to Impeach Justice William O. Douglas and the Use of National Security as a Case Study, Campbell Law Review, Vol. 40, No. 1, 2018. Here’s the abstract:

In April 1970, Congressman Gerald Ford called for the impeachment of Justice William O. Douglas. Although Douglas had been accused by anti-civil-rights Southern Democrats of unprofessional conduct in his association with a political foundation as well as his four marriages, Ford reasoned that, in addition to the past allegations, Justice Douglas had become a threat to national security. Within two weeks of Ford’s allegations, United States military forces invaded Cambodia without the express consent of Congress. Nixon’s involvement in Ford’s attempts to have Justice Douglas impeached gives rise to the possibility that, in addition to trying to reshape the judiciary and further architect the “Southern Strategy” by bringing conservative Southern Democrats into the Republican Party, the impeachment would serve as a means to divert attention away from the Cambodian invasion. Ford’s irresponsible conduct in this matter (and Justice Douglas’s overall conduct) has never been historically addressed and, as a result, did not leave to future political leaders and judges a means by which to gauge behavior that can undermine the independence of the judicial branch. This Article is intended to provide a historical model of accountability.