From the abstract for Peter A. Joy, Special Counsel Investigations and Legal Ethics: The Role of Secret Taping, 57 Duquesne University Law Review ___ (2019):

In July 2016, Michael Cohen, then presidential candidate Donald Trump’s lawyer, secretly recorded Trump discussing how they would use the publisher of the National Enquirer to purchase former Playboy model Karen McDougal’s story about an alleged affair with Trump in order to stop it from becoming public before the 2016 presidential election. The National Enquirer’s publisher purchased McDougal’s story in August 2016. In a similar move to keep another alleged affair from going public, in October 2016 Cohen set up a corporation to purchase adult film star Stormy Daniels’s story of her affair with Trump. Trump was elected President in November 2016. Cohen’s secret recording contradicted Trump’s claims that he knew nothing about payments to McDougal, and it raises issues concerning the lengths to which Trump has gone to keep his private life a secret.

The taped conversation between Cohen and Trump later became public when Cohen’s lawyer released a copy of the tape in July 2018, which was after the Federal Bureau of Investigation (FBI) raided and seized audio tapes from Cohen’s office, home, and hotel room. It was also reported that Cohen was cooperating with Special Counsel Robert Mueller’s investigation into Russia’s interference in the 2016 presidential election and possible coordination between the Russian government and individuals associated with Trump’s presidential campaign. Reacting to the release of the tape, Trump tweeted: “Even more inconceivable that a lawyer would tape a client–totally unheard of & perhaps illegal.” Trump also tweeted: “What kind of a lawyer would tape a client? So sad! Is this a first, never heard of it before? . . . I hear there are other clients and many reporters that are taped–can this be so? Too bad!”

Contrary to Trump’s Twitter rant, this incident is not the first time a lawyer has secretly taped a conversation with a client or others. Even secret tapings involving Presidents and special counsel investigations have happened before. Indeed, evidence that special counsels obtained through secret tapes was partially responsible for one former U.S. President’s resignation and another’s impeachment.

Secret taping, though, raises a number of questions, including the following: Is secret taping legal? Is secret taping by a lawyer ethical? Lastly, if legal and ethical, what are the pros and cons of a lawyer secretly taping conversations? This essay sets out to answer those questions.

The 2015 Paris Agreement set a global goal to reach net zero emissions in the second half of the century. An increasing number of governments are translating that into national strategy, setting out visions of a carbon-free future. Is it enough? Of course not. But it is becoming the benchmark for leadership on the world stage. See Climate Change News’ Which countries have a net zero carbon goal? for details (regularly updated).

From the blurb for Lawrence Lessig, Fidelity & Constraint: How the Supreme Court Has Read the American Constitution (Oxford UP, 2019):

The fundamental fact about our Constitution is that it is old — the oldest written constitution in the world. The fundamental challenge for interpreters of the Constitution is how to read that old document over time.

In Fidelity & Constraint, legal scholar Lawrence Lessig explains that one of the most basic approaches to interpreting the Constitution is the process of translation. Indeed, some of the most significant shifts in constitutional doctrine are products of the evolution of the translation process over time. In every new era, judges understand their translations as instances of “interpretive fidelity,” framed within each new temporal context.

Yet, as Lessig also argues, there is a recurring countermove that upends the process of translation. Throughout American history, there has been a second fidelity in addition to interpretive fidelity: what Lessig calls “fidelity to role.” In each of the cycles of translation that he describes, the role of the judge — the ultimate translator — has evolved too. Old ways of interpreting the text now become illegitimate because they do not match up with the judge’s perceived role. And when that conflict occurs, the practice of judges within our tradition has been to follow the guidance of a fidelity to role. Ultimately, Lessig not only shows us how important the concept of translation is to constitutional interpretation, but also exposes the institutional limits on this practice.

The first work of both constitutional and foundational theory by one of America’s leading legal minds, Fidelity & Constraint maps strategies that both help judges understand the fundamental conflict at the heart of interpretation whenever it arises and work around the limits it inevitably creates.

The Law Society’s Technology and Law Public Policy Commission was created to explore the role of, and concerns about, the use of algorithms in the justice system. Among its recommendations: the UK should create a ‘national register of algorithms’ used in the criminal justice system, including a record of the datasets used to train them. Interesting. Read the report.

From Oleksii Kharkovyna’s A Beginner’s Guide To Data Science (June 10, 2019): “[T]he popularity of Data Science lies in the fact it encompasses the collection of large arrays of structured and unstructured data and their conversion into human-readable format, including visualization, work with statistics and analytical methods–machine and deep learning, probability analysis and predictive models, neural networks and their application for solving actual problems.” Read more about it.
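Kharkovyna’s definition sketches a pipeline: collect data, analyze it with statistics, and convert it into a human-readable form such as a visualization. Here is a minimal illustration of that flow in Python using pandas and matplotlib; the file name and column names are hypothetical placeholders, not from the article:

```python
# A toy data science workflow: collect, analyze, visualize.
# "filings.csv" and its columns are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

# Collect: load structured data (imagined court filings per year).
df = pd.read_csv("filings.csv")  # columns: year, filings

# Analyze: basic descriptive statistics.
print(df["filings"].describe())

# Visualize: turn the numbers into a human-readable chart.
df.plot(x="year", y="filings", kind="line", title="Filings per year")
plt.savefig("filings.png")
```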

From the abstract for Sophia Z. Lee, Our Administered Constitution: Administrative Constitutionalism from the Founding to the Present (University of Pennsylvania Law Review, Forthcoming):

This article argues that administrative agencies have been primary interpreters and implementers of the federal Constitution throughout the history of the United States, although the scale and scope of this “administrative constitutionalism” has changed significantly over time as the balance of opportunities and constraints has shifted. Courts have nonetheless cast an increasingly long shadow over the administered Constitution. In part, this is because of the well-known expansion of judicial review in the 20th century. But the shift has as much to do with changes in the legal profession, legal theory, and lawyers’ roles in agency administration. The result is that administrative constitutionalism may still be the most frequent form of constitutional governance, but it has grown, paradoxically, more suspect even as it has also become far more dependent on and deferential to judicial interpretations.

This article also contends that the history of administrative constitutionalism poses a problem for critics of the modern administrative state who seek to restore administrative law to its 19th-century foundations. These critics hold out constitutional law as uniquely important; it is what powers their arguments that the United States should turn back the clock. And they prefer 19th-century agencies because they depict them as exercising little consequential legal power. But this history suggests that those agencies had the first and often final word on the Constitution’s meaning. These critics also assume that reinstating the 19th-century constitutional order would empower courts to more closely scrutinize agency action. The history presented here instead suggests that returning to 19th-century administrative law would all but eliminate judicial review of the constitutionality of agency actions. Indeed, the burgeoning history of administrative constitutionalism suggests that anyone who wants to ensure that courts review the constitutionality of agency action has to appeal to theories that are rooted in constitutional change not origins, and in 20th- not 19th-century administrative law and judicial practice.

From the blurb for Jim Acosta, The Enemy of the People: A Dangerous Time to Tell the Truth in America (Harper, June 11, 2019):

In Mr. Trump’s campaign against what he calls “Fake News,” CNN Chief White House Correspondent Jim Acosta is public enemy number one. From the moment Mr. Trump announced his candidacy in 2015, he has attacked the media, calling journalists “the enemy of the people.”

Acosta presents a revealing examination of bureaucratic dysfunction, deception, and the unprecedented threat that Mr. Trump’s rhetoric poses to our democracy. When the leader of the free world incites hate and violence, Acosta doesn’t back down, and he urges his fellow citizens to do the same.

Acosta offers a never-before-reported account of what it’s like to be the President’s least favorite correspondent at CNN. Acosta goes head-to-head with the White House, even after Trump supporters have threatened his life with both words and physical violence.

From the hazy denials and accusations meant to discredit the Mueller investigation, to the president’s scurrilous tweets, Jim Acosta is in the eye of the storm while reporting live to millions of people across the world. After spending hundreds of hours with the revolving door of White House personnel, Acosta paints portraits of the personalities of Sarah Huckabee Sanders, Stephen Miller, Steve Bannon, Sean Spicer, Hope Hicks, Jared Kushner and more. Acosta is tenacious and unyielding in his public battle to preserve the First Amendment and #RealNews.

In a non-technical article, Roger Chua explains that natural language processing (NLP) is an area of machine learning focused on teaching computers to understand natural human language better. NLP draws on research from AI, but also from linguistics, mathematics, psychology, and other fields. NLP enables computer programs to understand unstructured data, to make inferences, and to provide context to language, just as a human brain does. For more, see A simple way to explain Natural Language Processing (NLP).
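To make “understanding unstructured data” a bit more concrete, here is a minimal sketch in Python using NLTK of two of the basic steps such programs perform, tokenization and part-of-speech tagging; the sample sentence is my own, not from Chua’s article:

```python
# Two basic NLP steps: tokenization and part-of-speech tagging.
import nltk

# One-time model downloads for the tokenizer and the tagger.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

text = "The court denied the motion because the filing was untimely."

tokens = nltk.word_tokenize(text)  # split unstructured text into words
tagged = nltk.pos_tag(tokens)      # label each token with a part of speech
print(tagged)                      # e.g., [('The', 'DT'), ('court', 'NN'), ...]
```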

From the blurb for Preet Bharara, Doing Justice: A Prosecutor’s Thoughts on Crime, Punishment, and the Rule of Law (Knopf, March 19, 2019):

Preet Bharara has spent much of his life examining our legal system, pushing to make it better, and prosecuting those looking to subvert it. Bharara believes in our system and knows it must be protected, but to do so, we must also acknowledge and allow for flaws in the system and in human nature.

The book is divided into four sections: Inquiry, Accusation, Judgment, and Punishment. Bharara shows why each step of this process is crucial to the legal system, but he also shows how we all need to think about each stage of the process to achieve truth and justice in our daily lives.

Bharara uses anecdotes and case histories from his legal career–the successes as well as the failures–to illustrate the realities of the legal system, and the consequences of taking action (and in some cases, not taking action, which can be just as essential when trying to achieve a just result).

Much of what Bharara discusses is inspiring–it gives us hope that rational and objective fact-based thinking, combined with compassion, can truly lead us on a path toward truth and justice. Some of what he writes about will be controversial and cause much discussion. Ultimately, it is a thought-provoking, entertaining book about the need to find the humanity in our legal system–and in our society.

From the introduction to Regulating Big Tech: Legal Implications (LSB10309, June 11, 2019):

Amidst growing debate over the legal framework governing social media sites and other technology companies, several Members of Congress have expressed interest in expanding current regulations of the major American technology companies, often referred to as “Big Tech.” This Legal Sidebar provides a high-level overview of the current regulatory framework governing Big Tech, several proposed changes to that framework, and the legal issues those proposals may implicate. The Sidebar also contains a list of additional resources that may be helpful for a more detailed evaluation of any given regulatory proposal.

From the blurb for Kevin D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge UP, 2017):

The field of artificial intelligence (AI) and the law is on the cusp of a revolution that began with text analytic programs like IBM’s Watson and Debater and the open-source information management architectures on which they are based. Today, new legal applications are beginning to appear and this book – designed to explain computational processes to non-programmers – describes how they will change the practice of law, specifically by connecting computational models of legal reasoning directly with legal text, generating arguments for and against particular outcomes, predicting outcomes and explaining these predictions with reasons that legal professionals will be able to evaluate for themselves. These legal applications will support conceptual legal information retrieval and allow cognitive computing, enabling a collaboration between humans and computers in which each does what it can do best. Anyone interested in how AI is changing the practice of law should read this illuminating work.

From the introduction to Corporate Drug Trafficking Liability—A New Legal Front in the Opioid Crisis (LSB10207, June 6, 2019):

In April 2019, the U.S. Department of Justice (DOJ) opened a new front in the struggle against the illicit distribution of prescription opioids by indicting Rochester Drug Cooperative, Inc. (Rochester Drug) and two of its executives under the Controlled Substances Act (CSA) based on the company’s sale of oxycodone and fentanyl to pharmacies that illegally distributed the drugs.

Although pharmaceutical companies and their executives have previously been subject to civil sanctions and criminal prosecution related to the marketing and distribution of opioids, the Rochester Drug indictments mark the first time DOJ has brought felony charges against a pharmaceutical company under the general drug trafficking provisions of the CSA. This Sidebar contextualizes the indictments by first providing an overview of the key laws governing prescription opioids, the CSA and the Federal Food, Drug, and Cosmetic Act (FD&C Act).

From the blurb for Mark Chinen, Law and Autonomous Machines: The Co-evolution of Legal Responsibility and Technology (Edward Elgar Pub, May 31, 2019):

This book sets out a possible trajectory for the co-development of legal responsibility on the one hand and artificial intelligence and the machines and systems driven by it on the other.

As autonomous technologies become more sophisticated, it will be harder to attribute harms caused by them to the humans who design or work with them. This will put pressure on legal responsibility and autonomous technologies to co-evolve. Mark Chinen illustrates how these factors strengthen incentives to develop even more advanced systems, which in turn inspire nascent calls to grant legal and moral status to autonomous machines.

This book is a valuable resource for scholars and practitioners of legal doctrine, ethics and autonomous technologies, as well as legislators and policy makers, and engineers and designers who are interested in the broader implications of their work.

From the abstract for Jarrod Shobe, Enacted Legislative Findings and Purposes, University of Chicago Law Review, Vol. 86, 2019:

Statutory interpretation scholarship generally imagines a sharp divide between statutory text and legislative history. This Article shows that scholars have failed to consider the implications of a hybrid type of text that is enacted by Congress and signed by the president, but which looks like legislative history. This text commonly appears at the beginning of a bill under headings such as “Findings” and “Purposes.” This enacted text often provides a detailed rationale for legislation and sets out Congress’s intent and purposes. Notably, it is drafted in plain language by political congressional staff rather than technical drafters, so it may be the portion of the enacted text that is most accessible to members of Congress and their high-level staff. Despite enacted findings and purposes’ apparent importance to interpretation, courts infrequently reference them and lack a coherent theory of how they should be used in statutory interpretation. In most cases in which courts have referenced them, they have relegated them to a status similar to that of unenacted legislative history despite the fact that they are less subject to formalist and pragmatic objections. Perhaps because courts have infrequently and inconsistently relied on enacted findings and purposes, scholars have also failed to consider them, so their relevance to statutory interpretation has gone mostly unrecognized and untheorized in the legal literature.

This Article argues that all of the enacted text of a statute must be read together and with equal weight, as part of the whole law Congress enacted, to come up with an interpretation that the entire text can bear. This is more likely to generate an interpretation in line with Congress’s intent than a mode of interpretation that focuses on the specific meaning of isolated terms based on dictionaries, canons, unenacted legislative history, or other unenacted tools. This Article shows that, when textualists’ formalist arguments against legislative history are taken off the table, there may be less that divides textualists from purposivists. Enacted findings and purposes may offer a text-based, and therefore more constrained and defensible, path forward for purposivism, which has been in retreat in recent decades in the face of strong textualist attacks.

“This white paper is presented by LexisNexis on behalf of the author. The opinions may not represent the opinions of LexisNexis. This document is for educational purposes only.” But the name of the author was not disclosed, the paper is branded with the LexisNexis logo on every page, and the paper is hosted online by LexisNexis. The paper is about as “educational” as anything Trump opines about.

In the whitepaper Are Free & Low-Cost Legal Resources Worth the Risk?, LexisNexis once again goes after low-cost (but high-tech) legal information vendors, using the paper’s critique of Google Scholar to slip in false claims about Casetext (and Fastcase). This is another instance of the “low cost can cost you” mantra the folks in LN’s C-suite like to chant on the deck of the Titanic of very expensive legal information vendors.

In LexisNexis, scared of competition, lies about Casetext (June 4, 2019), Casetext’s Tara McCarty corrects some of the whitepaper’s falsehoods in a footnote:

“A few examples: (1) They say Casetext’s citator, SmartCite (our alternative to Shepard’s), is ‘based on algorithms rather than human editors.’ While we do use algorithms to make the process more efficient, a team of human editors reviews SmartCite results. By using both, we actually improve accuracy, allowing computers to catch human error and vice versa. (2) They say Casetext doesn’t have slip opinions. Slip opinions are available on Casetext within 24 hours of publication. (3) They say Casetext doesn’t have case summaries. Not only does Casetext have over four million case summaries — those summaries are penned by judges, rather than nameless editors.”

McCarty’s editorial is recommended. The whitepaper, not so much. Enough said.

Trust is a state of readiness to take a risk in a relationship. Once upon a time, most law librarians were predisposed to trust legal information vendors and their products and services. Think of Shepard’s in print, when Shepard’s was the only available citator and its signals were, by default, the industry standard. Or think of computer-assisted legal research in the late 1970s and early 1980s, when Lexis was the only full-text legal search vendor and the degree of risk a searcher took was partially controlled by properly using Boolean operators.

Today, output from legal information platforms does not always build confidence in the information provided, be it legal search results or citator results, as comparative studies of each by Mart and Hellyer have demonstrated. What about the output we are now being offered by way of artificial intelligence applied to legal analytics and predictive technology? As legal information professionals, are we willing to be vulnerable to the actions of our vendors based on some sort of expectation that vendors will provide actionable intelligence important to our user population, irrespective of our ability to monitor or control vendors’ use of artificial intelligence for legal analytics and predictive technology?

Hopefully we are not so naive as to trust our vendors’ applied AI output at face value. But we won’t be given the opportunity to shine a light into the “black box” because of understandable proprietary concerns. What’s needed is a way to identify the impact of model error and bias. One way is to compare similar legal analytics outputs (those that identify trends and patterns using data points from past case law, win/loss rates, and even a judge’s history) or similar predictive technology outputs that forecast litigation outcomes, much as Mart did for legal search and Hellyer did for citators. At present, however, our legal information providers do not offer AI tools similar enough for such comparative studies, and who knows if they ever will. Early days…
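What might such a comparison look like in practice? Below is a minimal sketch in Python, in the spirit of the Mart and Hellyer studies: given two vendors’ citator results for the same set of cases (the CSV files and column names are hypothetical, since no vendor exports are specified here), it measures how often the vendors agree on a negative-treatment flag. Systematic disagreement would be one observable symptom of model error or bias.

```python
# Compare two vendors' citator flags for the same set of cases.
# The CSV files and column names are hypothetical placeholders.
import pandas as pd

a = pd.read_csv("vendor_a.csv")  # columns: case_id, negative_flag (0 or 1)
b = pd.read_csv("vendor_b.csv")  # same columns, from a second vendor

merged = a.merge(b, on="case_id", suffixes=("_a", "_b"))

# Agreement rate: share of cases where the two vendors give the same flag.
agreement = (merged["negative_flag_a"] == merged["negative_flag_b"]).mean()
print(f"Vendors agree on {agreement:.1%} of shared cases")

# Disagreements are the natural place to focus human review.
disputes = merged[merged["negative_flag_a"] != merged["negative_flag_b"]]
print(f"{len(disputes)} cases flagged differently; review these by hand")
```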

Until there is a legitimate certification process that validates each individual AI product for the end user at the point where specific applied AI output for legal analytics or predictive technology is called up, is there any reason to assume the risk of using these tools? No, not really, but use them our end users will. Trust but (try to) validate; otherwise the output remains opaque to the end user, and that can lead to illusions of understanding.