From the introduction to Regulating Big Tech: Legal Implications (LSB10309, June 11, 2019):

Amidst growing debate over the legal framework governing social media sites and other technology companies, several Members of Congress have expressed interest in expanding current regulations of the major American technology companies, often referred to as “Big Tech.” This Legal Sidebar provides a high-level overview of the current regulatory framework governing Big Tech, several proposed changes to that framework, and the legal issues those proposals may implicate. The Sidebar also contains a list of additional resources that may be helpful for a more detailed evaluation of any given regulatory proposal.

From the blurb for Kevin D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge UP, 2017):

The field of artificial intelligence (AI) and the law is on the cusp of a revolution that began with text analytic programs like IBM’s Watson and Debater and the open-source information management architectures on which they are based. Today, new legal applications are beginning to appear and this book – designed to explain computational processes to non-programmers – describes how they will change the practice of law, specifically by connecting computational models of legal reasoning directly with legal text, generating arguments for and against particular outcomes, predicting outcomes and explaining these predictions with reasons that legal professionals will be able to evaluate for themselves. These legal applications will support conceptual legal information retrieval and allow cognitive computing, enabling a collaboration between humans and computers in which each does what it can do best. Anyone interested in how AI is changing the practice of law should read this illuminating work.

From the introduction to Corporate Drug Trafficking Liability—a New Legal Front in the Opioid Crisis (LSB10207, June 6, 2019):

In April 2019, the U.S. Department of Justice (DOJ) opened a new front in the struggle against the illicit distribution of prescription opioids by indicting Rochester Drug Cooperative, Inc. (Rochester Drug) and two of its executives under the Controlled Substances Act (CSA) based on the company’s sale of oxycodone and fentanyl to pharmacies that illegally distributed the drugs.

Although pharmaceutical companies and their executives have previously been subject to civil sanctions and criminal prosecution related to the marketing and distribution of opioids, the Rochester Drug indictments mark the first time DOJ has brought felony charges against a pharmaceutical company under the general drug trafficking provisions of the CSA. This Sidebar contextualizes the indictments by first providing an overview of the key laws governing prescription opioids: the CSA and the Federal Food, Drug, and Cosmetic Act (FD&C Act).

From the blurb for Mark Chinen, Law and Autonomous Machines: The Co-evolution of Legal Responsibility and Technology (Edward Elgar Pub, May 31, 2019):

This book sets out a possible trajectory for the co-development of legal responsibility on the one hand and artificial intelligence and the machines and systems driven by it on the other.

As autonomous technologies become more sophisticated it will be harder to attribute harms caused by them to the humans who design or work with them. This will put pressure on legal responsibility and autonomous technologies to co-evolve. Mark Chinen illustrates how these factors strengthen incentives to develop even more advanced systems, which in turn inspire nascent calls to grant legal and moral status to autonomous machines.

This book is a valuable resource for scholars and practitioners of legal doctrine, ethics and autonomous technologies, as well as legislators and policy makers, and engineers and designers who are interested in the broader implications of their work.

From the abstract for Jarrod Shobe, Enacted Legislative Findings and Purposes, University of Chicago Law Review, Vol. 86, 2019:

Statutory interpretation scholarship generally imagines a sharp divide between statutory text and legislative history. This Article shows that scholars have failed to consider the implications of a hybrid type of text that is enacted by Congress and signed by the president, but which looks like legislative history. This text commonly appears at the beginning of a bill under headings such as “Findings” and “Purposes.” This enacted text often provides a detailed rationale for legislation and sets out Congress’s intent and purposes. Notably, it is drafted in plain language by political congressional staff rather than technical drafters, so it may be the portion of the enacted text that is most accessible to members of Congress and their high-level staff. Despite enacted findings and purposes’ apparent importance to interpretation, courts infrequently reference them and lack a coherent theory of how they should be used in statutory interpretation. In most cases in which courts have referenced them, they have relegated them to a status similar to that of unenacted legislative history despite the fact that they are less subject to formalist and pragmatic objections. Perhaps because courts have infrequently and inconsistently relied on enacted findings and purposes, scholars have also failed to consider them, so their relevance to statutory interpretation has gone mostly unrecognized and untheorized in the legal literature.

This Article argues that all of the enacted text of a statute must be read together and with equal weight, as part of the whole law Congress enacted, to come up with an interpretation that the entire text can bear. This is more likely to generate an interpretation in line with Congress’s intent than a mode of interpretation that focuses on the specific meaning of isolated terms based on dictionaries, canons, unenacted legislative history, or other unenacted tools. This Article shows that, when textualists’ formalist arguments against legislative history are taken off the table, there may be less that divides textualists from purposivists. Enacted findings and purposes may offer a text-based, and therefore more constrained and defensible, path forward for purposivism, which has been in retreat in recent decades in the face of strong textualist attacks.

“This white paper is presented by LexisNexis on behalf of the author. The opinions may not represent the opinions of LexisNexis. This document is for educational purposes only.” But the name of the author was not disclosed, the paper is branded with the LexisNexis logo on every page, and the paper is hosted online by LexisNexis. The paper is about as “educational” as anything Trump opines about.

In the whitepaper, Are Free & Low-Cost Legal Resources Worth the Risk?, LexisNexis once again goes after low-cost (but high-tech) legal information vendors, using the paper’s critique of Google Scholar to slip in false claims about Casetext (and Fastcase). This is another instance of the mantra “low cost can cost you” that the folks in LN’s C-suite like to chant from the deck of the Titanic of very expensive legal information vendors.

In LexisNexis, scared of competition, lies about Casetext (June 4, 2019), Casetext’s Tara McCarty corrects some of the whitepaper’s falsehoods in a footnote:

“A few examples: (1) They say Casetext’s citator, SmartCite (our alternative to Shepard’s), is “based on algorithms rather than human editors.” While we do use algorithms to make the process more efficient, a team of human editors reviews SmartCite results. By using both, we actually improve accuracy, allowing computers to catch human error and vice versa. (2) They say Casetext doesn’t have slip opinions. Slip opinions are available on Casetext within 24 hours of publication. (3) They say Casetext doesn’t have case summaries. Not only does Casetext have over four million case summaries — those summaries are penned by judges, rather than nameless editors.”

McCarty’s editorial is recommended. The whitepaper, not so much.  Enough said.

Trust is a state of readiness to take a risk in a relationship. Once upon a time, most law librarians were predisposed to trust legal information vendors and their products and services. Think of Shepard’s in print, when Shepard’s was the only available citator and its signals were by default the industry standard. Think of computer-assisted legal research in the late 1970s and early 1980s, when Lexis was the only full-text legal search vendor and the degree of risk a searcher took was partially controlled by the proper use of Boolean operators.

Today, the output from legal information platforms does not always build confidence in the information provided, whether legal search results or citator results, as comparative studies by Mart and Hellyer have demonstrated. What about the output we are now being offered through the application of artificial intelligence to legal analytics and predictive technology? As legal information professionals, are we willing to be vulnerable to the actions of our vendors on the expectation that they will provide actionable intelligence important to our user population, irrespective of our ability to monitor or control their use of artificial intelligence for legal analytics and predictive technology?

Hopefully we are not so naive as to trust our vendors’ applied AI output at face value. But we won’t be given the opportunity to shine a light into the “black box” because of understandable proprietary concerns. What’s needed is a way to identify the impact of model error and bias. One way is to compare similar outputs across vendors, whether legal analytics outputs that identify trends and patterns from past case law, win/loss rates, and a judge’s history, or predictive technology outputs that forecast litigation outcomes, much as Mart did for legal search and Hellyer did for citators. At present, however, our legal information providers do not offer AI tools similar enough for comparative studies, and who knows whether they ever will. Early days…
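For a sense of what such a comparison could look like once vendors expose comparable outputs, here is a minimal sketch in the spirit of Mart’s study of legal search results. The vendor names and case lists are hypothetical placeholders, not data from any actual platform.

```python
# A minimal sketch: measure how much two platforms' "top results" for the
# same research question actually agree, in the spirit of Mart's legal
# search comparison and Hellyer's citator study. All data below is hypothetical.

def jaccard_overlap(results_a: set, results_b: set) -> float:
    """Share of results the two platforms have in common (0 = none, 1 = identical)."""
    if not results_a and not results_b:
        return 1.0
    return len(results_a & results_b) / len(results_a | results_b)

# Hypothetical top results returned by two analytics platforms for the same
# question about a single judge's summary-judgment rulings.
vendor_a = {"Smith v. Jones", "Doe v. Roe", "Acme Corp. v. Widget Co.", "State v. Black"}
vendor_b = {"Smith v. Jones", "Acme Corp. v. Widget Co.", "People v. White", "Green v. Blue"}

print(f"Overlap between platforms: {jaccard_overlap(vendor_a, vendor_b):.2f}")
# Low agreement across vendors on the "same" question is exactly the kind of
# red flag a comparative study is meant to surface before trusting either output.
```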

Until there is a legitimate certification process that validates each individual AI product to the end user when the end user calls up specific applied AI output for legal analytics and predictive technology, is there any reason to assume the risk of using them? No, not really, but use them our end users will. Trust but (try to) validate; otherwise the output remains opaque to the end user, and that can lead to illusions of understanding.

The editors of Perspectives: Teaching Legal Research and Writing are seeking articles for the Fall 2019 issue. From Legal Skills Prof Blog:

The Spring 2019 issue of Perspectives: Teaching Legal Research and Writing is in final production with an anticipated publication date of June 2019. However, we presently have a few spots available for the Fall 2019 issue, and thus the Board of Editors is actively seeking articles to fill that issue. So if you’re working on an article idea appropriate for Perspectives (see below), or can develop a good manuscript in the next couple of months, please consider submitting it to us. There is no formal deadline since we will accept articles on a rolling basis, but the sooner the better if you’d like it published in the Fall issue.

From the executive summary of Technological Convergence: Regulatory, Digital Privacy, and Data Security Issues (R45746, May 30, 2019):

Technological convergence, in general, refers to the trend or phenomenon where two or more independent technologies integrate and form a new outcome. … Technologically convergent devices share three key characteristics. First, converged devices can execute multiple functions to serve blended purposes. Second, converged devices can collect and use data in various formats and employ machine learning techniques to deliver an enhanced user experience. Third, converged devices are connected to a network directly and/or are interconnected with other devices to offer ubiquitous access to users.

Technological convergence may present a range of issues where Congress may take legislative and/or oversight actions. Three selected issue areas associated with technological convergence are regulatory jurisdiction, digital privacy, and data security.

From the abstract for Alli Orr Larsen & Jeffrey L. Fisher, Virtual Briefing at the Supreme Court, 109 Cornell Law Review (forthcoming 2019):

The open secret of Supreme Court advocacy in a digital era is that there is a new way to argue to the Justices. Today’s Supreme Court arguments are developed online: They are dissected and explored in blog posts, fleshed out in popular podcasts, and analyzed and re-analyzed by experts who do not represent the parties and have not even filed a brief in the case. This “virtual briefing” (as we call it) is intended to influence the Justices and their law clerks but exists completely outside of traditional briefing rules. This article describes virtual briefing and makes a case that the key players inside the Court are listening. In particular, we show that the Twitter patterns of law clerks indicate they are paying close attention to producers of virtual briefing, and threads of these arguments (proposed and developed online) are starting to appear in the Court’s decisions.

We argue that this “crowdsourcing” dynamic to Supreme Court decision-making is at least worth a serious pause. There is surely merit to enlarging the dialogue around the issues the Supreme Court decides – maybe the best ideas will come from new voices in the crowd. But the confines of the adversarial process have been around for centuries, and there are significant risks that come with operating outside of it particularly given the unique nature and speed of online discussions. We analyze those risks in this article and suggest it is time to think hard about embracing virtual briefing — truly assessing what can be gained and what will be lost along the way.

“The central problem,” writes Robert Parnell, “is that not all samples of legal data contain sufficient information to be usefully applied to decision making. By the time big data sets are filtered down to the type of matter that is relevant, sample sizes may be too small and measurements may be exposed to potentially large sampling errors. If Big Data becomes ‘small data’, it may in fact be quite useless.”

Parnell adds:

“In practice, although the volume of available legal data will sometimes be sufficient to produce statistically meaningful insights, this will not always be the case. While litigants and law firms would no doubt like to use legal data to extract some kind of informational signal from the random noise that is ever-present in data samples, the hard truth is that there will not always be one. Needless to say, it is important for legal professionals to be able to identify when this is the case.

“Overall, the quantitative analysis of legal data is much more challenging and error-prone than is generally acknowledged. Although it is appealing to view data analytics as a simple tool, there is a danger of neglecting the science in what is basically data science. The consequences of this can be harmful to decision making. To draw an analogy, legal data analytics without inferential statistics is like legal argument without case law or rules of precedent — it lacks a meaningful point of reference and authority.”
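To make the sampling-error point concrete, here is a minimal sketch; the win counts are hypothetical and are not drawn from Parnell’s paper. It shows how the confidence interval around an observed win rate widens as a large data set is filtered down to the handful of matters that are actually comparable.

```python
# A minimal sketch of "Big Data becoming small data": the same observed win
# rate becomes far less informative as the filtered sample shrinks.
# All win counts below are hypothetical.

import math

def wilson_interval(wins: int, n: int, z: float = 1.96) -> tuple:
    """95% Wilson score confidence interval for an observed win rate."""
    if n == 0:
        return (0.0, 1.0)
    p = wins / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

# Hypothetical: roughly a 60% observed win rate at each stage of filtering.
for n, wins in [(10_000, 6_000), (200, 120), (12, 7)]:
    lo, hi = wilson_interval(wins, n)
    print(f"n={n:>6}: observed {wins/n:.0%}, 95% CI {lo:.0%} to {hi:.0%}")
# As n shrinks from thousands of matters to a dozen truly comparable ones,
# the interval widens until the apparent "signal" is mostly sampling noise.
```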

For more see When Big Legal Data isn’t Big Enough: Limitations in Legal Data Analytics (Settlement Analytics, 2016). Recommended.

From the abstract for Karni Chagal, Am I an Algorithm or a Product? When Products Liability Should Apply to Algorithmic Decision-Makers (Stanford Law & Policy Review, Forthcoming):

Over the years mankind has come to rely increasingly on machines. Technology is ever advancing, and in addition to relinquishing physical and mere computational tasks to machines, algorithms’ self-learning abilities now enable us to entrust machines with professional decisions, for instance, in the fields of law, medicine and accounting.

A growing number of scholars and entities now acknowledge that whenever certain “sophisticated” or “autonomous” decision-making systems cause damage, they should no longer be subject to products liability but deserve different treatment from their “traditional” predecessors. What is it that separates “traditional” algorithms and machines that for decades have been subject to traditional product liability legal framework from what I would call “thinking algorithms,” that seem to warrant their own custom-made treatment? Why have “auto-pilots,” for example, been traditionally treated as “products,” while autonomous vehicles are suddenly perceived as a more “human-like” system that requires different treatment? Where is the line between the two types of machines drawn?

Scholars who touch on this question have generally referred to the system’s level of autonomy as a classifier between traditional products and systems incompatible with products liability laws (whether autonomy was mentioned expressly, or reflected in the specific questions posed). This article, however, argues that a classifier based on autonomy level is not a good one, given its excessive complexity, the vague classification process it dictates, the inconsistent results it might lead to, and the fact said results mainly shed light on the system’s level of autonomy, but not on its compatibility with products liability laws.

This article therefore proposes a new approach to distinguishing traditional products from “thinking algorithms” for determining whether products liability should apply. Instead of examining the vague concept of “autonomy,” the article analyzes the system’s specific features and examines whether they promote or hinder the rationales behind the products liability legal framework. The article thus offers a novel, practical method for decision-makers wanting to decide when products liability should continue to apply to “sophisticated” systems and when it should not.

From the abstract for James Grimmelmann, All Smart Contracts Are Ambiguous, Penn Journal of Law and Innovation (Forthcoming):

Smart contracts are written in programming languages rather than in natural languages. This might seem to insulate them from ambiguity, because the meaning of a program is determined by technical facts rather than by social ones.

It does not. Smart contracts can be ambiguous, too, because technical facts depend on socially determined ones. To give meaning to a computer program, a community of programmers and users must agree on the semantics of the programming language in which it is written. This is a social process, and a review of some famous controversies involving blockchains and smart contracts shows that it regularly creates serious ambiguities. In the most famous case, The DAO hack, more than $150 million in virtual currency turned on the contested semantics of a blockchain-based smart-contract programming language.

From the blurb for John Nichols, Horsemen of the Trumpocalypse: A Field Guide to the Most Dangerous People in America (Hachette Books, 2018):

A line-up of the dirty dealers and defenders of the indefensible who are definitely not “making America great again”. Donald Trump has assembled a rogue’s gallery of alt-right hatemongers, crony capitalists, immigrant bashers, and climate-change deniers to run the American government. To survive the next four years, we the people need to know whose hands are on the levers of power. And we need to know how to challenge their abuses. John Nichols, veteran political correspondent at the Nation, has been covering many of these deplorables for decades. Sticking to the hard facts and unafraid to dig deep into the histories and ideologies of the people who make up Trump’s inner circle, Nichols delivers a clear-eyed and complete guide to this wrecking-crew administration.

Public.Resource.Org and its President and Founder Carl Malamud are recipients of the 2019 AALL Public Access to Government Information Award. From the press release:

“The activism of Carl Malamud and Public.Resource.Org in the public domain has been crucial in providing the public with vital access to essential government information,” said AALL President Femi Cadmus. “For his critical work and advocacy in advancing government transparency, AALL is proud to recognize Carl and his organization with the 2019 Public Access to Government Information Award.”

Long overdue. Congratulations!

Shay Elbaum, Reference Librarian, Stanford Law School, recounts his first experience providing data services for empirical legal research. “As a new librarian with just enough tech know-how to be dangerous, working on this project has been a learning experience in several dimensions. I’m sharing some highlights here in the hope that others in the same position will glean something useful.” Details on the CS-SIS blog. Interesting.

H/T to Bob Ambrogi for reporting that Dean E. Sonderegger has been appointed senior vice president and general manager of Wolters Kluwer Legal & Regulatory U.S. (LRUS). He has been vice president in charge of legal markets and innovation since joining LRUS in 2015. Before that, he was with Bloomberg BNA for 13 years, where he oversaw strategy and marketing for software products. Here’s the press release.

Based on Edgar Alan Rayo’s assessment of companies’ offerings in the legal field, current applications of AI appear to fall into six major categories:

  1. Due diligence – Litigators perform due diligence with the help of AI tools to uncover background information. We’ve decided to include contract review, legal research and electronic discovery in this section.
  2. Prediction technology – AI software generates results that forecast litigation outcomes.
  3. Legal analytics – Lawyers can use data points from past case law, win/loss rates, and a judge’s history to identify trends and patterns.
  4. Document automation – Law firms use software templates to create filled-out documents based on data input.
  5. Intellectual property – AI tools guide lawyers in analyzing large IP portfolios and drawing insights from the content.
  6. Electronic billing – Lawyers’ billable hours are computed automatically.

Rayo explores the major areas of current AI applications in law, individually and in depth here.
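As an illustration of how the simplest of these categories works in practice, here is a minimal sketch of document automation (category 4 above) using only the Python standard library; the template text and intake fields are hypothetical and not taken from any vendor’s product.

```python
# A minimal sketch of document automation: merge structured intake data into
# an approved template to produce a draft document. Everything here is a
# hypothetical illustration, not any vendor's actual template or workflow.

from string import Template

ENGAGEMENT_LETTER = Template(
    "Dear $client_name,\n\n"
    "This letter confirms that $firm_name will represent you in the matter of "
    "$matter_description at an hourly rate of $hourly_rate dollars.\n\n"
    "Sincerely,\n$attorney_name"
)

intake_data = {
    "client_name": "Jane Example",
    "firm_name": "Example & Associates LLP",
    "matter_description": "Example v. Sample (contract dispute)",
    "hourly_rate": "350",
    "attorney_name": "A. Attorney",
}

# Fill the template with the client-specific data to produce the draft letter.
print(ENGAGEMENT_LETTER.substitute(intake_data))
```

Commercial document-assembly tools layer conditional logic, clause libraries, and validation on top, but the core pattern is the same: structured data merged into approved template text.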

From the abstract for Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice (Stanford Technology Law Review, Forthcoming):

Artificial intelligence, or AI, promises to assist, modify, and replace human decision-making, including in court. AI already supports many aspects of how judges decide cases, and the prospect of “robot judges” suddenly seems plausible—even imminent. This Article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large. The impact is likely to be greatest in areas, including criminal justice and appellate decision-making, where “equitable justice,” or discretionary moral judgment, is frequently considered paramount. By offering efficiency and at least an appearance of impartiality, AI adjudication will both foster and benefit from a turn toward “codified justice,” an adjudicatory paradigm that favors standardization above discretion. Further, AI adjudication will generate a range of concerns relating to its tendency to make the legal system more incomprehensible, data-based, alienating, and disillusioning. And potential responses, such as crafting a division of labor between human and AI adjudicators, each pose their own challenges. The single most promising response is for the government to play a greater role in structuring the emerging market for AI justice, but auspicious reform proposals would borrow several interrelated approaches. Similar dynamics will likely extend to other aspects of government, such that choices about how to incorporate AI in the judiciary will inform the future path of AI development more broadly.