From the abstract for Ashley Deeks, The Judicial Demand for Explainable Artificial Intelligence, 119 Colum. L. Rev. ____ (Forthcoming 2019):

A recurrent concern about machine learning algorithms is that they operate as “black boxes,” making it difficult to identify how and why the algorithms reach particular decisions, recommendations, or predictions. Yet judges will confront machine learning algorithms with increasing frequency, including in criminal, administrative, and tort cases. This Essay argues that judges should demand explanations for these algorithmic outcomes. One way to address the “black box” problem is to design systems that explain how the algorithms reach their conclusions or predictions. If and as judges demand these explanations, they will play a seminal role in shaping the nature and form of “explainable artificial intelligence” (or “xAI”). Using the tools of the common law, courts can develop what xAI should mean in different legal contexts.

There are advantages to having courts play this role: Judicial reasoning that builds from the bottom up, using case-by-case consideration of the facts to produce nuanced decisions, is a pragmatic way to develop rules for xAI. Further, courts are likely to stimulate the production of different forms of xAI that are responsive to distinct legal settings and audiences. More generally, we should favor the greater involvement of public actors in shaping xAI, which to date has largely been left in private hands.

At its annual meeting in August, the ABA adopted this AI resolution:

RESOLVED, That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.

In this Digital Detectives podcast episode on Legal Talk Network, hosts Sharon Nelson and John Simek are joined by Fastcase CEO Ed Walters to discuss this resolution. Recommended.

End Note: It will be interesting to see whether, and if so how, the ABA fulfills its promise regarding controls and oversight of AI vendors.

From the abstract for Harry Surden’s The Ethics of Artificial Intelligence in Law: Basic Questions (Forthcoming chapter in Oxford Handbook of Ethics of AI, 2020):

Ethical issues surrounding the use of Artificial Intelligence (AI) in law often share a common theme. As AI becomes increasingly integrated within the legal system, how can society ensure that core legal values are preserved?

Among the most important of these legal values are: equal treatment under the law; public, unbiased, and independent adjudication of legal disputes; justification and explanation for legal outcomes; outcomes based upon law, principle, and facts rather than social status or power; outcomes premised upon reasonable, and socially justifiable grounds; the ability to appeal decisions and seek independent review; procedural fairness and due process; fairness in design and application of the law; public promulgation of laws; transparency in legal substance and process; adequate access to justice for all; integrity and honesty in creation and application of law; and judicial, legislative, and administrative efficiency.

The use of AI in law may diminish or enhance how these values are actually expressed within the legal system or alter their balance relative to one another. This chapter surveys some of the most important ethical topics involving the use of AI within the legal system itself (but not its use within society more broadly) and examines how central legal values might unintentionally (or intentionally) change with increased use of AI in law.

The ABA Profile of the Legal Profession survey reports that when lawyers begin a research project, 37% start with a general search engine like Google, 31% with a paid online resource, 11% with a free state bar-sponsored legal research service, and 8% with print resources.

A large majority (72%) use fee-based online resources for research. Westlaw is the most-used paid online legal research service, used by nearly two-thirds of all lawyers (64%) and preferred over other paid online services by nearly half of all lawyers (46%).

When it comes to free websites used most often for legal research, 19% said Cornell’s Legal Information Institute, followed by Findlaw, Fastcase, and government websites (17% each), Google Scholar (13%), and Casemaker (11%). Despite the popularity of online sources, 44% still use print materials regularly.

The survey also reports that 10% of lawyers say their firms use artificial intelligence-based technology tools while 36% think artificial intelligence tools will become mainstream in the legal profession in the next three to five years.

ROSS Intelligence goes after “legacy” search platforms (i.e., WEXIS) in this promotional blog post, How ROSS AI Turns Legal Research On Its Head, Aug. 6, 2019. The post claims that ROSS supplants secondary analytical sources and makes West KeyCite and LexisNexis Shepard’s obsolete because its search function provides all the relevant applied AI search output for the research task at hand. In many respects, Fastcase and Casetext also could characterize their WEXIS competitors as legacy legal search platforms. Perhaps they have and I have just missed that.

To the best of my recollection, Fastcase, Casetext and ROSS have not explicitly promoted competition with each other. WEXIS has always been the primary target in their promotions. So why are Fastcase, Casetext and ROSS competing with each other in the marketplace? What if they joined forces in such a compelling manner that users would abandon WEXIS for core legal search? Two or all three of the companies could merge. In the alternative, they could find a creative way to offer license-one-get-all options.

Perhaps the first step is to reconsider the sole provider option. It’s time to revise the licensing equation; perhaps it should be (Westlaw or Lexis) + (Fastcase or Casetext or ROSS).

H/T to Bob Ambrogi for featuring results from the 2019 Aderant Business of Law and Legal Technology Survey. The survey results answered the question: What technology tools rank most important to lawyers in driving efficiency? In the section on technology tools and cloud adoption, the survey asked lawyers about the technology tools that have the greatest impact on their ability to work efficiently and manage their work effectively. Out of 18 categories of tools, the two lowest ranked were AI and blockchain. Knowledge management ranked seventh. Details on LawSites.

Andrew Martineau’s Reinforcing the ‘Crumbling Infrastructure of Legal Research’ Through Court-Authored Metadata, Law Libr. J. (Forthcoming) “examines the role of the court system in publishing legal information and how that role should be viewed in a digital, online environment. In order to ensure that the public retains access to useful legal information into the future, courts should fully embrace the digital format by authoring detailed, standardized metadata for their written work product—appellate-level case law, especially. If court systems took full advantage of the digital format, this would result in immediate, identifiable improvements in free and low-cost case law databases. Looking to the future, we can speculate on how court-authored metadata might impact the next generation of “A.I.”-powered research systems. Ultimately, courts should view their metadata responsibilities as an opportunity to “reinforce” the structure of the law itself.”
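
By way of illustration, here is a minimal sketch, in Python, of the kind of standardized record a court might author alongside an opinion. Every field name and value below is a hypothetical assumption, not an actual court or vendor schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch only: these field names are illustrative assumptions,
# not any court's or vendor's actual metadata schema.
@dataclass
class OpinionMetadata:
    citation: str                  # official reporter citation
    court: str                     # issuing court
    decided: str                   # decision date (ISO 8601)
    judges: List[str]              # authoring and joining judges
    disposition: str               # e.g., "affirmed", "reversed and remanded"
    headnote_topics: List[str] = field(default_factory=list)  # court-assigned topics
    cited_cases: List[str] = field(default_factory=list)      # citations relied on

# A record like this could be indexed directly by free and low-cost case law databases.
example = OpinionMetadata(
    citation="123 Xyz. 2d 456",          # invented citation
    court="State Supreme Court",
    decided="2019-06-30",
    judges=["Doe, J."],
    disposition="affirmed",
    headnote_topics=["negligence", "duty of care"],
    cited_cases=["100 Xyz. 2d 1"],
)
```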

“Natural language generation (NLG) is a subset of natural language processing (NLP) that aims to produce natural language from structured data,” wrote Sam Del Rowe. “It can be used in chatbot conversations, but also for various types of content creation, such as summarizing data and generating product descriptions for online shopping. Companies in the space offer various use cases for this type of automated content creation, but the technology requires human oversight—a necessity that is likely to remain in the near future.” For more, see Get Started With Natural Language Content Generation, EContent, July 22, 2019.
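
To make that concrete, here is a minimal sketch of template-based NLG, assuming a hypothetical structured docket record; the field names and wording are illustrative, not any vendor's product.

```python
# Template-based NLG: render a readable sentence from structured data.
# The docket record and wording below are invented for illustration.

def summarize_docket(record: dict) -> str:
    """Fill a plain-language template from a structured docket record."""
    return (
        f"In {record['case_name']}, the {record['court']} "
        f"{record['disposition']} the motion for {record['motion_type']} "
        f"on {record['date']}."
    )

record = {
    "case_name": "Doe v. Roe",
    "court": "District Court",
    "disposition": "granted",
    "motion_type": "summary judgment",
    "date": "May 14, 2019",
}

print(summarize_docket(record))
# -> In Doe v. Roe, the District Court granted the motion for summary judgment on May 14, 2019.
```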

Your litigation analytics tool says your win rate for summary judgment motions in class action employment discrimination cases is ranked the best in your local jurisdiction, according to the database used. Forget the problems with using PACER data for litigation analytics, possible modeling error, or possible bias embedded in the tool. Can you communicate this applied AI output to a client or potential client? Are you creating an “unjustified expectation” that your client or potential client will achieve the same result for your next client matter?

According to the ABA’s Model Rules of Professional Conduct Rule 7.1, you are probably creating an “unjustified expectation.” However, you may be required to use that information under Model Rule 1.1 because that rule creates a duty of technological competence. This tension between Model Rule 7.1 and Model Rule 1.1 is just beginning to play out.

For more, see Roy Strom’s The Algorithm Says You’ll Win the Case. What Do You Say? US Law Week’s Big Law Business column for August 5, 2019. See also Melissa Heelan Stanzione, Courts, Lawyers Must Address AI Ethics, ABA Proposal Says, Bloomberg Law, August 6, 2019.

From the abstract for Clark D. Asay, Artificial Stupidity, 61 William & Mary Law Review (Forthcoming 2020):

Artificial intelligence is everywhere. And yet, the experts tell us, it is not yet actually anywhere. This is so because we are yet to achieve true artificial intelligence, or artificially intelligent systems that are capable of thinking for themselves and adapting to their circumstances. Instead, all the AI hype — and it is constant — concerns rather mundane forms of artificial intelligence, which are confined to performing specific, narrow tasks, and nothing more. The promise of true artificial intelligence thus remains elusive. Artificial stupidity reigns supreme.

What is the best set of policies to achieve true artificial intelligence? Surprisingly, scholars have paid little attention to this question. Scholars have spent considerable time assessing a number of important legal questions relating to artificial intelligence, including privacy, bias, tort, and intellectual property issues. But little effort has been devoted to exploring what set of policies is best suited to helping artificial intelligence developers achieve greater levels of innovation. And examining such issues is not some niche exercise, since artificial intelligence has already affected, or soon will affect, every sector of society. Hence, the question goes to the heart of future technological innovation policy more broadly.

This Article examines this question by exploring how well intellectual property rights promote innovation in artificial intelligence. I focus on intellectual property rights because these are often viewed as the most important piece of United States innovation policy. Overall, I find that intellectual property rights, particularly patents, are ill-suited to promote radical forms of artificial intelligence innovation. And even the intellectual property forms that are a better fit for artificial intelligence innovators, such as trade secrecy, come with problems of their own. In fact, the poor fit of patents in particular is likely to contribute to heavy industry consolidation in the AI field, and heavy consolidation in an industry is typically associated with lower levels of innovation than ideal.

I conclude by arguing, however, that neither strengthening AI patent rights nor looking to other forms of law, such as antitrust, holds much promise in achieving true artificial intelligence. Instead, as with many earlier radical innovations, significant government backing, coupled with an engaged entrepreneurial sector, is at least one key to avoiding enduring artificial stupidity.

From the abstract for Valeri Craigle, Law Libraries Embracing AI (Law Librarianship in the Age of AI, Ellyssa Valenti, Ed., 2019, Forthcoming):

The utilization of AI provides insights for legal clients, future-proofs careers for attorneys and law librarians, and elevates the status of the information suite. AI training in law schools makes students more practice-ready in an increasingly tech-centric legal environment; Access to Justice initiatives are embracing AI’s capabilities to provide guidance to educational resources and legal services for the under-represented. AI’s presence in the legal community is becoming so common that it can no longer be seen as an anomaly, or even cutting edge. Some even argue that its absence in law firms will eventually be akin to malpractice.

This chapter explores some practical uses of AI in legal education and law firms, with a focus on professionals who have gone beyond the role of AI consumers to that of AI developers, data curators and system designers.

The Law Society’s Technology and Law Public Policy Commission was created to explore the role of, and concerns about, the use of algorithms in the justice system. Among the recommendations: the UK should create a ‘national register of algorithms’ used in the criminal justice system, one that would include a record of the datasets used in training. Interesting. Read the report.

In a non-technical article, Roger Chua explains that natural language processing (NLP) is an area of machine learning focused on teaching computers to better understand natural human language. NLP draws on research from AI, but also from linguistics, mathematics, psychology, and other fields. NLP enables computer programs to understand unstructured data, make inferences, and provide context to language, just as a human brain does. For more, see A simple way to explain Natural Language Processing (NLP).

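To give a toy sense of what “understanding unstructured data” can mean in practice, here is a minimal sketch that pulls a little structure out of an invented sentence with regular expressions; real NLP systems rely on statistical or neural models rather than hand-written patterns, so treat this strictly as an illustration.

```python
import re

# Toy example of extracting structure from unstructured legal text.
# Real NLP systems use statistical or neural models, not single regular
# expressions; the sentence below is invented for illustration.

text = "The court in Smith v. Jones granted summary judgment on June 3, 2019."

case = re.search(r"[A-Z][a-z]+ v\. [A-Z][a-z]+", text)   # party names
date = re.search(r"[A-Z][a-z]+ \d{1,2}, \d{4}", text)    # decision date

structured = {
    "case_name": case.group(0) if case else None,
    "date": date.group(0) if date else None,
    "outcome": "granted" if "granted" in text else "unknown",
}
print(structured)
# -> {'case_name': 'Smith v. Jones', 'date': 'June 3, 2019', 'outcome': 'granted'}
```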

From the blurb for Kevin D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge UP, 2017):

The field of artificial intelligence (AI) and the law is on the cusp of a revolution that began with text analytic programs like IBM’s Watson and Debater and the open-source information management architectures on which they are based. Today, new legal applications are beginning to appear and this book – designed to explain computational processes to non-programmers – describes how they will change the practice of law, specifically by connecting computational models of legal reasoning directly with legal text, generating arguments for and against particular outcomes, predicting outcomes and explaining these predictions with reasons that legal professionals will be able to evaluate for themselves. These legal applications will support conceptual legal information retrieval and allow cognitive computing, enabling a collaboration between humans and computers in which each does what it can do best. Anyone interested in how AI is changing the practice of law should read this illuminating work.

From the blurb for Mark Chinen, Law and Autonomous Machines: The Co-evolution of Legal Responsibility and Technology (Edward Elgar Pub, May 31, 2019):

This book sets out a possible trajectory for the co-development of legal responsibility on the one hand and artificial intelligence and the machines and systems driven by it on the other.

As autonomous technologies become more sophisticated it will be harder to attribute harms caused by them to the humans who design or work with them. This will put pressure on legal responsibility and autonomous technologies to co-evolve. Mark Chinen illustrates how these factors strengthen incentives to develop even more advanced systems, which in turn inspire nascent calls to grant legal and moral status to autonomous machines.

This book is a valuable resource for scholars and practitioners of legal doctrine, ethics and autonomous technologies, as well as legislators and policy makers, and engineers and designers who are interested in the broader implications of their work.

Trust is a state of readiness to take a risk in a relationship. Once upon a time most law librarians were predisposed to trust legal information vendors and their products and services. Think of Shepard’s in print, when Shepard’s was the only available citator and its signals were by default the industry standard. Think of computer-assisted legal research in the late 1970s and early 1980s, when Lexis was the only full-text legal search vendor and the degree of risk taken by a searcher was partially controlled by the proper use of Boolean operators.

Today, output from legal information platforms does not always build confidence in the information provided, be it legal search results or citator output, as comparative studies of each by Mart and Hellyer have demonstrated. What about the output we are now being offered through the application of artificial intelligence to legal analytics and predictive technology? As legal information professionals, are we willing to be vulnerable to the actions of our vendors based on some sort of expectation that vendors will provide actionable intelligence important to our user population, irrespective of our ability to monitor or control vendors’ use of artificial intelligence for legal analytics and predictive technology?

Hopefully we are not so naive as to trust our vendors’ applied AI output at face value. But we won’t be given the opportunity to shine a light into the “black box” because of understandable proprietary concerns. What’s needed is a way to identify the impact of model error and bias. One way is to compare similar legal analytic outputs that identify trends and patterns using data points from past case law, win/loss rates, and even a judge’s history, or similar predictive technology outputs that forecast litigation outcomes, much as Mart did for legal search and Hellyer did for citators. At present, however, our legal information providers do not offer AI tools similar enough for comparative studies, and who knows whether they ever will. Early days…
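
By way of illustration only, here is a rough sketch of what such a comparison could look like, assuming two hypothetical tools that return citation lists for the same query; the tool outputs and the overlap measure (simple Jaccard similarity) are my assumptions, not any vendor’s methodology.

```python
# Rough sketch of a comparative check across tools, in the spirit of Mart's
# search study and Hellyer's citator study: how much do two tools' top results
# overlap for the same query? The tool outputs below are invented.

def overlap(results_a: set, results_b: set) -> float:
    """Jaccard similarity between two sets of retrieved citations."""
    if not results_a and not results_b:
        return 1.0
    return len(results_a & results_b) / len(results_a | results_b)

tool_a = {"123 F.3d 456", "234 F.3d 567", "345 F.3d 678"}  # hypothetical Tool A results
tool_b = {"123 F.3d 456", "456 F.3d 789", "345 F.3d 678"}  # hypothetical Tool B results

print(f"Overlap: {overlap(tool_a, tool_b):.0%}")  # -> Overlap: 50%
```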

Until there is a legitimate certification process that validates each individual AI product for the end user who calls up specific applied AI output for legal analytics and predictive technology, is there any reason to assume the risk of using these tools? No, not really, but use them our end users will. Trust but (try to) validate; otherwise the output remains opaque to the end user, and that can lead to illusions of understanding.

From the abstract for Karni Chagal, Am I an Algorithm or a Product? When Products Liability Should Apply to Algorithmic Decision-Makers (Stanford Law & Policy Review, Forthcoming):

Over the years mankind has come to rely increasingly on machines. Technology is ever advancing, and in addition to relinquishing physical and mere computational tasks to machines, algorithms’ self-learning abilities now enable us to entrust machines with professional decisions, for instance, in the fields of law, medicine and accounting.

A growing number of scholars and entities now acknowledge that whenever certain “sophisticated” or “autonomous” decision-making systems cause damage, they should no longer be subject to products liability but deserve different treatment from their “traditional” predecessors. What is it that separates “traditional” algorithms and machines, which for decades have been subject to the traditional products liability legal framework, from what I would call “thinking algorithms,” which seem to warrant their own custom-made treatment? Why have “auto-pilots,” for example, been traditionally treated as “products,” while autonomous vehicles are suddenly perceived as more “human-like” systems that require different treatment? Where is the line between the two kinds of machines drawn?

Scholars who touch on this question have generally referred to the system’s level of autonomy as a classifier between traditional products and systems incompatible with products liability laws (whether autonomy was mentioned expressly or reflected in the specific questions posed). This article, however, argues that a classifier based on autonomy level is not a good one, given its excessive complexity, the vague classification process it dictates, the inconsistent results it might lead to, and the fact that said results mainly shed light on the system’s level of autonomy, but not on its compatibility with products liability laws.

This article therefore proposes a new approach to distinguishing traditional products from “thinking algorithms” for determining whether products liability should apply. Instead of examining the vague concept of “autonomy,” the article analyzes the system’s specific features and examines whether they promote or hinder the rationales behind the products liability legal framework. The article thus offers a novel, practical method for decision-makers wanting to decide when products liability should continue to apply to “sophisticated” systems and when it should not.