The Law Society’s Technology and Law Public Policy Commission was created to explore the role of, and concerns about, the use of algorithms in the justice system. Among its recommendations: the UK should create a ‘national register of algorithms’ used in the criminal justice system, including a record of the datasets used to train them. Interesting. Read the report.

In a non-technical article, Roger Chua explains that natural language processing (NLP) is an area of machine learning focused on teaching computers to better understand natural human language. NLP draws on research from AI, but also from linguistics, mathematics, psychology, and other fields. NLP enables computer programs to understand unstructured data, to make inferences and to provide context to language, just as a human brain does.

For more, see A simple way to explain Natural Language Processing (NLP)
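
To make the idea concrete, here is a minimal sketch of what “understanding unstructured text” looks like in code. It uses the open-source NLTK library purely as an example (Chua’s article does not prescribe any particular toolkit), and the sample sentence and the outputs shown in comments are illustrative.

```python
# Minimal NLP sketch: turn raw, unstructured text into machine-readable structure.
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)  # part-of-speech model

sentence = "The court granted the motion to dismiss."

tokens = nltk.word_tokenize(sentence)   # split the sentence into tokens
tagged = nltk.pos_tag(tokens)           # infer each token's part of speech

print(tokens)   # ['The', 'court', 'granted', 'the', 'motion', 'to', 'dismiss', '.']
print(tagged)   # e.g. [('The', 'DT'), ('court', 'NN'), ('granted', 'VBD'), ...]
```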

From the blurb for Kevin D. Ashley, Artificial Intelligence and Legal Analytics: New Tools for Law Practice in the Digital Age (Cambridge UP, 2017):

The field of artificial intelligence (AI) and the law is on the cusp of a revolution that began with text analytic programs like IBM’s Watson and Debater and the open-source information management architectures on which they are based. Today, new legal applications are beginning to appear and this book – designed to explain computational processes to non-programmers – describes how they will change the practice of law, specifically by connecting computational models of legal reasoning directly with legal text, generating arguments for and against particular outcomes, predicting outcomes and explaining these predictions with reasons that legal professionals will be able to evaluate for themselves. These legal applications will support conceptual legal information retrieval and allow cognitive computing, enabling a collaboration between humans and computers in which each does what it can do best. Anyone interested in how AI is changing the practice of law should read this illuminating work.

From the blurb for Mark Chinen, Law and Autonomous Machines: The Co-evolution of Legal Responsibility and Technology (Edward Elgar Pub, May 31, 2019):

This book sets out a possible trajectory for the co-development of legal responsibility on the one hand and artificial intelligence and the machines and systems driven by it on the other.

As autonomous technologies become more sophisticated it will be harder to attribute harms caused by them to the humans who design or work with them. This will put pressure on legal responsibility and autonomous technologies to co-evolve. Mark Chinen illustrates how these factors strengthen incentives to develop even more advanced systems, which in turn inspire nascent calls to grant legal and moral status to autonomous machines.

This book is a valuable resource for scholars and practitioners of legal doctrine, ethics and autonomous technologies, as well as legislators and policy makers, and engineers and designers who are interested in the broader implications of their work.

Trust is a state of readiness to take a risk in a relationship. Once upon a time, most law librarians were predisposed to trust legal information vendors and their products and services. Think of Shepard’s in print, when Shepard’s was the only available citator and its signals were by default the industry standard. Think of computer-assisted legal research in the late 1970s and early 1980s, when Lexis was the only full-text legal search vendor and the degree of risk taken by a searcher was partially controlled by properly using Boolean operators.

Today, output from legal information platforms does not always build confidence in the information provided, be it legal search results or citator output, as comparative studies of each by Mart and Hellyer have demonstrated. What about the output we are now being offered through the application of artificial intelligence to legal analytics and predictive technology? As legal information professionals, are we willing to be vulnerable to the actions of our vendors on the expectation that they will provide actionable intelligence important to our user population, irrespective of our ability to monitor or control their use of artificial intelligence for legal analytics and predictive technology?

Hopefully we are not so naive as to trust our vendors’ applied AI output at face value. But we won’t be given the opportunity to shine a light into the “black box” because of understandable proprietary concerns. What’s needed is a way to identify the impact of model error and bias. One way is to compare similar legal analytics outputs, which identify trends and patterns using data points from past case law, win/loss rates and even a judge’s history, or similar predictive technology outputs, which forecast litigation outcomes, much as Mart did for legal search and Hellyer did for citators. At the present time, however, our legal information providers do not offer AI tools similar enough for comparative studies, and who knows if they ever will. Early days…
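
For a sense of what such a comparative study might look like in practice, here is a minimal sketch with invented vendor predictions: it simply measures how often two predictive tools agree on the forecast outcome of the same cases. The case identifiers, outcomes and threshold are assumptions for illustration, not real products or results.

```python
# Hypothetical comparison of two vendors' litigation-outcome predictions.
# All data is invented; a real study (as Mart did for search results and
# Hellyer did for citators) would use actual platform output for the same cases.

vendor_a = {"case_001": "plaintiff", "case_002": "defendant", "case_003": "plaintiff"}
vendor_b = {"case_001": "plaintiff", "case_002": "plaintiff", "case_003": "plaintiff"}

shared_cases = vendor_a.keys() & vendor_b.keys()
agreements = sum(vendor_a[c] == vendor_b[c] for c in shared_cases)
agreement_rate = agreements / len(shared_cases)

print(f"Compared {len(shared_cases)} cases; agreement rate: {agreement_rate:.0%}")
# A persistently low agreement rate would signal that model error or bias in
# at least one tool is shaping the "actionable intelligence" it delivers.
```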

Until there is a legitimate certification process that validates each individual AI product to the end user when the end user calls up specific applied AI output for legal analytics or predictive technology, is there any reason to assume the risk of using these tools? No, not really, but use them our end users will. Trust, but (try to) validate; otherwise the output remains opaque to the end user, and that can lead to illusions of understanding.

From the abstract for Karni Chagal, Am I an Algorithm or a Product? When Products Liability Should Apply to Algorithmic Decision-Makers (Stanford Law & Policy Review, Forthcoming):

Over the years mankind has come to rely increasingly on machines. Technology is ever advancing, and in addition to relinquishing physical and mere computational tasks to machines, algorithms’ self-learning abilities now enable us to entrust machines with professional decisions, for instance, in the fields of law, medicine and accounting.

A growing number of scholars and entities now acknowledge that whenever certain “sophisticated” or “autonomous” decision-making systems cause damage, they should no longer be subject to products liability but deserve different treatment from their “traditional” predecessors. What is it that separates “traditional” algorithms and machines, which for decades have been subject to the traditional products liability legal framework, from what I would call “thinking algorithms,” which seem to warrant their own custom-made treatment? Why have “auto-pilots,” for example, been traditionally treated as “products,” while autonomous vehicles are suddenly perceived as more “human-like” systems that require different treatment? Where is the line between machines drawn?

Scholars who touch on this question have generally referred to the system’s level of autonomy as a classifier between traditional products and systems incompatible with products liability laws (whether autonomy was mentioned expressly, or reflected in the specific questions posed). This article, however, argues that a classifier based on autonomy level is not a good one, given its excessive complexity, the vague classification process it dictates, the inconsistent results it might lead to, and the fact that said results mainly shed light on the system’s level of autonomy, but not on its compatibility with products liability laws.

This article therefore proposes a new approach to distinguishing traditional products from “thinking algorithms” for determining whether products liability should apply. Instead of examining the vague concept of “autonomy,” the article analyzes the system’s specific features and examines whether they promote or hinder the rationales behind the products liability legal framework. The article thus offers a novel, practical method for decision-makers wanting to decide when products liability should continue to apply to “sophisticated” systems and when it should not.

Based on Edgar Alan Rayo’s assessment of companies’ offerings in the legal field, current applications of AI appear to fall into six major categories:

  1. Due diligence – Litigators perform due diligence with the help of AI tools to uncover background information. We’ve decided to include contract review, legal research and electronic discovery in this section.
  2. Prediction technology – AI software generates results that forecast litigation outcomes (see the sketch below).
  3. Legal analytics – Lawyers can use data points from past case law, win/loss rates and a judge’s history to identify trends and patterns.
  4. Document automation – Law firms use software templates to create filled-out documents based on data input.
  5. Intellectual property – AI tools guide lawyers in analyzing large IP portfolios and drawing insights from the content.
  6. Electronic billing – Lawyers’ billable hours are computed automatically.

Rayo explores the major areas of current AI applications in law, individually and in depth here.
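
To make the “prediction technology” and “legal analytics” categories concrete (see item 2 above), here is a minimal, hypothetical sketch: a logistic regression trained on a handful of invented case data points (a judge’s historical win rate, damages sought, case type) to estimate the probability of a plaintiff win. The features, numbers and model choice are illustrative assumptions, not how any vendor actually builds these tools.

```python
# Hypothetical "prediction technology": forecast a litigation outcome from a
# few invented case data points. Features, values and model are illustrative
# assumptions, not any vendor's actual method.
from sklearn.linear_model import LogisticRegression

# Each row: [judge's historical plaintiff win rate, log10(damages sought),
#            1 if contract dispute else 0]
X_train = [
    [0.62, 5.1, 1],
    [0.38, 6.3, 0],
    [0.71, 4.8, 1],
    [0.29, 6.9, 0],
    [0.55, 5.5, 1],
    [0.44, 6.0, 0],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = plaintiff prevailed, 0 = defendant prevailed

model = LogisticRegression().fit(X_train, y_train)

new_case = [[0.58, 5.4, 1]]   # a hypothetical pending matter
probability = model.predict_proba(new_case)[0][1]
print(f"Estimated probability the plaintiff prevails: {probability:.2f}")
```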

From the abstract for Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice (Stanford Technology Law Review, Forthcoming):

Artificial intelligence, or AI, promises to assist, modify, and replace human decision-making, including in court. AI already supports many aspects of how judges decide cases, and the prospect of “robot judges” suddenly seems plausible—even imminent. This Article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large. The impact is likely to be greatest in areas, including criminal justice and appellate decision-making, where “equitable justice,” or discretionary moral judgment, is frequently considered paramount. By offering efficiency and at least an appearance of impartiality, AI adjudication will both foster and benefit from a turn toward “codified justice,” an adjudicatory paradigm that favors standardization above discretion. Further, AI adjudication will generate a range of concerns relating to its tendency to make the legal system more incomprehensible, data-based, alienating, and disillusioning. And potential responses, such as crafting a division of labor between human and AI adjudicators, each pose their own challenges. The single most promising response is for the government to play a greater role in structuring the emerging market for AI justice, but auspicious reform proposals would borrow several interrelated approaches. Similar dynamics will likely extend to other aspects of government, such that choices about how to incorporate AI in the judiciary will inform the future path of AI development more broadly.

Aryan Pegwar asks and answers the post title’s question. “Today, modern technologies like artificial intelligence, machine learning and data science have become buzzwords. Everybody talks about them but no one fully understands them. They seem very complex to a layman. People often get confused by words like AI, ML and data science. In this article, we explain these technologies in simple words so that you can easily understand the difference between them.” Details here.

Related:

TechRepublic reports that Microsoft has announced that Word Online will incorporate a feature known as Ideas this fall. Backed by artificial intelligence and machine learning courtesy of Microsoft Graph, Ideas will suggest ways to help you enhance your writing and create better documents. Ideas will also show you how to better organize and structure your documents by suggesting tables, styles, and other features already available in Word.

From the abstract for Daniel L. Chen, Machine Learning and the Rule of Law, in Computational Analysis of Law, Santa Fe Institute Press (M. Livermore and D. Rockmore, eds.), Forthcoming:

“Predictive judicial analytics holds the promise of increasing the fairness of law. Much empirical work observes inconsistencies in judicial behavior. By predicting judicial decisions—with more or less accuracy depending on judicial attributes or case characteristics—machine learning offers an approach to detecting when judges are most likely to allow extra-legal biases to influence their decision making. In particular, low predictive accuracy may identify cases of judicial “indifference,” where case characteristics (interacting with judicial attributes) do not strongly dispose a judge in favor of one or another outcome. In such cases, biases may hold greater sway, implicating the fairness of the legal system.”
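
As a rough sketch of the mechanism Chen describes, the fragment below (with entirely invented predictions and outcomes) computes a per-judge predictive accuracy and flags judges whose decisions the model predicts poorly; on Chen’s account, those are the cases where extra-legal bias may have more room to operate.

```python
# Invented records of (judge, model prediction, actual outcome). Judges whose
# decisions the model predicts poorly may be deciding cases where legal factors
# do not strongly dispose them toward either outcome.
from collections import defaultdict

records = [
    ("Judge A", "grant", "grant"), ("Judge A", "deny", "deny"),
    ("Judge A", "grant", "grant"), ("Judge A", "deny", "deny"),
    ("Judge B", "grant", "deny"),  ("Judge B", "deny", "grant"),
    ("Judge B", "grant", "grant"), ("Judge B", "deny", "grant"),
]

hits, totals = defaultdict(int), defaultdict(int)
for judge, predicted, actual in records:
    totals[judge] += 1
    hits[judge] += predicted == actual

for judge, n in totals.items():
    accuracy = hits[judge] / n
    flag = "  <- low predictability; possible 'indifference'" if accuracy < 0.6 else ""
    print(f"{judge}: predictive accuracy {accuracy:.0%}{flag}")
```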

With continued advances in AI, machine learning and legal analytics anticipated, we can expect legal information platforms to be supplanted by legal intelligence platforms in the not-too-distant future. But what would a legal intelligence (or “smart law”) platform look like? Well, I can’t describe a prototypical legal intelligence platform in any technical detail. But it will exist at the point where expert analysis and text- and data-driven features for core legal search converge for all market segments. I do, however, see what some “smart law” platform elements would be when looking at what Fastcase and Casetext are offering right now.

In my opinion, the best contemporary way to picture a legal intelligence platform is to imagine that Fastcase and Casetext were one company. The imagined vendor would offer, in integrated fashion, Fastcase’s and Casetext’s extensive collections of primary and secondary resources, including legal news and contemporary analysis from the law blogosphere; Fastcase’s search engine algorithms for keyword searching; Casetext’s CARA for contextual searching; Casetext’s SmartCite; Fastcase’s Docket Alarm; Fastcase BK; and Fastcase’s install base of some 70-75% of US attorneys, all in the context of the industry’s most transparent pricing model, which both Fastcase and Casetext have already adopted.

Obviously, pricing models are not an essential element of a legal intelligence platform. But wouldn’t most potential “smart law” customers prefer transparent pricing? That won’t happen if WEXIS deploys the first legal intelligence platforms. Neither Fastcase nor Casetext (nor Thomson Reuters, LexisNexis, BBNA, or WK) has a “smart law” platform right now. Who will be the first? Perhaps one possibility is hiding in plain sight.

From the abstract for Aziz Z. Huq, A Right to a Human Decision, Virginia Law Review, Vol. 105:

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision-makers. From prison sentences to loan approvals to college applications, corporate and state actors increasingly lean on machine learning tools (a subset of artificial intelligence) to allocate goods and to assign coercion. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that sacrifice important individual interests. An emerging legal response to such worries is a right to a human decision. European law has already embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is already moving in the same direction. But no jurisdiction has defined with precision what that right entails, or furnished a clear justification for its creation.

This Article investigates the legal possibilities of a right to a human decision. I first define the conditions of technological plausibility for that right as applied against state action. To understand its technological predicates, I specify the margins along which machine decisions are distinct from human ones. Such technological contextualization enables a nuanced exploration of why, or indeed whether, the gaps that do separate human and machine decisions might have normative import. Based on this technological accounting, I then analyze the normative stakes of a right to a human decision. I consider three potential normative justifications: (a) an appeal to individual interests in participation and reason-giving; (b) worries about the insufficiently reasoned or individuated quality of state action; and (c) arguments based on negative externalities. A careful analysis of these three grounds suggests that there is no general justification for adopting a right to a human decision by the state. Normative concerns about insufficiently reasoned or accurate decisions, which have a particularly powerful hold on the legal imagination, are best addressed in other ways. Similarly, concerns about the ways that algorithmic tools create asymmetries of social power are not parried by a right to a human decision. Indeed, rather than firmly supporting a right to a human decision, available evidence tentatively points toward a countervailing ‘right to a well-calibrated machine decision’ as ultimately more normatively well-grounded.

From the abstract for Ed Walters, The Model Rules of Autonomous Conduct: Ethical Responsibilities of Lawyers and Artificial Intelligence, Georgia State University Law Review, Forthcoming:

Lawyers are increasingly using software tools and artificial intelligence to augment their provision of legal services. This paper reviews the professional responsibilities of those lawyers under the Model Rules of Professional Conduct and previews how the rules might apply to AI software not yet developed but just on the horizon. Although lawyers frequently use their professional responsibility as a brake on innovation, the Model Rules in many cases actually require them to adopt new methods of delivering legal services. The paper also surveys ways that the Model Rules might be changed to protect consumers in the near future as AI tools grow in scope.

From the European Commission press release:

Following the publication of the draft ethics guidelines in December 2018 to which more than 500 comments were received, the independent expert group presents today their ethics guidelines for trustworthy artificial intelligence.

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

In summer 2019, the Commission will launch a pilot phase involving a wide range of stakeholders. Already today, companies, public administrations and organisations can sign up to the European AI Alliance and receive a notification when the pilot starts.

Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps.

H/T InfoDocket.

From the abstract for Daniel J. Gervais, The Machine As Author (Iowa Law Review, Vol. 105, 2019):

The use of Artificial Intelligence (AI) machines using deep learning neural networks to create material that facially looks like it should be protected by copyright is growing exponentially. From articles in national news media to music, film, poetry and painting, AI machines create material that has economic value and that competes with productions of human authors. The Article reviews both normative and doctrinal arguments for and against the protection by copyright of literary and artistic productions made by AI machines. The Article finds that the arguments in favor of protection are flawed and unconvincing and that a proper analysis of the history, purpose, and major doctrines of copyright law all lead to the conclusion that productions that do not result from human creative choices belong to the public domain. The Article proposes a test to determine which productions should be protected, including in case of collaboration between human and machine. Finally, the Article applies the proposed test to three specific fact patterns to illustrate its application.

Lawrence B. Solum’s Artificially Intelligent Law (Mar. 11, 2019) “explores a series of thought experiments that postulate the existence of “artificially intelligent law.” An artificially-intelligent legal system is defined as one with three functional capacities: 1. The system has the capacity to generate legal norms. 2. The system has the capacity to apply the legal norms that it generates. 3. The system has the capacity to use deep learning to modify the legal norms that it generates. The paper then considers the question whether such a system would be desirable as a matter of legitimacy and justice. The core idea of the paper is that the key to the evaluation of artificially intelligent law is to focus on the functional capacities of the system in comparison to comparable human systems, such as regulatory agencies.”

Here’s the abstract for Robert Blacksberg’s Envisioning the AI-Enabled Legal Team of the Future, 2018 ctrl ALT del conference:

Artificial intelligence (AI) can be found again in the front row of technology investment and development. Perhaps better standing for “augmented intelligence,” AI tools that employ machine learning, natural language processing and expert systems, among others, with the power to handle big data, are earning a place at the legal team’s table.

Now we must bring to the table the people with the necessary skills and understanding to incorporate AI in legal practice. They need to participate in the organization and delivery of legal services from the beginning of engagements and become active members of project teams. There emerged from the ALT conference a vision of a less hierarchical team, with a broader set of skills and a greater degree of client involvement than the traditional phalanx of rainmaker partner, engagement partner, associate and support staff.