Based on Edgar Alan Rayo’s assessment of companies’ offerings in the legal field, current applications of AI appear to fall into six major categories:

  1. Due diligence – Litigators perform due diligence with the help of AI tools to uncover background information. We’ve decided to include contract review, legal research and electronic discovery in this section.
  2. Prediction technology – AI software generates results that forecast litigation outcomes.
  3. Legal analytics – Lawyers can mine data points from past case law, win/loss rates and a judge’s history for trends and patterns.
  4. Document automation – Law firms use software templates to create filled-out documents based on data input.
  5. Intellectual property – AI tools guide lawyers in analyzing large IP portfolios and drawing insights from the content.
  6. Electronic billing – Lawyers’ billable hours are computed automatically.

Rayo explores each of these major areas of current AI application in law individually and in depth here.

From the abstract for Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice (Stanford Technology Law Review, Forthcoming):

Artificial intelligence, or AI, promises to assist, modify, and replace human decision-making, including in court. AI already supports many aspects of how judges decide cases, and the prospect of “robot judges” suddenly seems plausible—even imminent. This Article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large. The impact is likely to be greatest in areas, including criminal justice and appellate decision-making, where “equitable justice,” or discretionary moral judgment, is frequently considered paramount. By offering efficiency and at least an appearance of impartiality, AI adjudication will both foster and benefit from a turn toward “codified justice,” an adjudicatory paradigm that favors standardization above discretion. Further, AI adjudication will generate a range of concerns relating to its tendency to make the legal system more incomprehensible, data-based, alienating, and disillusioning. And potential responses, such as crafting a division of labor between human and AI adjudicators, each pose their own challenges. The single most promising response is for the government to play a greater role in structuring the emerging market for AI justice, but auspicious reform proposals would borrow several interrelated approaches. Similar dynamics will likely extend to other aspects of government, such that choices about how to incorporate AI in the judiciary will inform the future path of AI development more broadly.

Aryan Pegwar asks and answers the post title’s question. “Today, modern technologies like artificial intelligence, machine learning and data science have become buzzwords. Everybody talks about them, but no one fully understands them. They seem very complex to a layman. People often get confused by words like AI, ML and data science. In this article, we explain these technologies in simple words so that you can easily understand the difference between them.” Details here.

Related:

TechRepublic reports that Microsoft has announced that Word Online will incorporate a feature known as Ideas this fall. Backed by artificial intelligence and machine learning courtesy of Microsoft Graph, Ideas will suggest ways to help you enhance your writing and create better documents. Ideas will also show you how to better organize and structure your documents by suggesting tables, styles, and other features already available in Word.

From the abstract for Daniel L. Chen, Machine Learning and the Rule of Law, in Computational Analysis of Law (M. Livermore & D. Rockmore eds., Santa Fe Institute Press, Forthcoming):

“Predictive judicial analytics holds the promise of increasing the fairness of law. Much empirical work observes inconsistencies in judicial behavior. By predicting judicial decisions—with more or less accuracy depending on judicial attributes or case characteristics—machine learning offers an approach to detecting when judges are most likely to allow extralegal biases to influence their decision making. In particular, low predictive accuracy may identify cases of judicial “indifference,” where case characteristics (interacting with judicial attributes) do not strongly dispose a judge in favor of one or another outcome. In such cases, biases may hold greater sway, implicating the fairness of the legal system.”
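
For readers curious about the mechanics behind that abstract, here is a minimal, hypothetical sketch of the general approach it describes: train a model to predict case outcomes from case and judge attributes, then treat cases where the predicted probability hovers near 0.5 as candidates for the “indifference” Chen discusses. The features, data, and threshold below are invented for illustration; this is not Chen’s actual method or code.

```python
# Minimal sketch (synthetic data): predict outcomes from case/judge features,
# then flag cases where the model is least certain as candidate instances of
# judicial "indifference".
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Hypothetical numeric features: case type, claim size, judge seniority, etc.
X = rng.normal(size=(n, 6))
# Synthetic outcomes loosely driven by two of the features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Predicted probability of one outcome; values near 0.5 mean the observable
# case characteristics barely dispose the model either way.
proba = model.predict_proba(X_test)[:, 1]
indifferent = np.abs(proba - 0.5) < 0.05  # low-confidence band

print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
print(f"Cases flagged as potential 'indifference': {indifferent.sum()} of {len(proba)}")
```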

With continued advances in AI, machine learning and legal analytics anticipated, we can expect legal information platforms to be supplanted by legal intelligence platforms in the not-too-distant future. But what would a legal intelligence (or “smart law”) platform look like? I can’t describe a prototypical legal intelligence platform in any technical detail, but it will emerge where expert analysis converges with text- and data-driven features for core legal search across all market segments. I can, however, see what some “smart law” platform elements would be by looking at what Fastcase and Casetext are offering right now.

In my opinion, the best contemporary perspective on what a legal intelligence platform would be is to imagine that Fastcase and Casetext were one company. The imagined vendor would offer, in integrated fashion, Fastcase’s and Casetext’s extensive collections of primary and secondary resources (including legal news and contemporary analysis from the law blogosphere), Fastcase’s search engine algorithms for keyword searching, Casetext’s CLARA for contextual searching, Casetext’s SmartCite, Fastcase’s Docket Alarm, Fastcase BK, and Fastcase’s install base of some 70-75% of US attorneys, all under the industry’s most transparent pricing model, which both Fastcase and Casetext have already adopted.

Obviously, pricing models are not an essential element of a legal intelligence platform. But wouldn’t most potential “smart law” customers prefer transparent pricing? That won’t happen if WEXIS deploys the first legal intelligence platforms. Neither Fastcase nor Casetext (nor Thomson Reuters, LexisNexis, BBNA, or WK) has a “smart law” platform right now. Who will be the first? Perhaps one possibility is hiding in plain sight.

From the abstract for Aziz Z. Huq, A Right to a Human Decision, Virginia Law Review, Vol. 105:

Recent advances in computational technologies have spurred anxiety about a shift of power from human to machine decision-makers. From prison sentences to loan approvals to college applications, corporate and state actors increasingly lean on machine learning tools (a subset of artificial intelligence) to allocate goods and to assign coercion. Machine-learning tools are perceived to be eclipsing, even extinguishing, human agency in ways that sacrifice important individual interests. An emerging legal response to such worries is a right to a human decision. European law has already embraced the idea in the General Data Protection Regulation. American law, especially in the criminal justice domain, is already moving in the same direction. But no jurisdiction has defined with precision what that right entails, or furnished a clear justification for its creation.

This Article investigates the legal possibilities of a right to a human decision. I first define the conditions of technological plausibility for that right as applied against state action. To understand its technological predicates, I specify the margins along which machine decisions are distinct from human ones. Such technological contextualization enables a nuanced exploration of why, or indeed whether, the gaps that do separate human and machine decisions might have normative import. Based on this technological accounting, I then analyze the normative stakes of a right to a human decision. I consider three potential normative justifications: (a) an appeal to individual interests in participation and reason-giving; (b) worries about the insufficiently reasoned or individuated quality of state action; and (c) arguments based on negative externalities. A careful analysis of these three grounds suggests that there is no general justification for adopting a right to a human decision by the state. Normative concerns about insufficiently reasoned or accurate decisions, which have a particularly powerful hold on the legal imagination, are best addressed in other ways. Similarly, concerns about the ways that algorithmic tools create asymmetries of social power are not parried by a right to a human decision. Indeed, rather than firmly supporting a right to a human decision, available evidence tentatively points toward a countervailing ‘right to a well-calibrated machine decision’ as ultimately more normatively well-grounded.

From the abstract for Ed Walters, The Model Rules of Autonomous Conduct: Ethical Responsibilities of Lawyers and Artificial Intelligence, Georgia State University Law Review, Forthcoming:

Lawyers are increasingly using software tools and artificial intelligence to augment their provision of legal services. This paper reviews the professional responsibilities of those lawyers under the Model Rules of Professional Conduct and previews how the rules might apply to AI software not yet developed but just on the horizon. Although lawyers frequently use their professional responsibility as a brake on innovation, the Model Rules in many cases actually require them to adopt new methods of delivering legal services. The paper also surveys ways that the Model Rules might be changed to protect consumers in the near future as AI tools grow in scope.

From the European Commission press release:

Following the publication of the draft ethics guidelines in December 2018, on which more than 500 comments were received, the independent expert group presents today their ethics guidelines for trustworthy artificial intelligence.

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

In summer 2019, the Commission will launch a pilot phase involving a wide range of stakeholders. Already today, companies, public administrations and organisations can sign up to the European AI Alliance and receive a notification when the pilot starts.

Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps.

H/T InfoDocket.

From the abstract for Daniel J. Gervais, The Machine As Author (Iowa Law Review, Vol. 105, 2019):

The use of Artificial Intelligence (AI) machines using deep learning neural networks to create material that facially looks like it should be protected by copyright is growing exponentially. From articles in national news media to music, film, poetry and painting, AI machines create material that has economic value and that competes with productions of human authors. The Article reviews both normative and doctrinal arguments for and against the protection by copyright of literary and artistic productions made by AI machines. The Article finds that the arguments in favor of protection are flawed and unconvincing and that a proper analysis of the history, purpose, and major doctrines of copyright law all lead to the conclusion that productions that do not result from human creative choices belong to the public domain. The Article proposes a test to determine which productions should be protected, including in case of collaboration between human and machine. Finally, the Article applies the proposed test to three specific fact patterns to illustrate its application.

Lawrence B. Solum’s Artificially Intelligent Law (Mar. 11, 2019) “explores a series of thought experiments that postulate the existence of ‘artificially intelligent law.’ An artificially-intelligent legal system is defined as one with three functional capacities: 1. The system has the capacity to generate legal norms. 2. The system has the capacity to apply the legal norms that it generates. 3. The system has the capacity to use deep learning to modify the legal norms that it generates. The paper then considers the question whether such a system would be desirable as a matter of legitimacy and justice. The core idea of the paper is that the key to the evaluation of artificially intelligent law is to focus on the functional capacities of the system in comparison to comparable human systems, such as regulatory agencies.”

Here’s the abstract for Robert Blacksberg’s Envisioning the AI-Enabled Legal Team of the Future, 2018 ctrl ALT del conference:

Artificial intelligence (AI) can be found again in the front row of technology investment and development. Perhaps better standing for “augmented intelligence,” AI tools that employ machine learning, natural language processing and expert systems, among others, with the power to handle big data, are earning a place at the legal team’s table.

Now we must bring to the table the people with the necessary skills and understanding to incorporate AI in legal practice. They need to participate in the organization and delivery of legal services from the beginning of engagements and become active members of project teams. There emerged from the ALT conference a vision of a less hierarchical team, with a broader set of skills and a greater degree of client involvement than the traditional phalanx of rainmaker partner, engagement partner, associate and support staff.

Politico is reporting that “powerful companies such as LexisNexis have begun hoovering up the data from insurance claims, digital health records, housing records, and even information about a patient’s friends, family and roommates, without telling the patient they are accessing the information, and creating risk scores for health care providers and insurers.” The risk score is the product of confidential algorithms. While the data collection is aimed at helping doctors and insurers make more informed decisions on prescribing opioids, it could also lead to blacklisting of some patients and keep them from getting the drugs they need, according to patient advocates. Details here.

From the abstract for Sonia Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L. REV. 54 (2019):

In this Article, I explore the impending conflict between the protection of civil rights and artificial intelligence (AI). While both areas of law have amassed rich and well-developed areas of scholarly work and doctrinal support, a growing body of scholars are interrogating the intersection between them. This Article argues that the issues surrounding algorithmic accountability demonstrate a deeper, more structural tension within a new generation of disputes regarding law and technology. As I argue, the true promise of AI does not lie in the information we reveal to one another, but rather in the questions it raises about the interaction of technology, property, and civil rights.

For this reason, I argue that we are looking in the wrong place if we look only to the state to address issues of algorithmic accountability. Instead, we must turn to other ways to ensure more transparency and accountability that stem from private industry, rather than public regulation. The issue of algorithmic bias represents a crucial new world of civil rights concerns, one that is distinct in nature from the ones that preceded it. Since we are in a world where the activities of private corporations, rather than the state, are raising concerns about privacy, due process, and discrimination, we must focus on the role of private corporations in addressing the issue. Towards this end, I discuss a variety of tools to help eliminate the opacity of AI, including codes of conduct, impact statements, and whistleblower protection, which I argue carries the potential to encourage greater endogeneity in civil rights enforcement. Ultimately, by examining the relationship between private industry and civil rights, we can perhaps develop a new generation of forms of accountability in the process.

The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems was launched on May 16, 2018 by Human Rights Watch and a coalition of rights and technology groups in a landmark statement on human rights standards for machine learning.

Known as the Toronto Declaration, the statement calls on governments and companies to ensure that machine learning applications respect the principles of equality and non-discrimination. The document articulates the human rights norms that the public and private sector should meet to ensure that algorithms used in a wide array of fields – from policing and criminal justice to employment and education – are applied equally and fairly, and that those who believe their rights have been violated have a meaningful avenue to redress.

Michael Murphy, Just and Speedy: On Civil Discovery Sanctions for Luddite Lawyers, 25 George Mason Law Review 36 (2017) presents a theoretical model by which a judge could impose civil sanctions on an attorney – relying in part on Rule 1 of the Federal Rules of Civil Procedure – for that attorney’s failure to utilize time- and expense-saving technology. From the abstract:

Rule 1 now charges all participants in the legal system to ensure the “just, speedy and inexpensive” resolution of disputes. In today’s litigation environment, a lawyer managing a case in discovery needs robust technological competence to meet that charge. However, the legal industry is slow to adopt technology, favoring “tried and true” methods over efficiency. This conflict is evident in data showing clients’ and judges’ frustration with the lack of technological competency among the Bar, especially as it pertains to electronic discovery. Such frustration has led judges to publicly scold luddite attorneys, and has led state bar associations to pass anti-luddite ethical rules. Sanctions for “luddite” attorneys are an extreme, but theoretically possible, amplification of that normative movement. Sanctions leveraging Rule 1 require a close reading of the revised rule, and challenge the notion of Rule 1 as a “guide” to the Rules, but a case can be made for such sanctions based on the Rule’s affirmative charge.

The article briefly explores two examples of conduct that could warrant such sanctions in rare scenarios: (1) the failure of an attorney to utilize machine intelligence and concept analytics in the review of information for production, and (2) the failure of an attorney to produce documents in accessible electronic format.

The article concludes by suggesting that well-publicized sanctions for “luddite” attorneys may break through the traditional barriers that limit innovation in the legal industry.
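
The “machine intelligence and concept analytics” referenced above is most commonly implemented as technology-assisted review (TAR): attorneys code a small seed set of documents, a text classifier learns from it, and the remaining collection is prioritized by predicted responsiveness. Here is a minimal, hypothetical sketch of that workflow; the documents and labels are invented, and this is not any vendor’s product or the article’s own example.

```python
# Minimal TAR sketch (toy data): attorneys code a small seed set as
# responsive / not responsive, a text classifier scores the unreviewed
# documents, and review is prioritized by predicted responsiveness.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

seed_docs = [
    "merger agreement draft with indemnification terms",
    "lunch schedule for the quarterly offsite",
    "email thread re breach of the supply contract",
    "holiday party venue options",
]
seed_labels = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

unreviewed_docs = [
    "counsel's notes on the contract dispute and damages",
    "parking validation instructions for visitors",
]

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
classifier = LogisticRegression().fit(X_seed, seed_labels)

# Score the unreviewed collection; high-scoring documents go to reviewers first.
scores = classifier.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in sorted(zip(unreviewed_docs, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

In practice such systems iterate: newly reviewed documents are fed back as additional training data until the classifier’s rankings stabilize.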

Cass R. Sunstein has posted Algorithms, Correcting Biases on SSRN. Here is the abstract:

A great deal of theoretical work explores the possibility that algorithms may be biased in one or another respect. But for purposes of law and policy, some of the most important empirical research finds exactly the opposite. In the context of bail decisions, an algorithm designed to predict flight risk does much better than human judges, in large part because the latter place an excessive emphasis on the current offense. Current Offense Bias, as we might call it, is best seen as a cousin of “availability bias,” a well-known source of mistaken probability judgments. The broader lesson is that well-designed algorithms should be able to avoid cognitive biases of many kinds. Existing research on bail decisions also casts a new light on how to think about the risk that algorithms will discriminate on the basis of race (or other factors). Algorithms can easily be designed so as to avoid taking account of race (or other factors). They can also be constrained so as to produce whatever kind of racial balance is sought, and thus to reveal tradeoffs among various social values.
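
To make the abstract’s design point concrete, here is a minimal, hypothetical sketch of the two choices it mentions: a risk model that simply never receives the protected attribute as an input, plus a basic audit of how predicted risk is distributed across groups. The data is synthetic and the features are invented; this is not the bail algorithm studied in the research Sunstein cites.

```python
# Minimal sketch (synthetic data): a flight-risk model trained only on
# permissible features, with a simple cross-group audit of predicted risk.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Hypothetical defendant records.
prior_arrests = rng.poisson(2, n)
failed_appearances = rng.poisson(1, n)
current_offense_severity = rng.integers(1, 5, n)
group = rng.integers(0, 2, n)  # protected attribute (e.g., race), never used as a feature

# Synthetic "true" flight risk, unrelated to the protected attribute here.
flight = (0.4 * prior_arrests + 0.8 * failed_appearances
          + rng.normal(scale=1.5, size=n) > 2.5).astype(int)

# The model is trained only on permissible features; the protected attribute
# is excluded by design.
X = np.column_stack([prior_arrests, failed_appearances, current_offense_severity])
model = LogisticRegression().fit(X, flight)
risk = model.predict_proba(X)[:, 1]

# Simple audit: compare average predicted risk across groups to surface any
# imbalance that a constraint or re-weighting step might then address.
for g in (0, 1):
    print(f"Group {g}: mean predicted risk = {risk[group == g].mean():.3f}")
```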