An Artificial Lawyer post, The Third Wave of Legal AI by Kripa Rajshekhar, founder of the legal AI company Metonymy Labs, has three goals: (1) introduce the Third Wave of AI, (2) outline, in broad strokes, what this means for AI and law, and (3) illustrate the path forward with a specific application of the approach: Metonymy Labs’ work to augment diligence with AI. Interesting. — Joe
According to John Launchbury, director of DARPA’s Information Innovation Office, the development of artificial intelligence is progressing in three waves: handcrafted knowledge, statistical learning and contextual adaptation. In the below video, Launchbury explains his theory. From the YouTube description:
John Launchbury … attempts to demystify AI: what it can do, what it can’t do, and where it is headed. Through a discussion of the “three waves of AI” and the capabilities required for AI to reach its full potential, John provides analytical context to help understand the roles AI already has played, does play now, and could play in the future.
The video is a companion to Launchbury’s slide deck, A DARPA Perspective on Artificial Intelligence. Recommended. — Joe
In Learning Analytics and the Academic Library: Professional Ethics Commitments at a Crossroad, College & Research Libraries, Forthcoming, Kyle Jones and Dorothea Salo discuss learning analytics and the ways academic libraries are beginning to participate in wider institutional learning analytics initiatives. The authors address how learning analytics implicates professional commitments to promote intellectual freedom; protect patron privacy and confidentiality; and balance intellectual property interests between library users, their institution, and content creators and vendors. From the article’s conclusion:
Though pursuing LA [learning analytics] may lead to good outcomes for students and their institutions, higher education and the library profession still face an ethical crossroads. LA practices present significant conflicts with the ALA’s Code of Ethics with respect to intellectual privacy, intellectual freedom, and intellectual property rights. We recommend that librarians respond by strategically embedding their values in LA through actively participating in the conversations, governance structures, and policies that ultimately shape the use of the technology on their respective campuses.
Here’s the abstract for Deven R. Desai and Joshua A. Kroll’s very interesting article, Trust But Verify: A Guide to Algorithms and the Law, Harvard Journal of Law & Technology, Forthcoming:
The call for algorithmic transparency as a way to manage the power of new data-driven decision-making techniques misunderstands the nature of the processes at issue and underlying technology. Part of the problem is that the term, algorithm, is broad. It encompasses disparate concepts even in mathematics and computer science. Matters worsen in law and policy. Law is driven by a linear, almost Newtonian, view of cause and effect where inputs and defined process lead to clear outputs. In that world, a call for transparency has the potential to work. The reality is quite different. Real computer systems use vast data sets not amenable to disclosure. The rules used to make decisions are often inferred from these data and cannot be readily explained or understood. And at a deep and mathematically provable level, certain things, including the exact behavior of an algorithm, can sometimes not be tested or analyzed. From a technical perspective, current attempts to expose algorithms to the sun will fail to deliver critics’ desired results and may create the illusion of clarity in cases where clarity is not possible.
At a high-level, the recent calls for algorithmic transparency follow a pattern that this paper seeks to correct. Policy makers and technologists often talk past each other about the realities of technology and the demands of policy. Policy makers may identify good concerns but offer solutions that misunderstand technology. This misunderstanding can lead to calls for regulation that make little to no sense to technologists. Technologists often see systems as neutral tools, with uses to be governed only when systems interact with the real world. Both sides think the other simply “does not get it,” and important problems receive little attention from either group. By setting out the core concerns over the use of algorithms, offering a primer on the nature of algorithms, and a guide on the way in which computer scientists deal with the inherent limits of their field, this paper shows that there are coherent ways to manage algorithms and the law.
Recommended. — Joe
“Machines are eating humans’ jobs talents. And it’s not just about jobs that are repetitive and low-skill. Automation, robotics, algorithms and artificial intelligence (AI) in recent times have shown they can do equal or sometimes even better work than humans who are dermatologists, insurance claims adjusters, lawyers, seismic testers in oil fields, sports journalists and financial reporters, crew members on guided-missile destroyers, hiring managers, psychological testers, retail salespeople, and border patrol agents. Moreover, there is growing anxiety that technology developments on the near horizon will crush the jobs of the millions who drive cars and trucks, analyze medical tests and data, perform middle management chores, dispense medicine, trade stocks and evaluate markets, fight on battlefields, perform government functions, and even replace those who program software – that is, the creators of algorithms” is how Lee Rainie and Janna Anderson start their Pew expert survey report titled The Future of Jobs and Jobs Training (May 3, 2017). As machines and technology continue to transform the workplace, the Pew Research Center says technologists, futurists and scholars are predicting a surge of interest in artificial-intelligence training programs. Will that be enough to satisfy labor demand?
The report identifies five major themes about the future of jobs training in the tech age. See below. — Joe
Two snips from David Skerrett’s very interesting EContent post, The Year AI-First Will Become the New Mobile-First:
For 6 years, achieving a mobile-first ethos has been a badge of honor within organizations and a genuine philosophy to solve problems for—or preview content on—the device users are most likely to turn to first. Now, true AI is progressing quickly, while there is already a proliferation of practical AI on our mobiles. Contextual assistants, bots, and digital concierges are right at our fingertips—notably Siri, Alexa, Cortana, and Google Now. … However, these intelligent helpers aren’t so helpful when it comes to playing well together. Once these assistants are able to “talk” to each other, it will create a more connected experience for us all, enabling even greater business transformation in the next few years.
Basically, AI will be completely and seamlessly woven into the fabric of our lives and how we get things done. Microsoft can already scan your emails and see you have a flight coming up. Cortana then adds it to your diary and helps get things done via reminders and prompts. Depending on how much trust we put in our devices and into AI, it will be like having a personal assistant in your pocket.
In Synthetic Crowdsourcing: A Machine-Learning Approach to the Problems of Inconsistency and Bias in Adjudication, Hannah Laqueur and Ryan Copus present an algorithmic approach to the problems of inconsistency and bias in legal decision making. From the abstract:
First, we propose a new tool for reducing inconsistency: “Synthetic Crowdsourcing Models” (“SCMs”) built with machine learning methods. By providing judges with recommendations generated from statistical models of themselves, such models can help those judges make better and more consistent decisions. To illustrate these advantages, we build an SCM of release decisions for the California Board of Parole Hearings. Second, we describe a means to address systematic biases that are embedded in an algorithm (e.g., disparate racial treatment). We argue for making direct changes to algorithmic output based on explicit estimates of bias. Most commentators concerned with embedded biases have focused on constructing algorithms without the use of bias-inducing variables. Given the complex ways that variables may correlate and interact, that approach is both practically difficult and harmful to predictive power. In contrast, our two-step approach can address bias with minimal sacrifice of predictive performance.
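The authors' two-step idea, fit a predictive model first, then estimate and correct embedded group bias directly in its output, can be illustrated with a short hedged sketch. The data, the disparity, and the adjustment rule below are all hypothetical stand-ins for exposition, not the authors' model or dataset:

```python
# Sketch of step 2: explicitly estimate a group-level disparity embedded
# in a model's scores and adjust the output directly. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)          # hypothetical 0/1 protected-group flag
merit = rng.normal(size=n)             # legitimate predictive signal
# Scores from some fitted model that (undesirably) penalizes group 1:
score = merit - 0.4 * group + rng.normal(scale=0.1, size=n)

# Explicitly estimate the embedded group disparity in the model's output...
bias_estimate = score[group == 1].mean() - score[group == 0].mean()
# ...and correct the scores directly, leaving the predictive signal intact.
adjusted = score + np.where(group == 1, -bias_estimate, 0.0)

gap_before = abs(score[group == 1].mean() - score[group == 0].mean())
gap_after = abs(adjusted[group == 1].mean() - adjusted[group == 0].mean())
print(f"mean-score gap before: {gap_before:.3f}  after: {gap_after:.4f}")
```

The point of the sketch is the paper's contrast: rather than dropping bias-inducing variables (which degrades prediction because of correlations), the correction is applied to the output after the fact.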
Here’s the abstract for Michael Rich’s (Elon Law) Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment:
At the conceptual intersection of machine learning and government data collection lie Automated Suspicion Algorithms, or ASAs, algorithms created through the application of machine learning methods to collections of government data with the purpose of identifying individuals likely to be engaged in criminal activity. The novel promise of ASAs is that they can identify data-supported correlations between innocent conduct and criminal activity and help police prevent crime. ASAs present a novel doctrinal challenge, as well, as they intrude on a step of the Fourth Amendment’s individualized suspicion analysis previously the sole province of human actors: the determination of when reasonable suspicion or probable cause can be inferred from established facts. This Article analyzes ASAs under existing Fourth Amendment doctrine for the benefit of courts who will soon be asked to deal with ASAs. In the process, the Article reveals how that doctrine is inadequate to the task of handling these new technologies and proposes extra-judicial means of ensuring that ASAs are accurate and effective.
Here’s the abstract for Rory Van Loo’s (Boston Univ. School of Law) interesting article, Rise of the Digital Regulator, 66 Duke Law Journal 1267 (2017):
The administrative state is leveraging algorithms to influence individuals’ private decisions. Agencies have begun to write rules to shape for-profit websites such as Expedia and have launched their own online tools such as the Consumer Financial Protection Bureau’s mortgage calculator. These digital intermediaries aim to guide people toward better schools, healthier food, and more savings. But enthusiasm for this regulatory paradigm rests on two questionable assumptions. First, digital intermediaries effectively police consumer markets. Second, they require minimal government involvement. Instead, some for-profit online advisers such as travel websites have become what many mortgage brokers were before the 2008 financial crisis. Although they make buying easier, they can also subtly advance their interests at the expense of those they serve. Publicly run alternatives lack accountability or—like the Affordable Care Act health-insurance exchanges—are massive undertakings. The unpleasant truth is that creating effective digital regulators would require investing heavily in a new oversight regime or sophisticated state machines. Either path would benefit from an interdisciplinary uniform process to modernize administrative, antitrust, commercial, and intellectual property laws. Ideally, a technology meta-agency would then help keep that legal framework updated.
Following up on an earlier LLB post, this is a detailed visual introduction to the concepts behind a blockchain. The creator introduces the idea of an immutable ledger using an interactive web demonstration. See also Blockgeeks’ What is Blockchain Technology? A Step-by-Step Guide For Beginners. — Joe
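The immutability idea the demo walks through can be sketched in a few lines of Python. This is a hypothetical toy, not the demo's actual code: each block's hash commits to the previous block's hash, so editing any earlier record breaks every link after it:

```python
import hashlib

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = f"{index}|{data}|{prev_hash}".encode()
    return hashlib.sha256(payload).hexdigest()

def build_chain(records):
    """Build a toy chain in which each block commits to the one before it."""
    chain, prev = [], "0" * 64  # genesis predecessor
    for i, data in enumerate(records):
        h = block_hash(i, data, prev)
        chain.append({"index": i, "data": data, "prev_hash": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    """Recompute every hash; any edited block invalidates all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block_hash(block["index"], block["data"], block["prev_hash"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
assert is_valid(chain)
chain[0]["data"] = "Alice pays Bob 500"  # tamper with the ledger
assert not is_valid(chain)
```

Real blockchains add proof-of-work and peer-to-peer consensus on top of this hash-chaining, but the tamper-evidence property is the same.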
From Brian Forde’s Harvard Business Review article, Using Blockchain to Keep Public Data Public:
The public blockchain would fundamentally change the way we govern and do business. Rather than asking companies and consumers to downgrade their digital interactions in order to comply with the law, the government would create an adaptable system that would reduce the amount of paperwork and compliance for businesses and consumers. Rather than force emerging technologies and business models into legal gray areas, the government would use algorithmic regulation to create a level playing field for incumbent companies in their respective industries.
H/T to Gary Price’s InfoDocket post. — Joe
Summarizing their work, Daniel Katz, Michael Bommarito and Josh Blackman wrote in A General Approach for Predicting the Behavior of the Supreme Court of the United States:
[W]e offer the first generalized, consistent and out-of-sample applicable machine learning model for predicting decisions of the Supreme Court of the United States. Casting predictions over nearly two centuries, our model achieves 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Among other things, we believe such improvements in modeling should be of interest to court observers, litigants, citizens and markets. Indeed, with respect to markets, given that judicial decisions can impact publicly traded companies, as highlighted in [Katz DM, Bommarito MJ, Soellinger T, Chen JM. Law on the Market? Evaluating the Securities Market Impact of Supreme Court Decisions 2015], even modest gains in prediction can produce significant financial rewards.
Here’s the abstract:
Building on developments in machine learning and prior work in the science of judicial prediction, we construct a model designed to predict the behavior of the Supreme Court of the United States in a generalized, out-of-sample context. To do so, we develop a time evolving random forest classifier which leverages some unique feature engineering to predict more than 240,000 justice votes and 28,000 case outcomes over nearly two centuries (1816-2015). Using only data available prior to decision, our model outperforms null (baseline) models at both the justice and case level under both parametric and non-parametric tests. Over nearly two centuries, we achieve 70.2% accuracy at the case outcome level and 71.9% at the justice vote level. More recently, over the past century, we outperform an in-sample optimized null model by nearly 5%. Our performance is consistent with, and improves on the general level of prediction demonstrated by prior work; however, our model is distinctive because it can be applied out-of-sample to the entire past and future of the Court, not a single term. Our results represent an important advance for the science of quantitative legal prediction and portend a range of other potential applications.
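The evaluation design the abstract describes, train only on earlier decisions, predict later ones, and compare against a majority-class null model, can be sketched as follows. This uses synthetic data and a plain scikit-learn random forest, not the authors' time-evolving classifier or feature set:

```python
# Illustrative sketch only: fake case features stand in for the paper's
# feature engineering; the point is the walk-forward, out-of-sample split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "cases": numeric features per case, binary outcome (e.g. affirm/reverse).
X = rng.normal(size=(1000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Time-ordered split: train on "earlier" cases, predict "later" ones,
# so the model only ever uses data available prior to decision.
X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:], y[800:]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

# A null (baseline) model that always predicts the test set's majority class.
null_accuracy = max(y_test.mean(), 1 - y_test.mean())
print(f"model accuracy: {accuracy:.3f}  null baseline: {null_accuracy:.3f}")
```

The headline claim in the paper is exactly this comparison at scale: a consistent margin over optimized null models across two centuries of cases.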
From the abstract of Marcella Atzori’s Blockchain Technology and Decentralized Governance: Is the State Still Necessary?:
The core technology of Bitcoin, the blockchain, has recently emerged as a disruptive innovation with a wide range of applications, potentially able to redesign our interactions in business, politics and society at large. Although scholarly interest in this subject is growing, a comprehensive analysis of blockchain applications from a political perspective is severely lacking to date. This paper aims to fill this gap and it discusses the key points of blockchain-based decentralized governance, which challenges to varying degrees the traditional mechanisms of State authority, citizenship and democracy. In particular, the paper verifies to which extent blockchain and decentralized platforms can be considered as hyper-political tools, capable to manage social interactions on large scale and dismiss traditional central authorities. The analysis highlights risks related to a dominant position of private powers in distributed ecosystems, which may lead to a general disempowerment of citizens and to the emergence of a stateless global society. While technological utopians urge the demise of any centralized institution, this paper advocates the role of the State as a necessary central point of coordination in society, showing that decentralization through algorithm-based consensus is an organizational theory, not a stand-alone political theory.
Algorithmic performance of legal tasks will save time, but the time savings will be less significant than popular accounts suggest
The title of this post is one of the main conclusions Dana Remus and Frank Levy reach in Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law. Here’s the abstract:
We assess frequently-advanced arguments that automation will soon replace much of the work currently performed by lawyers. Our assessment addresses three core weaknesses in the existing literature: (i) a failure to engage with technical details to appreciate the capacities and limits of existing and emerging software; (ii) an absence of data on how lawyers divide their time among various tasks, only some of which can be automated; and (iii) inadequate consideration of whether algorithmic performance of a task conforms to the values, ideals and challenges of the legal profession.
Combining a detailed technical analysis with a unique data set on time allocation in large law firms, we estimate that automation has an impact on the demand for lawyers’ time that, while measurable, is far less significant than popular accounts suggest. We then argue that the existing literature’s narrow focus on employment effects should be broadened to include the many ways in which computers are changing (as opposed to replacing) the work of lawyers. We show that the relevant evaluative and normative inquiries must begin with the ways in which computers perform various lawyering tasks differently than humans. These differences inform the desirability of automating various aspects of legal practice, while also shedding light on the core values of legal professionalism.
Recommended. H/T to Jeffrey Brandt’s Growing AI Redux post. — Joe
“Artificial intelligence (AI) and its legal practice applications are grabbing headlines in the legal industry. Ever since the early success stories of IBM Watson, the legal press has been buzzing with articles that debate whether AI is a threat or hope and whether AI will transform, disrupt, revolutionize, or even remake the legal industry. … Now it’s time to focus on the law librarian’s role regarding AI applications in legal research and aiding practitioners in minimizing potential risks due to AI utilization,” write Sherry Xin Chen and Mary Ann Neary in Artificial Intelligence, Legal Research and Law Librarians, AALL Spectrum (May-June 2017) at 17. And that is exactly what the authors do.
In this very interesting article the authors conclude:
Having a general understanding of database algorithms offers a glimpse into the inner workings of AI and makes it possible for attorneys and law librarians to evaluate and correct possible mistakes created by an AI program. Law librarians need to maintain their role in the information cycle as instructors, experts, knowledge curators, and technology consultants as AI is implemented in legal practice and education.
Recommended. — Joe
From the YouTube blurb for the below video:
Over the past decade, an alternative digital paradigm has slowly been taking shape at the edges of the internet. This new paradigm is the blockchain. After incubating through millions of Bitcoin transactions and a host of developer projects, it is now on the tips of tongues of CEOs and CTOs, startup entrepreneurs, and even governance activists. Though these stakeholders are beginning to understand the disruptive potential of blockchain technology and are experimenting with its most promising applications, few have asked a more fundamental question: What will a world driven by blockchains look like a decade from now?
From the abstract of Algorithmic Entities, 95 Washington University Law Review (Forthcoming), by Lynn LoPucki (UCLA):
This Article argues that algorithmic entities — legal entities that have no human controllers — greatly exacerbate the threat of artificial intelligence. Algorithmic entities are likely to prosper first and most in criminal, terrorist, and other anti-social activities because that is where they have their greatest comparative advantage over human-controlled entities. Control of legal entities will contribute to algorithms’ prosperity by providing them with identities that will enable them to accumulate wealth and participate in commerce.
Four aspects of corporate law make the human race vulnerable to the threat of algorithmic entities. First, algorithms can lawfully have exclusive control of not just American LLCs but also a large majority of the entity forms in most countries. Second, entities can change regulatory regimes quickly and easily through migration. Third, governments — particularly in the United States — lack the ability to determine who controls entities they charter and so cannot determine which have non-human controllers. Lastly, corporate charter competition, combined with ease of entity migration, makes it virtually impossible for any government to regulate algorithmic control of entities.
COIN, short for Contract Intelligence, is JPMorgan’s in-house learning machine that parses financial deals, work that once took lawyers thousands of hours. In JPMorgan Software Does in Seconds What Took Lawyers 360,000 Hours, Bloomberg Markets’ Hugh Son provides a backgrounder on COIN. — Joe
A new book, Jeff Seul, Josias N. Dewey and Shawn Amuial’s The Blockchain: A Guide for Legal and Business Professionals, published by Thomson Reuters last year, promises that “no prior experience with blockchain technology is necessary” to get started. Here’s the blurb from the Thomson Reuters e-commerce site:
The Blockchain: A Guide for Legal and Business Professionals provides professionals such as lawyers, accountants, consultants, and business executives, the information they need to know in order to understand more complex implementations and concepts associated with the technology and, more importantly, how it might be able to help their business. The book also provides knowledge and insight to those with a more in-depth understanding of blockchain technology by developing and emphasizing a legal and business perspective. Topics covered include:
• The Fundamentals of Blockchain Technology
• Decentralized Autonomous Organizations
• Key Management for Business and Professional Firms
• Digital Identification on the Blockchain
• Related Technologies that Complement Blockchain Technology
• General Policy Considerations for Future Regulations
• Conclusions and Thoughts about the Future