From the abstract for Cary Coglianese & David Lehr, Transparency and Algorithmic Governance, Administrative Law Review, Forthcoming:
Machine-learning algorithms are improving and automating important functions in medicine, transportation, and business. Government officials have also started to take notice of the accuracy and speed that such algorithms provide, increasingly relying on them to aid with consequential public-sector functions, including tax administration, regulatory oversight, and benefits administration. Despite machine-learning algorithms’ superior predictive power over conventional analytic tools, algorithmic forecasts are difficult to understand and explain. Machine learning’s “black-box” nature has thus raised concern: Can algorithmic governance be squared with legal principles of governmental transparency? We analyze this question and conclude that machine-learning algorithms’ relative inscrutability does not pose a legal barrier to their responsible use by governmental authorities. We distinguish between principles of “fishbowl transparency” and “reasoned transparency,” explaining how both are implicated by algorithmic governance but also showing that neither conception compels anything close to total transparency. Although machine learning’s black-box features distinctively implicate notions of reasoned transparency, legal demands for reason-giving can be satisfied by explaining an algorithm’s purpose, design, and basic functioning. Furthermore, new technical advances will only make machine-learning algorithms increasingly explainable. Algorithmic governance can meet both legal and public demands for transparency while still enhancing accuracy, efficiency, and even potentially legitimacy in government.
In a costs judgment in an occupier’s liability personal injury case, a judge capped the costs award, writing that the use of artificial intelligence should have “significantly reduced” counsel’s preparation time. The decision in Cass v. 1410088 Ontario Inc., 2018 ONSC 6959 reduced the starting point for disbursements by $11,404.08, citing research fees as well as other aspects of the lawyers’ bill, and awarded total costs of $20,000 against the plaintiff.
From the abstract for Milan Markovic, Rise of the Robot Lawyers? Arizona Law Review, Forthcoming:
The advent of artificial intelligence has provoked considerable speculation about the future of the American workforce, including highly educated professionals such as lawyers and doctors. Although most commentators are alarmed by the prospect of intelligent machines displacing millions of workers, the legal sector is treated as an exception. Media accounts and some legal scholars envision a future where intelligent machines perform the bulk of legal work, and legal services are less expensive and more accessible. This future is purportedly near at hand as lawyers struggle to compete with technologically savvy alternative legal service providers.
This Article challenges the notion that lawyers will be displaced by artificial intelligence on both empirical and normative grounds. Most legal tasks are inherently abstract and cannot be performed by even advanced artificial intelligence relying on deep-learning techniques. In addition, lawyer employment and wages have grown steadily over the last twenty years, evincing that the legal profession has benefited from new technologies, as it has throughout its history. Lastly, even if large-scale automation of legal work is possible, core societal values counsel against it. These values are not merely aspirational but are reflected in the multi-faceted role of lawyers and in the way that the legal system is structured.
From the blurb for Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2018):
In the world’s top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner – the Master Algorithm – and discusses what it will mean for business, science, and society. If data-ism is today’s philosophy, this book is its bible.
From Public Attitudes Toward Computer Algorithms (Nov. 16, 2018): “Pew Research Center survey of U.S. adults finds that the public is frequently skeptical of these tools when used in various real-life situations. … This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual. The survey shows that otherwise similar technologies can be viewed with support or suspicion depending on the circumstances or on the tasks they are assigned to do.”
Jean-Marc Deltorn and Franck Macrez have posted Authorship in the Age of Machine Learning and Artificial Intelligence, to be published in Sean M. O’Connor (ed.), The Oxford Handbook of Music Law and Policy, Oxford University Press, 2019 (Forthcoming):
New generations of algorithmic tools have recently become available to artists. Based on the latest developments in machine learning (the theoretical framework driving the current surge in artificial intelligence applications), and relying on access to unprecedented amounts of both computational power and data, these technological intermediaries are opening the way to unexpected forms of creation. Instead of depending on a set of man-made rules to produce novel artworks, generative processes can be automatically learnt from a corpus of training examples. Musical features can be extracted and encoded in a statistical model with little or no human input and later used to produce original compositions, from baroque polyphony to jazz improvisations. The advent of such creative tools, and the corresponding vanishing presence of the human in the creative pipeline, raises a number of fundamental questions in terms of copyright protection. Assuming AI-generated compositions are protected by copyright, who is the author when the machine contributes to the creative process? And what are the minimal requirements to be rewarded with authorship?
Here’s the abstract for Andrea L. Roth, Machine Testimony, 126 Yale Law Journal ___ (2017):
Machines play increasingly crucial roles in establishing facts in legal disputes. Some machines convey information — the images of cameras, the measurements of thermometers, the opinions of expert systems. When a litigant offers a human assertion for its truth, the law subjects it to testimonial safeguards — such as impeachment and the hearsay rule — to give juries the context necessary to assess the source’s credibility. But the law on machine conveyance is confused; courts shoehorn them into existing rules by treating them as “hearsay,” as “real evidence,” or as “methods” underlying human expert opinions. These attempts have not been wholly unsuccessful, but they are intellectually incoherent and fail to fully empower juries to assess machine credibility. This Article seeks to resolve this confusion and to offer a coherent framework for conceptualizing and regulating machine evidence. First, it explains that some machine evidence, like human testimony, depends on the credibility of a source. Just as so-called “hearsay dangers” lurk in human assertions, “black box dangers” — human and machine errors causing a machine to be false by design, inarticulate, or analytically unsound — potentially lurk in machine conveyances. Second, it offers a taxonomy of machine evidence, explaining which types implicate credibility and how courts have attempted to regulate them through existing law. Third, it offers a new vision of testimonial safeguards for machines. It explores credibility testing in the form of front-end design, input and operation protocols; pretrial disclosure and access rules; authentication and reliability rules; impeachment and courtroom testing mechanisms; jury instructions; and corroboration rules. And it explains why machine sources can be “witnesses” under the Sixth Amendment, refocusing the right of confrontation on meaningful impeachment. 
The Article concludes by suggesting how the decoupling of credibility testing from the prevailing courtroom-centered hearsay model could benefit the law of testimony more broadly.
From Brian Higgins’ California Appeals Court Denies Defendant Access to Algorithm That Contributed Evidence to His Conviction, Artificial Intelligence Technology and the Law Blog (Oct. 31, 2018):
The closely followed issue of algorithmic transparency was recently considered by a California appellate court in People v. Superior Court of San Diego County, slip op. Case D073943 (Cal. App. 4th October 17, 2018), in which the People sought relief from a discovery order requiring the production of software and source code used in the conviction of Florencio Jose Dominguez. Following a hearing and review of the record and of amicus briefs filed in support of Dominguez by the American Civil Liberties Union, the American Civil Liberties Union of San Diego and Imperial Counties, the Innocence Project, Inc., the California Innocence Project, the Northern California Innocence Project at Santa Clara University School of Law, Loyola Law School’s Project for the Innocent, and the Legal Aid Society of New York City, the appeals court granted the relief the People sought. In doing so, the court considered, but was not persuaded by, the defense team’s “black box” and “machine testimony” arguments.
Here’s the text of the opinion.
Vincent is “the first AI-powered intelligent legal research assistant of its kind. Only Vincent can analyze documents in two languages (English and Spanish) from 9 countries (and counting), and is built ready to incorporate content not only from vLex’s expansive global collection, but also from internal knowledge management resources, public sources and licensed databases simultaneously. How does Vincent do it, you ask? Well, it’s been trained on vLex’s extensive global collection of 100 million+ legal documents, and is built on top of the Iceberg AI platform.” For more information, see this vLex blog post.
AI Fairness 360 (AIF360) is a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias throughout the AI application lifecycle. Containing over 30 fairness metrics and 9 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into actual practice. Here’s IBM’s press release.
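To make the notion of a “fairness metric” concrete, here is a minimal sketch in plain Python (illustrative only, not AIF360’s own API) of one metric the toolkit includes, the disparate impact ratio: the rate of favorable outcomes for an unprivileged group divided by the rate for a privileged group.

```python
# Illustrative sketch of the disparate impact ratio, one of the
# fairness metrics AIF360 implements. This is not the toolkit's API;
# the function and data below are invented for demonstration.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes: list of 0/1 decisions (1 = favorable)
    groups:   list of group labels, parallel to outcomes
    A value near 1.0 suggests parity; below roughly 0.8 is a common
    rule-of-thumb threshold for concern.
    """
    def rate(group):
        picked = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(picked) / len(picked)
    return rate(unprivileged) / rate(privileged)

# Toy data: favorable decisions for two groups, B (unprivileged) and A
outcomes = [1, 0, 1, 1, 0, 1, 1, 1]
groups   = ["B", "B", "B", "B", "A", "A", "A", "A"]
print(disparate_impact(outcomes, groups, "B", "A"))  # 0.75 / 0.75 = 1.0
```

AIF360’s bias-mitigation algorithms then intervene (on the training data, the learner, or the predictions) to push metrics like this one back toward parity.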
From Quartz: California governor Jerry Brown signed SB 10 into law last week, a bill that replaces cash bail with an algorithmic system. Before the October 2019 deadline, each county will have to put in place a system to ascertain a suspect’s risk of flight or of committing another crime during the trial process, whether by using a system from a third-party contractor or by developing its own.
H/T to beSpacific. — Joe
From Orly Mazur, Taxing the Robots, 46 Pepperdine Law Review (Forthcoming):
Robots and other artificial intelligence-based technologies are increasingly outperforming humans in jobs previously thought safe from automation. This has led to growing concerns about the future of jobs, wages, economic equality and government revenues. To address these issues, there have been multiple calls around the world to tax the robots. Although the concerns that have led to the recent robot tax proposals may be valid, this Article cautions against the use of a robot tax. It argues that a tax that singles out robots is the wrong tool to address these critical issues and warns of the unintended consequences of such a tax, including limiting innovation. Rather, advances in robotics and other forms of artificial intelligence merely exacerbate the issues already caused by a tax system that under-taxes capital income and over-taxes labor income. Thus, this Article proposes tax policy measures that seek to rebalance our tax system so that capital income and labor income are taxed in parity. This Article also recommends non-tax policy measures that seek to improve the labor market, support displaced workers, and encourage innovation, because tax policy alone cannot solve all of the issues raised by the robotics revolution. Together, these changes have the potential to manage the threat of automation while also maximizing its advantages, thereby easing our transition into this new automation era.
From John Flood & Lachlan Robb, Professions and Expertise: How Machine Learning and Blockchain are Redesigning the Landscape of Professional Knowledge and Organisation (Aug. 21, 2018):
Machine learning has entered the world of the professions with differential impacts. Engineering, architecture, and medicine are early and enthusiastic adopters. Other professions, especially law, are late and in some cases reluctant adopters. And automation will have huge impacts on the nature of work in the wider society. This paper examines the effects of artificial intelligence and blockchain on professions and their knowledge bases. We start by examining the nature of expertise in general and then how it functions in law. Using examples from law, such as Gulati and Scott’s analysis of how lawyers create (or don’t create) legal agreements, we show that even non-routine and complex legal work is potentially amenable to automation. However, professions are different because they include both indeterminate and technical elements that make pure automation difficult to achieve. We go on to consider the future prospects of AI and blockchain on professions and hypothesise that as the technologies mature they will incorporate more human work through neural networks and blockchain applications such as the DAO. For law, and the legal profession, the role of lawyer as trusted advisor will again emerge as the central point of value.
“Deepfakes” is an artificial intelligence-based human image synthesis technique. It is used to combine and superimpose existing images and videos onto source images or videos, usually without permission. Such digital impersonation is on the rise. Deepfakes raise the stakes for the “fake news” phenomenon in dramatic fashion (quite literally). Lawfare offers examples:
- Fake videos could feature public officials taking bribes, uttering racial epithets, or engaging in adultery.
- Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not.
- Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both.
- Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort.
- A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.
- A fake audio clip might “reveal” criminal behavior by a candidate on the eve of an election.
- A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or even motivating a wave of violence.
- False audio might convincingly depict U.S. officials privately “admitting” a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative.
- A fake video might depict emergency officials “announcing” an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse.
For more, see:
The impending war over deepfakes, Axios, July 22, 2018
Here’s why it’s so hard to spot deepfakes, CNN, Aug. 8, 2018
Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?, Lawfare, Feb. 21, 2018
In A-I is a G-O, Dyane O’Leary offers her perspective on ROSS, an artificial intelligence legal research tool. Artificial intelligence is a hot-button issue, and this article explores what these new platforms might offer and whether LRW professors should be teaching them. — Joe
Here’s the abstract for Law Without Mind: AI, Ethics, and Jurisprudence by Joshua P. Davis:
Anything we can conceive that computers may do, it seems that they end up doing and that they end up doing it better than us and much sooner than we expected. They have gone from calculating mathematics for us to creating and maintaining our social networks to serving as our personal assistants. We are told they may soon become our friends and make life and death decisions driving our cars. Perhaps they will also take over interpreting our laws. It is not that hard to conceive of computers doing so to the extent legal interpretation involves mere description or prediction. It is much harder to conceive of computers making substantive moral judgments. So the ultimate bulwark against ceding legal interpretation to computers—from having computers usurp the responsibility and authority of attorneys, citizens, and even judges—may be to recognize the role of moral judgment in saying what the law is. That possibility connects the cutting edge with the traditional. The central dispute in jurisprudence for the past half century or more has been about the role of morality in legal interpretation. Suddenly, that dispute has great currency and urgency. Jurisprudence may help us to clarify and circumscribe the role of computers in our legal system. And contemplating AI may help us to resolve jurisprudential debates that have vexed us for decades.
AI in Law and Legal Practice – A Comprehensive View of 35 Current Applications explores the major areas of current AI applications in law, individually and in depth. Current AI applications fall into six major categories:
- Due diligence – Litigators perform due diligence with the help of AI tools to uncover background information. We’ve decided to include contract review, legal research and electronic discovery in this section.
- Prediction technology – AI software generates results that forecast litigation outcomes.
- Legal analytics – Lawyers can mine data points from past case law, win/loss rates, and a judge’s history for trends and patterns.
- Document automation – Law firms use software templates to create filled-out documents based on data input.
- Intellectual property – AI tools guide lawyers in analyzing large IP portfolios and drawing insights from the content.
- Electronic billing – Lawyers’ billable hours are computed automatically.
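The legal analytics category above can be made concrete with a small sketch. Assuming a hypothetical table of past rulings (the data and field names below are invented), computing a judge’s grant rate for a given motion type is the kind of data point such tools surface:

```python
# Hypothetical sketch of a basic legal-analytics computation:
# a judge's grant rate by motion type, derived from past rulings.
# The records and field names are invented for illustration.

rulings = [
    {"judge": "Smith", "motion": "summary judgment", "granted": True},
    {"judge": "Smith", "motion": "summary judgment", "granted": False},
    {"judge": "Smith", "motion": "dismiss", "granted": True},
    {"judge": "Lee",   "motion": "summary judgment", "granted": True},
]

def grant_rate(rulings, judge, motion):
    """Fraction of a judge's rulings on a motion type that were granted."""
    relevant = [r for r in rulings
                if r["judge"] == judge and r["motion"] == motion]
    if not relevant:
        return None  # no history for this judge/motion pair
    return sum(r["granted"] for r in relevant) / len(relevant)

print(grant_rate(rulings, "Smith", "summary judgment"))  # 0.5
```

Commercial analytics products do the same thing at scale, over millions of dockets, and layer trend and pattern detection on top of rates like this one.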
Here’s the abstract for Michal Gal’s Algorithms as Illegal Agreements, Berkeley Technology Law Journal, Forthcoming:
Despite the increased transparency, connectivity, and search abilities that characterize the digital marketplace, the digital revolution has not always yielded the bargain prices that many consumers expected. What is going on? Some researchers suggest that one factor may be coordination between the algorithms used by suppliers to determine trade terms. Simple coordination-facilitating algorithms are already available off the shelf, and such coordination is only likely to become more commonplace in the near future. This is not surprising. If algorithms offer a legal way to overcome obstacles to profit-boosting coordination, and create a jointly profitable status quo in the market, why should suppliers not use them? In light of these developments, seeking solutions – both regulatory and market-driven – is timely and essential. While current research has largely focused on the concerns raised by algorithmic-facilitated coordination, this article takes the next step, asking to what extent current laws can be fitted to effectively deal with this phenomenon.
To meet this challenge, this article advances in three stages. The first part analyzes the effects of algorithms on the ability of competitors to coordinate their conduct. While this issue has been addressed by other researchers, this article seeks to contribute to the analysis by systematically charting the technological abilities of algorithms that may affect coordination in the digital ecosystem in which they operate. Special emphasis is placed on the fact that the algorithm is a “recipe for action”, which can be directly or indirectly observed by competitors. The second part explores the promises as well as the limits of market solutions. In particular, it considers the use of algorithms by consumers and off-the-grid transactions to counteract some of the effects of algorithmic-facilitated coordination by suppliers. The shortcomings of such market solutions lead to the third part, which focuses on the ability of existing legal tools to deal effectively with algorithmic-facilitated coordination, while not harming the efficiencies they bring about. The analysis explores three interconnected questions that stand at the basis of designing a welfare-enhancing policy: What exactly do we wish to prohibit, and can we spell this out clearly for market participants? What types of conduct are captured under the existing antitrust laws? And is there justification for widening the regulatory net beyond its current prohibitions in light of the changing nature of the marketplace? In particular, the article explores the application of the concepts of plus factors and facilitating practices to algorithms. The analysis refutes the Federal Trade Commission’s acting Chairwoman’s claim that current laws are sufficient to deal with algorithmic-facilitated coordination.
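To see how an algorithm can act as an observable “recipe for action,” consider a toy sketch (purely illustrative, not drawn from the article): two sellers each run the rule “undercut the rival slightly, but never go below my floor.” If each seller sets its floor at a jointly profitable level, repeated repricing settles prices at that level with no agreement between the parties at all:

```python
# Toy illustration of algorithm-facilitated price coordination.
# Each seller's "recipe for action" is simply: price just below the
# rival, but never below a self-chosen floor. The rule and numbers
# are invented for illustration.

def reprice(rival_price, floor, step=1):
    """Undercut the rival by `step`, but never price below `floor`."""
    return max(rival_price - step, floor)

a, b = 100, 100            # starting prices for sellers A and B
floor_a, floor_b = 90, 90  # each seller's (jointly profitable) floor
for _ in range(20):        # repeated rounds of mutual repricing
    a = reprice(b, floor_a)
    b = reprice(a, floor_b)
print(a, b)  # both prices settle at the shared floor: 90 90
```

Because the rule is simple and its outputs are public, each seller can infer the other’s recipe from observed prices alone, which is precisely why such conduct is hard to capture under agreement-based antitrust doctrines.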
From the abstract of Thomas King, et al., Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions:
Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, which we term AI-Crime (AIC). We already know that AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing law enforcement and policy-makers with a synthesis of the current problems, and a possible solution space.
From the conclusion of Law Technology Today’s Legal Analytics vs. Legal Research: What’s the Difference?:
Technology is transforming the legal services industry. Some attorneys may resist this transformation out of fear that new technologies might change how they practice law or even make their jobs obsolete. Similar concerns were voiced when legal research moved from books to computers. But that transition did not reduce the need for attorneys skilled in legal research. Instead, it made attorneys better and more effective at their jobs.
Similarly, legal analytics will not make the judgment and expertise of seasoned lawyers obsolete. It will, however, enable those who employ it to provide better and more cost-effective representation for their clients and better compete with their opponents.