From the abstract for Ed Walters, The Model Rules of Autonomous Conduct: Ethical Responsibilities of Lawyers and Artificial Intelligence, Georgia State University Law Review, Forthcoming:

Lawyers are increasingly using software tools and artificial intelligence to augment their provision of legal services. This paper reviews the professional responsibilities of those lawyers under the Model Rules of Professional Conduct and previews how the rules might apply to AI software not yet developed but just on the horizon. Although lawyers frequently use their professional responsibility as a brake on innovation, the Model Rules in many cases actually require them to adopt new methods of delivering legal services. The paper also surveys ways that the Model Rules might be changed to protect consumers in the near future as AI tools grow in scope.

From the European Commission press release:

Following the publication of the draft ethics guidelines in December 2018 to which more than 500 comments were received, the independent expert group presents today their ethics guidelines for trustworthy artificial intelligence.

Trustworthy AI should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

In summer 2019, the Commission will launch a pilot phase involving a wide range of stakeholders. Already today, companies, public administrations and organisations can sign up to the European AI Alliance and receive a notification when the pilot starts.

Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps.

H/T InfoDocket.

From the abstract for Daniel J. Gervais, The Machine As Author (Iowa Law Review, Vol. 105, 2019):

The use of Artificial Intelligence (AI) machines using deep learning neural networks to create material that facially looks like it should be protected by copyright is growing exponentially. From articles in national news media to music, film, poetry and painting, AI machines create material that has economic value and that competes with productions of human authors. The Article reviews both normative and doctrinal arguments for and against the protection by copyright of literary and artistic productions made by AI machines. The Article finds that the arguments in favor of protection are flawed and unconvincing and that a proper analysis of the history, purpose, and major doctrines of copyright law all lead to the conclusion that productions that do not result from human creative choices belong to the public domain. The Article proposes a test to determine which productions should be protected, including in case of collaboration between human and machine. Finally, the Article applies the proposed test to three specific fact patterns to illustrate its application.

Lawrence B. Solum’s Artificially Intelligent Law (Mar. 11, 2019) “explores a series of thought experiments that postulate the existence of ‘artificially intelligent law.’ An artificially-intelligent legal system is defined as one with three functional capacities: 1. The system has the capacity to generate legal norms. 2. The system has the capacity to apply the legal norms that it generates. 3. The system has the capacity to use deep learning to modify the legal norms that it generates. The paper then considers the question whether such a system would be desirable as a matter of legitimacy and justice. The core idea of the paper is that the key to the evaluation of artificially intelligent law is to focus on the functional capacities of the system in comparison to comparable human systems, such as regulatory agencies.”

Here’s the abstract for Robert Blacksberg’s Envisioning the AI-Enabled Legal Team of the Future, 2018 ctrl ALT del conference:

Artificial intelligence (AI) can be found again in the front row of technology investment and development. Perhaps better standing for “augmented intelligence,” AI tools that employ machine learning, natural language processing and expert systems, among others, with the power to handle big data, are earning a place at the legal team’s table.

Now we must bring to the table the people with the necessary skills and understanding to incorporate AI in legal practice. They need to participate in the organization and delivery of legal services from the beginning of engagements and become active members of project teams. There emerged from the ALT conference a vision of a less hierarchical team, with a broader set of skills and a greater degree of client involvement than the traditional phalanx of rainmaker partner, engagement partner, associate and support staff.

Politico is reporting that “powerful companies such as LexisNexis have begun hoovering up the data from insurance claims, digital health records, housing records, and even information about a patient’s friends, family and roommates, without telling the patient they are accessing the information, and creating risk scores for health care providers and insurers.” The risk score is the product of confidential algorithms. While the data collection is aimed at helping doctors and insurers make more informed decisions on prescribing opioids, it could also lead to blacklisting of some patients and keep them from getting the drugs they need, according to patient advocates. Details here.

From the abstract for Sonia Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L. REV. 54 (2019):

In this Article, I explore the impending conflict between the protection of civil rights and artificial intelligence (AI). While both areas of law have amassed rich and well-developed areas of scholarly work and doctrinal support, a growing body of scholars is interrogating the intersection between them. This Article argues that the issues surrounding algorithmic accountability demonstrate a deeper, more structural tension within a new generation of disputes regarding law and technology. As I argue, the true promise of AI does not lie in the information we reveal to one another, but rather in the questions it raises about the interaction of technology, property, and civil rights.

For this reason, I argue that we are looking in the wrong place if we look only to the state to address issues of algorithmic accountability. Instead, we must turn to other ways to ensure more transparency and accountability that stem from private industry, rather than public regulation. The issue of algorithmic bias represents a crucial new world of civil rights concerns, one that is distinct in nature from the ones that preceded it. Since we are in a world where the activities of private corporations, rather than the state, are raising concerns about privacy, due process, and discrimination, we must focus on the role of private corporations in addressing the issue. Towards this end, I discuss a variety of tools to help eliminate the opacity of AI, including codes of conduct, impact statements, and whistleblower protection, which I argue carries the potential to encourage greater endogeneity in civil rights enforcement. Ultimately, by examining the relationship between private industry and civil rights, we can perhaps develop a new generation of forms of accountability in the process.

The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems was launched on May 16, 2018 by Human Rights Watch and a coalition of rights and technology groups which joined in a landmark statement on human rights standards for machine learning.

Known as the Toronto Declaration, the statement calls on governments and companies to ensure that machine learning applications respect the principles of equality and non-discrimination. The document articulates the human rights norms that the public and private sector should meet to ensure that algorithms used in a wide array of fields – from policing and criminal justice to employment and education – are applied equally and fairly, and that those who believe their rights have been violated have a meaningful avenue to redress.

Michael Murphy, Just and Speedy: On Civil Discovery Sanctions for Luddite Lawyers, 25 George Mason Law Review 36 (2017) presents a theoretical model by which a judge could impose civil sanctions on an attorney – relying in part on Rule 1 of the Federal Rules of Civil Procedure – for that attorney’s failure to utilize time- and expense-saving technology. From the abstract:

Rule 1 now charges all participants in the legal system to ensure the “just, speedy and inexpensive” resolution of disputes. In today’s litigation environment, a lawyer managing a case in discovery needs robust technological competence to meet that charge. However, the legal industry is slow to adopt technology, favoring “tried and true” methods over efficiency. This conflict is evident in data showing clients’ and judges’ frustration with the lack of technological competency among the Bar, especially as it pertains to electronic discovery. Such frustration has led judges to publicly scold luddite attorneys, and has led state bar associations to pass anti-luddite ethical rules. Sanctions for “luddite” attorneys are an extreme, but theoretically possible, amplification of that normative movement. Sanctions leveraging Rule 1 require a close reading of the revised rule, and challenge the notion of Rule 1 as a “guide” to the Rules, but a case can be made for such sanctions based on the Rule’s affirmative charge.

The article briefly explores two examples of conduct that could warrant such sanctions in rare scenarios: (1) the failure of an attorney to utilize machine intelligence and concept analytics in the review of information for production, and (2) the failure of an attorney to produce documents in an accessible electronic format.

The article concludes by suggesting that well-publicized sanctions for “luddite” attorneys may break through the traditional barriers that limit innovation in the legal industry.
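
Murphy’s first example refers to what e-discovery practitioners call technology-assisted review or predictive coding: a classifier is trained on a small attorney-coded seed set and then ranks the remaining documents by predicted responsiveness, so reviewers start with the documents most likely to matter. Here is a minimal sketch of that core ranking step, using scikit-learn and invented documents rather than anything drawn from the article:

```python
# Minimal sketch of technology-assisted review (TAR) / predictive coding:
# train on an attorney-coded seed set, then rank unreviewed documents by
# predicted responsiveness. Hypothetical documents for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Seed set coded by reviewing attorneys: 1 = responsive, 0 = not responsive.
seed_docs = [
    "email discussing the disputed supply contract and delivery delays",
    "office fantasy football league standings",
    "draft indemnification clause for the supply contract",
    "lunch order for the team meeting",
]
seed_labels = [1, 0, 1, 0]

unreviewed_docs = [
    "notes on renegotiating the supply contract delivery schedule",
    "reminder about the holiday party",
]

vectorizer = TfidfVectorizer(stop_words="english")
X_seed = vectorizer.fit_transform(seed_docs)
X_rest = vectorizer.transform(unreviewed_docs)

model = LogisticRegression().fit(X_seed, seed_labels)
scores = model.predict_proba(X_rest)[:, 1]   # predicted probability of responsiveness

# Review the highest-scoring documents first.
for score, doc in sorted(zip(scores, unreviewed_docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```

Real TAR workflows add iterative training rounds, validation sampling, and defensibility documentation on top of this ranking step; the sketch only shows why such review can be far cheaper than reading every document.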

Cass R. Sunstein has posted Algorithms, Correcting Biases on SSRN. Here is the abstract:

A great deal of theoretical work explores the possibility that algorithms may be biased in one or another respect. But for purposes of law and policy, some of the most important empirical research finds exactly the opposite. In the context of bail decisions, an algorithm designed to predict flight risk does much better than human judges, in large part because the latter place an excessive emphasis on the current offense. Current Offense Bias, as we might call it, is best seen as a cousin of “availability bias,” a well-known source of mistaken probability judgments. The broader lesson is that well-designed algorithms should be able to avoid cognitive biases of many kinds. Existing research on bail decisions also casts a new light on how to think about the risk that algorithms will discriminate on the basis of race (or other factors). Algorithms can easily be designed so as to avoid taking account of race (or other factors). They can also be constrained so as to produce whatever kind of racial balance is sought, and thus to reveal tradeoffs among various social values.
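
Sunstein’s closing observation, that algorithms can be designed to ignore race and constrained to produce whatever racial balance is sought, comes down mechanically to two levers: which features the model is allowed to see, and how its decision thresholds are set for each group. A minimal sketch of both levers, using synthetic data and scikit-learn rather than the bail study’s actual model:

```python
# Illustration of the two design levers (synthetic data, not the bail study):
# 1) exclude the protected attribute from the model's inputs;
# 2) set per-group thresholds to produce a chosen release rate.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
features = rng.normal(size=(n, 3))      # e.g. prior failures to appear, charge severity, age
group = rng.integers(0, 2, size=n)      # protected attribute, deliberately NOT a feature
flight_risk = (features @ np.array([0.5, 1.0, 0.3]) + rng.normal(size=n)) > 0.8

model = LogisticRegression().fit(features, flight_risk)   # lever 1: blind to the protected attribute
scores = model.predict_proba(features)[:, 1]

# Lever 2: constrain outcomes so each group has the same release rate.
target_release_rate = 0.70
detain = np.zeros(n, dtype=bool)
for g in (0, 1):
    mask = group == g
    cutoff = np.quantile(scores[mask], target_release_rate)
    detain[mask] = scores[mask] > cutoff

release_rates = {g: round(1 - detain[group == g].mean(), 2) for g in (0, 1)}
print(release_rates)   # roughly 0.70 for both groups
```

Choosing the constraint is the hard part; as the abstract notes, fixing a particular balance makes the tradeoffs among competing social values explicit rather than hidden.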

From the abstract for Cary Coglianese & David Lehr, Transparency and Algorithmic Governance, Administrative Law Review, Forthcoming:

Machine-learning algorithms are improving and automating important functions in medicine, transportation, and business. Government officials have also started to take notice of the accuracy and speed that such algorithms provide, increasingly relying on them to aid with consequential public-sector functions, including tax administration, regulatory oversight, and benefits administration. Despite machine-learning algorithms’ superior predictive power over conventional analytic tools, algorithmic forecasts are difficult to understand and explain. Machine learning’s “black-box” nature has thus raised concern: Can algorithmic governance be squared with legal principles of governmental transparency? We analyze this question and conclude that machine-learning algorithms’ relative inscrutability does not pose a legal barrier to their responsible use by governmental authorities. We distinguish between principles of “fishbowl transparency” and “reasoned transparency,” explaining how both are implicated by algorithmic governance but also showing that neither conception compels anything close to total transparency. Although machine learning’s black-box features distinctively implicate notions of reasoned transparency, legal demands for reason-giving can be satisfied by explaining an algorithm’s purpose, design, and basic functioning. Furthermore, new technical advances will only make machine-learning algorithms increasingly more explainable. Algorithmic governance can meet both legal and public demands for transparency while still enhancing accuracy, efficiency, and even potentially legitimacy in government.

A judge capped the costs award in an occupier’s liability personal injury case, writing that the use of artificial intelligence should have “significantly reduced” counsel’s preparation time. The decision in Cass v. 1410088 Ontario Inc., 2018 ONSC 6959 reduced the starting point for disbursements by $11,404.08, citing research fees along with other aspects of the lawyers’ bill, and awarded total costs of $20,000 against the plaintiff.

From the abstract for Milan Markovic, Rise of the Robot Lawyers? Arizona Law Review, Forthcoming:

The advent of artificial intelligence has provoked considerable speculation about the future of the American workforce, including highly educated professionals such as lawyers and doctors. Although most commentators are alarmed by the prospect of intelligent machines displacing millions of workers, not so with respect to the legal sector. Media accounts and some legal scholars envision a future where intelligent machines perform the bulk of legal work, and legal services are less expensive and more accessible. This future is purportedly near at hand as lawyers struggle to compete with technologically-savvy alternative legal service providers.

This Article challenges the notion that lawyers will be displaced by artificial intelligence on both empirical and normative grounds. Most legal tasks are inherently abstract and cannot be performed by even advanced artificial intelligence relying on deep-learning techniques. In addition, lawyer employment and wages have grown steadily over the last twenty years, evincing that the legal profession has benefited from new technologies, as it has throughout its history. Lastly, even if large-scale automation of legal work is possible, core societal values counsel against it. These values are not merely aspirational but are reflected in the multi-faceted role of lawyers and in the way that the legal system is structured.

From the blurb for Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2018):

In the world’s top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner–the Master Algorithm–and discusses what it will mean for business, science, and society. If data-ism is today’s philosophy, this book is its bible.

From Public Attitudes Toward Computer Algorithms (Nov. 16, 2018): “Pew Research Center survey of U.S. adults finds that the public is frequently skeptical of these tools when used in various real-life situations. … This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual. The survey shows that otherwise similar technologies can be viewed with support or suspicion depending on the circumstances or on the tasks they are assigned to do.”

H/T InfoDocket.

Jean-Marc Deltorn and Franck Macrez have posted Authorship in the Age of Machine Learning and Artificial Intelligence, to be published in Sean M. O’Connor (ed.), The Oxford Handbook of Music Law and Policy, Oxford University Press, 2019 (Forthcoming):

New generations of algorithmic tools have recently become available to artists. Based on the latest developments in the field of machine learning – the theoretical framework driving the current surge in artificial intelligence applications – and relying on access to unprecedented amounts of both computational power and data, these technological intermediaries are opening the way to unexpected forms of creation. Instead of depending on a set of man-made rules to produce novel artworks, generative processes can be automatically learnt from a corpus of training examples. Musical features can be extracted and encoded in a statistical model with no or minimal human input and be later used to produce original compositions, from baroque polyphony to jazz improvisations. The advent of such creative tools, and the corollary vanishing presence of the human in the creative pipeline, raises a number of fundamental questions in terms of copyright protection. Assuming AI generated compositions are protected by copyright, who is the author when the machine contributes to the creative process? And, what are the minimal requirements to be rewarded with authorship?
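
The statistical model “learnt from a corpus of training examples” that the abstract describes can range from a deep neural network to something as simple as a Markov chain over notes. A toy sketch of the latter, with an invented three-melody corpus, shows the basic idea of inducing generative rules from examples instead of writing them by hand:

```python
# Toy illustration: learn note-to-note transition statistics from a small
# corpus of melodies, then sample a new melody from the learned model.
# Invented corpus; real systems use far richer representations and models.
import random
from collections import defaultdict

corpus = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C"],
    ["G", "E", "C", "D", "E", "C"],
]

# Count every observed transition in the training examples.
transitions = defaultdict(list)
for melody in corpus:
    for current_note, next_note in zip(melody, melody[1:]):
        transitions[current_note].append(next_note)

def generate(start="C", length=8, seed=42):
    """Sample a melody by repeatedly drawing a successor from the learned transitions."""
    random.seed(seed)
    melody = [start]
    for _ in range(length - 1):
        successors = transitions.get(melody[-1])
        if not successors:        # no observed continuation for this note
            break
        melody.append(random.choice(successors))
    return melody

print(generate())   # a new sequence drawn from the same statistics as the corpus
```

The copyright questions the abstract raises concern outputs like this one, where no human picks the individual notes; authorship turns on how much human creative choice remains in the pipeline.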

Here’s the abstract for Andrea L. Roth, Machine Testimony, 126 Yale Law Journal ___ (2017):

Machines play increasingly crucial roles in establishing facts in legal disputes. Some machines convey information — the images of cameras, the measurements of thermometers, the opinions of expert systems. When a litigant offers a human assertion for its truth, the law subjects it to testimonial safeguards — such as impeachment and the hearsay rule — to give juries the context necessary to assess the source’s credibility. But the law on machine conveyance is confused; courts shoehorn them into existing rules by treating them as “hearsay,” as “real evidence,” or as “methods” underlying human expert opinions. These attempts have not been wholly unsuccessful, but they are intellectually incoherent and fail to fully empower juries to assess machine credibility. This Article seeks to resolve this confusion and to offer a coherent framework for conceptualizing and regulating machine evidence. First, it explains that some machine evidence, like human testimony, depends on the credibility of a source. Just as so-called “hearsay dangers” lurk in human assertions, “black box dangers” — human and machine errors causing a machine to be false by design, inarticulate, or analytically unsound — potentially lurk in machine conveyances. Second, it offers a taxonomy of machine evidence, explaining which types implicate credibility and how courts have attempted to regulate them through existing law. Third, it offers a new vision of testimonial safeguards for machines. It explores credibility testing in the form of front-end design, input and operation protocols; pretrial disclosure and access rules; authentication and reliability rules; impeachment and courtroom testing mechanisms; jury instructions; and corroboration rules. And it explains why machine sources can be “witnesses” under the Sixth Amendment, refocusing the right of confrontation on meaningful impeachment. The Article concludes by suggesting how the decoupling of credibility testing from the prevailing courtroom-centered hearsay model could benefit the law of testimony more broadly.

From Brian Higgins’ California Appeals Court Denies Defendant Access to Algorithm That Contributed Evidence to His Conviction, Artificial Intelligence Technology and the Law Blog (Oct. 31, 2018):

The closely-followed issue of algorithmic transparency was recently considered by a California appellate court in People v. Superior Court of San Diego County, slip op. Case D073943 (Cal. App. 4th October 17, 2018), in which the People sought relief from a discovery order requiring the production of software and source code used in the conviction of Florencio Jose Dominguez. Following a hearing and review of the record and amicus briefs in support of Dominguez filed by the American Civil Liberties Union, the American Civil Liberties Union of San Diego and Imperial Counties, the Innocence Project, Inc., the California Innocence Project, the Northern California Innocence Project at Santa Clara University School of Law, Loyola Law School’s Project for the Innocent, and the Legal Aid Society of New York City, the appeals court granted the People’s relief. In doing so, the court considered, but was not persuaded by, the defense team’s “black box” and “machine testimony” arguments.

Here’s the text of the opinion.

Vincent is “the first AI-powered intelligent legal research assistant of its kind. Only Vincent can analyze documents in two languages (English and Spanish) from 9 countries (and counting), and is built ready to incorporate content not only from vLex’s expansive global collection, but also from internal knowledge management resources, public sources and licensed databases simultaneously. How does Vincent do it, you ask? Well, it’s been trained on vLex’s extensive global collection of 100 million+ legal documents, and is built on top of the Iceberg AI platform.” For more information, see this vLex blog post.