Politico is reporting that “powerful companies such as LexisNexis have begun hoovering up the data from insurance claims, digital health records, housing records, and even information about a patient’s friends, family and roommates, without telling the patient they are accessing the information, and creating risk scores for health care providers and insurers.” The risk score is the product of confidential algorithms. While the data collection is aimed at helping doctors and insurers make more informed decisions about prescribing opioids, it could also lead to the blacklisting of some patients, keeping them from getting the drugs they need, according to patient advocates. Details here.

From the abstract for Sonia Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L. REV. 54 (2019):

In this Article, I explore the impending conflict between the protection of civil rights and artificial intelligence (AI). While both areas of law have amassed rich and well-developed areas of scholarly work and doctrinal support, a growing body of scholars are interrogating the intersection between them. This Article argues that the issues surrounding algorithmic accountability demonstrate a deeper, more structural tension within a new generation of disputes regarding law and technology. As I argue, the true promise of AI does not lie in the information we reveal to one another, but rather in the questions it raises about the interaction of technology, property, and civil rights.

For this reason, I argue that we are looking in the wrong place if we look only to the state to address issues of algorithmic accountability. Instead, we must turn to other ways to ensure more transparency and accountability that stem from private industry, rather than public regulation. The issue of algorithmic bias represents a crucial new world of civil rights concerns, one that is distinct in nature from the ones that preceded it. Since we are in a world where the activities of private corporations, rather than the state, are raising concerns about privacy, due process, and discrimination, we must focus on the role of private corporations in addressing the issue. Towards this end, I discuss a variety of tools to help eliminate the opacity of AI, including codes of conduct, impact statements, and whistleblower protection, which I argue carries the potential to encourage greater endogeneity in civil rights enforcement. Ultimately, by examining the relationship between private industry and civil rights, we can perhaps develop a new generation of forms of accountability in the process.

On May 16, 2018, Human Rights Watch and a coalition of rights and technology groups launched The Toronto Declaration: Protecting the rights to equality and non-discrimination in machine learning systems, a landmark statement on human rights standards for machine learning.

Known as the Toronto Declaration, the statement calls on governments and companies to ensure that machine learning applications respect the principles of equality and non-discrimination. The document articulates the human rights norms that the public and private sector should meet to ensure that algorithms used in a wide array of fields – from policing and criminal justice to employment and education – are applied equally and fairly, and that those who believe their rights have been violated have a meaningful avenue to redress.

Michael Murphy, Just and Speedy: On Civil Discovery Sanctions for Luddite Lawyers, 25 George Mason Law Review 36 (2017), presents a theoretical model by which a judge could, relying in part on Rule 1 of the Federal Rules of Civil Procedure, impose civil sanctions on an attorney for failing to utilize time- and expense-saving technology. From the abstract:

Rule 1 now charges all participants in the legal system to ensure the “just, speedy and inexpensive” resolution of disputes. In today’s litigation environment, a lawyer managing a case in discovery needs robust technological competence to meet that charge. However, the legal industry is slow to adopt technology, favoring “tried and true” methods over efficiency. This conflict is evident in data showing clients’ and judges’ frustration with the lack of technological competency among the Bar, especially as it pertains to electronic discovery. Such frustration has led judges to publicly scold luddite attorneys, and has led state bar associations to pass anti-luddite ethical rules. Sanctions for “luddite” attorneys are an extreme, but theoretically possible, amplification of that normative movement. Sanctions leveraging Rule 1 require a close reading of the revised rule, and challenge the notion of Rule 1 as a “guide” to the Rules, but a case can be made for such sanctions based on the Rule’s affirmative charge.

The article briefly explores two examples of conduct that could warrant such sanctions in rare scenarios: (1) an attorney’s failure to utilize machine intelligence and concept analytics in the review of information for production, and (2) an attorney’s failure to produce documents in accessible electronic format.

The article concludes by suggesting that well-publicized sanctions for “luddite” attorneys may break through the traditional barriers that limit innovation in the legal industry.
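
The first scenario refers to what litigators call technology-assisted review (TAR), in which a classifier trained on attorney-coded documents prioritizes the rest of the collection for human review. The sketch below is only a toy illustration of that idea, not Murphy’s model or any vendor’s product; the documents, labels, and scoring are all hypothetical.

```python
# Hypothetical sketch of technology-assisted review (TAR):
# a classifier trained on attorney-coded "seed" documents ranks the
# remaining collection so reviewers see likely-responsive documents first.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy seed set coded by a reviewing attorney (1 = responsive, 0 = not).
seed_docs = [
    "email re: pricing agreement with competitor",
    "lunch plans for friday",
    "draft term sheet for the acquisition",
    "office fantasy football standings",
]
seed_labels = [1, 0, 1, 0]

# Unreviewed documents awaiting prioritization.
unreviewed = [
    "follow-up on acquisition due diligence checklist",
    "holiday party parking instructions",
]

vectorizer = TfidfVectorizer()
X_seed = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_seed, seed_labels)

# Rank unreviewed documents by predicted probability of responsiveness.
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

In practice such tools iterate this loop, reviewing the top-ranked documents, adding the new codings to the seed set, and retraining, until little responsive material is likely to remain in the unreviewed pool.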

Cass R. Sunstein has posted Algorithms, Correcting Biases on SSRN. Here is the abstract:

A great deal of theoretical work explores the possibility that algorithms may be biased in one or another respect. But for purposes of law and policy, some of the most important empirical research finds exactly the opposite. In the context of bail decisions, an algorithm designed to predict flight risk does much better than human judges, in large part because the latter place an excessive emphasis on the current offense. Current Offense Bias, as we might call it, is best seen as a cousin of “availability bias,” a well-known source of mistaken probability judgments. The broader lesson is that well-designed algorithms should be able to avoid cognitive biases of many kinds. Existing research on bail decisions also casts a new light on how to think about the risk that algorithms will discriminate on the basis of race (or other factors). Algorithms can easily be designed so as to avoid taking account of race (or other factors). They can also be constrained so as to produce whatever kind of racial balance is sought, and thus to reveal tradeoffs among various social values.
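
Sunstein’s observation that an algorithm “can be constrained so as to produce whatever kind of racial balance is sought” is easy to make concrete. The sketch below is my own toy illustration, not drawn from the paper or from any deployed bail tool: it generates hypothetical risk scores for two groups and compares a single uniform release threshold with group-specific thresholds chosen to equalize release rates, which makes the tradeoff between the two policies explicit.

```python
# Hypothetical illustration: constraining a risk algorithm to produce a
# chosen demographic balance by using group-specific release thresholds.
import numpy as np

rng = np.random.default_rng(0)
# Toy predicted flight-risk scores (0 = low risk, 1 = high risk) for two groups.
scores = {
    "group_a": rng.beta(2, 5, size=1000),   # skews toward lower risk
    "group_b": rng.beta(3, 4, size=1000),   # skews toward slightly higher risk
}

# Policy 1: one uniform threshold -- release everyone scoring below 0.4.
uniform_threshold = 0.4
for group, s in scores.items():
    print(f"uniform threshold  {group}: release rate {np.mean(s < uniform_threshold):.2%}")

# Policy 2: constrain the tool to release the same share of each group
# (here 60%) by setting each group's threshold at its own 60th percentile.
target_release_rate = 0.60
for group, s in scores.items():
    group_threshold = np.quantile(s, target_release_rate)
    print(f"balanced release   {group}: threshold {group_threshold:.2f}, "
          f"release rate {np.mean(s <= group_threshold):.2%}")
```

The point is not that either policy is correct, only that the balance an algorithm produces is a design choice that can be stated and adjusted explicitly, which is how such tools “reveal tradeoffs among various social values.”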

From the abstract for Cary Coglianese & David Lehr, Transparency and Algorithmic Governance, Administrative Law Review, Forthcoming:

Machine-learning algorithms are improving and automating important functions in medicine, transportation, and business. Government officials have also started to take notice of the accuracy and speed that such algorithms provide, increasingly relying on them to aid with consequential public-sector functions, including tax administration, regulatory oversight, and benefits administration. Despite machine-learning algorithms’ superior predictive power over conventional analytic tools, algorithmic forecasts are difficult to understand and explain. Machine learning’s “black-box” nature has thus raised concern: Can algorithmic governance be squared with legal principles of governmental transparency? We analyze this question and conclude that machine-learning algorithms’ relative inscrutability does not pose a legal barrier to their responsible use by governmental authorities. We distinguish between principles of “fishbowl transparency” and “reasoned transparency,” explaining how both are implicated by algorithmic governance but also showing that neither conception compels anything close to total transparency. Although machine learning’s black-box features distinctively implicate notions of reasoned transparency, legal demands for reason-giving can be satisfied by explaining an algorithm’s purpose, design, and basic functioning. Furthermore, new technical advances will only make machine-learning algorithms increasingly more explainable. Algorithmic governance can meet both legal and public demands for transparency while still enhancing accuracy, efficiency, and even potentially legitimacy in government.
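
One way an agency might meet the “reasoned transparency” demand the authors describe, explaining an algorithm’s purpose, design, and basic functioning without publishing its internals, is to report summary diagnostics such as which inputs most influence its predictions. The snippet below is only a generic illustration of that idea (it does not come from the article): it fits a black-box model on synthetic data and computes permutation importances with scikit-learn; the feature names are invented for the example.

```python
# Hypothetical illustration of "reasoned transparency": summarizing which
# inputs drive a black-box model's predictions without exposing the model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for agency data (e.g., benefits-eligibility records).
X, y = make_classification(n_samples=500, n_features=4, n_informative=2,
                           random_state=0)
feature_names = ["income", "household_size", "prior_claims", "region_code"]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when each input is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:15s} importance {score:.3f}")
```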

In a costs judgment in an occupiers’ liability personal injury case, a judge capped the award, writing that the use of artificial intelligence should have “significantly reduced” counsel’s preparation time. The decision in Cass v. 1410088 Ontario Inc., 2018 ONSC 6959, reduced the starting point for disbursements by $11,404.08, citing research fees as well as other aspects of the lawyers’ bill, and awarded total costs of $20,000 against the plaintiff.

From the abstract for Milan Markovic, Rise of the Robot Lawyers? Arizona Law Review, Forthcoming:

The advent of artificial intelligence has provoked considerable speculation about the future of the American workforce, including highly educated professionals such as lawyers and doctors. Although most commentators are alarmed by the prospect of intelligent machines displacing millions of workers, not so with respect to the legal sector. Media accounts and some legal scholars envision a future where intelligent machines perform the bulk of legal work, and legal services are less expensive and more accessible. This future is purportedly near at hand as lawyers struggle to compete with technologically-savvy alternative legal service providers.

This Article challenges the notion that lawyers will be displaced by artificial intelligence on both empirical and normative grounds. Most legal tasks are inherently abstract and cannot be performed by even advanced artificial intelligence relying on deep-learning techniques. In addition, lawyer employment and wages have grown steadily over the last twenty years, evincing that the legal profession has benefited from new technologies, as it has throughout its history. Lastly, even if large-scale automation of legal work is possible, core societal values counsel against it. These values are not merely aspirational but are reflected in the multi-faceted role of lawyers and in the way that the legal system is structured.

From the blurb for Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2018):

In the world’s top research labs and universities, the race is on to invent the ultimate learning algorithm: one capable of discovering any knowledge from data, and doing anything we want, before we even ask. In The Master Algorithm, Pedro Domingos lifts the veil to give us a peek inside the learning machines that power Google, Amazon, and your smartphone. He assembles a blueprint for the future universal learner–the Master Algorithm–and discusses what it will mean for business, science, and society. If data-ism is today’s philosophy, this book is its bible.

From Public Attitudes Toward Computer Algorithms (Nov. 16, 2018): “Pew Research Center survey of U.S. adults finds that the public is frequently skeptical of these tools when used in various real-life situations. … This skepticism spans several dimensions. At a broad level, 58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free. And in various contexts, the public worries that these tools might violate privacy, fail to capture the nuance of complex situations, or simply put the people they are evaluating in an unfair situation. Public perceptions of algorithmic decision-making are also often highly contextual. The survey shows that otherwise similar technologies can be viewed with support or suspicion depending on the circumstances or on the tasks they are assigned to do.”

H/T InfoDocket.

Jean-Marc Deltorn and Franck Macrez have posted Authorship in the Age of Machine Learning and Artificial Intelligence, to be published in Sean M. O’Connor (ed.), The Oxford Handbook of Music Law and Policy, Oxford University Press, 2019 (Forthcoming):

New generations of algorithmic tools have recently become available to artists. Based on the latest development in the field of machine learning – the theoretical framework driving the current surge in artificial intelligence applications – and relying on access to unprecedented amounts of both computational power and data, these technological intermediaries are opening the way to unexpected forms of creation. Instead of depending on a set of man-made rules to produce novel artworks, generative processes can be automatically learnt from a corpus of training examples. Musical features can be extracted and encoded in a statistical model with no or minimal human input and be later used to produce original compositions, from baroque polyphony to jazz improvisations. The advent of such creative tools, and the corollary vanishing presence of the human in the creative pipeline, raises a number of fundamental questions in terms of copyright protection. Assuming AI generated compositions are protected by copyright, who is the author when the machine contributes to the creative process? And, what are the minimal requirements to be rewarded with authorship?
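
The claim that “generative processes can be automatically learnt from a corpus of training examples” can be illustrated with something far simpler than the deep models the authors have in mind. The toy sketch below (mine, not the authors’) estimates note-to-note transition probabilities from a small hypothetical corpus of melodies and samples a new sequence from them; production systems replace this Markov chain with neural networks trained on much larger corpora, but the authorship question is the same, since no human wrote the generated melody directly.

```python
# Toy illustration: "learning" a generative process from a corpus.
# A first-order Markov chain over note names is estimated from example
# melodies, then sampled to produce a sequence no human wrote directly.
import random
from collections import defaultdict

# Hypothetical training corpus of melodies (note names only).
corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["C", "D", "E", "F", "G", "F", "E", "D", "C"],
    ["E", "G", "A", "G", "E", "D", "C"],
]

# Estimate transition counts from the corpus -- no hand-written rules.
transitions = defaultdict(list)
for melody in corpus:
    for current, following in zip(melody, melody[1:]):
        transitions[current].append(following)

def generate(start="C", length=12, seed=42):
    """Sample a new melody from the learned transition model."""
    random.seed(seed)
    melody = [start]
    for _ in range(length - 1):
        options = transitions.get(melody[-1])
        if not options:            # dead end: restart from the start note
            options = transitions[start]
        melody.append(random.choice(options))
    return melody

print(" ".join(generate()))
```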

Here’s the abstract for Andrea L. Roth, Machine Testimony, 126 Yale Law Journal ___ (2017):

Machines play increasingly crucial roles in establishing facts in legal disputes. Some machines convey information — the images of cameras, the measurements of thermometers, the opinions of expert systems. When a litigant offers a human assertion for its truth, the law subjects it to testimonial safeguards — such as impeachment and the hearsay rule — to give juries the context necessary to assess the source’s credibility. But the law on machine conveyance is confused; courts shoehorn them into existing rules by treating them as “hearsay,” as “real evidence,” or as “methods” underlying human expert opinions. These attempts have not been wholly unsuccessful, but they are intellectually incoherent and fail to fully empower juries to assess machine credibility. This Article seeks to resolve this confusion and to offer a coherent framework for conceptualizing and regulating machine evidence. First, it explains that some machine evidence, like human testimony, depends on the credibility of a source. Just as so-called “hearsay dangers” lurk in human assertions, “black box dangers” — human and machine errors causing a machine to be false by design, inarticulate, or analytically unsound — potentially lurk in machine conveyances. Second, it offers a taxonomy of machine evidence, explaining which types implicate credibility and how courts have attempted to regulate them through existing law. Third, it offers a new vision of testimonial safeguards for machines. It explores credibility testing in the form of front-end design, input and operation protocols; pretrial disclosure and access rules; authentication and reliability rules; impeachment and courtroom testing mechanisms; jury instructions; and corroboration rules. And it explains why machine sources can be “witnesses” under the Sixth Amendment, refocusing the right of confrontation on meaningful impeachment. The Article concludes by suggesting how the decoupling of credibility testing from the prevailing courtroom-centered hearsay model could benefit the law of testimony more broadly.

From Brian Higgins’ California Appeals Court Denies Defendant Access to Algorithm That Contributed Evidence to His Conviction, Artificial Intelligence Technology and the Law Blog (Oct. 31, 2018):

The closely-followed issue of algorithmic transparency was recently considered by a California appellate court in People v. Superior Court of San Diego County, slip op. Case D073943 (Cal. App. 4th October 17, 2018), in which the People sought relief from a discovery order requiring the production of software and source code used in the conviction of Florencio Jose Dominguez. Following a hearing and review of the record and amicus briefs in support of Dominguez filed by the American Civil Liberties Union, the American Civil Liberties Union of San Diego and Imperial Counties, the Innocence Project, Inc., the California Innocence Project, the Northern California Innocence Project at Santa Clara University School of Law, Loyola Law School’s Project for the Innocent, and the Legal Aid Society of New York City, the appeals court granted the People’s relief. In doing so, the court considered, but was not persuaded by, the defense team’s “black box” and “machine testimony” arguments.

Here’s the text of the opinion.

Vincent is “the first AI-powered intelligent legal research assistant of its kind. Only Vincent can analyze documents in two languages (English and Spanish) from 9 countries (and counting), and is built ready to incorporate content not only from vLex’s expansive global collection, but also from internal knowledge management resources, public sources and licensed databases simultaneously. How does Vincent do it, you ask? Well, it’s been trained on vLex’s extensive global collection of 100 million+ legal documents, and is built on top of the Iceberg AI platform.” For more information, see this vLex blog post.

AI Fairness 360 (AIF360) is a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias throughout the AI application lifecycle. Containing over 30 fairness metrics and 9 state-of-the-art bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into actual practice. Here’s IBM’s press release.
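
As a rough sketch of how the toolkit is typically used (simplified, and run on a made-up eight-row hiring table rather than one of AIF360’s bundled datasets), the snippet below measures statistical parity between a privileged and an unprivileged group and then applies the Reweighing pre-processing algorithm to mitigate the disparity.

```python
# Sketch of a typical AIF360 workflow: measure a fairness metric on a
# small, made-up hiring dataset, then mitigate bias with Reweighing.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the favorable outcome (1 = hired).
df = pd.DataFrame({
    "sex":        [1, 1, 1, 1, 0, 0, 0, 0],
    "experience": [5, 2, 7, 3, 6, 2, 8, 4],
    "hired":      [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"],
                             favorable_label=1, unfavorable_label=0)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Statistical parity difference: P(hired | unprivileged) - P(hired | privileged).
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("before reweighing:", metric.statistical_parity_difference())

# Reweighing adjusts instance weights so the groups' outcome rates balance.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_rw = rw.fit_transform(dataset)
metric_rw = BinaryLabelDatasetMetric(dataset_rw, unprivileged_groups=unprivileged,
                                     privileged_groups=privileged)
print("after reweighing: ", metric_rw.statistical_parity_difference())
```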

From Quartz: California governor Jerry Brown last week signed SB 10, a bill that replaces cash bail with an algorithmic system, into law. Each county will have to put in place a system to ascertain a suspect’s risk of flight or of committing another crime during the trial process, whether that means using a system from a third-party contractor or developing one themselves before the October 2019 deadline.

H/T to beSpacific. — Joe

From Orly Mazur, Taxing the Robots, 46 Pepperdine Law Review (Forthcoming):

Robots and other artificial intelligence-based technologies are increasingly outperforming humans in jobs previously thought safe from automation. This has led to growing concerns about the future of jobs, wages, economic equality and government revenues. To address these issues, there have been multiple calls around the world to tax the robots. Although the concerns that have led to the recent robot tax proposals may be valid, this Article cautions against the use of a robot tax. It argues that a tax that singles out robots is the wrong tool to address these critical issues and warns of the unintended consequences of such a tax, including limiting innovation. Rather, advances in robotics and other forms of artificial intelligence merely exacerbate the issues already caused by a tax system that under-taxes capital income and over-taxes labor income. Thus, this Article proposes tax policy measures that seek to rebalance our tax system so that capital income and labor income are taxed in parity. This Article also recommends non-tax policy measures that seek to improve the labor market, support displaced workers, and encourage innovation, because tax policy alone cannot solve all of the issues raised by the robotics revolution. Together, these changes have the potential to manage the threat of automation while also maximizing its advantages, thereby easing our transition into this new automation era.

— Joe

From John Flood & Lachlan Robb, Professions and Expertise: How Machine Learning and Blockchain are Redesigning the Landscape of Professional Knowledge and Organisation (Aug. 21, 2018):

Machine learning has entered the world of the professions with differential impacts. Engineering, architecture, and medicine are early and enthusiastic adopters. Other professions, especially law, are late and in some cases reluctant adopters. And in the wider society automation will have huge impacts on the nature of work and society. This paper examines the effects of artificial intelligence and blockchain on professions and their knowledge bases. We start by examining the nature of expertise in general and then how it functions in law. Using examples from law, such as Gulati and Scott’s analysis of how lawyers create (or don’t create) legal agreements, we show that even non-routine and complex legal work is potentially amenable to automation. However, professions are different because they include both indeterminate and technical elements that make pure automation difficult to achieve. We go on to consider the future prospects of AI and blockchain on professions and hypothesise that as the technologies mature they will incorporate more human work through neural networks and blockchain applications such as the DAO. For law, and the legal profession, the role of lawyer as trusted advisor will again emerge as the central point of value.

— Joe

A “deepfake” is an artificial intelligence-based human image synthesis technique used to combine and superimpose existing images and videos onto source images or videos, usually without permission. Such digital impersonation is on the rise, and deepfakes raise the stakes for the “fake news” phenomenon in dramatic fashion (quite literally). Lawfare offers examples:

  • Fake videos could feature public officials taking bribes, uttering racial epithets, or engaging in adultery.
  • Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not.
  • Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both.
  • Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort.
  • A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.
  • A fake audio clip might “reveal” criminal behavior by a candidate on the eve of an election.
  • A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or even motivating a wave of violence.
  • False audio might convincingly depict U.S. officials privately “admitting” a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative.
  • A fake video might depict emergency officials “announcing” an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse.
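
At a technical level, the face-swap systems behind many of these scenarios are commonly described as autoencoders with one shared encoder and a separate decoder per identity: both people’s faces are compressed through the same encoder, and the swap consists of decoding one person’s encoding with the other person’s decoder. The PyTorch sketch below is only a schematic of that architecture, with random tensors standing in for aligned face crops; it omits the face detection, alignment, large training sets, and blending that a real pipeline requires.

```python
# Schematic of the shared-encoder / per-identity-decoder autoencoder often
# described for face-swap deepfakes. Random tensors stand in for face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training (sketch): each decoder learns to reconstruct its own person's faces
# from the shared encoding.
faces_a = torch.rand(8, 3, 64, 64)   # stand-ins for aligned face crops of person A
faces_b = torch.rand(8, 3, 64, 64)   # stand-ins for person B
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
       nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
loss.backward()   # a real pipeline would loop this with an optimizer

# The "swap": encode person A's face, then decode it with person B's decoder.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a[:1]))
print(fake_b.shape)   # torch.Size([1, 3, 64, 64])
```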

For more, see:

The impending war over deepfakes, Axios, July 22, 2018

Here’s why it’s so hard to spot deepfakes, CNN, Aug. 8, 2018

Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?, Lawfare, Feb. 21, 2018

— Joe