Vincent is “the first AI-powered intelligent legal research assistant of its kind. Only Vincent can analyze documents in two languages (English and Spanish) from 9 countries (and counting), and is built ready to incorporate content not only from vLex’s expansive global collection, but also from internal knowledge management resources, public sources and licensed databases simultaneously. How does Vincent do it, you ask? Well, it’s been trained on vLex’s extensive global collection of 100 million+ legal documents, and is built on top of the Iceberg AI platform.” For more information, see this vLex blog post.

AI Fairness 360 (AIF360) is a comprehensive open-source toolkit of metrics to check for unwanted bias in datasets and machine learning models, and state-of-the-art algorithms to mitigate such bias throughout the AI application lifecycle. Containing over 30 fairness metrics and 9 bias mitigation algorithms developed by the research community, it is designed to translate algorithmic research from the lab into actual practice. Here’s IBM’s press release.
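
For a feel of how the toolkit works in practice, here is a minimal sketch using AIF360’s Python API; the toy hiring data, column names, and the choice of Reweighing as the mitigation step are illustrative assumptions, not taken from IBM’s materials:

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy hiring data (invented for illustration): 'sex' is the protected
# attribute, 'hired' the favorable label.
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'score': [5, 7, 6, 8, 5, 7, 6, 8],
    'hired': [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=['hired'],
                             protected_attribute_names=['sex'])

priv, unpriv = [{'sex': 1}], [{'sex': 0}]
before = BinaryLabelDatasetMetric(dataset, privileged_groups=priv,
                                  unprivileged_groups=unpriv)
print('Disparate impact before:', before.disparate_impact())  # 0.33

# Reweighing, one of the toolkit's pre-processing algorithms, adjusts
# instance weights so the favorable outcome is independent of 'sex'.
transformed = Reweighing(unprivileged_groups=unpriv,
                         privileged_groups=priv).fit_transform(dataset)
after = BinaryLabelDatasetMetric(transformed, privileged_groups=priv,
                                 unprivileged_groups=unpriv)
print('Disparate impact after:', after.disparate_impact())  # 1.0
```

The same measure-mitigate-remeasure pattern applies across the toolkit’s pre-processing, in-processing, and post-processing algorithms.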

From Quartz: California governor Jerry Brown signed SB 10 into law last week, a bill that replaces cash bail with an algorithmic system. Each county will have to put in place a system to ascertain a suspect’s risk of flight or of committing another crime during the trial process, whether by using a system from a third-party contractor or by developing one in-house, before the October 2019 deadline.

H/T to beSpacific. — Joe

From Orly Mazur, Taxing the Robots, 46 Pepperdine Law Review (Forthcoming):

Robots and other artificial intelligence-based technologies are increasingly outperforming humans in jobs previously thought safe from automation. This has led to growing concerns about the future of jobs, wages, economic equality and government revenues. To address these issues, there have been multiple calls around the world to tax the robots. Although the concerns that have led to the recent robot tax proposals may be valid, this Article cautions against the use of a robot tax. It argues that a tax that singles out robots is the wrong tool to address these critical issues and warns of the unintended consequences of such a tax, including limiting innovation. Rather, advances in robotics and other forms of artificial intelligence merely exacerbate the issues already caused by a tax system that under-taxes capital income and over-taxes labor income. Thus, this Article proposes tax policy measures that seek to rebalance our tax system so that capital income and labor income are taxed in parity. This Article also recommends non-tax policy measures that seek to improve the labor market, support displaced workers, and encourage innovation, because tax policy alone cannot solve all of the issues raised by the robotics revolution. Together, these changes have the potential to manage the threat of automation while also maximizing its advantages, thereby easing our transition into this new automation era.

— Joe

From John Flood & Lachlan Robb, Professions and Expertise: How Machine Learning and Blockchain are Redesigning the Landscape of Professional Knowledge and Organisation (Aug. 21, 2018):

Machine learning has entered the world of the professions with differential impacts. Engineering, architecture, and medicine are early and enthusiastic adopters. Other professions, especially law, are late and in some cases reluctant adopters. And in the wider society automation will have huge impacts on the nature of work and society. This paper examines the effects of artificial intelligence and blockchain on professions and their knowledge bases. We start by examining the nature of expertise in general and then how it functions in law. Using examples from law, such as Gulati and Scott’s analysis of how lawyers create (or don’t create) legal agreements, we show that even non-routine and complex legal work is potentially amenable to automation. However, professions are different because they include both indeterminate and technical elements that make pure automation difficult to achieve. We go on to consider the future prospects of AI and blockchain on professions and hypothesise that as the technologies mature they will incorporate more human work through neural networks and blockchain applications such as the DAO. For law, and the legal profession, the role of lawyer as trusted advisor will again emerge as the central point of value.

— Joe

A “deepfake” is an artificial intelligence-based human image synthesis technique: it combines and superimposes existing images and videos onto source images or videos, usually without permission. Such digital impersonation is on the rise. Deepfakes raise the stakes for the “fake news” phenomenon in dramatic fashion (quite literally). Lawfare offers examples:

  • Fake videos could feature public officials taking bribes, uttering racial epithets, or engaging in adultery.
  • Politicians and other government officials could appear in locations where they were not, saying or doing horrific things that they did not.
  • Fake videos could place them in meetings with spies or criminals, launching public outrage, criminal investigations, or both.
  • Soldiers could be shown murdering innocent civilians in a war zone, precipitating waves of violence and even strategic harms to a war effort.
  • A deep fake might falsely depict a white police officer shooting an unarmed black man while shouting racial epithets.
  • A fake audio clip might “reveal” criminal behavior by a candidate on the eve of an election.
  • A fake video might portray an Israeli official doing or saying something so inflammatory as to cause riots in neighboring countries, potentially disrupting diplomatic ties or even motivating a wave of violence.
  • False audio might convincingly depict U.S. officials privately “admitting” a plan to commit this or that outrage overseas, exquisitely timed to disrupt an important diplomatic initiative.
  • A fake video might depict emergency officials “announcing” an impending missile strike on Los Angeles or an emergent pandemic in New York, provoking panic and worse.

For more, see:

The impending war over deepfakes, Axios, July 22, 2018

Here’s why it’s so hard to spot deepfakes, CNN, Aug. 8, 2018

Deep Fakes: A Looming Crisis for National Security, Democracy and Privacy?, Lawfare, Feb. 21, 2018

— Joe

In A-I is a G-O, Dyane O’Leary offers her perspective on ROSS, an artificial intelligence legal research tool. Artificial intelligence is a hot-button issue, and this article explores what these new platforms might offer and whether LRW professors should be teaching them. — Joe

Here’s the abstract for Law Without Mind: AI, Ethics, and Jurisprudence by Joshua P. Davis:

Anything we can conceive that computers may do, it seems that they end up doing and that they end up doing it better than us and much sooner than we expected. They have gone from calculating mathematics for us to creating and maintaining our social networks to serving as our personal assistants. We are told they may soon become our friends and make life and death decisions driving our cars. Perhaps they will also take over interpreting our laws. It is not that hard to conceive of computers doing so to the extent legal interpretation involves mere description or prediction. It is much harder to conceive of computers making substantive moral judgments. So the ultimate bulwark against ceding legal interpretation to computers—from having computers usurp the responsibility and authority of attorneys, citizens, and even judges—may be to recognize the role of moral judgment in saying what the law is. That possibility connects the cutting edge with the traditional. The central dispute in jurisprudence for the past half century or more has been about the role of morality in legal interpretation. Suddenly, that dispute has great currency and urgency. Jurisprudence may help us to clarify and circumscribe the role of computers in our legal system. And contemplating AI may help us to resolve jurisprudential debates that have vexed us for decades.

— Joe

AI in Law and Legal Practice – A Comprehensive View of 35 Current Applications explores the major areas of current AI applications in law, individually and in depth. Current AI applications fall into six major categories:

  • Due diligence – Litigators perform due diligence with the help of AI tools to uncover background information. We’ve decided to include contract review, legal research and electronic discovery in this section.
  • Prediction technology – AI software generates results that forecast litigation outcomes.
  • Legal analytics – Lawyers can mine data points from past case law, win/loss rates, and a judge’s history for trends and patterns.
  • Document automation – Law firms use software templates to create filled-out documents based on data input (see the sketch after this list).
  • Intellectual property – AI tools guide lawyers in analyzing large IP portfolios and drawing insights from the content.
  • Electronic billing – Lawyers’ billable hours are computed automatically.
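
Of the six, document automation is the most directly mechanical: structured data merged into a template yields a finished document. Here is a rough sketch of the idea; the letter text and field names are invented for illustration:

```python
# A rough sketch of template-driven document automation; the letter text
# and field names are invented for illustration.
from string import Template

ENGAGEMENT_LETTER = Template(
    "Dear $client_name,\n\n"
    "This letter confirms that $firm_name will represent you in the matter "
    "of $matter at an hourly rate of $$${rate}.\n"
)

data = {'client_name': 'Acme Corp.', 'firm_name': 'Smith & Jones LLP',
        'matter': 'Acme v. Widget Co.', 'rate': 450}
print(ENGAGEMENT_LETTER.substitute(data))
```

Commercial tools layer conditional logic, clause libraries, and client questionnaires on top of this same substitution step.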

— Joe

Here’s the abstract for Michal Gal’s Algorithms as Illegal Agreements, Berkeley Technology Law Journal, Forthcoming:

Despite the increased transparency, connectivity, and search abilities that characterize the digital marketplace, the digital revolution has not always yielded the bargain prices that many consumers expected. What is going on? Some researchers suggest that one factor may be coordination between the algorithms used by suppliers to determine trade terms. Simple coordination-facilitating algorithms are already available off the shelf, and such coordination is only likely to become more commonplace in the near future. This is not surprising. If algorithms offer a legal way to overcome obstacles to profit-boosting coordination, and create a jointly profitable status quo in the market, why should suppliers not use them? In light of these developments, seeking solutions – both regulatory and market-driven – is timely and essential. While current research has largely focused on the concerns raised by algorithmic-facilitated coordination, this article takes the next step, asking to what extent current laws can be fitted to effectively deal with this phenomenon.

To meet this challenge, this article advances in three stages. The first part analyzes the effects of algorithms on the ability of competitors to coordinate their conduct. While this issue has been addressed by other researchers, this article seeks to contribute to the analysis by systematically charting the technological abilities of algorithms that may affect coordination in the digital ecosystem in which they operate. Special emphasis is placed on the fact that the algorithm is a “recipe for action”, which can be directly or indirectly observed by competitors. The second part explores the promises as well as the limits of market solutions. In particular, it considers the use of algorithms by consumers and off-the-grid transactions to counteract some of the effects of algorithmic-facilitated coordination by suppliers. The shortcomings of such market solutions lead to the third part, which focuses on the ability of existing legal tools to deal effectively with algorithmic-facilitated coordination, while not harming the efficiencies they bring about. The analysis explores three interconnected questions that stand at the basis of designing a welfare-enhancing policy: What exactly do we wish to prohibit, and can we spell this out clearly for market participants? What types of conduct are captured under the existing antitrust laws? And is there justification for widening the regulatory net beyond its current prohibitions in light of the changing nature of the marketplace? In particular, the article explores the application of the concepts of plus factors and facilitating practices to algorithms. The analysis refutes the Federal Trade Commission’s acting Chairwoman’s claim that current laws are sufficient to deal with algorithmic-facilitated coordination.
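
To make the “recipe for action” concrete, here is a minimal, hypothetical sketch of the kind of off-the-shelf price-matching rule the article has in mind; the numbers are invented, and real pricing algorithms are far more sophisticated:

```python
# Hypothetical sketch of a trivial price-matching rule that can facilitate
# tacit coordination; all numbers are invented for illustration.
FLOOR = 12.0  # the supplier's unilateral "competitive" price

def next_price(rival_prices):
    """Match the highest rival price, never going below the floor."""
    return max([FLOOR] + rival_prices)

# If every supplier runs this recipe, prices ratchet up to the highest
# posted price and stay there, with no communication between sellers.
prices = [12.0, 12.5, 14.0]
for step in range(3):
    prices = [next_price(prices[:i] + prices[i+1:])
              for i in range(len(prices))]
    print(step, prices)  # converges to [14.0, 14.0, 14.0]
```

Because such a rule can be observed, directly or indirectly, by competitors, it functions as the kind of facilitating practice the article goes on to analyze.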

— Joe

From the abstract of Thomas King, et al., Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions:

Artificial Intelligence (AI) research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, which we term AI-Crime (AIC). We already know that AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing law enforcement and policy-makers with a synthesis of the current problems, and a possible solution space.

— Joe

From the conclusion of Law Technology Today’s Legal Analytics vs. Legal Research: What’s the Difference?:

Technology is transforming the legal services industry. Some attorneys may resist this transformation out of fear that new technologies might change how they practice law or even make their jobs obsolete. Similar concerns were voiced when legal research moved from books to computers. But that transition did not reduce the need for attorneys skilled in legal research. Instead, it made attorneys better and more effective at their jobs.

Similarly, legal analytics will not make the judgment and expertise of seasoned lawyers obsolete. It will, however, enable those who employ it to provide better and more cost-effective representation for their clients and better compete with their opponents.

— Joe

Mathias Risse, Human Rights and Artificial Intelligence: An Urgently Needed Agenda (May 18, 2018): “Artificial intelligence generates challenges for human rights. Inviolability of human life is the central idea behind human rights, an underlying implicit assumption being the hierarchical superiority of humankind to other forms of life meriting less protection. These basic assumptions are questioned through the anticipated arrival of entities that are not alive in familiar ways but nonetheless are sentient and intellectually and perhaps eventually morally superior to humans. To be sure, this scenario may never come to pass and in any event lies in a part of the future beyond current grasp. But it is urgent to get this matter on the agenda. Threats posed by technology to other areas of human rights are already with us. My goal here is to survey these challenges in a way that distinguishes short-, medium-, and long-term perspectives.” — Joe

From the blurb for Ronald K. L. Collins and David M. Skover’s Robotica: Speech Rights and Artificial Intelligence (Cambridge University Press, May 31, 2018):

In every era of communications technology – whether print, radio, television, or Internet – some form of government censorship follows to regulate the medium and its messages. Today we are seeing the phenomenon of ‘machine speech’ enhanced by the development of sophisticated artificial intelligence. Ronald K. L. Collins and David M. Skover argue that the First Amendment must provide defenses and justifications for covering and protecting robotic expression. It is irrelevant that a robot is not human and cannot have intentions; what matters is that a human experiences robotic speech as meaningful. This is the constitutional recognition of ‘intentionless free speech’ at the interface of the robot and receiver. Robotica is the first book to develop the legal arguments for these purposes. Aimed at law and communication scholars, lawyers, and free speech activists, this work explores important new problems and solutions at the interface of law and technology.

— Joe

According to the In-House Counsel’s LegalTech Buyer’s Guide 2018, the number of artificial intelligence companies catering to the legal field has grown by 65 percent in the last year, from 40 to 66. In his LawSites post, Bob Ambrogi offers some caveats:

First, its listing of AI companies is not complete. Most notably, it omits Thomson Reuters, whose Westlaw, with its natural-language processing, was one of the earliest AI products in legal. Thomson Reuters Labs and, within it, the Center for Cognitive Computing, are major initiatives devoted to the study of AI and data science. Just in January, TR rolled out an AI-powered product for data privacy law.

In addition, there are a number of small legal tech startups that are using AI but that are not included on this list.

Second, when the guide suggests that established players such as LexisNexis are joining the field, it should be pointed out, for the sake of accuracy, that LexisNexis, like TR, was using AI in its research platform well before most of these other players came along.

— Joe

In Libraries in the Age of Artificial Intelligence, Computers in Libraries, January/February 2018, Ben Johnson calls on public libraries to help provide open source AI tools. Two snips:

While libraries will certainly be changed by the AI revolution—and in ways we can’t imagine—it seems unlikely that they will cease to exist altogether. Indeed, public libraries and public universities may yet have a critical role to play in the AI revolution. Today’s mainstream AIs are dominated by proprietary software. Apple, Microsoft, Google, Facebook, and other major tech players all have their own AIs. These companies have invested heavily in research and development, and they have guarded their intellectual property closely. The algorithms that give rise to machine learning are mostly kept secret, and the code that results from machine learning is often so complex that even the human developers don’t understand exactly how their code works. So even if you wanted to know what AI was thinking, you would be out of luck. But if AI is a black box for which we have no key, public institutions can play an important role in providing open source AI solutions that allow for more transparency and more control.
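
As a concrete illustration of the transparency Johnson has in mind, here is a minimal sketch (mine, not from the article) using the open-source scikit-learn library: every rule the trained model learned can be printed and audited, the opposite of a sealed proprietary black box. The dataset and model choice are illustrative assumptions.

```python
# A minimal sketch of inspectable, open-source machine learning; the
# dataset and model choice are illustrative, not from Johnson's article.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# The model's complete decision logic, printed as human-readable rules.
print(export_text(model, feature_names=list(data.feature_names)))
```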

From intellectual freedom to information literacy and more, libraries provide a set of principles that have helped guide intellectual growth for the past century. In the age of AI, those principles are more relevant than ever. But libraries are not the center of the information world anymore, and the new players don’t always share our values. As machine learning proliferates, what steps can we take to ensure that the values of librarianship are incorporated into AI systems? Advocacy should be directed not at maintaining traditional librarianship, but at influencing the development of the emerging information systems that may come to replace us.

— Joe

David Lat and Brian Dalton report that for the past few months Above the Law and Thomson Reuters have been taking a deep dive into what AI, machine learning, and other cutting-edge technologies mean for lawyers and the legal world. “We’ve been reading, researching, and reporting, talking to experts across the country, to learn how AI will affect law school, legal ethics, litigation, legal research, and the business of law, among other subjects.” Now they are launching a four-part multimedia exploration of how artificial intelligence and similar emerging technologies are reshaping the legal profession. The series, called Law2020, can be viewed here. See also Thomson Reuters’ Demystifying Artificial Intelligence (AI): A legal professional’s 7-step guide through the noise. — Joe

From the abstract of Robert H. Sloan and Richard Warner’s When Is an Algorithm Transparent?: Predictive Analytics, Privacy, and Public Policy (Oct. 12, 2017):

The rise of data mining and predictive analytics makes the problem of algorithmic transparency pressing. Solving that problem requires answers to two questions. What are the criteria of transparency? And how do you tell whether a predictive system meets those criteria? We confine our attention to consumers engaged in commercial transactions because this already raises most of the questions that concern us. We propose that predictive systems are transparent for consumers if they are able to readily ascertain the risks and benefits associated with the predictive systems to which they are subject. We examine three ways to meet this condition: disclosing source code; techniques that reveal how an algorithm works without disclosing source code; and reliance on informational norms.
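
The second of those three routes, revealing how an algorithm works without disclosing its source, already has standard techniques behind it. Here is a minimal sketch using permutation importance in scikit-learn; the model and data are illustrative assumptions, not drawn from the paper:

```python
# A sketch of explaining a predictive model without disclosing its source:
# permutation importance scores each input by how much shuffling it hurts
# accuracy. The model and data here are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five inputs the model leans on most heavily.
names = load_breast_cancer().feature_names
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{names[i]}: {result.importances_mean[i]:.3f}")
```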

— Joe

From the abstract of Steven James Bartlett’s The Case for Government by Artificial Intelligence (Dec. 18, 2017):

Tired of election madness? The rhetoric of politicians? Their unreliable promises? And less than good government? Until recently, it hasn’t been hard for people to give up control to computers. Not very many people miss the effort and time required to do calculations by hand, to keep track of their finances, or to complete their tax returns manually. But relinquishing direct human control to self-driving cars is expected to be more of a challenge, despite the predicted decrease in vehicle accidents thanks to artificial intelligence that isn’t subject to human distractions and errors of judgment. If turning vehicle control over to artificial intelligence is a challenge, it is a very mild one compared with the idea that we might one day recognize and want to implement the advantages of human government by AI. But, like autonomous vehicle control, government by AI is likely to offer decided benefits. In other publications, the author has studied a variety of widespread human limitations that, throughout human history, have led to much human suffering as well as ecological destruction. For the first time, these psychological and cognitive human shortcomings are taken into account in an essay that makes the case for government by artificial intelligence.

— Joe

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Feb. 2018) “surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.” — Joe