Category Archives: Artificial Intelligence

Analyzing jury verdicts to evaluate litigation outcomes: A view from Thomson Reuters’ R&D team

In a paper presented at the 16th International Conference on Artificial Intelligence and Law (2017), Jack Conrad and Khalid Al-Kofahi, both of Thomson Reuters, explain scenario analytics using the company’s jury verdict and settlement databases. Here’s the abstract for their paper, Scenario Analytics – Analyzing Jury Verdicts to Evaluate Legal Case Outcomes:

Scenario Analytics is a type of analysis that focuses on the evaluation of different scenarios, their merits and their consequences. In the context of the legal domain, this could be in the form of analyzing large databases of legal cases, their facts and their claims, to answer questions such as: Do the current facts warrant litigation? Is the litigation best pursued before a judge or a jury? How long is it likely to take? And what are the best strategies to use for achieving the most favorable outcome for the client? In this work, we report on research directed at answering such questions. We use one of a set of jury verdict databases totaling nearly a half-million records. At the same time, we conduct a series of experiments that answer key questions and build, sequentially, a powerful data-driven legal decision support system, one that can assist an attorney in differentiating more effective from less effective legal principles and strategies. Ultimately, it represents a productivity tool that can help a litigation attorney make the most prudent decisions for his or her client.
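
The paper reports a series of experiments rather than a single recipe, but a minimal sketch of the general approach (predicting a case outcome label from structured verdict records) might look like the following. This is my illustration, not the authors’ code: every field name ("injury_type", "venue", "bench_trial", "plaintiff_won") and the file "jury_verdicts.csv" are hypothetical stand-ins for whatever the verdict databases actually record.

```python
# A minimal, hypothetical sketch of data-driven outcome prediction
# over verdict records. Nothing here comes from the paper itself.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

verdicts = pd.read_csv("jury_verdicts.csv")  # hypothetical extract

features = ["injury_type", "venue", "bench_trial"]  # hypothetical fields
X = verdicts[features]
y = verdicts["plaintiff_won"]  # 1 = plaintiff verdict, 0 = defense verdict

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# One-hot encode the categorical fields, then fit a simple classifier.
model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), features)]
    )),
    ("classify", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A decision support tool of the kind the abstract describes would layer many such models (forum selection, duration, damages) over far richer features, but the data-driven core is the same: learn outcome patterns from historical records and surface them to the attorney.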

— Joe

Cambridge Analytica and the rise of weaponized AI propaganda in political election campaigns

Cambridge Analytica, a data mining firm known for being a leader in behavioral microtargeting for election processes (and for bragging about its contribution to the successful Trump presidential campaign), is being investigated by the House Permanent Select Committee on Intelligence. See April Glaser, Congress Is Investigating Trump Campaign’s Voter Targeting Firm as Part of the Russia Probe, Slate, Oct. 11, 2017. Jared Kushner, who ran the Trump campaign’s data operations, eventually may be implicated. See Jared Kushner In His Own Words On The Trump Data Operation The FBI Is Reportedly Probing, Forbes, May 26, 2017, and Did Russians Target Democratic Voters, With Kushner’s Help?, Newsweek, May 23, 2017.

Before joining the Trump campaign, Steve Bannon was on the board of Cambridge Analytica. The company’s primary financier is hedge fund billionaire and Breitbart investor Robert Mercer. Here’s a presentation given at the 2016 Concordia Summit by Alexander Nix, CEO of Cambridge Analytica, in which Nix discusses the power of big data in global elections and Cambridge Analytica’s revolutionary approach to audience targeting, data modeling, and psychographic profiling for election processes around the world.

The Rise of the Weaponized AI Propaganda Machine discusses how this new automated propaganda machine is driving global politics. This is where big data meets computational psychology: automated engagement scripts prey on human emotions in a propaganda network that accelerates ideas in minutes, with political bots policing public debate. Highly recommended. See also Does Trump’s ‘Weaponized AI Propaganda Machine’ Hold Water?, Forbes, March 5, 2017. — Joe

End note: In a separate probe, the UK’s Information Commissioner is investigating Cambridge Analytica for its work on the successful Leave.eu campaign in the UK.

Implications of AI for lawyers and law schools

From the abstract for David Barnhizer’s Artificial Intelligence and Its Implications for Lawyers and Law Schools:

This brief look at the effects of technological development on law jobs and law schools is derived from research and analysis developed in a book I have been writing for the past year and a half, Artificial Intelligence, Robotics and Their Impact on Work and Democracy. Although legal education and lawyers are not any direct part of that book’s focus, the developments described there are relevant to law schools and lawyers. So, just “for fun”, the analysis offered below sets out some of the best predictions and warnings related to AI/robotics and asks readers to think about the extent to which the developments have implications for the traditional practice of law as we know it, for law schools as institutions, and for the delivery of legal education and law knowledge.

In setting the framework for this analysis I want to begin with understanding the potential of AI/robotics systems along with some predictions that are being made concerning how those technologies will rapidly alter our society and the nature of employment. A report by researchers at the London Business School concludes there will be sweeping replacement of many human workers by robotic ones within the next twenty years. Lawyers and doctors will be among those affected to a considerably greater extent than is generally understood.

— Joe

The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control

Here’s the abstract for Richard Warner and Robert H. Sloan’s The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control (July 22, 2017):

David Mindell notes in Our Robots, Ourselves, “For any apparently autonomous system, we can always find the wrapper of human control that makes it useful and returns meaningful data. In the words of a recent report by the Defense Science Board, ‘there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines.’”

Designing and using “the wrapper of human control” means making moral decisions — decisions about what ought to happen. The point is not new, as the “soldiers, sailors, airmen or Marines” reference shows. What is new is the rise of predictive analytics, the process of using large data sets to make predictions.

Predictive analytics greatly exacerbates the long-standing problem of how to balance the benefits of data collection and analysis against the value of privacy, and its pervasive, ever-increasing use gives the tradeoff problems an urgency that can no longer be ignored. In tackling the tradeoff issues, it is not enough merely to address obviously invidious uses like a recent face photo-editing app from a company called FaceApp: when users asked the app to increase the “hotness” of a photo, the app made skin tones lighter. We focus on widely accepted — or at least currently tolerated — uses of predictive analytics in credit rating, targeted advertising, navigation apps, search engine page ranking, and a variety of other areas. These uses yield considerable benefits, but they also impose significant costs through misclassification in the form of a large number of false positives and false negatives. Predictive analytics not only looks into your private life to construct its profiles of you, it often misrepresents who you are.

How should society respond? Our working assumption is that predictive analytics has significant benefits and should not be eliminated, and moreover, that it is now utterly politically infeasible to eliminate it. Thus, we propose making tradeoffs between the benefits and the costs by constraining the use and distribution of information. The constraints would have to apply across a wide range of complex situations. Is there an existing system that makes relevant tradeoffs by constraining the distribution and use of information across a highly varied range of contexts? Indeed, there is: informational norms. Informational norms are social norms that constrain not only the collection, but also the use and distribution of information. We focus on the use and distribution constraints. Those constraints establish an appropriately selective flow of information in a wide range of cases.

We contend they provide an essential “wrapper of human control” for predictive analytics. The obvious objection is that the relevant norms do not exist. Technology-driven economic, social, and political developments have far outpaced the slow evolution of norms. New norms will nonetheless evolve, and existing norms will adapt to condone surveillance. Reasonable public policy requires controlling the evolution and adaptation of norms to reach desirable outcomes.
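
To make the misclassification point concrete, here is a toy numerical sketch (mine, not the authors’). It imagines a credit-scoring model whose scores for creditworthy and risky applicants overlap; any single decision threshold then trades false positives against false negatives, which is exactly the cost structure the abstract describes. All score distributions and numbers are invented for illustration.

```python
# Toy illustration of the false-positive / false-negative tradeoff
# in predictive analytics. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores: higher score = predicted riskier.
scores_creditworthy = rng.normal(0.35, 0.15, 10_000)  # should be approved
scores_risky = rng.normal(0.65, 0.15, 2_000)          # should be denied

for threshold in (0.4, 0.5, 0.6):
    # False positive: a creditworthy applicant scored above threshold (denied).
    fp_rate = np.mean(scores_creditworthy >= threshold)
    # False negative: a risky applicant scored below threshold (approved).
    fn_rate = np.mean(scores_risky < threshold)
    print(f"threshold={threshold:.1f}  "
          f"false positives={fp_rate:.2%}  false negatives={fn_rate:.2%}")
```

Raising the threshold shrinks one error rate while inflating the other; no threshold eliminates both. Who bears each kind of error, and at what cost, is the normative question the authors hand over to informational norms.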

— Joe

Three keys to artificial intelligence

Futurist Richard Worzel predicts that AI may lead to “the most dramatic technological revolution we have yet experienced – even greater than the advent of computers, smartphones, or the Internet.” To help understand what is happening, Worzel identifies and analyzes three keys to AI:

  1. AI is the Swiss Army knife of technology;
  2. AI is not a shrink-wrapped product; and
  3. Once AI is properly established, the domino effects occur with astonishing speed.

For details, see Three Things You Need to Know About Artificial Intelligence. — Joe

Algorithmic challenges to autonomous choice

Here’s the abstract for Michal Gal’s very interesting article, Algorithmic Challenges to Autonomous Choice (May 20, 2017):

Human choice is a foundational part of our social, economic and political institutions. This centrality is about to be significantly challenged. Technological advances in data collection, data science, artificial intelligence, and communications systems are ushering in a new era in which digital agents, operated through algorithms, replace human choice with regard to many transactions and actions. While algorithms will be given assignments, they will autonomously determine how to carry them out. This game-changing technological development goes to the heart of autonomous human choice. It is therefore time to determine whether, and if so under which conditions, we are willing to give up our autonomous choice.

To do so, this article explores the rationales that stand at the basis of human choice, and how they are affected by autonomous algorithmic assistants; it conscientiously contends with the “choice paradox” which arises from the fact that the decision to turn over one’s choices to an algorithm is, itself, an act of choice. As shown, while some rationales are not harmed – and might even be strengthened – by the use of autonomous algorithmic assistants, others require us to think hard about the meaning and the role that choice plays in our lives. The article then examines whether the existing legal framework is sufficiently potent to deal with this brave new world, or whether we need new regulatory tools. In particular, it identifies and analyzes three main areas which are based on choice: consent, intent and laws protecting negative freedom.

— JH

Computer Generated Works and Copyright: Selfies, Traps, Robots, AI and Machine Learning

From the abstract for Paul Lambert’s Computer Generated Works and Copyright: Selfies, Traps, Robots, AI and Machine Learning (2017):

Since the first generation of computer generated works protected by copyright, the types of computer generated works have multiplied further. This article examines some of the scenarios involving new types of computer generated works and recent claims for copyright protection. This includes contextual consideration and comparison of monkey selfies, camera traps, robots, artificial intelligence (AI) and machine learning. While often commercially important, questions arise as to whether these new manifestations of copyright works are actually protected under copyright at all.

— Joe

Solum: “Our world is already inhabited by AIs. Our law is already composed of artificial meanings. The twain shall meet.”

The title of this post comes from the conclusion of Lawrence Solum’s Artificial Meaning, 89 Washington Law Review 69 (2014). Here’s a snip:

As time goes on, it seems likely that the proportion of legal content provided by AIs will grow in a fairly organic and gradual way. Indeed, the first time a human signs a contract that was generated in its entirety by an AI, the event might even escape our notice. It seems quite likely that our parsing of artificial meanings generated by AIs will simply be taken for granted. This will be no accident. Today, our social world is permeated by artificial legal meanings. Indeed, we can already begin to imagine a world in which the notion of a legal text authored by a single natural person begins to seem strange or antiquated.

Our world is already inhabited by AIs. Our law is already composed of artificial meanings. The twain shall meet.

Here’s the abstract for this very interesting essay:

This Essay investigates the concept of artificial meaning, meanings produced by entities other than individual natural persons. That investigation begins in Part I with a preliminary inquiry into the meaning of “meaning,” in which the concept of meaning is disambiguated. The relevant sense of “meaning” for the purpose of this inquiry is captured by the idea of communicative content, although the phrase “linguistic meaning” is also a rough equivalent. Part II presents a thought experiment, The Chinese Intersection, which investigates the creation of artificial meaning produced by an AI that creates legal rules for the regulation of a hyper-complex conflux of transportation systems. The implications of the thought experiment are explored in Part III, which sketches a theory of the production of communicative content by AI. Part IV returns to The Chinese Intersection, but Version 2.0 involves a twist — after a technological collapse, the AI is replaced by humans engaged in massive collaboration to duplicate the functions of the complex processes that had formerly governed the flow of automotive, bicycle, light-rail, and pedestrian traffic. The second thought experiment leads in Part V to an investigation of the production of artificial meaning by group agents — artificial persons constituted by rules that govern the interaction of natural persons. The payoff of the investigation is presented in Part VI. The communicative content created by group agents like constitutional conventions, legislatures, and teams of lawyers that draft complex transactional documents is artificial meaning, which can be contrasted with natural meaning — the communicative content of those exceptional legal texts that are produced by a single individual. This insight is key to any theory of the interpretation and construction of legal texts. A conclusion provides a speculative meditation on the implications of the new theory of artificial meaning for some of the great debates in legal theory.

Recommended. — Joe

What artificial intelligence reveals about the First Amendment

Here’s the abstract for SIRI-OUSLY 2.0: What Artificial Intelligence Reveals about the First Amendment, 101 Minnesota Law Review 2481 (2017) by Toni M. Massaro, Helen L. Norton and Margot E. Kaminski:

The First Amendment may protect speech by strong Artificial Intelligence (AI). In this Article, we support this provocative claim by expanding on earlier work, addressing significant concerns and challenges, and suggesting potential paths forward.

This is not a claim about the state of technology. Whether strong AI — as-yet-hypothetical machines that can actually think — will ever come to exist remains far from clear. It is instead a claim that discussing AI speech sheds light on key features of prevailing First Amendment doctrine and theory, including the surprising lack of humanness at its core.

Courts and commentators wrestling with free speech problems increasingly focus not on protecting speakers as speakers but instead on providing value to listeners and constraining the government’s power. These approaches to free speech law support the extension of First Amendment coverage to expression regardless of its nontraditional source or form. First Amendment thinking and practice thus have developed in a manner that permits extensions of coverage in ways that may seem exceedingly odd, counterintuitive, and perhaps even dangerous. This is not a feature of the new technologies, but of free speech law.

The possibility that the First Amendment covers speech by strong AI need not, however, rob the First Amendment of a human focus. Instead, it might encourage greater clarification of and emphasis on expression’s value to human listeners — and its potential harms — in First Amendment theory and doctrine. To contemplate — Siri-ously — the relationship between the First Amendment and AI speech invites critical analysis of the contours of current free speech law, as well as sharp thinking about free speech problems posed by the rise of AI.

Very interesting. — Joe

What if we had no clue what AI machines were actually saying to one another because they developed their own English dialect? That’s already happening.

The captured conversation above is not gibberish. It is an exchange between two bots named Bob and Alice, who are negotiating over something. Over what? The AI developers at Facebook don’t know, but they believe Alice and Bob have created their own English-language dialect, some sort of shorthand only they understand.

Welcome the robot overlords, because apparently this communications gap between AI instruments and their programmers is not unusual. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages. At Facebook, once developers realized that Bob and Alice were compressing the English language into their own unique dialect, they shut the bots down because Facebook wants negotiation bots that are understandable to humans. For more, see Fast Co. Design’s AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?

Very interesting. — Joe