Category Archives: Artificial Intelligence

On the malicious use of artificial intelligence

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Feb. 2018) “surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.” — Joe

Weekend reading: Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor

Here’s the blurb for Virginia Eubanks’ Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press, Jan. 23, 2018):

A powerful investigative look at data-based discrimination―and how technology affects civil and human rights and economic equity

The State of Indiana denies one million applications for healthcare, food stamps and cash benefits in three years―because a new computer system interprets any mistake as “failure to cooperate.” In Los Angeles, an algorithm calculates the comparative vulnerability of tens of thousands of homeless people in order to prioritize them for an inadequate pool of housing resources. In Pittsburgh, a child welfare agency uses a statistical model to try to predict which children might be future victims of abuse or neglect.

Since the dawn of the digital age, decision-making in finance, employment, politics, health and human services has undergone revolutionary change. Today, automated systems―rather than humans―control which neighborhoods get policed, which families attain needed resources, and who is investigated for fraud. While we all live under this new regime of data, the most invasive and punitive systems are aimed at the poor.

In Automating Inequality, Virginia Eubanks systematically investigates the impacts of data mining, policy algorithms, and predictive risk models on poor and working-class people in America. The book is full of heart-wrenching and eye-opening stories, from a woman in Indiana whose benefits are literally cut off as she lays dying to a family in Pennsylvania in daily fear of losing their daughter because they fit a certain statistical profile.

The U.S. has always used its most cutting-edge science and technology to contain, investigate, discipline and punish the destitute. Like the county poorhouse and scientific charity before them, digital tracking and automated decision-making hide poverty from the middle-class public and give the nation the ethical distance it needs to make inhumane choices: which families get food and which starve, who has housing and who remains homeless, and which families are broken up by the state. In the process, they weaken democracy and betray our most cherished national values.

Recommended. — Joe

Conventional and computational methods for topic modeling

Here’s the abstract for Topic Modeling the President: Conventional and Computational Methods, George Washington Law Review, Forthcoming, by J. B. Ruhl, John Nay and Jonathan M. Gilligan:

Law is generally represented through text, and lawyers have for centuries classified large bodies of legal text into distinct topics — they “topic model” the law. But large bodies of legal documents present challenges for conventional topic modeling methods. The task of gathering, reviewing, coding, sorting, and assessing a body of tens of thousands of legal documents is a daunting proposition. Recent advances in computational text analytics, a subset of the field of “artificial intelligence,” are already gaining traction in legal practice settings such as e-discovery by leveraging the speed and capacity of computers to process enormous bodies of documents. Differences between conventional and computational methods, however, suggest that computational text modeling has its own limitations, but that the two methods used in unison could be a powerful research tool for legal scholars.

To explore that potential — and to do so critically rather than under the “shiny rock” spell of artificial intelligence — we assembled a large corpus of presidential documents to assess how computational topic modeling compares to conventional methods and evaluate how legal scholars can best make use of the computational methods. The presidential documents of interest comprise presidential “direct actions,” such as executive orders, presidential memoranda, proclamations, and other exercises of authority the president can take alone, without congressional concurrence or agency involvement. Presidents have been issuing direct actions throughout the history of the republic, and while they have often been the target of criticism and controversy in the past, lately they have become a tinderbox of debate. Hence, although direct actions were long ignored by political scientists and legal scholars, there has been a surge of interest in their scope, content, and impact.

Legal and policy scholars modeling direct actions into substantive topic classifications thus far have not employed computational methods. This gives us an opportunity to compare results of the two methods. We generated computational topic models of all direct actions over time periods other scholars have studied using conventional methods, and did the same for a case study of environmental policy direct actions. Our computational model of all direct actions closely matched one of the two comprehensive empirical models developed using conventional methods. By contrast, our environmental case study model differed markedly from the only other empirical topic model of environmental policy direct actions, revealing that the conventional methods model included trivial categories and omitted important alternative topics.

Our findings support the assessment that computational topic modeling, provided a sufficiently large corpus of documents is used, can provide important insights for legal scholars in designing and validating their topic models of legal text. To be sure, computational topic modeling used alone has its limitations, some of which are evident in our models, but when used along with conventional methods, it opens doors towards reaching more confident conclusions about how to conceptualize topics in law. Drawing from these results, we offer several use cases for computational topic modeling in legal research. At the front-end, researchers can use the method to generate better and more complete model hypotheses. At the back-end, the method can effectively be used, as we did, to validate existing topic models. And at a meta-scale, the method opens windows to test and challenge conventional legal theory. Legal scholars can do all of these without “the machines,” but there is good reason to believe we can do it better with them in the toolkit.
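
To make the computational side concrete, here is a minimal sketch of topic modeling with latent Dirichlet allocation (LDA), a standard technique for this kind of analysis. The toy corpus, topic count, and library choice below are illustrative assumptions, not the authors’ actual pipeline:

    # A minimal LDA topic-modeling sketch (illustrative only; the paper's
    # actual pipeline and parameters are not specified here).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy stand-ins for the full text of presidential direct actions.
    documents = [
        "executive order on tariffs trade and commerce",
        "proclamation on wetlands and conservation",
        "memorandum on military readiness and defense",
        "executive order on environmental protection and conservation",
        "proclamation on trade agreements and tariffs",
        "memorandum on armed forces and defense policy",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    doc_term = vectorizer.fit_transform(documents)

    # n_components is the number of topics the model should infer.
    lda = LatentDirichletAllocation(n_components=3, random_state=0)
    lda.fit(doc_term)

    # Inspect each inferred topic by its highest-weight terms, the usual
    # way researchers label machine-generated topics.
    terms = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:5]]
        print(f"Topic {k}: {', '.join(top)}")

Validating machine-inferred topics like these against hand-coded classifications is exactly the kind of back-end use the authors describe.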

Interesting. — Joe

Holiday reading: Life 3.0: Being Human in the Age of Artificial Intelligence

From the blurb for MIT professor Max Tegmark’s Life 3.0: Being Human in the Age of Artificial Intelligence:

How will Artificial Intelligence affect crime, war, justice, jobs, society and our very sense of being human? The rise of AI has the potential to transform our future more than any other technology—and there’s nobody better qualified or situated to explore that future than Max Tegmark, an MIT professor who’s helped mainstream research on how to keep AI beneficial.

How can we grow our prosperity through automation without leaving people lacking income or purpose? What career advice should we give today’s kids? How can we make future AI systems more robust, so that they do what we want without crashing, malfunctioning or getting hacked? Should we fear an arms race in lethal autonomous weapons? Will machines eventually outsmart us at all tasks, replacing humans on the job market and perhaps altogether? Will AI help life flourish like never before or give us more power than we can handle?

What sort of future do you want? This book empowers you to join what may be the most important conversation of our time. It doesn’t shy away from the full range of viewpoints or from the most controversial issues—from superintelligence to meaning, consciousness and the ultimate physical limits on life in the cosmos.

— Joe

First-of-its-kind legislation in Congress: The FUTURE of AI Act

The FUTURE of AI Act [text] would require the Secretary of Commerce to establish a federal advisory committee on the development and implementation of artificial intelligence. Future laws regulating AI may be steered by the committee’s input. The areas of interest include the economic impact of AI and the future competitiveness of the US economy, but the committee will also explore some legal matters, including “ethics training” for technologists working with AI; data sharing; and “machine learning bias…and cultural and societal norms.” Introduced by Senators Maria Cantwell (D-WA), Todd Young (R-IN), and Ed Markey (D-MA), along with Representatives John K. Delaney (D-MD) and Pete Olson (R-TX), this bill is, to the best of my knowledge, the first AI-related legislative proposal.

On a related note, co-sponsor Rep. John Delaney (D-MD) launched the Artificial Intelligence Caucus for the 115th Congress in May. The AI Caucus is co-chaired by Republican Congressman Pete Olson (TX-22). The goal of the caucus is to inform policymakers of the technological, economic and social impacts of advances in AI and to ensure that rapid innovation in AI and related fields benefits Americans as fully as possible. The AI Caucus will bring together experts from academia, government and the private sector to discuss the latest technologies and the implications and opportunities they create. — Joe

Making AI legally accountable: The role of explanation

From the abstract of Accountability of AI Under the Law: The Role of Explanation by Finale Doshi-Velez et al.:

The ubiquity of systems using artificial intelligence or “AI” has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before—applications range from clinical decision support to autonomous driving and predictive policing. That said, there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems. There are many ways to hold AI systems accountable. In this work, we focus on one: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation, and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. In this work, we review contexts in which explanation is currently required under the law, and then list the technical considerations that must be addressed if we want AI systems to provide the kinds of explanations that are currently required of humans.
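
As a purely illustrative aside, here is a minimal sketch of one simple form a machine-generated explanation can take: reporting which input features drove a particular decision. The model, data, and feature names are hypothetical and are not drawn from the paper:

    # A toy feature-attribution "explanation" (hypothetical data and model;
    # not from the paper). For a linear model, each feature's contribution
    # to the decision score is simply coefficient * feature value.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "debt", "years_employed"]
    X = np.array([[50, 10, 5], [20, 40, 1], [80, 5, 10], [30, 35, 2]], dtype=float)
    y = np.array([1, 0, 1, 0])  # 1 = application approved, 0 = denied

    model = LogisticRegression().fit(X, y)

    # Explain one decision by ranking each feature's contribution.
    applicant = np.array([25.0, 30.0, 2.0])
    contributions = model.coef_[0] * applicant
    ranked = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
    for name, c in ranked:
        print(f"{name}: {c:+.2f}")

Whether a ranked list like this satisfies what the law currently demands of human decision makers is, of course, the paper’s question, not something the code settles.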

— Joe


Algorithmic authorship through the lens of copyright and First Amendment law

Here’s the abstract for Margot E. Kaminski’s Authorship, Disrupted: AI Authors in Copyright and First Amendment Law, UC Davis Law Review, Vol. 51, No. 589, 2017:

Technology is often characterized as an outside force, with essential qualities, acting on the law. But the law, through both doctrine and theory, constructs the meaning of the technology it encounters. A particular feature of a particular technology disrupts the law only because the law has been structured in a way that makes that feature relevant. The law, in other words, plays a significant role in shaping its own disruption. This Essay is a study of how a particular technology, artificial intelligence, is framed by both copyright law and the First Amendment. How the algorithmic author is framed by these two areas illustrates the importance of legal context and legal construction to the disruption story.

— Joe

Analyzing jury verdicts to evaluate litigation outcomes: A view from Thomson Reuters’ R&D team

Presented at the 16th International Conference on Artificial Intelligence and Law (2017), Jack Conrad and Khalid Al-Kofahi, both employed by Thomson Reuters, explain scenario analytics using the company’s jury verdict and settlement databases. Here’s the abstract for their paper, Scenario Analytics – Analyzing Jury Verdicts to Evaluate Legal Case Outcomes:

Scenario Analytics is a type of analysis that focuses on the evaluation of different scenarios, their merits and their consequences. In the context of the legal domain, this could be in the form of analyzing large databases of legal cases, their facts and their claims, to answer questions such as: Do the current facts warrant litigation? Is the litigation best pursued before a judge or a jury? How long is it likely to take? And what are the best strategies to use for achieving the most favorable outcome for the client? In this work, we report on research directed at answering such questions. We use one of a set of jury verdict databases totaling nearly a half-million records. At the same time, we conduct a series of experiments that answer key questions and build, sequentially, a powerful data-driven legal decision support system, one that can assist an attorney to differentiate more effective from less effective legal principles and strategies. Ultimately, it represents a productivity tool that can help a litigation attorney make the most prudent decisions for his or her client.
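
To give a feel for what such a data-driven system might involve, here is a hedged sketch of outcome prediction over jury verdict records. The field names, toy records, and model choice are hypothetical; the paper excerpt does not disclose Thomson Reuters’ actual schema or methods:

    # A hypothetical sketch of predicting plaintiff success from verdict
    # records (invented fields; not Thomson Reuters' actual system).
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    records = pd.DataFrame({
        "case_type": ["auto", "medmal", "auto", "premises", "medmal", "auto"],
        "bench_trial": [0, 0, 1, 0, 1, 0],   # 1 = judge, 0 = jury
        "trial_days": [3, 12, 2, 5, 9, 4],
        "plaintiff_won": [1, 0, 1, 0, 0, 1],
    })

    # One-hot encode the categorical case type and split off the label.
    X = pd.get_dummies(records.drop(columns="plaintiff_won"))
    y = records["plaintiff_won"]
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.33, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    print("held-out accuracy:", model.score(X_test, y_test))

At the scale the authors describe (nearly a half-million records), models like this can begin to surface which case features and strategies correlate with favorable outcomes.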

— Joe


Cambridge Analytica and the rise of weaponized AI propaganda in political election campaigns

Cambridge Analytica, a data mining firm known for being a leader in behavioral microtargeting for election processes (and for bragging about its contribution to the successful Trump presidential campaign), is being investigated by the House Permanent Select Committee on Intelligence. See April Glaser, Congress Is Investigating Trump Campaign’s Voter Targeting Firm as Part of the Russia Probe, Slate, Oct. 11, 2017. Jared Kushner, who ran the Trump campaign’s data operations, eventually may be implicated. See Jared Kushner In His Own Words On The Trump Data Operation The FBI Is Reportedly Probing, Forbes, May 26, 2017, and Did Russians Target Democratic Voters, With Kushner’s Help?, Newsweek, May 23, 2017.

Before joining the Trump campaign, Steve Bannon was on the board of Cambridge Analytica. The company’s primary financier is hedge fund billionaire and Breitbart investor Robert Mercer. Here’s a presentation from the 2016 Concordia Summit by Alexander Nix, Cambridge Analytica’s CEO. Nix discusses the power of big data in global elections and Cambridge Analytica’s revolutionary approach to audience targeting, data modeling, and psychographic profiling for election processes around the world.

The Rise of the Weaponized AI Propaganda Machine discusses how this new automated propaganda machine is driving global politics. This is where big data meets computational psychology: automated engagement scripts prey on human emotions in a propaganda network that accelerates ideas in minutes, with political bots policing public debate. Highly recommended. See also Does Trump’s ‘Weaponized AI Propaganda Machine’ Hold Water?, Forbes, March 5, 2017. — Joe

End note: In a separate probe, the UK’s Information Commissioner is investigating Cambridge Analytica over its work on the successful Leave.EU campaign.

Implications of AI for lawyers and law schools

From the abstract for David Barnhizer’s Artificial Intelligence and Its Implications for Lawyers and Law Schools:

This brief look at the effects of technological development on law jobs and law schools is derived from research and analysis developed in a book I have been writing for the past year and a half, Artificial Intelligence, Robotics and Their Impact on Work and Democracy. Although legal education and lawyers are not any direct part of that book’s focus, the developments described there are relevant to law schools and lawyers. So, just “for fun”, the analysis offered below sets out some of the best predictions and warnings related to AI/robotics and asks readers to think about the extent to which the developments have implications for the traditional practice of law as we know it, for law schools as institutions, and for the delivery of legal education and law knowledge.

In setting the framework for this analysis I want to begin with understanding the potential of AI/robotics systems along with some predictions that are being made concerning how those technologies will rapidly alter our society and the nature of employment. A report by researchers at the London Business School concludes there will be sweeping replacement of many human workers by robotic ones within the next twenty years. Lawyers and doctors will be among those affected to a considerably greater extent than is generally understood.

— Joe

The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control

Here’s the abstract for Richard Warner and Robert H. Sloan’s The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control (July 22, 2017):

David Mindell notes in Our Robots, Ourselves, “For any apparently autonomous system, we can always find the wrapper of human control that makes it useful and returns meaningful data. In the words of a recent report by the Defense Science Board, ‘there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines.’”

Designing and using “the wrapper of human control” means making moral decisions — decisions about what ought to happen. The point is not new, as the “soldiers, sailors, airmen or Marines” reference shows. What is new is the rise of predictive analytics, the process of using large data sets in order to make predictions.

Predictive analytics greatly exacerbates the long-standing problem of how to balance the benefits of data collection and analysis against the value of privacy, and its pervasive and ever-increasing use gives the tradeoff problems an urgency that can no longer be ignored. In tackling the tradeoff issues, it is not enough merely to address obviously invidious uses, like a recent face photo-editing app from a company called FaceApp. When users asked the app to increase the “hotness” of a photo, the app made skin tones lighter. We focus on widely accepted — or at least currently tolerated — uses of predictive analytics in credit rating, targeted advertising, navigation apps, search engine page ranking, and a variety of other areas. These uses yield considerable benefits, but they also impose significant costs through misclassification in the form of a large number of false positives and false negatives. Predictive analytics not only looks into your private life to construct its profiles of you, it often misrepresents who you are.

How should society respond? Our working assumption is that predictive analytics has significant benefits and should not be eliminated, and moreover, that it is now utterly politically infeasible to eliminate it. Thus, we propose making tradeoffs between the benefits and the costs by constraining the use and distribution of information. The constraints would have to apply across a wide range of complex situations. Is there an existing system that makes relevant tradeoffs by constraining the distribution and use of information across a highly varied range of contexts? Indeed, there is: informational norms. Informational norms are social norms that constrain not only the collection, but also the use and distribution of information. We focus on the use and distribution constraints. Those constraints establish an appropriately selective flow of information in a wide range of cases.

We contend they provide an essential “wrapper of human control” for predictive analytics. The obvious objection is that the relevant norms do not exist. Technology-driven economic, social, and political developments have far outpaced the slow evolution of norms. New norms will nonetheless evolve, and existing norms will adapt to condone surveillance. Reasonable public policy requires controlling the evolution and adaptation of norms to reach desirable outcomes.

— Joe

Three keys to artificial intelligence

Futurist Richard Worzel predicts that AI may lead to “the most dramatic technological revolution we have yet experienced – even greater than the advent of computers, smartphones, or the Internet.” To help understand what is happening, Worzel identifies and analyzes three keys to AI:

  1. AI is the Swiss Army knife of technology
  2. AI is not a shrink-wrapped product, and
  3. Once AI is properly established, the domino effects occur with astonishing speed.

For details, see Three Things You Need to Know About Artificial Intelligence. — Joe

Algorithmic challenges to autonomous choice

Here’s the abstract for Michal Gal’s very interesting article, Algorithmic Challenges to Autonomous Choice (May 20, 2017):

Human choice is a foundational part of our social, economic and political institutions. This focus is about to be significantly challenged. Technological advances in data collection, data science, artificial intelligence, and communications systems are ushering in a new era in which digital agents, operated through algorithms, replace human choice with regard to many transactions and actions. While algorithms will be given assignments, they will autonomously determine how to carry them out. This game-changing technological development goes to the heart of autonomous human choice. It is therefore time to determine whether, and if so under which conditions, we are willing to give up our autonomous choice.

To do so, this article explores the rationales that stand at the basis of human choice, and how they are affected by autonomous algorithmic assistants; it conscientiously contends with the “choice paradox” which arises from the fact that the decision to turn over one’s choices to an algorithm is, itself, an act of choice. As shown, while some rationales are not harmed – and might even be strengthened – by the use of autonomous algorithmic assistants, others require us to think hard about the meaning and the role that choice plays in our lives. The article then examines whether the existing legal framework is sufficiently potent to deal with this brave new world, or whether we need new regulatory tools. In particular, it identifies and analyzes three main areas which are based on choice: consent, intent and laws protecting negative freedom.

— JH

Computer Generated Works and Copyright: Selfies, Traps, Robots, AI and Machine Learning

From the abstract for Paul Lambert’s Computer Generated Works and Copyright: Selfies, Traps, Robots, AI and Machine Learning (2017):

Since the first generation of computer generated works protected by copyright, the types of computer generated works have multiplied further. This article examines some of the scenarios involving new types of computer generated works and recent claims for copyright protection. This includes contextual consideration and comparison of monkey selfies, camera traps, robots, artificial intelligence (AI) and machine learning. While these new manifestations of copyright works are often commercially important, questions arise as to whether they are actually protected under copyright at all.

— Joe

Solum: “Our world is already inhabited by AIs. Our law is already composed of artificial meanings. The twain shall meet.”

The title of this post comes from the conclusion of Lawrence Solum’s Artificial Meaning, 89 Washington Law Review 69 (2014). Here’s a snip:

As time goes on, it seems likely that the proportion of legal content provided by AIs will grow in a fairly organic and gradual way. Indeed, the first time a human signs a contract that was generated in its entirety by an AI, the event might even escape our notice. It seems quite likely that our parsing of artificial meanings generated by AIs will simply be taken for granted. This will be no accident. Today, our social world is permeated by artificial legal meanings. Indeed, we can already begin to imagine a world in which the notion of a legal text authored by a single natural person begins to seem strange or antiquated.

Our world is already inhabited by AIs. Our law is already composed of artificial meanings. The twain shall meet.

Here’s the abstract for this very interesting essay:

This Essay investigates the concept of artificial meaning, meanings produced by entities other than individual natural persons. That investigation begins in Part I with a preliminary inquiry into the meaning of “meaning,” in which the concept of meaning is disambiguated. The relevant sense of “meaning” for the purpose of this inquiry is captured by the idea of communicative content, although the phrase “linguistic meaning” is also a rough equivalent. Part II presents a thought experiment, The Chinese Intersection, which investigates the creation of artificial meaning produced by an AI that creates legal rules for the regulation of a hyper-complex conflux of transportation systems. The implications of the thought experiment are explored in Part III, which sketches a theory of the production of communicative content by AI. Part IV returns to The Chinese Intersection, but Version 2.0 involves a twist — after a technological collapse, the AI is replaced by humans engaged in massive collaboration to duplicate the functions of the complex processes that had formerly governed the flow of automotive, bicycle, light-rail, and pedestrian traffic. The second thought experiment leads in Part V to an investigation of the production of artificial meaning by group agents — artificial persons constituted by rules that govern the interaction of natural persons. The payoff of the investigation is presented in Part VI. The communicative content created by group agents like constitutional conventions, legislatures, and teams of lawyers that draft complex transactional documents is artificial meaning, which can be contrasted with natural meaning — the communicative content of those exceptional legal texts that are produced by a single individual. This insight is key to any theory of the interpretation and construction of legal texts. A conclusion provides a speculative meditation on the implications of the new theory of artificial meaning for some of the great debates in legal theory.

Recommended. — Joe

What artificial intelligence reveals about the First Amendment

Here’s the abstract for SIRI-OUSLY 2.0: What Artificial Intelligence Reveals about the First Amendment, 101 Minnesota Law Review 2481 (2017) by Toni M. Massaro, Helen L. Norton and Margot E. Kaminski:

The First Amendment may protect speech by strong Artificial Intelligence (AI). In this Article, we support this provocative claim by expanding on earlier work, addressing significant concerns and challenges, and suggesting potential paths forward.

This is not a claim about the state of technology. Whether strong AI — as-yet-hypothetical machines that can actually think — will ever come to exist remains far from clear. It is instead a claim that discussing AI speech sheds light on key features of prevailing First Amendment doctrine and theory, including the surprising lack of humanness at its core.

Courts and commentators wrestling with free speech problems increasingly focus not on protecting speakers as speakers but instead on providing value to listeners and constraining the government’s power. These approaches to free speech law support the extension of First Amendment coverage to expression regardless of its nontraditional source or form. First Amendment thinking and practice thus have developed in a manner that permits extensions of coverage in ways that may seem exceedingly odd, counterintuitive, and perhaps even dangerous. This is not a feature of the new technologies, but of free speech law.

The possibility that the First Amendment covers speech by strong AI need not, however, rob the First Amendment of a human focus. Instead, it might encourage greater clarification of and emphasis on expression’s value to human listeners — and its potential harms — in First Amendment theory and doctrine. To contemplate — Siri-ously — the relationship between the First Amendment and AI speech invites critical analysis of the contours of current free speech law, as well as sharp thinking about free speech problems posed by the rise of AI.

Very interesting. — Joe

What if we had no clue what AI machines were actually saying to one another because they developed their own English dialect? That’s already happening.

The conversation captured above is not gibberish. It is an exchange between two bots named Bob and Alice, and they are negotiating about something. What? The AI developers at Facebook don’t know, but they believe Alice and Bob have created their own English dialect, some sort of shorthand only they understand.

Welcome the robot overlords, because apparently this communications gap between AI instruments and their programmers is not unusual. At OpenAI, the artificial intelligence lab co-founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages. At Facebook, once developers realized that Bob and Alice were compressing the English language into their own unique dialect, they shut down the bots because Facebook wants negotiation bots that are understandable to humans. For more, see Fast Co. Design’s AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?

Very interesting. — Joe