From the abstract for Paul Lambert’s Computer Generated Works and Copyright: Selfies, Traps, Robots, AI and Machine Learning (2017):
Since the first generation of computer generated works protected by copyright, the types of computer generated works have multiplied further. This article examines some of the scenarios involving new types of computer generated works and recent claims for copyright protection. This includes contextual consideration and comparison of monkey selfies, camera traps, robots, artificial intelligence (AI) and machine learning. While often commercially important, questions arise as to whether these new manifestations of copyright works are actually protected under copyright at all.
The title of this post comes from the conclusion of Lawrence Solum’s Artificial Meaning, 89 Washington Law Review 69 (2014). Here’s a snip:
As time goes on, it seems likely that the proportion of legal content provided by AIs will grow in a fairly organic and gradual way. Indeed, the first time a human signs a contract that was generated in its entirety by an AI, the event might even escape our notice. It seems quite likely that our parsing of artificial meanings generated by AIs will simply be taken for granted. This will be no accident. Today, our social world is permeated by artificial legal meanings. Indeed, we can already begin to imagine a world in which the notion of a legal text authored by a single natural person begins to seem strange or antiquated.
Our world is already inhabited by AIs. Our law is already composed of artificial meanings. The twain shall meet.
Here’s the abstract for this very interesting essay:
This Essay investigates the concept of artificial meaning, meanings produced by entities other than individual natural persons. That investigation begins in Part I with a preliminary inquiry into the meaning of “meaning,” in which the concept of meaning is disambiguated. The relevant sense of “meaning” for the purpose of this inquiry is captured by the idea of communicative content, although the phrase “linguistic meaning” is also a rough equivalent. Part II presents a thought experiment, The Chinese Intersection, which investigates the creation of artificial meaning produced by an AI that creates legal rules for the regulation of a hyper-complex conflux of transportation systems. The implications of the thought experiment are explored in Part III, which sketches a theory of the production of communicative content by AI. Part IV returns to The Chinese Intersection, but Version 2.0 involves a twist — after a technological collapse, the AI is replaced by humans engaged in massive collaboration to duplicate the functions of the complex processes that had formerly governed the flow of automotive, bicycle, light-rail, and pedestrian traffic. The second thought experiment leads in Part V to an investigation of the production of artificial meaning by group agents — artificial persons constituted by rules that govern the interaction of natural persons. The payoff of the investigation is presented in Part VI. The communicative content created by group agents like constitutional conventions, legislatures, and teams of lawyers that draft complex transactional documents is artificial meaning, which can be contrasted with natural meaning — the communicative content of those exceptional legal texts that are produced by a single individual. This insight is key to any theory of the interpretation and construction of legal texts. A conclusion provides a speculative meditation on the implications of the new theory of artificial meaning for some of the great debates in legal theory.
Recommended. — Joe
Here’s the abstract for SIRI-OUSLY 2.0: What Artificial Intelligence Reveals about the First Amendment, 101 Minnesota Law Review 2481 (2017) by Toni M. Massaro, Helen L. Norton and Margot E. Kaminski:
The First Amendment may protect speech by strong Artificial Intelligence (AI). In this Article, we support this provocative claim by expanding on earlier work, addressing significant concerns and challenges, and suggesting potential paths forward.
This is not a claim about the state of technology. Whether strong AI — as-yet-hypothetical machines that can actually think — will ever come to exist remains far from clear. It is instead a claim that discussing AI speech sheds light on key features of prevailing First Amendment doctrine and theory, including the surprising lack of humanness at its core.
Courts and commentators wrestling with free speech problems increasingly focus not on protecting speakers as speakers but instead on providing value to listeners and constraining the government’s power. These approaches to free speech law support the extension of First Amendment coverage to expression regardless of its nontraditional source or form. First Amendment thinking and practice thus have developed in a manner that permits extensions of coverage in ways that may seem exceedingly odd, counterintuitive, and perhaps even dangerous. This is not a feature of the new technologies, but of free speech law.
The possibility that the First Amendment covers speech by strong AI need not, however, rob the First Amendment of a human focus. Instead, it might encourage greater clarification of and emphasis on expression’s value to human listeners — and its potential harms — in First Amendment theory and doctrine. To contemplate — Siri-ously — the relationship between the First Amendment and AI speech invites critical analysis of the contours of current free speech law, as well as sharp thinking about free speech problems posed by the rise of AI.
Very interesting. — Joe
The conversation captured above is not gibberish. It is an exchange between two bots named Bob and Alice, and they are negotiating over something. Over what, exactly? The AI developers at Facebook don’t know, but they believe Alice and Bob created their own dialect of English, a shorthand only the two of them understand.
Welcome the robot overlords, because this communication gap between AI systems and their programmers is apparently not unusual. At OpenAI, the artificial intelligence lab co-founded by Elon Musk, an experiment succeeded in letting AI bots develop languages of their own. At Facebook, once the developers realized that Bob and Alice were compressing English into their own unique dialect, they shut the bots down because Facebook wants negotiation bots that humans can understand. For more, see Fast Co. Design’s AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?
Very interesting. — Joe