In Defining Autonomy in the Context of Tort Liability: Is Machine Learning Indicative of Robotic Responsibility?, Katherine D. Sheriff (Emory Law) “focuses on the extent to which machine learning heightens robotic accountability, and asks, at what point ought the law hold robots liable because the decision creating the harm was not a function of software programming on the front end, but a function of robotic choice?” Interesting. — Joe
From the Summary of U.S. Sanctions and Russia’s Economy (Feb. 17, 2017 R43895)
In response to Russia’s annexation of the Crimean region of neighboring Ukraine and its support of separatist militants in Ukraine’s east, the United States imposed a number of targeted economic sanctions on Russian individuals, entities, and sectors. The United States coordinated its sanctions with other countries, particularly the European Union (EU). Russia retaliated against sanctions by banning imports of certain agricultural products from countries imposing sanctions, including the United States.
U.S. policymakers are debating the use of economic sanctions in U.S. foreign policy toward Russia, including whether sanctions should be kept in place or further tightened. A key question in this debate is the impact of the Ukraine-related sanctions on Russia’s economy and U.S. economic interests in Russia.
See also the US State Department’s Ukraine and Russia Sanctions resources page. — Joe
On Jan. 22, 2017, Jennifer Taub (Vermont Law School), whose research and teaching focus on corruption, corporate political spending, and the links between politics and money, tweeted this:
Let’s plan a nationwide #DivestDonald and #showusyourtaxes protest for Saturday, April 15
More than 130 marches are now expected to take place Saturday, April 15th. A full list of times and places is available on the Tax March website. The DC march, which kicks off at 12pm at the US Capitol West Front Fountain, is expected to be the biggest: more than 50,000 people have listed themselves on Facebook as interested or attending.
For background, see Tax March: how a law professor sparked a global event to demand Trump’s returns by Amber Jamieson, The Guardian.
In Law, Belief, and Aspiration, Arden Rowell examines the relationships between what the law is, what people believe the law to be, and what people aspire for the law to be. The article takes seriously the possibility that people do not know perfectly what the law is, and tests the hypothesis that people’s beliefs about the law may sometimes be better explained by people’s aspirations for what the law should be, rather than what the law actually is. Findings from the study:
The study finds that people often do not know the laws under which they live, even when they themselves believe those laws to be important. For example, 1 in 6 participants held inaccurate beliefs about whether their state has a state income tax; 1 in 4 participants held inaccurate beliefs about whether their state has a death penalty; 1 in 3 held inaccurate beliefs about whether their state has a waiting period for purchasing handguns; and fewer than half of participants knew whether they are legally required to report felonies. Somewhat disturbingly, participants were no more likely to know the law when they indicated that the topic was important, although they were more likely to know the law accurately when they felt confident about their knowledge.
Furthermore, when people’s beliefs about the law are inaccurate, they tend to assume that the law reflects their aspirations for it: that the law already is whatever they believe it should be. In some cases, this wishful thinking is so strong that aspiration exceeds the actual rule in predicting people’s belief — or in other words, you can sometimes predict people’s beliefs about what the law is better by knowing what they think the rule should be, than by knowing what the rule in fact is.
These findings have important implications for developing behavioral models that predict how people will respond to law: for example, behavioral theorists might question whether anyone is deterred by a law that no one knows. The findings also point to normative and democratic concerns: where citizens rely on a mistaken belief that their aspirations are already reflected in the law, they may not push for legal change, and even widely-held aspirations might fail to find reflection in the law.
Here’s the abstract for How the Machine ‘Thinks:’ Understanding Opacity in Machine Learning Algorithms by Jenna Burrell (Berkeley, School of Information):
This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: (1) opacity as intentional corporate or state secrecy, (2) opacity as technical illiteracy, and (3) an opacity that arises from the characteristics of machine learning algorithms and the scale required to apply them usefully. The analysis in this article gets inside the algorithms themselves. I cite existing literatures in computer science, known industry practices (as they are publicly presented), and do some testing and manipulation of code as a form of lightweight code audit. I argue that recognizing the distinct forms of opacity that may be coming into play in a given application is key to determining which of a variety of technical and non-technical solutions could help to prevent harm.
Interesting. — Joe
“ROSS Intelligence, the artificial intelligence legal research platform, outperforms Westlaw and LexisNexis in finding relevant authorities, in user satisfaction and confidence, and in research efficiency, and is virtually certain to deliver a positive return on investment” wrote Bob Ambrogi about the findings of a benchmark report by Blue Hill Research. For details, see ROSS AI Plus Wexis Outperforms Either Westlaw or LexisNexis Alone, Study Finds. — Joe
Out of 323 challenges recorded by ALA’s Office for Intellectual Freedom, the “Top Ten Most Challenged Books in 2016” are:
- This One Summer written by Mariko Tamaki and illustrated by Jillian Tamaki. Reasons: challenged because it includes LGBT characters, drug use and profanity, and it was considered sexually explicit with mature themes.
- Drama written and illustrated by Raina Telgemeier. Reasons: challenged because it includes LGBT characters, was deemed sexually explicit, and was considered to have an offensive political viewpoint.
- George written by Alex Gino. Reasons: challenged because it includes a transgender child, and the “sexuality was not appropriate at elementary levels.”
- I Am Jazz written by Jessica Herthel and Jazz Jennings, and illustrated by Shelagh McNicholas. Reasons: challenged because it portrays a transgender child and because of language, sex education, and offensive viewpoints.
- Two Boys Kissing written by David Levithan. Reasons: challenged because its cover has an image of two boys kissing, and it was considered to include sexually explicit LGBT content.
- Looking for Alaska written by John Green. Reasons: challenged for a sexually explicit scene that may lead a student to “sexual experimentation.”
- Big Hard Sex Criminals written by Matt Fraction and illustrated by Chip Zdarsky. Reason: challenged because it was considered sexually explicit.
- Make Something Up: Stories You Can’t Unread written by Chuck Palahniuk. Reasons: challenged for profanity, sexual explicitness, and being “disgusting and all around offensive.”
- Little Bill (series) written by Bill Cosby and illustrated by Varnette P. Honeywood. Reason: challenged because of criminal sexual allegations against the author.
- Eleanor & Park written by Rainbow Rowell. Reason: challenged for offensive language.
Source: ALA. — Joe
From the abstract of Michael Dorf and Sidney Tarrow’s Stings and Scams: ‘Fake News,’ the First Amendment, and the New Activist Journalism:
Constitutional law, technological innovations, and the rise of a cultural “right to know” have recently combined to yield “fake news,” as illustrated by an anti-abortion citizen-journalist sting operation that scammed Planned Parenthood. We find that the First Amendment, as construed by the Supreme Court, offers scant protection for activist journalists to go undercover to uncover wrongdoing, while providing substantial protection for the spread of falsehoods. By providing activists the means to reach sympathetic slices of the public, the emergence of social media has returned journalism to its roots in political activism, at the expense of purportedly objective and truthful investigative reporting. But the rise of “truthiness” — that is, falsehoods with the ring of truth, diffused through new forms of communication — threatens the integrity of the media. How to respond to these contradictions is a growing problem for advocates of free speech and liberal values more generally.
The War Powers Resolution: Concepts and Practice (CRS Report R42699, March 28, 2017) discusses and assesses the War Powers Resolution, P.L. 93-148, and its application since enactment in 1973. It provides detailed background on various cases in which it was used, as well as cases in which issues of its applicability were raised. See also War Powers Resolution: Presidential Compliance (RL33532, Sept. 25, 2012).
The Law Library of Congress has this guide which is intended to serve as an introduction to research on the War Powers Resolution.
Here’s Daniel Katz’s slide deck entitled Artificial Intelligence and the Law: A Primer (March 17, 2017). Katz is an associate professor at Chicago-Kent College of Law. — Joe
The Hatch Act applies to all federal officers and employees—other than the President and Vice President—in the agencies, departments, bureaus, and offices of the executive branch of the federal government. Some pundits have begun referencing it in connection with the Trump-Russia investigation.
This CRS report, Hatch Act Restrictions on Federal Employees’ Political Activities in the Digital Age (April 13, 2016 R44469), “examines the history of regulation of federal employees’ partisan political activity under the Hatch Act and related federal regulations. It discusses the scope of the application of these restrictions to different categories of employees and provides a background analysis of the general restrictions currently in place. Finally, it analyzes potential issues that have arisen and interpretations that have been offered related to the application of these restrictions to new platforms of activity, for example, email, social media, and telework.” — Joe
“Hey kids, today I’m going to talk about the strangest rules, regulations and ordinances around the globe.” There’s NSFW language and illustrations but… . — Joe
John Nay (Vanderbilt University, School of Engineering) conducted the most comprehensive analysis of law-making forecasting to date last year. In Predicting and Understanding Law-Making with Machine Learning, he writes:
We compared five models across three performance measures and two data conditions on 68,863 bills over 14 years. We created a model with consistently high predictive performance that effectively integrates heterogeneous data. A model using only bill text outperforms a model using only bill context for newest data, while context-only outperforms text-only for oldest data. In all conditions text consistently adds predictive power after controlling for non-textual variables.
In addition to accurate predictions, we are able to improve our understanding of bill content by using a text model designed to explore differences across chamber and enactment status for important topics. Our textual analysis serves as an exploratory tool for investigating subtle distinctions across categories that were previously impossible to investigate at this scale. The same analysis can be applied to any words in the large legislative vocabulary. The global sensitivity analysis of the full model provides insights into the factors affecting predicted probabilities of enactment. For instance, when predicting bills as they are first introduced, the text of the bill and the proportion of the chamber in the bill sponsor’s party have similarly strong positive effects. The full text of the bill is by far the most important predictor when using the most up-to-date data. The oldest data model relies more on title predictions than the newest data model, which makes sense given that titles rarely change after bill introduction. Comparing effects across time conditions and across models not including text suggests that controlling for accurate estimates of the text probability is important for estimating the effects of non-textual variables.
Although the effect estimates are not causal and estimates on predictors correlated with each other may be biased, they represent our best estimates of predictive relationships within a model with the strongest predictive performance and are thus useful for understanding law-making. This methodology can be applied to analyze any predictive model by treating it as a “black-box” data-generating process, therefore predictive power of a model can be optimized and subsequent analysis can uncover interpretable relationships between predictors and output. Our work provides guidance on effectively combining text and context for prediction and analysis of complex systems with highly imbalanced outcomes that are related to textual data. Our system for determining the probability of enactment across the thousands of bills currently under consideration (predictgov.com/projects/congress) focuses effort on legislation that is likely to matter, allowing the public to identify policy signal amid political and procedural noise.
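The “black-box” sensitivity approach Nay describes can be illustrated with a toy sketch (the data, weights, and variable names below are hypothetical stand-ins, not the paper’s actual model or code): permute one predictor at a time and measure how much the fitted model’s accuracy drops, without ever looking inside the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "bill" data: x0 = text-model probability, x1 = sponsor-party chamber
# share, x2 = irrelevant noise. Enactment depends mostly on x0 and x1.
X = rng.uniform(size=(1000, 3))
y = (0.6 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * rng.normal(size=1000) > 0.5).astype(int)

def predict(X, w=np.array([0.6, 0.4, 0.0])):
    """Stand-in for any fitted model, treated purely as a black box."""
    return (X @ w > 0.5).astype(int)

def permutation_sensitivity(model, X, y):
    """Drop in accuracy when each predictor is shuffled independently."""
    base = (model(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break x_j's link to y
        drops.append(base - (model(Xp) == y).mean())
    return drops

drops = permutation_sensitivity(predict, X, y)
# Large drops for x0 and x1, roughly zero for the noise column x2.
```

Permuting a predictor that carries real signal (here the text probability and party share) degrades accuracy, while permuting an irrelevant column does not, which is the intuition behind treating any model as a data-generating process and probing it from the outside.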
From the April 5, 2017 letter to the chairman and ranking member of the Senate Committee on Homeland Security and Governmental Affairs:
We support the OPEN Government Data Act for several reasons. First and foremost, this legislation would institutionalize the federal government’s commitment to open data and allow the United States to remain a world leader on open data. Second, adopting a policy of open by default for government data would ensure that the value of this public resource would continue to grow as the government unlocks and creates new data sets. Third, a firm commitment to providing open data as a public resource would encourage businesses, non-profits, and others to invest in innovative tools that make use of government data. And, according to the Congressional Budget Office’s review of the 2016 unanimously passed Senate bill, taking these steps would not have a significant impact on agency spending.
Here’s the text of S. 760. — Joe
Paul Harpur’s new book, Discrimination, Copyright and Equality: Opening the e-Book for the Print-Disabled (Cambridge UP, March 31, 2017) explores how restrictive copyright laws deny access to information for the print disabled, despite equality laws protecting access. From the book’s blurb:
While equality laws operate to enable access to information, these laws have limited power over the overriding impact of market forces and copyright laws that focus on restricting access to information. Technology now creates opportunities for everyone in the world, regardless of their abilities or disabilities, to be able to access the written word – yet the print disabled are denied reading equality, and have their access to information limited by laws protecting the mainstream use and consumption of information. The Convention on the Rights of Persons with Disabilities and the World Intellectual Property Organization’s Marrakesh Treaty have swept in a new legal paradigm. This book contributes to disability rights scholarship, and builds on ideas of digital equality and rights to access in its analysis of domestic disability anti-discrimination, civil rights, human rights, constitutional rights, copyright and other equality measures that promote and hinder reading equality.
Recommended. — Joe
From the abstract of Colorado law prof Harry Surden’s very brief think piece, Values Embedded in Legal Artificial Intelligence:
Technological systems can have social values “embedded” in their design. This means that certain technologies, when they are used, can have the effect of promoting or inhibiting particular societal values over others. Although sometimes the embedding of values is intentional, often it is unintentional, and when it occurs, it is frequently difficult to observe. The fact that values are embedded in technological systems becomes increasingly significant when these systems are used in the application of law.
Some legal technological systems have started to use machine-learning, formal rule representation, and other artificial intelligence techniques. Systems that use artificial intelligence in the legal context raise novel, and perhaps less familiar, issues of embedded values that require particular attention. This article explores challenges posed by values embedded in legal technological systems, particularly those that employ artificial intelligence.
“For a full understanding of their search needs just taking stock of their wishes is not going to suffice, since legal professionals are not capable of describing the features of a system that does not yet exist,” write Marc van Opijnen and Cristiana Santos in On the Concept of Relevance in Legal Information Retrieval, 25 Artificial Intelligence and Law 65-87 (2017). “To understand the juristic mindset, it is of the utmost importance to follow meticulously their day-to-day retrieval quests.” Here’s the paper’s abstract:
The concept of ‘relevance’ is crucial to legal information retrieval, but because of its intuitive understanding it goes undefined too easily and unexplored too often. We discuss a conceptual framework on relevance within legal information retrieval, based on a typology of relevance dimensions used within general information retrieval science, but tailored to the specific features of legal information. This framework can be used for the development and improvement of legal information retrieval systems.
In the abstract for Judging Ordinary Meaning, Thomas R. Lee and Stephan C. Mouritsen write:
We identify theoretical and operational deficiencies in our law’s attempts to credit the ordinary meaning of the law and present linguistic theories and tools to assess it more reliably. Our framework examines iconic problems of ordinary meaning — from the famous “no vehicles in the park” hypothetical to two Supreme Court cases (United States v. Muscarello and Taniguchi v. Kan Pacific Saipan) and a Seventh Circuit opinion of Judge Richard Posner (in United States v. Costello). We show that the law’s conception of ordinary meaning implicates empirical questions about language usage. And we present linguistic tools from a field known as corpus linguistics that can help to answer these empirical questions.
When we speak of ordinary meaning we are asking an empirical question — about the sense of a word or phrase that is most likely implicated in a given linguistic context. Linguists have developed computer-aided means of answering such questions. We propose to import those methods into the law of interpretation. And we consider and respond to criticisms of their use by lawyers and judges.
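The corpus-linguistics tools Lee and Mouritsen invoke come down to counting usage patterns across large text collections. A minimal sketch, using a made-up six-sentence “corpus” rather than a real linguistic corpus, counts the collocates of a node word — the kind of evidence one might gather to ask whether “carries a firearm” ordinarily implies a vehicle (the Muscarello question):

```python
import re
from collections import Counter

# Hypothetical mini-corpus for illustration only (not a real corpus).
corpus = """
He carried the gun in his truck. She carried the rifle on her shoulder.
The courier carried the package in a van. He carried the firearm on his hip.
They carried supplies in the wagon. She carried the weapon on her person.
"""

def collocates(text, node, window=4):
    """Count words appearing within `window` tokens of each use of `node`."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == node:
            lo, hi = max(0, i - window), i + window + 1
            counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

counts = collocates(corpus, "carried")
# Compare vehicle-flavored collocates ("in ... truck/van") against
# on-the-person collocates ("on ... shoulder/hip") to gauge ordinary usage.
```

Real corpus studies run the same kind of query against collections like the Corpus of Contemporary American English at a vastly larger scale; the point is that “ordinary meaning” becomes a countable, empirical question rather than a judicial intuition.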
Interesting. — Joe
To comply with the Presidential Records Act, the Trump administration has agreed to archive all of Trump’s tweets including the ones he deletes or corrects. No word on how precisely the White House will do that. See Stephen Braun’s National Archives to White House: Save all Trump tweets for more. Therein Braun also reports that apparently some senior administration staff are using their private RNC email accounts. — Joe
Google Search Engine Results Pages (SERPs) have changed dramatically over the past 20 years. In A visual history of Google SERPs: 1996 to 2017 (Search Engine Watch), Clark Boyd writes:
The original lists of static results, comprised of what we nostalgically term ‘10 blue links’, have evolved into multi-media, cross-device, highly-personalized interfaces that can even adapt as we speak to them. There are now images, GIFs, news articles, videos, and podcasts in SERPs, all powered by algorithms that grow evermore sophisticated through machine learning.
Search Engine Watch’s infographic tracing the evolution of Google’s search engine results pages is available here. Recommended. It could be used in a teachable moment about the consequences of algorithmic change generally before moving to the great unknown of algorithmic changes engineered by WEXIS and displayed in WEXIS search output. — Joe