Is this going to be a trend? BigLaw firm libraries reporting to firm’s chief marketing officer

I hope not. See the ATL post by BNA’s Scott Mozarsky (President of Bloomberg Law and Bloomberg BNA’s Legal Division), Large Law’s Not-So-Secret Weapon In Marketing And BD: The Library, observing that he knows of only four of the 50 largest firms where the law library reports to the chief marketing officer. See also Greg Lambert’s response to Mozarsky’s post, Who Leads the Law Library? How About Law Librarians? — Joe

Is Wikipedia a reliable legal authority now?

On Associate’s Mind, Keith Lee identifies recent court opinions that cite (or reject) Wikipedia as an authority. He writes:

Every Circuit has judicial opinions that cite Wikipedia as a reliable source for general knowledge. Who Ludacris is. Explaining Confidence Intervals. But some courts within the same Circuit will be dismissive of Wikipedia as a source of general information. There is no definitive answer. Judges seem to make determinations about Wikipedia’s reliability on a case-by-case basis. If you want to cite Wikipedia in a brief and not have a judge be dismissive of it, it’s probably worth your time running a quick search to see where the judge stands on the topic.

Hat tip to PinHawk’s Librarian News Digest on the PinHawk Blog. — Joe

Holding algorithms accountable

Here’s the abstract of Accountable Algorithms, 165 University of Pennsylvania Law Review ___ (2017) (forthcoming). This very interesting article was written by Joshua Kroll, Joanna Huey, Solon Barocas, Edward Felten, Joel Reidenberg, David Robinson and Harlan Yu.

Abstract:

Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for an IRS audit, and grant or deny immigration visas.

The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decision-makers and often fail when applied to computers instead: for example, how do you judge the intent of a piece of software? Additional approaches are needed to make automated decision systems — with their potentially incorrect, unjustified or unfair results — accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness.

We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the complexity of code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it permits tax cheats or terrorists to game the systems determining audits or security screening.

The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities — more subtle and flexible than total transparency — to design decision-making algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of algorithms, but also — in certain cases — the governance of decision-making in general. The implicit (or explicit) biases of human decision-makers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterwards.

The technological tools introduced in this Article apply widely. They can be used in designing decision-making processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decision-makers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society.

Part I of this Article provides an accessible and concise introduction to foundational computer science concepts that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decision or the process by which the decision was reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how algorithmic decision-making may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly in Part IV, we propose an agenda to further synergistic collaboration between computer science, law and policy to advance the design of automated decision processes for accountability.
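For readers who want a concrete feel for the kind of tool the authors describe, one simple building block is a cryptographic commitment: an agency publishes a fingerprint of its inputs and decision rule before acting, then reveals them afterwards so outsiders can re-run the rule and confirm it was applied as announced. The Python sketch below is my own minimal illustration of that idea, not code from the article; the lottery data and names are hypothetical.

```python
import hashlib
import hmac
import json
import secrets

def commit(data: bytes) -> tuple[str, bytes]:
    """Publish a commitment to `data` before the decision is made.

    The random salt keeps the commitment hiding (observers learn nothing
    about the data yet), while the hash keeps it binding (the agency
    cannot later swap in different data).
    """
    salt = secrets.token_bytes(32)
    digest = hashlib.sha256(salt + data).hexdigest()
    return digest, salt  # digest is published now; salt is held back until the reveal

def verify(commitment: str, salt: bytes, data: bytes) -> bool:
    """After the decision, anyone can check that the revealed data matches."""
    return hmac.compare_digest(commitment,
                               hashlib.sha256(salt + data).hexdigest())

# Hypothetical lottery: commit to the applicant pool before the drawing,
# announce the winners, then reveal the data so outsiders can re-run the
# announced rule and confirm procedural regularity.
applicants = json.dumps(["A-1001", "A-1002", "A-1003"]).encode()
commitment, salt = commit(applicants)

# ... later, after the winners are announced ...
assert verify(commitment, salt, applicants)
```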

Recommended. — Joe

Does Facebook’s personalization of social media pages through the use of machine-learning algorithms constitute the “development” of content under the Communications Decency Act?

Catherine A. Tremble attempts to answer that question in her forthcoming Fordham Law Review note, Wild Westworld: The Application of Section 230 of the Communications Decency Act to Social Networks’ Use of Machine-Learning Algorithms.

Here’s the abstract:

On August 10th, 2016, a complaint filed in the Eastern District of New York formally accused Facebook of aiding the execution of terrorist attacks. The complaint depicted user-generated posts and groups promoting and directing the incitement of terrorist activities. Under section 230 of the Communications Decency Act (CDA), Interactive Service Providers (ISPs), such as Facebook, cannot be held liable for user-generated content where the ISP did not create or develop the information. However, this case stands out because it seeks to hold Facebook liable not only for the content of third parties, but also for the effect its personalized machine-learning algorithms — or “services” — have had on the ability of terrorists to orchestrate and execute attacks. By alleging that Facebook’s conduct goes beyond the mere act of publication, and includes the actual services’ effect on terrorists’ abilities to more effectively execute attacks, the complaint seeks to prevent the court from granting section 230 immunity to Facebook.

This Note argues that Facebook’s services — specifically the personalization of social media pages through the use of machine-learning algorithms — constitute the “development” of content and as such do not qualify for immunity under section 230 of the CDA. Recognizing the challenge of applying a static statute to a shifting technological landscape, this Note analyzes recent jurisprudential evolutions in section 230 doctrine to revise the original analytical framework applied in early cases. This Framework is guided by congressional and public policy goals but evolves to demonstrate awareness of technological evolution and ability. It specifically tailors section 230 immunity to account for behavioral data mined for ISP use, and the effect the use of that data has on users — two issues that courts have yet to confront. This Note concludes that, under the updated section 230 framework, personalized machine-learning algorithms made effective through the collection of individualized behavioral data make ISPs co-developers of content and as such bar them from section 230 immunity.

— Joe

Artificial intelligence in legal research

ILTA’s Beyond the Hype: Artificial Intelligence in Legal Research webinar, held last month, features ROSS Intelligence CEO and co-founder Andrew Arruda. The link takes you to the archived webinar. Interesting. — Joe

March Madness begins with the release of the 2018 US News Law School Rankings

Here. — Joe

The ethics of algorithms

From the abstract of The Ethics of Algorithms: Mapping the Debate, Big Data & Society, Vol. 3(2) (2016), by Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter and Luciano Floridi (all Oxford Internet Institute, Univ. of Oxford):

In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

— Joe

The big three legal research providers in the small law market

Lexis, Westlaw, and Fastcase are in a virtual tie in the small law market according to a recent survey conducted by Clio, the law practice management software company. The results of the survey revealed the following small law market shares:

  1. Westlaw, 20.58 percent
  2. Fastcase, 20.35 percent
  3. LexisNexis, 20.21 percent

Hat tip to Bob Ambrogi’s LawSites post, which includes the full pie chart and table. — Joe

Risk assessment algorithm used in bail hearings

“As part of a bold effort at bail reform,” writes Ephrat Livni, “the state of New Jersey replaced bail hearings with algorithmically informed risk assessments this year. Anyone’s eligible for release, no money down, if they meet certain criteria. To ensure unbiased, scientific decisions, judges use a machine-generated score. … The Public Safety Assessment (PSA) formula that New Jersey is now using for bail—along with about 20 other jurisdictions that employ it less extensively—aims to make risk calculations neutral, purely evidence-based, and premised on data. It compares risks and outcomes in a database of 1.5 million cases from 300 jurisdictions nationwide, producing a score of one to six for the defendant based on the information.”
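However the PSA weights its inputs, the score a judge sees is ultimately a fixed arithmetic function of a handful of facts about the defendant. The Python sketch below is a deliberately simplified, hypothetical scoring function of that general shape; the factors, weights and cut points are invented for illustration and are not the real instrument’s.

```python
from dataclasses import dataclass

@dataclass
class Defendant:
    age_at_arrest: int
    pending_charge: bool
    prior_convictions: int
    prior_failures_to_appear: int

def raw_score(d: Defendant) -> int:
    """Sum weighted risk factors into a raw score (hypothetical weights)."""
    score = 0
    score += 2 if d.age_at_arrest < 23 else 0
    score += 1 if d.pending_charge else 0
    score += min(d.prior_convictions, 3)             # cap this factor's contribution
    score += 2 * min(d.prior_failures_to_appear, 2)
    return score

def scaled_score(d: Defendant) -> int:
    """Map the raw score onto a 1-to-6 scale like the one judges see."""
    return min(6, 1 + raw_score(d) * 5 // 10)        # hypothetical cut points

print(scaled_score(Defendant(age_at_arrest=21, pending_charge=True,
                             prior_convictions=2, prior_failures_to_appear=1)))
```

The policy questions Livni raises are about what goes into such a function and how its weights were derived, not about the arithmetic itself.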

“The automated recommendation”, Livni adds, “serves as a guide and doesn’t replace judicial discretion. Still, the program raises questions about the claimed neutrality of machine reasoning, and the wisdom of reliance on mechanical judgment.” For more, see In the US, some criminal court judges now use algorithms to guide decisions on bail. — Joe

Balkin’s three basic laws of robotics for the algorithmic society

From the abstract of Yale Law prof Jack Balkin’s The Three Laws of Robotics in the Age of Big Data, 78 Ohio State Law Journal ___ (2017) (forthcoming):

This essay introduces these basic legal principles using four key ideas: (1) the homunculus fallacy; (2) the substitution effect; (3) the concept of information fiduciaries; and (4) the idea of algorithmic nuisance.

The homunculus fallacy is the attribution of human intention and agency to robots and algorithms. It is the false belief there is a little person inside the robot or program who has good or bad intentions.

The substitution effect refers to the multiple effects on social power and social relations that arise from the fact that robots, AI agents, and algorithms substitute for human beings and operate as special-purpose people.

The most important issues in the law of robotics require us to understand how human beings exercise power over other human beings mediated through new technologies. The “three laws of robotics” for our Algorithmic Society, in other words, should be laws directed at human beings and human organizations, not at robots themselves.

Behind robots, artificial intelligence agents and algorithms are governments and businesses organized and staffed by human beings. A characteristic feature of the Algorithmic Society is that new technologies permit both public and private organizations to govern large populations. In addition, the Algorithmic Society also features significant asymmetries of information, monitoring capacity, and computational power between those who govern others with technology and those who are governed.

With this in mind, we can state three basic “laws of robotics” for the Algorithmic Society:

First, operators of robots, algorithms and artificial intelligence agents are information fiduciaries who have special duties of good faith and fair dealing toward their end-users, clients and customers.

Second, privately owned businesses who are not information fiduciaries nevertheless have duties toward the general public.

Third, the central public duty of those who use robots, algorithms and artificial intelligence agents is not to be algorithmic nuisances. Businesses and organizations may not leverage asymmetries of information, monitoring capacity, and computational power to externalize the costs of their activities onto the general public. The term “algorithmic nuisance” captures the idea that the best analogy for the harms of algorithmic decision making is not intentional discrimination but socially unjustified “pollution” – that is, using computational power to make others pay for the costs of one’s activities.

Obligations of transparency, due process and accountability flow from these three substantive requirements. Transparency – and its cousins, accountability and due process – apply in different ways with respect to all three principles. Transparency and/or accountability may be an obligation of fiduciary relations, they may follow from public duties, and they may be a prophylactic measure designed to prevent unjustified externalization of harms or in order to provide a remedy for harm.

Very interesting. — Joe

Primer on artificial intelligence in legal services

Here’s the blurb for Joanna Goodman’s Robots in Law: How Artificial Intelligence is Transforming Legal Services:

Although 2016 was a breakthrough year for artificial intelligence (AI) in legal services in terms of market awareness and significant take-up, legal AI represents evolution rather than revolution. Since the first ‘robot lawyers’ started receiving mainstream press coverage, many law firms, other legal service providers and law colleges are being asked what they are doing about AI. Robots in Law: How Artificial Intelligence is Transforming Legal Services is designed to provide a starting point in the form of an independent primer for anyone looking to get up to speed on AI in legal services. The book is organized into four distinct sections:

Part I: Legal AI – Beyond the hype
Part II: Putting AI to work
Part III: AI giving back – Return on investment
Part IV: Looking ahead

The first three present an in-depth overview, and analysis, of the current legal AI landscape; the final section includes contributions from AI experts with connections to the legal space, on the prospects for legal AI in the short-term future. Along with the emergence of New Law and the burgeoning lawtech start-up economy, AI is part of a new dynamic in legal technology and it is here to stay. The question now is whether AI will find its place as a facilitator of legal services delivery, or whether it will initiate a shift in the value chain that will transform the legal business model.

For more, see Bob Ambrogi’s This Week In Legal Tech column (ATL). — Joe

LexBlog launches Fastcase integration

Law bloggers frequently cite to primary sources but most offer no links to them because the sources they use sit behind a paywall, be it Bloomberg, LexisNexis or Thomson Reuters. For LexBlog bloggers, the paywall problem has been resolved by the integration of Fastcase’s legal search service into LexBlog’s WordPress platform. Now, clicking a cited case in a LexBlog post displays the primary law, sourced from Fastcase, within the same browser interface.
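I don’t know the mechanics of LexBlog’s implementation, but the general pattern is easy to picture: scan the post body for case citations and wrap each one in a link that resolves to free full text. Here is a hypothetical Python sketch of that pattern; the citation regex and the URL are placeholders for illustration, not the actual integration.

```python
import re
from urllib.parse import quote

# Hypothetical pattern for simple reporter citations, e.g. "410 U.S. 113".
# Real citation parsing is far messier; this is only an illustration.
CITATION = re.compile(r"\b(\d{1,4})\s+(U\.S\.|F\.3d|F\.2d|N\.E\.2d)\s+(\d{1,5})\b")

def link_citations(html: str, base_url: str = "https://example.com/case?cite=") -> str:
    """Wrap each citation found in the post body in a hyperlink."""
    def repl(m: re.Match) -> str:
        cite = m.group(0)
        return f'<a href="{base_url}{quote(cite)}">{cite}</a>'
    return CITATION.sub(repl, html)

post = "<p>The court relied on Roe v. Wade, 410 U.S. 113 (1973).</p>"
print(link_citations(post))
```

For details, see Kevin O’Keefe’s LexBlog launches Fastcase integration. — Joe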

LexisNexis sued by consumer for shoddy editorial work

Hoping to represent a class of consumers who bought LN’s New York Landlord-Tenant Law (aka the Tanbook), the law firm of Himmelstein, McConnell, Gribben, Donoghue & Joseph brought a Feb. 23 complaint against the publisher in Manhattan Supreme Court.

“Rather than an authoritative source of state statutes, laws and regulations, the Tanbook, which is represented by the defendant as complete and unedited, is instead, at least as pertains to those involving rent regulated housing in New York rife with omissions and inaccuracies, rendering it of no value to the attorneys, lay people, or judges who use it,” the 25-page complaint states.

Details at Class Calls LexisNexis Publication Totally Useless (Courthouse News Service). Hat tip to Jean O’Grady; see also her Dewey B Strategic post. — Joe

RIP Eugene Garfield, September 16, 1925 – February 26, 2017

Sad news to report. On February 26th, Eugene Garfield, a giant in the LIS field who founded the Institute for Scientific Information and The Scientist, passed away. Here’s the link to The Scientist’s obituary and a review of his contributions. — Joe

Advice on how to respond to a Trump Twitter attack by Cleary Gottlieb

Cleary Gottlieb has issued a brief memo on how clients should respond to a social media attack. The memo clearly focuses on a Trump Twitter attack. “The advice,” writes Villanova law prof Louis J. Sirico, Jr. on Legal Skills Prof Blog, “is very lawyer-like—full of pros and cons, very thoughtful, very measured.” And very interesting. — Joe

Tracking Trump: there are apps for that

In The Best Apps To Track Trump’s Legal Changes, Bob Ambrogi identifies three apps designed to monitor the Trump administration’s actions.

  1. The goal of Track Trump is “to isolate actual policy changes from rhetoric and political theater and to hold the administration accountable for the promises it made.”
  2. The Cabinet Center for Administrative Transition (CCAT) from the law firm Cadwalader, Wickersham & Taft collects “pronouncements, position papers, policy statements, and requirements as to legislative and regulatory change related to the financial service agenda of the President, the new administration and the new Congress. It tracks legislative developments, executive orders, policy positions, regulations, the regulators themselves, and relevant Trump administration news.”
  3. Columbia Law School’s Trump Human Rights Tracker follows the Trump administration’s actions and their implications for human rights.

— Joe

How likely is algorithmic transparency?

Here’s the abstract for Opening the Black Box: In Search of Algorithmic Transparency by Rachel Pollack Ichou (University of Oxford, Oxford Internet Institute):

Given the importance of search engines for public access to knowledge and questions over their neutrality, there have been many theoretical debates about the regulation of the search market and the transparency of search algorithms. However, there is little research on how such debates have played out empirically in the policy sphere. This paper aims to map how key actors in Europe and North America have positioned themselves in regard to transparency of search engine algorithms and the underlying political and economic ideas and interests that explain these positions. It also discusses the strategies actors have used to advocate for their positions and the likely impact of their efforts for or against greater transparency on the regulation of search engines. Using a range of qualitative research methods, including analysis of textual material and elite interviews with a wide range of stakeholders, this paper concludes that while discussions around algorithmic transparency will likely appear in future policy proposals, it is highly unlikely that search engines will ever be legally required to share their algorithms due to a confluence of interests shared by Google and its competitors. It ends with recommendations for how algorithmic transparency could be enhanced through qualified transparency, consumer choice, and education.

— Joe

A psychological perspective on algorithm aversion

Berkeley J. Dietvorst (The University of Chicago Booth School of Business), Joseph P. Simmons (The Wharton School, University of Pennsylvania) and Cade Massey (The Wharton School, University of Pennsylvania), Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err, Journal of Experimental Psychology: General (forthcoming).

Abstract: Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In five studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

Use Trump’s executive orders sourced on White House website with caution

In White House posts wrong versions of Trump’s orders on its website, USA Today reports that the texts of at least five Trump executive orders hosted on the White House website do not match the official text sent to the Federal Register. Quoting from the USA Today article, examples include:

► The controversial travel ban executive order suspended the Visa Interview Waiver Program and required the secretary of State to enforce a section of the Immigration and Naturalization Act requiring an in-person interview for everyone seeking a non-immigrant visa. But the White House version of the order referred to that provision as 8 U.S.C. 1222, which requires a physical and mental examination — not 8 U.S.C. 1202, which requires an interview.

► An executive order on ethical standards for administration appointees, as it appears on the White House website, refers to “section 207 of title 28” of the U.S. Code. As the nonprofit news site Pro Publica reported last week, that section does not exist. The Federal Register correctly cited section 207 of title 18, which does exist.
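Discrepancies like these are the kind of thing a researcher can screen for mechanically by pulling the statutory citations out of each version of an order and diffing them. A rough Python sketch, assuming you already have the two texts as strings (the regexes are simplified illustrations, not a complete citation parser):

```python
import re

# Matches citations like "8 U.S.C. 1222" and "section 207 of title 18".
USC_STYLE = re.compile(r"\b(\d{1,2})\s+U\.S\.C\.\s+(\d{1,5})\b")
SECTION_STYLE = re.compile(r"\bsection\s+(\d{1,5})\s+of\s+title\s+(\d{1,2})\b", re.I)

def citations(text: str) -> set[tuple[str, str]]:
    """Return (title, section) pairs cited in the text, however phrased."""
    found = set(USC_STYLE.findall(text))
    found |= {(title, section) for section, title in SECTION_STYLE.findall(text)}
    return found

def discrepancies(white_house_text: str, federal_register_text: str) -> dict:
    """Citations that appear in one version of the order but not the other."""
    wh, fr = citations(white_house_text), citations(federal_register_text)
    return {"only_on_white_house_site": wh - fr, "only_in_federal_register": fr - wh}

print(discrepancies("... 8 U.S.C. 1222 ...", "... 8 U.S.C. 1202 ..."))
```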

— Joe

The Algorithm as a Human Artifact: Implications for Legal {Re}Search

Here’s the abstract for Susan Nevelow Mart’s very interesting article The Algorithm as a Human Artifact: Implications for Legal {Re}Search (SSRN):

Abstract: When legal researchers search in online databases for the information they need to solve a legal problem, they need to remember that the algorithms that are returning results to them were designed by humans. The world of legal research is a human-constructed world, and the biases and assumptions the teams of humans that construct the online world bring to the task are imported into the systems we use for research. This article takes a look at what happens when six different teams of humans set out to solve the same problem: how to return results relevant to a searcher’s query in a case database. When comparing the top ten results for the same search entered into the same jurisdictional case database in Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel, and Westlaw, the results are a remarkable testament to the variability of human problem solving. There is hardly any overlap in the cases that appear in the top ten results returned by each database. An average of forty percent of the cases were unique to one database, and only about 7% of the cases were returned in search results in all six databases. It is fair to say that each different set of engineers brought very different biases and assumptions to the creation of each search algorithm. One of the most surprising results was the clustering among the databases in terms of the percentage of relevant results. The oldest database providers, Westlaw and Lexis, had the highest percentages of relevant results, at 67% and 57%, respectively. The newer legal database providers, Fastcase, Google Scholar, Casetext, and Ravel, were also clustered together at a lower relevance rate, returning approximately 40% relevant results.

Legal research has always been an endeavor that required redundancy in searching; one resource does not usually provide a full answer, just as one search will not provide every necessary result. The study clearly demonstrates that the need for redundancy in searches and resources has not faded with the rise of the algorithm. From the law professor seeking to set up a corpus of cases to study, the trial lawyer seeking that one elusive case, the legal research professor showing students the limitations of algorithms, researchers who want full results will need to mine multiple resources with multiple searches. And more accountability about the nature of the algorithms being deployed would allow all researchers to craft searches that would be optimally successful.
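Mart’s overlap figures are straightforward set arithmetic once the result lists are in hand, which makes it easy to run the same kind of comparison on your own searches. A small Python sketch (the database names are the six from the study; the case IDs are made up):

```python
from collections import Counter

# Hypothetical, truncated result lists keyed by database; case IDs are invented.
results = {
    "Casetext":       {"c1", "c2", "c3", "c4"},
    "Fastcase":       {"c2", "c5", "c6", "c7"},
    "Google Scholar": {"c1", "c5", "c8", "c9"},
    "Lexis Advance":  {"c2", "c3", "c8", "c10"},
    "Ravel":          {"c4", "c6", "c9", "c11"},
    "Westlaw":        {"c2", "c3", "c7", "c12"},
}

# How many databases returned each case?
counts = Counter(case for cases in results.values() for case in cases)
total = len(counts)

unique_to_one = sum(1 for n in counts.values() if n == 1) / total
in_all_six = sum(1 for n in counts.values() if n == len(results)) / total

print(f"unique to one database: {unique_to_one:.0%}")
print(f"returned by all six:    {in_all_six:.0%}")
```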

Recommended. — Joe