The editors of Perspectives – Teaching Legal Research and Writing are seeking articles for the Fall 2019 issue. From Legal Skills Prof Blog:
The Spring 2019 issue of Perspectives: Teaching Legal Research and Writing is in final production with an anticipated publication date of June 2019. However, we presently have a few spots available for the Fall 2019 issue, and thus the Board of Editors is actively seeking articles to fill that volume. So if you’re working on an article idea appropriate for Perspectives (see below), or can develop a good manuscript in the next couple of months, please consider submitting it to us for consideration. There is no formal deadline since we will accept articles on a rolling basis, but the sooner the better if you’d like it published in the Fall issue.
From the executive summary of Technological Convergence: Regulatory, Digital Privacy, and Data Security Issues (R45746, May 30, 2019):
Technological convergence, in general, refers to the trend or phenomenon where two or more independent technologies integrate and form a new outcome. … Technologically convergent devices share three key characteristics. First, converged devices can execute multiple functions to serve blended purposes. Second, converged devices can collect and use data in various formats and employ machine learning techniques to deliver an enhanced user experience. Third, converged devices are connected to a network directly and/or are interconnected with other devices to offer ubiquitous access to users.
Technological convergence may present a range of issues where Congress may take legislative and/or oversight actions. Three selected issue areas associated with technological convergence are regulatory jurisdiction, digital privacy, and data security.
From the abstract for Allison Orr Larsen & Jeffrey L. Fisher, Virtual Briefing at the Supreme Court, 109 Cornell Law Review (forthcoming 2019):
The open secret of Supreme Court advocacy in a digital era is that there is a new way to argue to the Justices. Today’s Supreme Court arguments are developed online: They are dissected and explored in blog posts, fleshed out in popular podcasts, and analyzed and re-analyzed by experts who do not represent the parties and have not even filed a brief in the case. This “virtual briefing” (as we call it) is intended to influence the Justices and their law clerks but exists completely outside of traditional briefing rules. This article describes virtual briefing and makes a case that the key players inside the Court are listening. In particular, we show that the Twitter patterns of law clerks indicate they are paying close attention to producers of virtual briefing, and threads of these arguments (proposed and developed online) are starting to appear in the Court’s decisions.
We argue that this “crowdsourcing” dynamic to Supreme Court decision-making is at least worth a serious pause. There is surely merit to enlarging the dialogue around the issues the Supreme Court decides – maybe the best ideas will come from new voices in the crowd. But the confines of the adversarial process have been around for centuries, and there are significant risks that come with operating outside of it particularly given the unique nature and speed of online discussions. We analyze those risks in this article and suggest it is time to think hard about embracing virtual briefing — truly assessing what can be gained and what will be lost along the way.
H/T to Bob Ambrogi for reporting that Casemaker has launched a major redesign of its legal research platform called Casemaker4. Details on LawSites and from Casemaker.
“The central problem” writes Robert Parnell “is that not all samples of legal data contain sufficient information to be usefully applied to decision making. By the time big data sets are filtered down to the type of matter that is relevant, sample sizes may be too small and measurements may be exposed to potentially large sampling errors. If Big Data becomes ‘small data’, it may in fact be quite useless.”
“In practice, although the volume of available legal data will sometimes be sufficient to produce statistically meaningful insights, this will not always be the case. While litigants and law firms would no doubt like to use legal data to extract some kind of informational signal from the random noise that is ever-present in data samples, the hard truth is that there will not always be one. Needless to say, it is important for legal professionals to be able to identify when this is the case.
“Overall, the quantitative analysis of legal data is much more challenging and error-prone than is generally acknowledged. Although it is appealing to view data analytics as a simple tool, there is a danger of neglecting the science in what is basically data science. The consequences of this can be harmful to decision making. To draw an analogy, legal data analytics without inferential statistics is like legal argument without case law or rules of precedent — it lacks a meaningful point of reference and authority.”
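Parnell’s sampling-error point can be illustrated with a short simulation (a generic sketch, not from the paper; the 60% “true” win rate and the sample sizes are hypothetical). Filtering a big data set down to a narrow matter type shrinks n, and the margin of error on any estimated rate grows accordingly:

```python
import math
import random

random.seed(42)

TRUE_WIN_RATE = 0.6  # hypothetical "true" win rate for a narrow matter type

def sample_win_rate(n):
    """Estimate the win rate from n simulated case outcomes."""
    wins = sum(random.random() < TRUE_WIN_RATE for _ in range(n))
    return wins / n

def margin_of_error(p_hat, n, z=1.96):
    """95% normal-approximation margin of error for a proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# As filtering shrinks the sample, the uncertainty band widens sharply.
for n in (1000, 100, 10):
    p_hat = sample_win_rate(n)
    print(f"n={n:4d}  estimated win rate={p_hat:.2f}  margin of error=±{margin_of_error(p_hat, n):.2f}")
```

At n = 10, the margin of error can approach ±0.30, which is exactly the “small data” problem described above: the estimate exists, but it carries little informational signal.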
For more see When Big Legal Data isn’t Big Enough: Limitations in Legal Data Analytics (Settlement Analytics, 2016). Recommended.
From the abstract for Karni Chagal, Am I an Algorithm or a Product? When Products Liability Should Apply to Algorithmic Decision-Makers (Stanford Law & Policy Review, Forthcoming):
Over the years mankind has come to rely increasingly on machines. Technology is ever advancing, and in addition to relinquishing physical and mere computational tasks to machines, algorithms’ self-learning abilities now enable us to entrust machines with professional decisions, for instance, in the fields of law, medicine and accounting.
A growing number of scholars and entities now acknowledge that whenever certain “sophisticated” or “autonomous” decision-making systems cause damage, they should no longer be subject to products liability but deserve different treatment from their “traditional” predecessors. What is it that separates “traditional” algorithms and machines, which for decades have been subject to the traditional products liability legal framework, from what I would call “thinking algorithms,” which seem to warrant their own custom-made treatment? Why have “auto-pilots,” for example, been traditionally treated as “products,” while autonomous vehicles are suddenly perceived as a more “human-like” system that requires different treatment? Where is the line between the two types of machines drawn?
Scholars who touch on this question have generally referred to the system’s level of autonomy as a classifier between traditional products and systems incompatible with products liability laws (whether autonomy was mentioned expressly, or reflected in the specific questions posed). This article, however, argues that a classifier based on autonomy level is not a good one, given its excessive complexity, the vague classification process it dictates, the inconsistent results it might lead to, and the fact that said results mainly shed light on the system’s level of autonomy, but not on its compatibility with products liability laws.
This article therefore proposes a new approach to distinguishing traditional products from “thinking algorithms” for determining whether products liability should apply. Instead of examining the vague concept of “autonomy,” the article analyzes the system’s specific features and examines whether they promote or hinder the rationales behind the products liability legal framework. The article thus offers a novel, practical method for decision-makers wanting to decide when products liability should continue to apply to “sophisticated” systems and when it should not.
From the abstract for James Grimmelmann, All Smart Contracts Are Ambiguous, Penn Journal of Law and Innovation (Forthcoming):
Smart contracts are written in programming languages rather than in natural languages. This might seem to insulate them from ambiguity, because the meaning of a program is determined by technical facts rather than by social ones.
It does not. Smart contracts can be ambiguous, too, because technical facts depend on socially determined ones. To give meaning to a computer program, a community of programmers and users must agree on the semantics of the programming language in which it is written. This is a social process, and a review of some famous controversies involving blockchains and smart contracts shows that it regularly creates serious ambiguities. In the most famous case, The DAO hack, more than $150 million in virtual currency turned on the contested semantics of a blockchain-based smart-contract programming language.
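A concrete, non-blockchain illustration of the article’s point, that a program’s meaning rests on community-agreed language semantics, is Python’s own redefinition of the `/` operator between versions 2 and 3 (this example is ours, not the article’s):

```python
# The same source text "7 / 2" means different things under different
# community-agreed semantics: Python 2 defined "/" on integers as floor
# division (7 / 2 == 3), while Python 3 redefined it as true division
# (7 / 2 == 3.5). A payout formula written against one set of semantics
# silently changes meaning under the other -- precisely the kind of
# socially determined ambiguity the article describes.

true_division = 7 / 2     # Python 3 semantics for "/"
floor_division = 7 // 2   # Python 2's old behavior for "/" on integers

print(true_division)   # 3.5
print(floor_division)  # 3
```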
From the blurb for John Nichols, Horsemen of the Trumpocalypse: A Field Guide to the Most Dangerous People in America (Hachette Books, 2018):
A line-up of the dirty dealers and defenders of the indefensible who are definitely not “making America great again”. Donald Trump has assembled a rogues’ gallery of alt-right hatemongers, crony capitalists, immigrant bashers, and climate-change deniers to run the American government. To survive the next four years, we the people need to know whose hands are on the levers of power. And we need to know how to challenge their abuses. John Nichols, veteran political correspondent at the Nation, has been covering many of these deplorables for decades. Sticking to the hard facts and unafraid to dig deep into the histories and ideologies of the people who make up Trump’s inner circle, Nichols delivers a clear-eyed and complete guide to this wrecking-crew administration.
Public.Resource.Org and its President and Founder Carl Malamud are recipients of the 2019 AALL Public Access to Government Information Award. From the press release:
“The activism of Carl Malamud and Public.Resource.Org in the public domain has been crucial in providing the public with vital access to essential government information,” said AALL President Femi Cadmus. “For his critical work and advocacy in advancing government transparency, AALL is proud to recognize Carl and his organization with the 2019 Public Access to Government Information Award.”
Long overdue. Congratulations!
Shay Elbaum, Reference Librarian, Stanford Law School, recounts his first experience providing data services for empirical legal research. “As a new librarian with just enough tech know-how to be dangerous, working on this project has been a learning experience in several dimensions. I’m sharing some highlights here in the hope that others in the same position will glean something useful.” Details on the CS-SIS blog. Interesting.
H/T to Bob Ambrogi for reporting that Dean E. Sonderegger has been appointed senior vice president and general manager of Wolters Kluwer Legal & Regulatory U.S. (LRUS). He has been vice president in charge of legal markets and innovation since joining LRUS in 2015. Before that, he was with Bloomberg BNA for 13 years, where he oversaw strategy and marketing for software products. Here’s the press release.
Based on Edgar Alan Rayo’s assessment of companies’ offerings in the legal field, current applications of AI appear to fall into six major categories:
- Due diligence – Litigators perform due diligence with the help of AI tools to uncover background information. We’ve decided to include contract review, legal research and electronic discovery in this section.
- Prediction technology – AI software generates results that forecast litigation outcomes.
- Legal analytics – Lawyers can use data points from past case law, win/loss rates, and a judge’s history to identify trends and patterns.
- Document automation – Law firms use software templates to create filled-out documents based on data input.
- Intellectual property – AI tools guide lawyers in analyzing large IP portfolios and drawing insights from the content.
- Electronic billing – Lawyers’ billable hours are computed automatically.
Rayo explores the major areas of current AI applications in law, individually and in depth here.
From the abstract for Richard M. Re & Alicia Solow-Niederman, Developing Artificially Intelligent Justice (Stanford Technology Law Review, Forthcoming):
Artificial intelligence, or AI, promises to assist, modify, and replace human decision-making, including in court. AI already supports many aspects of how judges decide cases, and the prospect of “robot judges” suddenly seems plausible—even imminent. This Article argues that AI adjudication will profoundly affect the adjudicatory values held by legal actors as well as the public at large. The impact is likely to be greatest in areas, including criminal justice and appellate decision-making, where “equitable justice,” or discretionary moral judgment, is frequently considered paramount. By offering efficiency and at least an appearance of impartiality, AI adjudication will both foster and benefit from a turn toward “codified justice,” an adjudicatory paradigm that favors standardization above discretion. Further, AI adjudication will generate a range of concerns relating to its tendency to make the legal system more incomprehensible, data-based, alienating, and disillusioning. And potential responses, such as crafting a division of labor between human and AI adjudicators, each pose their own challenges. The single most promising response is for the government to play a greater role in structuring the emerging market for AI justice, but auspicious reform proposals would borrow several interrelated approaches. Similar dynamics will likely extend to other aspects of government, such that choices about how to incorporate AI in the judiciary will inform the future path of AI development more broadly.
To get AI systems off the ground, training data must be voluminous and accurately labeled and annotated. With AI becoming a growing enterprise priority, data science teams are under tremendous pressure to deliver projects but frequently are challenged to produce training data at the required scale and quality. Nearly eight out of 10 organizations engaged in AI and machine learning said that projects have stalled, according to Dimensional Research’s report Artificial Intelligence and Machine Learning Projects Obstructed by Data Issues. The majority (96%) of these organizations said they have run into problems with data quality, data labeling necessary to train AI, and building model confidence.
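One common check on the label-quality problem mentioned above is inter-annotator agreement: if two people labeling the same documents disagree often, the training data is suspect. A minimal sketch using Cohen’s kappa (the annotators, labels, and documents here are hypothetical):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a = Counter(labels_a)
    counts_b = Counter(labels_b)
    # Chance agreement: probability both annotators pick the same label at random.
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )
    return (observed - expected) / (1 - expected)

# Hypothetical labels from two annotators tagging the same six documents.
annotator_1 = ["spam", "ham", "spam", "ham", "spam", "spam"]
annotator_2 = ["spam", "ham", "ham",  "ham", "spam", "spam"]

print(round(cohens_kappa(annotator_1, annotator_2), 3))  # 0.667
```

Kappa near 1.0 indicates reliable labels; values much below that suggest the annotation guidelines, and hence the training data, need rework before model training proceeds.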
In a public statement today [transcript here], Mueller reiterated DOJ policy that an indictment or criminal prosecution of a sitting President would unconstitutionally undermine the capacity of the executive branch to perform its constitutionally assigned functions, so indicting President Trump for obstruction of justice was “not an option.” See the DOJ OLC’s memo titled A Sitting President’s Amenability to Indictment and Criminal Prosecution (updated Dec. 10, 2018).
On May 23, 2019, the DOJ filed a superseding indictment against WikiLeaks founder Julian Assange. The 18-count indictment charges 17 violations of the Espionage Act, 18 U.S.C. §793, as well as one count of conspiracy to commit computer intrusion. Read the indictment here.
Excerpt from the introduction to Legislative Purpose and Adviser Immunity in Congressional Investigations (LSB10301, May 24, 2019):
The Trump Administration has recently questioned the legal validity of numerous investigative demands made by House committees. These objections have been based on various grounds, but two specific arguments will be addressed in this Sidebar:
- The President and other Administration officials have contended that certain committee demands lack a valid “legislative purpose” and therefore do not fall within Congress’s investigative authority.
- The President has made a more generalized claim that his advisers cannot be made to testify before Congress, even in the face of a committee subpoena. This position is based upon the executive branch’s longstanding conception of immunity for presidential advisers from compelled congressional testimony regarding their official duties.
Aryan Pegwar asks and answers the post title’s question. “Today modern technologies like artificial intelligence, machine learning, and data science have become the buzzwords. Everybody talks about them, but no one fully understands them. They seem very complex to a layman. People often get confused by words like AI, ML and data science. In this article, we explain these technologies in simple words so that you can easily understand the difference between them.” Details here.
Noted appellate practitioner Jack Metzler last year proposed that we use the parenthetical “(cleaned up)” to indicate the omission of “messy quotation marks, ellipses, etc.” from a quoted authority. This idea has taken hold, appearing in over 100 judicial opinions. For Green Bag, Michael S. Kwun takes a look at the new parenthetical.
TechRepublic reports that Microsoft has announced that Word Online will incorporate a feature known as Ideas this fall. Backed by artificial intelligence and machine learning courtesy of Microsoft Graph, Ideas will suggest ways to help you enhance your writing and create better documents. Ideas will also show you how to better organize and structure your documents by suggesting tables, styles, and other features already available in Word.