From the abstract for Giovanni Abramo, Ciriaco Andrea D’Angelo & Emanuela Reale, Peer review vs bibliometrics: which method better predicts the scholarly impact of publications?, Scientometrics, 121(1), 537-554 (2019):

In this work, we try to answer the question of which method, peer review vs bibliometrics, better predicts the future overall scholarly impact of scientific publications. We measure the agreement between peer review evaluations of Web of Science indexed publications submitted to the first Italian research assessment exercise and long-term citations of the same publications. We do the same for an early citation-based indicator. We find that the latter shows stronger predictive power, i.e., it more reliably predicts late citations in all the disciplinary areas examined, and for any citation time window starting one year after publication.
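The "predictive power" the authors describe is, at bottom, a question of how well an early indicator ranks papers the way their long-term citations eventually do. As a purely illustrative sketch (not the authors' method, and with invented counts), one simple way to quantify that kind of agreement is a rank correlation between early and late citations:

```python
# Illustrative sketch only -- not the authors' method. A rank correlation between an
# early-citation indicator and long-term citations is one simple way to gauge how
# well the former "predicts" the latter. The counts below are invented.
from scipy.stats import spearmanr

early_citations = [2, 0, 5, 1, 9, 3]      # hypothetical citations one year after publication
late_citations = [15, 1, 40, 4, 70, 22]   # hypothetical long-term citation counts

rho, p_value = spearmanr(early_citations, late_citations)
print(f"Spearman rank correlation: {rho:.2f} (p = {p_value:.3f})")
```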

From the abstract for Andrew T. Hayashi (Virginia) & Gregory Mitchell (Virginia), Maintaining Scholarly Integrity in the Age of Bibliometrics, 68 J. Legal Educ. ___ (2019):

As quantitative measures of scholarly impact gain prominence in the legal academy, we should expect institutions and scholars to engage in a variety of tactics designed to inflate the apparent influence of their scholarly output. We identify these tactics and the countermeasures that should be taken to prevent this manipulation. The rise of bibliometrics poses a significant risk to the scholarly endeavor but, with foresight, we can maintain scholarly integrity in the age of bibliometrics.

From the blurb for Neil Gorsuch, A Republic, If You Can Keep It (Crown Forum, Sept. 10, 2019):

Justice Neil Gorsuch reflects on his journey to the Supreme Court, the role of the judge under our Constitution, and the vital responsibility of each American to keep our republic strong.

As Benjamin Franklin left the Constitutional Convention, he was reportedly asked what kind of government the founders would propose. He replied, “A republic, if you can keep it.” In this book, Justice Neil Gorsuch shares personal reflections, speeches, and essays that focus on the remarkable gift the framers left us in the Constitution.

Justice Gorsuch draws on his thirty-year career as a lawyer, teacher, judge, and justice to explore essential aspects of our Constitution, its separation of powers, and the liberties it is designed to protect. He discusses the role of the judge in our constitutional order, and why he believes that originalism and textualism are the surest guides to interpreting our nation’s founding documents and protecting our freedoms. He explains, too, the importance of affordable access to the courts in realizing the promise of equal justice under law—while highlighting some of the challenges we face on this front today.

Along the way, Justice Gorsuch reveals some of the events that have shaped his life and outlook, from his upbringing in Colorado to his Supreme Court confirmation process. And he emphasizes the pivotal roles of civic education, civil discourse, and mutual respect in maintaining a healthy republic.

A Republic, If You Can Keep It offers compelling insights into Justice Gorsuch’s faith in America and its founding documents, his thoughts on our Constitution’s design and the judge’s place within it, and his beliefs about the responsibility each of us shares to sustain our distinctive republic of, by, and for “We the People.”

From the abstract for Mary Margaret Penrose, Goodbye to Concurring Opinions, Duke Journal of Constitutional Law & Public Policy, Forthcoming:

Quick! List the U.S. Supreme Court’s 5 most impactful concurring opinions. Better yet, list the top 10. Do they readily come to mind? How long did it take you? Are these cases taught regularly? Have any more than the first 5 actually become the law? This article challenges the belief that separate opinions, particularly concurrences, are justified because they often become the law. That is factually untrue. And, it is even less likely for modern cases decided by the Burger, Rehnquist, and Roberts Courts.

From the abstract for J.B. Ruhl, Michael P. Vandenbergh & Sarah Dunaway, Total Scholarly Impact: Law Professor Citations in Non-Law Journals (Sept. 10, 2019):

This Article provides the first ranking of legal scholars and law faculties based on citations in non-law journals. Applying the methods, as much as possible, of the widely used Leiter-Sisk “Scholarly Impact Score,” which includes only citations in law publications, we calculate an “Interdisciplinary Scholarly Impact Score” from the non-law citations over a five-year period (2012-2018) to the work of tenured law faculty published in that period in non-law journals. We also provide the weighted scores for law faculty at the top 25 law schools as ranked by the US News rankings, a school-by-school ranking, and lists of the top five faculty by non-law citations at each school and of the top fifty scholars overall.

The work of legal scholars outside of law journals is not trivial. Over 600 faculty members from the 25 schools in our cohort published almost 3,000 articles in non-law journals from 2012-2018, and those articles received close to 21,000 citations in non-law journals. The faculties that rank in the top ten based on weighted scores for Interdisciplinary Scholarly Impact using the Leiter-Sisk weighting method (2x the mean + the median) for all faculty with at least one publication in the study period are: Minnesota, Stanford, Yale, Duke, Cal-Irvine, Georgetown, Boston University, USC, Vanderbilt, and George Washington. The rankings, although subject to limitations similar to those faced by the law journal citation studies, demonstrate that it is possible with reasonable effort to include citations in both law and non-law journals in rankings of legal scholars and law school faculties.
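The weighting formula mentioned above (twice the mean plus the median of per-faculty citation counts) is simple enough to express directly. Here is a minimal sketch; the function name and the citation counts are invented purely for illustration:

```python
# Minimal sketch of the Leiter-Sisk-style weighting described above:
# weighted score = 2 * mean + median of per-faculty citation counts.
# The function name and the citation counts are illustrative only.
from statistics import mean, median

def weighted_impact_score(per_faculty_citations):
    return 2 * mean(per_faculty_citations) + median(per_faculty_citations)

citations = [120, 45, 30, 10, 8, 0]        # hypothetical non-law citations per faculty member
print(weighted_impact_score(citations))    # 2 * 35.5 + 20.0 = 91.0
```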

Legal scholars are cited in non-law journals for the work they publish in legal journals and, in many cases, for work they publish in non-law journals. Counting only their citations in law journals thus underestimates both the impact of their legal scholarship and their interdisciplinary impact. Non-law journals are widely read by law and policy scholars, scientists who influence legal scholarship, and policymakers, and publications and citations of legal scholars in non-law journals can be an indication of work that has transcended the conceptual frameworks, assumptions, or methods of legal research. Publications and citations in non-law journals thus provide an additional indication of the influence of legal scholars. Citations in non-law journals also provide an indication of the influence of legal scholars on the overall scholarly enterprise outside of law, and accounting for non-law citations in legal rankings can also encourage interdisciplinary scholarship. Scholars from non-law fields have made important contributions to legal scholarship, but the reverse should also be the case. Acceptance by other fields of legal scholars’ proposed legal reforms can play an important role in determining their success, which is made more likely when legal scholars are included in the work of other disciplines.

For these reasons, we suggest in the Article that future evaluations of legal scholars’ work include both the Law Scholarly Impact Score and the new Interdisciplinary Scholarly Impact Score, or combine the two into a Total Scholarly Impact Score. Although there is some mismatch in the citation engine capacities and the time frames for our non-law journal citation study and the most recent Sisk et al. law journal citation study, a combination of the two can provide a rough approximation of the Total Scholarly Impact Score. The top ten law faculties based on this combined measure are: Yale, Harvard, Chicago, NYU, Stanford, Columbia, Duke, Cal-Berkeley, Pennsylvania, and Vanderbilt.

The databases used in the law and non-law studies and their search capacities differ, making it difficult to develop a citation study method that captures all of a faculty member’s law and non-law publications and all citations to them in defined time frames. We are working to improve the non-law citation study database and search capacity.

Following an introduction to the project, in Part I we discuss why accounting for legal scholars’ non-law publications and citations is important when assessing scholarly impact. Part II describes our methodology. Part III presents our results, and Part IV discusses the results.

From the abstract for David B. Wilkins and Maria J. Esteban Ferrer, The Integration of Law into Global Business Solutions: The Rise, Transformation, and Potential Future of the Big Four Accountancy Networks in the Global Legal Services Market (2017):

Using a unique data set comprised of original research of both the corporate Web sites of the Big Four—PwC, Deloitte, KPMG, and EY—and their affiliated law firms, as well as archival material from the legal and accountancy press, this article documents the rise and transformation of the Big Four legal service lines since the enactment of the Sarbanes-Oxley Act of 2002. Moreover, it demonstrates that there are good reasons to believe that these sophisticated players will be even more successful in penetrating the corporate legal services market in the decades to come, as that market increasingly matures in a direction that favors the integration of law into a wider category of business solutions that these globally integrated multidisciplinary practices now champion. We conclude with some preliminary observations about the implications of the reemergence of the Big Four legal networks for the legal profession.

From the abstract for Ashley Deeks, The Judicial Demand for Explainable Artificial Intelligence, 119 Colum.L.Rev. ____ (Forthcoming 2019):

A recurrent concern about machine learning algorithms is that they operate as “black boxes,” making it difficult to identify how and why the algorithms reach particular decisions, recommendations, or predictions. Yet judges will confront machine learning algorithms with increasing frequency, including in criminal, administrative, and tort cases. This Essay argues that judges should demand explanations for these algorithmic outcomes. One way to address the “black box” problem is to design systems that explain how the algorithms reach their conclusions or predictions. If and as judges demand these explanations, they will play a seminal role in shaping the nature and form of “explainable artificial intelligence” (or “xAI”). Using the tools of the common law, courts can develop what xAI should mean in different legal contexts.

There are advantages to having courts play this role: Judicial reasoning that builds from the bottom up, using case-by-case consideration of the facts to produce nuanced decisions, is a pragmatic way to develop rules for xAI. Further, courts are likely to stimulate the production of different forms of xAI that are responsive to distinct legal settings and audiences. More generally, we should favor the greater involvement of public actors in shaping xAI, which to date has largely been left in private hands.
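What such an "explanation" might look like varies widely. As one deliberately simple illustration (not a claim about what courts should or will demand), a linear model can at least report how much each feature pushed a particular prediction; the feature names and toy data below are invented:

```python
# Deliberately simple illustration of one form an explanation can take: per-feature
# contributions of a linear model to a single prediction. The feature names and toy
# data are invented; real xAI methods and legal contexts can look very different.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.2, 1.0], [0.8, 0.3], [0.5, 0.9], [0.9, 0.1], [0.1, 0.7], [0.7, 0.2]])
y = np.array([1, 0, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

x_new = np.array([0.6, 0.4])
contributions = model.coef_[0] * x_new     # each feature's additive contribution to the log-odds
for name, value in zip(["feature_a", "feature_b"], contributions):
    print(f"{name}: {value:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```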

From the abstract for Jake Goldenfein, Algorithmic Transparency and Decision-Making Accountability: Thoughts for Buying Machine Learning Algorithms (Sept. 9, 2019):

There has been a great deal of research on how to achieve algorithmic accountability and transparency in automated decision-making systems – especially for those used in public governance. However, good accountability in the implementation and use of automated decision-making systems is far from simple. It involves multiple overlapping institutional, technical, and political considerations, and becomes all the more complex in the context of machine learning based, rather than rule based, decision systems. This chapter argues that relying on human oversight of automated systems, so-called ‘human-in-the-loop’ approaches, is entirely deficient, and suggests that addressing transparency and accountability during the procurement phase of machine learning systems – during their specification and parameterisation – is absolutely critical. In a machine learning based automated decision system, the accountability typically associated with a public official making a decision has already been displaced into the actions and decisions of those creating the system – the bureaucrats and engineers involved in building the relevant models, curating the datasets, and implementing a system institutionally. But what should those system designers be thinking about and asking for when specifying those systems?

There are a lot of accountability mechanisms available for system designers to consider, including new computational transparency mechanisms, ‘fairness’ and non-discrimination, and ‘explainability’ of decisions. If an official specifies that a system be transparent, fair, or explainable, however, it is important that they understand the limitations of such a specification in the context of machine learning. Each of these approaches is fraught with risks, limitations, and the challenging political economy of technology platforms in government. If their complexities and limitations are not understood, those accountability and transparency ideas risk disempowering public officials in the face of private industry technology vendors, who use trade secrets and market power in deeply problematic ways, and producing deficient accountability outcomes. This chapter therefore outlines the risks associated with corporate co-option of those transparency and accountability mechanisms, and suggests that significant resources must be invested in developing the necessary skills in the public sector for deciding whether a machine learning system is useful and desirable, and how it might be made as accountable and transparent as possible.
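To make one of those specification items concrete: a "fairness" requirement is often operationalised as a measurable gap between groups. The sketch below, with invented decisions and group labels, computes a demographic parity difference; it is one narrow metric among many, not a definition of fairness the chapter endorses:

```python
# Toy illustration of one narrow fairness metric a procuring agency might ask a vendor
# to report: the demographic parity difference (gap in favourable-decision rates between
# two groups). Decisions and group labels are invented.
def demographic_parity_difference(decisions, groups, group_a, group_b):
    rate = lambda g: sum(d for d, grp in zip(decisions, groups) if grp == g) / groups.count(g)
    return rate(group_a) - rate(group_b)

decisions = [1, 0, 1, 1, 0, 1, 0, 0]                # 1 = favourable automated decision
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # protected-attribute group per decision
print(demographic_parity_difference(decisions, groups, "a", "b"))  # 0.75 - 0.25 = 0.5
```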

“Managing the uncertainty that is inherent in machine learning for predictive modeling can be achieved via the tools and techniques from probability, a field specifically designed to handle uncertainty,” writes Jason Brownlee in A Gentle Introduction to Uncertainty in Machine Learning. In his post, one will learn:

  • Uncertainty is the biggest source of difficulty for beginners in machine learning, especially developers.
  • Noise in data, incomplete coverage of the domain, and imperfect models provide the three main sources of uncertainty in machine learning.
  • Probability provides the foundation and tools for quantifying, handling, and harnessing uncertainty in applied machine learning.
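As a minimal sketch of that last point (not drawn from Brownlee's post), asking a classifier for class probabilities rather than bare labels is one simple way to surface its uncertainty about an individual prediction; the data here are synthetic:

```python
# Minimal sketch, not from Brownlee's post: predicted class probabilities as a simple
# expression of a model's uncertainty about individual predictions. Data are synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
for p in model.predict_proba(X_test[:3]):   # probabilities instead of bare class labels
    print(f"P(class 0) = {p[0]:.2f}, P(class 1) = {p[1]:.2f}")
```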

From the blurb for Law as Data: Computation, Text, and the Future of Legal Analysis (SFI Press, 2019):

In recent years, the digitization of legal texts and developments in the fields of statistics, computer science, and data analytics have opened entirely new approaches to the study of law. This volume explores the new field of computational legal analysis, an approach marked by its use of legal texts as data. The emphasis herein is on work that pushes methodological boundaries, either by using new tools to study longstanding questions within legal studies or by identifying new questions in response to developments in data availability and analysis. By using the text and underlying data of legal documents as the direct objects of quantitative statistical analysis, Law as Data introduces the legal world to the broad range of computational tools already proving themselves relevant to law scholarship and practice, and highlights the early steps in what promises to be an exciting new approach to studying the law.
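For readers new to the idea of treating legal texts as data, the first step is usually just turning documents into numbers. The sketch below is not drawn from the volume; the two "opinions" are invented snippets, converted into TF-IDF feature vectors that standard statistical tools can work with:

```python
# Illustrative only, not drawn from the volume: a first step in treating legal text as
# data is converting documents into numeric feature vectors, e.g. TF-IDF term weights.
from sklearn.feature_extraction.text import TfidfVectorizer

opinions = [  # invented snippets standing in for real legal documents
    "The statute of limitations bars the plaintiff's negligence claim.",
    "The court holds that the agency exceeded its statutory authority.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(opinions)   # documents x terms, ready for quantitative analysis

print(vectorizer.get_feature_names_out())
print(matrix.toarray().round(2))
```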

Swiss researchers have found that algorithms that mine large swaths of data can eliminate anonymity in federal court rulings, which could have major ramifications for transparency and privacy protection. Using web scraping, the researchers created a database of all decisions of the Supreme Court available online from 2000 to 2018 – a total of 122,218 decisions – and added further decisions from the Federal Administrative Court and the Federal Office of Public Health. Combining an algorithm with manual searches for connections between the data, the researchers were able to de-anonymise – that is, reveal the identities in – 84% of the judgments in less than an hour.

H/T to beSpacific for discovering this report.
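The report does not publish the researchers' code, but the general idea of linkage-based de-anonymisation is easy to caricature: join anonymised records to publicly available data on shared quasi-identifiers. The toy sketch below uses entirely invented records and fields and is not the Swiss team's method:

```python
# Toy caricature of linkage-based de-anonymisation -- entirely invented records, not the
# Swiss researchers' method or data. Anonymised rulings are matched to a public register
# on shared quasi-identifiers (here, a decision date and a chamber).
anonymised_rulings = [
    {"ruling_id": "A_101/2015", "decision_date": "2015-03-02", "chamber": "II"},
    {"ruling_id": "A_202/2016", "decision_date": "2016-07-14", "chamber": "I"},
]
public_register = [
    {"party": "Example Pharma AG", "decision_date": "2015-03-02", "chamber": "II"},
    {"party": "Jane Doe", "decision_date": "2016-07-14", "chamber": "I"},
]

for ruling in anonymised_rulings:
    for record in public_register:
        if (ruling["decision_date"], ruling["chamber"]) == (record["decision_date"], record["chamber"]):
            print(f'{ruling["ruling_id"]} likely concerns {record["party"]}')
```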

At its annual meeting in August, the ABA adopted this AI resolution:

RESOLVED, That the American Bar Association urges courts and lawyers to address the emerging ethical and legal issues related to the usage of artificial intelligence (“AI”) in the practice of law including: (1) bias, explainability, and transparency of automated decisions made by AI; (2) ethical and beneficial usage of AI; and (3) controls and oversight of AI and the vendors that provide AI.

In this Digital Detectives podcast episode on Legal Talk Network, hosts Sharon Nelson and John Simek are joined by Fastcase CEO Ed Walters to discuss this resolution. Recommended.

End Note: It will be interesting to see whether, and if so how, the ABA follows through on controls and oversight of AI and the vendors that provide it.

From Chapter One in Evaluating Machine Learning Models by Alice Zheng:

One of the core tasks in building a machine learning model is to evaluate its performance. It’s fundamental, and it’s also really hard. My mentors in machine learning research taught me to ask these questions at the outset of any project: “How can I measure success for this project?” and “How would I know when I’ve succeeded?” These questions allow me to set my goals realistically, so that I know when to stop. Sometimes they prevent me from working on ill-formulated projects where good measurement is vague or infeasible. It’s important to think about evaluation up front.
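In practice, Zheng's "How can I measure success?" question usually resolves into choosing metrics and held-out data before modeling begins. A minimal sketch, with a synthetic dataset and an arbitrary choice of model and metrics, looks like this:

```python
# Minimal sketch of up-front evaluation: choose metrics, hold out data, report the numbers.
# The dataset is synthetic and the model and metric choices are illustrative only.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=5, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

model = LogisticRegression().fit(X_train, y_train)
predictions = model.predict(X_test)

print(f"accuracy: {accuracy_score(y_test, predictions):.2f}")   # the success criteria were
print(f"F1 score: {f1_score(y_test, predictions):.2f}")         # chosen before modeling began
```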

From the summary for Counting Regulations: An Overview of Rulemaking, Types of Federal Regulations, and Pages in the Federal Register (R43056, Updated September 3, 2019):

Federal rulemaking is an important mechanism through which the federal government implements policy. Federal agencies issue regulations pursuant to statutory authority granted by Congress. Therefore, Congress may have an interest in performing oversight of those regulations, and measuring federal regulatory activity can be one way for Congress to conduct that oversight. The number of federal rules issued annually and the total number of pages in the Federal Register are often referred to as measures of the total federal regulatory burden.