The 2019 Edition of Ken Svengalis’ Legal Information Buyer’s Guide & Reference Manual (New England LawPress, June 2019) includes the most significant enhancements since the book was first published in 1996, including:

  • Invaluable introductions to each of 87 subject categories in Chapter 27, providing subject overview, sources of law, and useful Internet sites.
  • One hundred twenty-eight (128) pages of new material, as well as updates to existing content, for a new total of 1,147 pages.
  • More than 150 new treatises, reference titles, and other product reviews (Chapter 27).
  • Enhanced bibliographies of legal treatises in 87 subject areas (up from 67 in 2018), including more than 80 titles on Legal Research and Writing, and with new, used, electronic, or West Monthly Assured Print Pricing on more than 2,900 titles in all (Chapter 27).
  • Enhanced bibliography of legal reference titles (Chapter 22).
  • Updated bibliographies of state legal resources and research guides, including the cost of CALR offerings (Chapter 28).
  • Completely updated bibliographic data for all covered titles.
  • Completely updated cost figures through 2019, with supplementation figures through 2018 (and 2019 for Matthew Bender).
  • Completely updated cost spreadsheet for supplemented titles (Appendix G).
  • Completely updated charts and tables reflecting 2018 corporate annual reports and pricing data.
  • Completely updated sample Westlaw and LexisNexis costs (Chapters 4 & 25).
  • Completely updated sample CALR costs for all vendors (Chapter 25).
  • Completely updated spreadsheet of published state statutory codes.
  • Recent industry developments and acquisitions, including profit margins (Chapter 2).
  • Updated information on Fastcase, Law360, and other online providers.
  • Cumulative supplementation cost data going back 26 years — all at your fingertips — to guide your acquisition and de-acquisition decisions.
  • Special alerts of egregious price and supplementation cost increases in recent years.

Highly recommended.

From the abstract for Jarrod Shobe, Enacted Legislative Findings and Purposes, University of Chicago Law Review, Vol. 86, 2019:

Statutory interpretation scholarship generally imagines a sharp divide between statutory text and legislative history. This Article shows that scholars have failed to consider the implications of a hybrid type of text that is enacted by Congress and signed by the president, but which looks like legislative history. This text commonly appears at the beginning of a bill under headings such as “Findings” and “Purposes.” This enacted text often provides a detailed rationale for legislation and sets out Congress’s intent and purposes. Notably, it is drafted in plain language by political congressional staff rather than technical drafters, so it may be the portion of the enacted text that is most accessible to members of Congress and their high-level staff. Despite enacted findings and purposes’ apparent importance to interpretation, courts infrequently reference them and lack a coherent theory of how they should be used in statutory interpretation. In most cases in which courts have referenced them, they have relegated them to a status similar to that of unenacted legislative history despite the fact that they are less subject to formalist and pragmatic objections. Perhaps because courts have infrequently and inconsistently relied on enacted findings and purposes, scholars have also failed to consider them, so their relevance to statutory interpretation has gone mostly unrecognized and untheorized in the legal literature.

This Article argues that all of the enacted text of a statute must be read together and with equal weight, as part of the whole law Congress enacted, to come up with an interpretation that the entire text can bear. This is more likely to generate an interpretation in line with Congress’s intent than a mode of interpretation that focuses on the specific meaning of isolated terms based on dictionaries, canons, unenacted legislative history, or other unenacted tools. This Article shows that, when textualists’ formalist arguments against legislative history are taken off the table, there may be less that divides textualists from purposivists. Enacted findings and purposes may offer a text-based, and therefore more constrained and defensible, path forward for purposivism, which has been in retreat in recent decades in the face of strong textualist attacks.

“This white paper is presented by LexisNexis on behalf of the author. The opinions may not represent the opinions of LexisNexis. This document is for educational purposes only.” But the name of the author was not disclosed, the paper is branded with the LexisNexis logo on every page, and the paper is hosted online by LexisNexis. The paper is about as “educational” as anything Trump opines about.

In the whitepaper, Are Free & Low-Cost Legal Resources Worth the Risk?, LexisNexis once again goes after low-cost (but high-tech) legal information vendors, using the paper’s critique of Google Scholar to slip in false claims about Casetext (and Fastcase). This is another instance of the “low cost can cost you” mantra that the folks in LN’s C-suite like to chant on the deck of the Titanic of very expensive legal information vendors.

In LexisNexis, scared of competition, lies about Casetext (June 4, 2019), Casetext’s Tara McCarty corrects some of the whitepaper’s falsehoods in a footnote:

“A few examples: (1) They say Casetext’s citator, SmartCite (our alternative to Shepard’s), is “based on algorithms rather than human editors.” While we do use algorithms to make the process more efficient, a team of human editors reviews SmartCite results. By using both, we actually improve accuracy, allowing computers to catch human error and vice versa. (2) They say Casetext doesn’t have slip opinions. Slip opinions are available on Casetext within 24 hours of publication. (3) They say Casetext doesn’t have case summaries. Not only does Casetext have over four million case summaries — those summaries are penned by judges, rather than nameless editors.”

McCarty’s editorial is recommended. The whitepaper, not so much.  Enough said.

Trust is a state of readiness to take a risk in a relationship. Once upon a time most law librarians were predisposed to trust legal information vendors and their products and services. Think of Shepard’s in print, when Shepard’s was the only available citator and its signals were by default the industry standard. Think of computer-assisted legal research in the late 1970s and early 1980s, when Lexis was the only full-text legal search vendor and the degree of risk taken by a searcher was partially controlled by the proper use of Boolean operators.

Today, output from legal information platforms does not always build confidence in the information provided, be it legal search results or citator results, as comparative studies of each by Mart and Hellyer have demonstrated. What about the output we are now being offered through the application of artificial intelligence to legal analytics and predictive technology? As legal information professionals, are we willing to be vulnerable to the actions of our vendors based on some sort of expectation that vendors will provide actionable intelligence important to our user population, irrespective of our ability to monitor or control vendors’ use of artificial intelligence for legal analytics and predictive technology?

Hopefully we are not so naive as to trust our vendors’ applied AI output at face value. But we won’t be given the opportunity to shine a light into the “black box” because of understandable proprietary concerns. What’s needed is a way to identify the impact of model error and bias. One way is to compare similar legal analytics outputs that identify trends and patterns using data points from past case law, win/loss rates, and even a judge’s history, or similar predictive technology outputs that forecast litigation outcomes, much as Mart did for legal search and Hellyer did for citators. At the present time, however, our legal information providers do not offer AI tools similar enough for comparative studies, and who knows if they ever will. Early days…
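
To make that kind of comparison concrete, here is a minimal sketch, assuming two hypothetical vendors whose citators assign treatment labels to the same set of cases. The vendor names, citations, and labels are invented for illustration and reflect no actual vendor’s output; the point is simply to surface disagreement, the places where model error or bias is most likely hiding, for human review.

    # Hypothetical sketch: comparing treatment labels two vendors' citators assign
    # to the same cases, in the spirit of Hellyer's citator comparison.
    # All data below is invented for illustration.

    vendor_a = {"410 U.S. 113": "negative", "347 U.S. 483": "positive", "198 F.3d 620": "neutral"}
    vendor_b = {"410 U.S. 113": "negative", "347 U.S. 483": "neutral",  "198 F.3d 620": "neutral"}

    shared = vendor_a.keys() & vendor_b.keys()
    disagreements = {c: (vendor_a[c], vendor_b[c]) for c in shared if vendor_a[c] != vendor_b[c]}
    agreement_rate = (len(shared) - len(disagreements)) / len(shared)

    print(f"Cases compared: {len(shared)}")
    print(f"Agreement rate: {agreement_rate:.0%}")
    for citation, (label_a, label_b) in disagreements.items():
        print(f"Review needed for {citation}: vendor A says {label_a!r}, vendor B says {label_b!r}")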

Until such time as there is a legitimate certification process to validate each individual AI product for the end user when the end user calls up specific applied AI output for legal analytics and predictive technology, is there any reason to assume the risk of using them? No, not really, but use them our end users will. Trust but (try to) validate; otherwise the output remains opaque to the end user, and that can lead to illusions of understanding.

The editors of Perspectives: Teaching Legal Research and Writing are seeking articles for the Fall 2019 issue. From Legal Skills Prof Blog:

The Spring 2019 issue of Perspectives: Teaching Legal Research and Writing is in final production with an anticipated publication date of June 2019. However, we presently have a few spots available for the Fall 2019 issue, and thus the Board of Editors is actively seeking articles to fill that volume. So if you’re working on an article idea appropriate for Perspectives (see below), or can develop a good manuscript in the next couple of months, please consider submitting it to us for consideration. There is no formal deadline since we will accept articles on a rolling basis, but the sooner the better if you’d like it published in the Fall issue.

Shay Elbaum, Reference Librarian, Stanford Law School, recounts his first experience providing data services for empirical legal research. “As a new librarian with just enough tech know-how to be dangerous, working on this project has been a learning experience in several dimensions. I’m sharing some highlights here in the hope that others in the same position will glean something useful.” Details on the CS-SIS blog. Interesting.

With continued advances in AI, machine learning, and legal analytics anticipated, we can expect legal information platforms to be supplanted by legal intelligence platforms in the not-too-distant future. But what would a legal intelligence (or “smart law”) platform look like? Well, I can’t describe a prototypical legal intelligence platform in any technical detail. But it will exist at the point of agile convergence of expert analysis and text- and data-driven features for core legal search across all market segments. I can, however, see what some “smart law” platform elements would be by looking at what Fastcase and Casetext are offering right now.

In my opinion, the best contemporary perspective on what a legal intelligence platform would be is to imagine that Fastcase and Casetext were one company. The imagined vendor would offer, in integrated fashion, Fastcase’s and Casetext’s extensive collections of primary and secondary resources (including legal news and contemporary analysis from the law blogosphere), Fastcase’s search engine algorithms for keyword searching, Casetext’s CARA for contextual searching, Casetext’s SmartCite, Fastcase’s Docket Alarm, Fastcase BK, and Fastcase’s install base of some 70-75% of US attorneys, all in the context of the industry’s most transparent pricing model, which both Fastcase and Casetext have already adopted.

Obviously, pricing models are not an essential element of a legal intelligence platform. But wouldn’t most potential “smart law” customers prefer transparent pricing? That won’t happen if WEXIS deploys the first legal intelligence platforms. Neither Fastcase nor Casetext (nor Thomson Reuters, LexisNexis, BBNA, or WK) has a “smart law” platform right now. Who will be the first? Perhaps one possibility is hiding in plain sight.

A snip from Casetext’s blog post, Cite-checking the Smart Way: An Interview about SmartCite with Casetext Co-Founder and Chief Product Officer, Pablo Arredondo (May 15, 2019):

“SmartCite was developed through a combination of cutting-edge machine learning, natural language processing, and experienced editorial review. Let’s start with the technology.

“SmartCite looks for patterns in millions of cases and uses judges’ own words to determine whether a case is good law and how a case has been cited by other cases. There are three key data sources analyzed by SmartCite. First, SmartCite looks at “explanatory parentheticals.” You know how judges will summarize other cases using parentheses? By looking for these phrases in opinions, we were able to extract 4.3 million case summaries and explanations written by judges! These explanatory parentheticals provide what I call “artisanal citator entries”: they are insightful, reliable, judge-written summaries of cases.

“The second key data source leveraged by SmartCite are phrases in judicial opinions that indicate that a case has been negatively treated. For example, when a judicial decision cites to a case that is bad law, the judge will often explain why that case is bad law by saying “overruled by” or “reversed by” or “superseded by statute, as stated in…” The same is true with good law. Judicial opinions will often indicate that a case is “affirmed by” another case.

“The third data source we use are Bluebook signals that judges use to characterize and distinguish cases. Bluebook signals can actually tell us a lot about a case. For example, when a judge introduces a case using “but see” or “cf.” or “contra,” the judge is indicating that this case is contrary authority, or that it has treated a legal issue differently from other cases. These contrary signals are powerful indicators of tension in the case law.

“However, using machine learning to look for judicial phrases and Bluebook signals is only the starting point of SmartCite’s analysis. We also rely on experienced editors to manage that process, review the case law, and make decisions on the ‘edge cases.'”

See also this page for SmartCite product information.
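
For readers curious what that phrase-spotting might look like mechanically, here is a toy sketch, not Casetext’s actual code, that flags sentences containing negative-treatment language or contrary Bluebook signals of the kind Arredondo describes. The patterns, labels, and sample text are simplified assumptions for illustration only.

    import re

    # Toy illustration (not Casetext's code) of spotting treatment cues in
    # opinion text: negative-treatment phrases and contrary Bluebook signals.
    NEGATIVE_TREATMENT = re.compile(
        r"\b(overruled by|reversed by|abrogated by|superseded by statute)\b", re.IGNORECASE)
    CONTRARY_SIGNALS = re.compile(r"\b(but see|contra)\b|\bcf\.", re.IGNORECASE)

    def flag_sentences(opinion_text):
        """Return (label, sentence) pairs for sentences containing treatment cues."""
        flags = []
        # Naive sentence split; real opinions would need a smarter tokenizer.
        for sentence in re.split(r"(?<=[.?!])\s+", opinion_text):
            if NEGATIVE_TREATMENT.search(sentence):
                flags.append(("negative treatment", sentence))
            elif CONTRARY_SIGNALS.search(sentence):
                flags.append(("contrary signal", sentence))
        return flags

    sample = ("The rule announced in Smith was overruled by the court in Doe. "
              "But see the dissent in Brown, which treats the issue differently.")
    for label, sentence in flag_sentences(sample):
        print(f"[{label}] {sentence}")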

H/T to Scott Fruehwald for calling attention to Kevin Bennardo & Alexa Chew (UNC), Citation Stickiness, 20 Journal of Appellate Practice & Process, Forthcoming, in his Legal Skills Prof Blog post Are Lawyers Citing the Best Cases to Courts? Fruehwald solicits comments on his post. One interesting question is whether we need to start teaching legal research differently in light of the results of Bennardo & Chew’s empirical study.

Here’s the abstract to Citation Stickiness:

This Article is an empirical study of what we call citation stickiness. A citation is sticky if it appears in one of the parties’ briefs and then again in the court’s opinion. Imagine that the parties use their briefs to toss citations in the court’s direction. Some of those citations stick and appear in the opinion — these are the sticky citations. Some of those citations don’t stick and are unmentioned by the court — these are the unsticky ones. Finally, some sources were never mentioned by the parties yet appear in the court’s opinion. These authorities are endogenous — they spring from the internal workings of the court itself.

In a perfect adversarial world, the percentage of sticky citations in courts’ opinions would be something approaching 100%. The parties would discuss the relevant authorities in their briefs, and the court would rely on the same authorities in its decision-making. Spoiler alert: our adversarial world is imperfect. Endogenous citations abound in judicial opinions and parties’ briefs are brimming with unsticky citations.

So we crunched the numbers. We analyzed 325 cases in the federal courts of appeals. Of the 7552 cases cited in those opinions, more than half were never mentioned in the parties’ briefs. But there’s more — in the Article, you’ll learn how many of the 23,479 cases cited in the parties’ briefs were sticky and how many were unsticky. You’ll see the stickiness data sliced and diced in numerous ways: by circuit, by case topic, by an assortment of characteristics of the authoring judge. Read on!
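
As a back-of-the-envelope illustration of the bookkeeping behind the metric, here is a minimal sketch using invented citation sets; the citations and counts are made up and are not drawn from Bennardo & Chew’s data.

    # Sketch of the stickiness bookkeeping the abstract describes, with invented
    # citation sets. Sticky = cited in a brief and in the opinion; endogenous =
    # cited by the court but never briefed by the parties.
    brief_citations   = {"347 U.S. 483", "410 U.S. 113", "5 U.S. 137", "198 F.3d 620"}
    opinion_citations = {"347 U.S. 483", "5 U.S. 137", "543 U.S. 551"}

    sticky     = brief_citations & opinion_citations   # briefed and cited by the court
    unsticky   = brief_citations - opinion_citations   # briefed but ignored by the court
    endogenous = opinion_citations - brief_citations   # the court's own additions

    print(f"Sticky share of opinion citations: {len(sticky) / len(opinion_citations):.0%}")
    print(f"Unsticky brief citations: {sorted(unsticky)}")
    print(f"Endogenous opinion citations: {sorted(endogenous)}")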

H/T to Bob Ambrogi for calling attention to Trialdex, a comprehensive resource for finding and comparing federal and state jury instructions. Bob observes “the site provides a searchable collection of all official or quasi-official federal civil and criminal instructions and annotations, as well as an index of 20,000 legal terms, statutes, CFRs and Supreme Court cases referenced in jury instructions. The index includes every reference in a federal instruction or annotation to a U.S. Supreme Court decision, a U.S. Code statute, a C.F.R. provision, and a federal rule.” Do note that Trialdex does not index state instructions, but provides links to all state instructions that are posted online and uses a Google search integration to enable full-text search of all state instructions.

From the blurb for National Survey of State Laws, 8th edition, edited by Richard Leiter:

The National Survey of State Laws (NSSL) is a print and online resource that provides an overall view of some of the most asked-about and controversial legal topics in the United States. This database is derived from Richard Leiter’s National Survey of State Laws print editions. Presented in chart format, NSSL allows users to make basic state-by-state comparisons of current state laws. The database is updated regularly as new laws are passed or updated.

The current 8th edition, along with the 7th, 6th and 5th editions, are included in database format, which allows users to compare the same laws as they existed in 2005, 2008, 2015 and 2018, and to make more current comparisons with laws added or updated in the database since 2018. All print editions are included in HeinOnline’s image-based, fully searchable, user-friendly platform.

The resource is available from Hein here.

From the abstract for Alexa Chew, Stylish Legal Citation, Arkansas Law Review, Vol. 71, Forthcoming:

Can legal citations be stylish? Is that even a thing? Yes, and this Article explains why and how. The usual approach to writing citations is as a separate, inferior part of the writing process, a perfunctory task that satisfies a convention but isn’t worth the attention that stylish writers spend on the “real” words in their documents. This Article argues that the usual approach is wrong. Instead, legal writers should strive to write stylish legal citations — citations that are fully integrated with the prose to convey information in a readable way to a legal audience. Prominent legal style expert Bryan Garner and others have repeatedly pinned legal style problems on citations. For example, Garner has argued that in-line (or textual) citations supposedly interrupt the prose and cause writers to ignore “unshapely” paragraphs and poor flow between sentences. Garner’s cause célèbre has been to persuade lawyers and judges to move their citations into footnotes, which he asserts will fix the stylistic problems caused by citations. This Article proposes both a different explanation for unstylish citations and a different solution. The explanation is that legal style experts don’t address citation as a component of legal style, leaving practitioners with little guidance about how to write stylish citations or even what they look like. This Article summarizes the citation-writing advice offered to practitioners in legal-style books like Plain English for Lawyers. Spoiler alert: it’s not much. The solution is to restructure the revision and editing processes to incorporate citations and treat them like “real” words, too. Rather than cordoning off citations from the rest of the prose, writers should embrace them as integral to the text as a whole. This Article describes a method for writing citations that goes well beyond “Bluebooking.” This method should be useful to any legal writer — from first-semester 1Ls to judicial clerks to experienced appellate practitioners.

H/T to beSpacific for calling attention to Kristina Niedringhaus’ Is it a “Good” Case? Can You Rely on BCite, KeyCite, and Shepard’s to Tell You?, JOTWELL (April 22, 2019) (reviewing Paul Hellyer, Evaluating Shepard’s, KeyCite, and BCite for Case Validation Accuracy, 110 Law Libr. J. 449 (2018)). Here’s a snip:

Hellyer’s article is an important read for anyone who relies on a citator for case validation, or determining whether a case is still “good” law. The results are fascinating and his methodology is thorough and detailed. Before delving into his findings, Hellyer reviews previous studies and explains his process in detail. His dataset is available upon request. The article has additional value because Hellyer shared his results with the three vendors prior to publication and describes and responds to some of their criticisms in his article, allowing the reader to make their own assessment of the critique.