The British North America Legislative Database includes characteristics of all the legislation passed by the pre-Confederation assemblies of eastern British North America: Nova Scotia (1758-1867); Cape Breton (1785-1820); Prince Edward Island (1768-1867); New Brunswick (1786-1867); Lower Canada (1792-1838); Upper Canada (1792-1840); the United Canadas (1841-1867); and Newfoundland (1832-1867).

From the abstract of Jennifer L. Behrens, ‘Unknown Symbols’: Online Legal Research in the Age of Emoji, 38 Legal Reference Services Quarterly ___ (forthcoming 2019):

Over the last decade, emoji and emoticons have made the leap from text messaging and social media to legal filings, court opinions, and law review articles. However, emoji and emoticons’ growth in popularity has tested the capability of online legal research systems to properly display and retrieve them in search results, posing challenges for future researchers of primary and secondary sources. This article examines current display practices on several of the most popular online legal research services (including Westlaw Edge, Lexis Advance, Bloomberg Law, Fastcase, HeinOnline, and Gale OneFile LegalTrac), and suggests effective workarounds for researchers.

Swiss researchers have found that algorithms mining large swaths of data can eliminate anonymity in federal court rulings, a finding with major ramifications for transparency and privacy protection. Using web scraping, the researchers built a database of all Federal Supreme Court decisions available online from 2000 to 2018 (122,218 decisions in all) and added further decisions from the Federal Administrative Court and the Federal Office of Public Health. Combining an algorithm with manual searches for connections between data points, they were able to de-anonymise (that is, reveal identities in) 84% of the judgments in less than an hour.
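The researchers’ actual pipeline is not public, but the underlying idea, linking quasi-identifiers in anonymised rulings against outside data sources, can be sketched in a few lines. The Python below is a minimal, hypothetical illustration: the `register` structure, its `attributes` field, and the two-hit threshold are invented for the example and are not the study’s method.

```python
import re

def candidate_matches(ruling_text, register):
    """Rank register entries by how many of their known attributes
    (quasi-identifiers such as a place, an industry, a year) co-occur
    in an anonymised ruling. Purely illustrative: a real study would
    use richer features and probabilistic record linkage."""
    matches = []
    for entry in register:  # e.g. {"name": ..., "attributes": [...]}
        hits = sum(
            1
            for attr in entry["attributes"]
            if re.search(re.escape(attr), ruling_text, re.IGNORECASE)
        )
        if hits >= 2:  # arbitrary threshold for this sketch
            matches.append((entry["name"], hits))
    return sorted(matches, key=lambda m: -m[1])

# Toy usage: two or more co-occurring attributes flag a candidate identity.
register = [{"name": "Acme Pharma AG",
             "attributes": ["Basel", "pharmaceutical", "2014"]}]
ruling = "A., a Basel pharmaceutical company, appealed the 2014 ruling."
print(candidate_matches(ruling, register))  # [('Acme Pharma AG', 3)]
```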

H/T to beSpacific for discovering this report.

From the abstract for G. Patrick Flanagan & Michelle Dewey, Where Do We Go from Here? Transformation and Acceleration of Legal Analytics in Practice (Georgia State University Law Review, Vol. 35, No. 4, 2019):

The advantages of evidence-based decision-making in the practice and theory of law should be obvious: Don’t make arguments to judges that seldom persuade; Jurisprudential analysis ought to align with sound social science; Attorneys should pitch legal work to clients that demonstrably need it. Despite the appearance of simplicity, there are practical and attitudinal barriers to finding and incorporating data into the practice of law.

This article evaluates the current technologies and systems used to publish and analyze legal information from a researcher’s perspective. The authors also explore the technological, economic, political, and legal impediments that have prevented legal information systems from being able to keep pace with other industries and more open models. The authors detail tangible recommendations for necessary next steps toward making legal analytics more widely adopted by practitioners.

The ABA Profile of the Legal Profession survey reports that when lawyers begin a research project, 37% start with a general search engine like Google, 31% start with a paid online resource, 11% start with a free state bar-sponsored legal research service, and 8% start with print resources.

A large majority (72%) use fee-based online resources for research. Westlaw is the most-used paid online legal research service, used by nearly two-thirds of all lawyers (64%) and preferred over other paid online services by nearly half of all lawyers (46%).

Among free websites used most often for legal research, 19% named Cornell’s Legal Information Institute, followed by FindLaw, Fastcase, and government websites (17% each), Google Scholar (13%), and Casemaker (11%). Despite the popularity of online sources, 44% still use print materials regularly.

The survey also reports that 10% of lawyers say their firms use artificial intelligence-based technology tools, while 36% think artificial intelligence tools will become mainstream in the legal profession in the next three to five years.

In thinking about dual-provider choices for legal information vendors in the BigLaw market, I believe we tend to think the licensing equation is (Westlaw + Lexis Advance). Why? The answer may be that we tend to divide the marketplace for commercial legal information into two distinct, nearly mutually exclusive segments: general, for core legal search provided by WEXIS, and specialty, for practice-specific legal search provided by Bloomberg BNA and Wolters Kluwer. This perspective assumes that BBNA and WK are adopted only on a practice-group/per-seat basis while WEXIS is adopted on an enterprise/firm-wide basis. In addition to perceptions of editorial quality (topical deep dives are expected from BBNA and WK but not WEXIS), perceived vendor pricing policies have shaped our take on the structure of this market.

According to Feit Consulting, the reality is quite different. Approximately 89% of AmLaw 200 firms license Wolters Kluwer, and 72% of those WK firms license it under an enterprise/firm-wide pricing plan, not a practice-group/per-seat plan. That 72% figure means WK’s firm-wide install base in the AmLaw 200 is approximately 64%, almost the same as Lexis Advance’s install rate in BigLaw.
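To spell out the arithmetic: 89% of firms license WK, and 72% of those license it firm-wide, so 0.89 × 0.72 ≈ 0.64, or roughly 64% of the AmLaw 200 on firm-wide WK plans.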

The dual-provider licensing equation really appears to be (Westlaw) + (Lexis or Wolters Kluwer). This is reinforced by Feit’s statistics on the likelihood of vendor cancellation: only 14% of Westlaw firms and 12% of WK firms are extremely or moderately likely to eliminate those services, less than half the share of firms extremely or moderately likely to eliminate Lexis (30%) or BBNA (29%). For dual-provider firms, (Westlaw) + (Lexis or Wolters Kluwer) appears to be a well-established equation.

ROSS Intelligence goes after “legacy” search platforms (i.e., WEXIS) in this promotional blog post, How ROSS AI Turns Legal Research On Its Head, Aug. 6, 2019. The post claims that ROSS supplants secondary analytical sources and makes West KeyCite and LexisNexis Shepard’s obsolete because its search function provides all the relevant applied AI search output for the research task at hand. In many respects, Fastcase and Casetext also could characterize their WEXIS competitors as legacy legal search platforms. Perhaps they have and I have just missed that.

To the best of my recollection, Fastcase, Casetext and ROSS have not explicitly promoted competition with one another; WEXIS has always been the primary target of their promotions. So why are Fastcase, Casetext and ROSS competing with each other in the marketplace? What if they joined forces in a manner so compelling that users abandoned WEXIS for core legal search? Two or all three of the companies could merge. Alternatively, they could find a creative way to offer license-one-get-all options.

Perhaps the first step is to reconsider the sole provider option. It’s time to revise the licensing equation; perhaps it should be (Westlaw or Lexis) + (Fastcase or Casetext or ROSS).

The 2019 Edition of Ken Svengalis’ Legal Information Buyer’s Guide & Reference Manual (New England LawPress, June 2019) features the most significant enhancements since the book was first published in 1996, including:

  • Invaluable introductions to each of 87 subject categories in Chapter 27, providing a subject overview, sources of law, and useful Internet sites.
  • One hundred and twenty-eight (128) pages of new material, as well as updates to existing content, bringing the total to 1,147 pages.
  • More than 150 new treatises, reference titles, and other product reviews (Chapter 27).
  • Enhanced bibliographies of legal treatises in 87 subject areas (up from 67 in 2018), including more than 80 titles on Legal Research and Writing, and with new, used, electronic, or West Monthly Assured Print Pricing on more than 2,900 titles in all (Chapter 27).
  • Enhanced bibliography of legal reference titles (Chapter 22).
  • Updated bibliographies of state legal resources and research guides, including the cost of CALR offerings (Chapter 28).
  • Completely updated bibliographic data for all covered titles.
  • Completely updated cost figures through 2019, with supplementation figures through 2018 (and 2019 for Matthew Bender).
  • Completely updated cost spreadsheet for supplemented titles (Appendix G).
  • Completely updated charts and tables reflecting 2018 corporate annual reports and pricing data.
  • Completely updated sample Westlaw and LexisNexis costs (Chapters 4 & 25).
  • Completely updated sample CALR costs for all vendors (Chapter 25).
  • Completely updated spreadsheet of published state statutory codes.
  • Recent industry developments and acquisitions, including profit margins (Chapter 2).
  • Updated information on Fastcase, Law360, and other online providers.
  • Cumulative supplementation cost data going back 26 years — all at your fingertips — to guide your acquisition and de-acquisition decisions.
  • Special alerts of egregious price and supplementation cost increases in recent years.

Highly recommended.

From the abstract for Jarrod Shobe, Enacted Legislative Findings and Purposes, University of Chicago Law Review, Vol. 86, 2019:

Statutory interpretation scholarship generally imagines a sharp divide between statutory text and legislative history. This Article shows that scholars have failed to consider the implications of a hybrid type of text that is enacted by Congress and signed by the president, but which looks like legislative history. This text commonly appears at the beginning of a bill under headings such as “Findings” and “Purposes.” This enacted text often provides a detailed rationale for legislation and sets out Congress’s intent and purposes. Notably, it is drafted in plain language by political congressional staff rather than technical drafters, so it may be the portion of the enacted text that is most accessible to members of Congress and their high-level staff. Despite enacted findings and purposes’ apparent importance to interpretation, courts infrequently reference them and lack a coherent theory of how they should be used in statutory interpretation. In most cases in which courts have referenced them, they have relegated them to a status similar to that of unenacted legislative history despite the fact that they are less subject to formalist and pragmatic objections. Perhaps because courts have infrequently and inconsistently relied on enacted findings and purposes, scholars have also failed to consider them, so their relevance to statutory interpretation has gone mostly unrecognized and untheorized in the legal literature.

This Article argues that all of the enacted text of a statute must be read together and with equal weight, as part of the whole law Congress enacted, to come up with an interpretation that the entire text can bear. This is more likely to generate an interpretation in line with Congress’s intent than a mode of interpretation that focuses on the specific meaning of isolated terms based on dictionaries, canons, unenacted legislative history, or other unenacted tools. This Article shows that, when textualists’ formalist arguments against legislative history are taken off the table, there may be less that divides textualists from purposivists. Enacted findings and purposes may offer a text-based, and therefore more constrained and defensible, path forward for purposivism, which has been in retreat in recent decades in the face of strong textualist attacks.

“This white paper is presented by LexisNexis on behalf of the author. The opinions may not represent the opinions of LexisNexis. This document is for educational purposes only.” But the name of the author was not disclosed, the paper is branded with the LexisNexis logo on every page, and the paper is hosted online by LexisNexis. The paper is about as “educational” as anything Trump opines about.

In the whitepaper, Are Free & Low-Cost Legal Resources Worth the Risk?, LexisNexis once again goes after low-cost (but high-tech) legal information vendors, using the paper’s critique of Google Scholar to slip in false claims about Casetext (and Fastcase). This is another instance of the mantra “low cost can cost you” that the folks in LN’s C-suite like to chant on the deck of the Titanic of very expensive legal information vendors.

In LexisNexis, scared of competition, lies about Casetext (June 4, 2019), Casetext’s Tara McCarty corrects some of the whitepaper’s falsehoods in a footnote:

“A few examples: (1) They say Casetext’s citator, SmartCite (our alternative to Shepard’s), is “based on algorithms rather than human editors.” While we do use algorithms to make the process more efficient, a team of human editors reviews SmartCite results. By using both, we actually improve accuracy, allowing computers to catch human error and vice versa. (2) They say Casetext doesn’t have slip opinions. Slip opinions are available on Casetext within 24 hours of publication. (3) They say Casetext doesn’t have case summaries. Not only does Casetext have over four million case summaries — those summaries are penned by judges, rather than nameless editors.”

McCarty’s editorial is recommended. The whitepaper, not so much. Enough said.

Trust is a state of readiness to take a risk in a relationship. Once upon a time most law librarians were predisposed to trust legal information vendors and their products and services. Think of Shepard’s in print, when Shepard’s was the only available citator and its signals were by default the industry standard. Think of computer-assisted legal research in the late 1970s and early 1980s, when Lexis was the only full-text legal search vendor and the degree of risk taken by a searcher was partially controlled by the proper use of Boolean operators.

Today, output from legal information platforms does not always inspire confidence in the information provided, be it legal search results or citator output, as comparative studies of each by Mart and Hellyer have demonstrated. What about the output now being offered through the application of artificial intelligence to legal analytics and predictive technology? As legal information professionals, are we willing to be vulnerable to the actions of our vendors, trusting that they will provide actionable intelligence important to our user population, irrespective of our ability to monitor or control their use of artificial intelligence for legal analytics and predictive technology?

Hopefully we are not so naive as to trust our vendors’ applied AI output at face value. But we won’t be given the opportunity to shine a light into the “black box” because of understandable proprietary concerns. What’s needed is a way to identify the impact of model error and bias. One way is to compare similar legal analytics outputs (tools that identify trends and patterns using data points from past case law, win/loss rates, and even a judge’s history) or similar predictive technology outputs that forecast litigation outcomes, much as Mart did for legal search and Hellyer did for citators. At present, however, our legal information providers do not offer AI tools similar enough for comparative studies, and who knows if they ever will. Early days.
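To make the idea concrete, here is a minimal Python sketch of the kind of comparison Mart and Hellyer ran: feed the same citations to two vendors’ tools and measure how often their outputs agree. Everything here is assumed for illustration; vendors do not export treatment labels in a common schema, and the labels and toy data below are invented.

```python
def citator_agreement(vendor_a, vendor_b):
    """Compare the treatment labels two citators assign to the same cases.

    vendor_a / vendor_b: dicts mapping a case citation to a treatment
    label (e.g., "positive", "caution", "negative"). Returns the share
    of shared cases on which the citators agree, plus the conflicts.
    """
    shared = vendor_a.keys() & vendor_b.keys()
    conflicts = {
        cite: (vendor_a[cite], vendor_b[cite])
        for cite in shared
        if vendor_a[cite] != vendor_b[cite]
    }
    rate = 1 - len(conflicts) / len(shared) if shared else 0.0
    return rate, conflicts

# Toy data: the two citators disagree on one of two shared cases.
a = {"347 U.S. 483": "positive", "410 U.S. 113": "negative"}
b = {"347 U.S. 483": "positive", "410 U.S. 113": "caution"}
rate, conflicts = citator_agreement(a, b)
print(f"Agreement: {rate:.0%}; conflicts: {conflicts}")  # Agreement: 50%; ...
```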

Until there is a legitimate certification process to validate each individual AI product for the end user who calls up specific applied AI output for legal analytics and predictive technology, is there any reason to assume the risk of using these tools? No, not really; but use them our end users will. Trust but (try to) validate; otherwise the output remains opaque to the end user, and that can lead to illusions of understanding.

The editors of Perspectives: Teaching Legal Research and Writing are seeking articles for the Fall 2019 issue. From Legal Skills Prof Blog:

The Spring 2019 issue of Perspectives: Teaching Legal Research and Writing is in final production with an anticipated publication date of June 2019. However, we presently have a few spots available for the Fall 2019 issue and thus the Board of Editors is actively seeking articles to fill that volume. So if you’re working on an article idea appropriate for Perspectives (see below), or can develop a good manuscript in the next couple of months, please consider submitting it to us for consideration. There is no formal deadline since we will accept articles on a rolling basis but the sooner the better if you’d like it published in the Fall issue.

Shay Elbaum, Reference Librarian, Stanford Law School, recounts his first experience providing data services for empirical legal research. “As a new librarian with just enough tech know-how to be dangerous, working on this project has been a learning experience in several dimensions. I’m sharing some highlights here in the hope that others in the same position will glean something useful.” Details on the CS-SIS blog. Interesting.

With continued advances in AI, machine learning, and legal analytics anticipated, we can expect legal information platforms to be supplanted by legal intelligence platforms in the not-too-distant future. But what would a legal intelligence (or “smart law”) platform look like? I can’t describe a prototypical legal intelligence platform in any technical detail, but it will exist at the convergence of expert analysis and text- and data-driven features for core legal search across all market segments. I can, however, see what some “smart law” platform elements would be by looking at what Fastcase and Casetext offer right now.

In my opinion, the best contemporary way to picture a legal intelligence platform is to imagine that Fastcase and Casetext were one company. The imagined vendor would offer, in integrated fashion: Fastcase’s and Casetext’s extensive collections of primary and secondary resources, including legal news and contemporary analysis from the law blogosphere; Fastcase’s search engine algorithms for keyword searching; Casetext’s CARA for contextual searching; Casetext’s SmartCite; Fastcase’s Docket Alarm; Fastcase BK; and Fastcase’s install base of some 70-75% of US attorneys, all under the industry’s most transparent pricing model, which both Fastcase and Casetext have already adopted.

Obviously, pricing models are not an essential element of a legal intelligence platform. But wouldn’t most potential “smart law” customers prefer transparent pricing? That won’t happen if WEXIS deploys the first legal intelligence platforms. Neither Fastcase nor Casetext (nor Thomson Reuters, LexisNexis, BBNA, or WK) has a “smart law” platform right now. Who will be the first? Perhaps one possibility is hiding in plain sight.

A snip from Casetext’s blog post, Cite-checking the Smart Way: An Interview about SmartCite with Casetext Co-Founder and Chief Product Officer, Pablo Arredondo (May 15, 2019):

“SmartCite was developed through a combination of cutting-edge machine learning, natural language processing, and experienced editorial review. Let’s start with the technology.

“SmartCite looks for patterns in millions of cases and uses judges’ own words to determine whether a case is good law and how a case has been cited by other cases. There are three key data sources analyzed by SmartCite. First, SmartCite looks at “explanatory parentheticals.” You know how judges will summarize other cases using parentheses? By looking for these phrases in opinions, we were able to extract 4.3 million case summaries and explanations written by judges! These explanatory parentheticals provide what I call “artisanal citator entries”: they are insightful, reliable, judge-written summaries of cases.

“The second key data source leveraged by SmartCite is phrases in judicial opinions that indicate that a case has been negatively treated. For example, when a judicial decision cites to a case that is bad law, the judge will often explain why that case is bad law by saying “overruled by” or “reversed by” or “superseded by statute, as stated in…” The same is true with good law. Judicial opinions will often indicate that a case is “affirmed by” another case.

“The third data source we use is Bluebook signals that judges use to characterize and distinguish cases. Bluebook signals can actually tell us a lot about a case. For example, when a judge introduces a case using “but see” or “cf.” or “contra,” the judge is indicating that this case is contrary authority, or that it has treated a legal issue differently from other cases. These contrary signals are powerful indicators of tension in the case law.

“However, using machine learning to look for judicial phrases and Bluebook signals is only the starting point of SmartCite’s analysis. We also rely on experienced editors to manage that process, review the case law, and make decisions on the ‘edge cases.'”
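Arredondo’s three data sources lend themselves to a rough sketch. The regular expressions below are drastic simplifications written for illustration only; Casetext’s production system is described as machine learning plus editorial review, not a handful of regexes.

```python
import re

# 1. Explanatory parentheticals: a parenthesized clause opening with a
#    present participle, e.g. "(holding that ...)". Real extraction
#    would first locate the citation the parenthetical attaches to.
PARENTHETICAL = re.compile(r"\((?:holding|finding|concluding|noting)\b[^)]*\)", re.I)

# 2. Treatment phrases drawn from the interview's own examples.
NEGATIVE = re.compile(r"\b(?:overruled by|reversed by|superseded by statute)\b", re.I)
POSITIVE = re.compile(r"\baffirmed by\b", re.I)

# 3. Bluebook signals indicating contrary authority.
CONTRARY = re.compile(r"\b(?:but see|contra)\b|\bcf\.", re.I)

def citation_signals(sentence):
    """Tag a citing sentence with the treatment hints a SmartCite-style
    analysis might extract. Illustrative only."""
    tags = []
    if NEGATIVE.search(sentence):
        tags.append("negative-treatment")
    if POSITIVE.search(sentence):
        tags.append("positive-treatment")
    if CONTRARY.search(sentence):
        tags.append("contrary-signal")
    tags += [m.group(0) for m in PARENTHETICAL.finditer(sentence)]
    return tags

print(citation_signals(
    "But see Smith v. Jones, 123 F.3d 456 (holding that the statute applies)."
))  # ['contrary-signal', '(holding that the statute applies)']
```

Run against the sample sentence, the sketch tags the Bluebook signal and pulls out the judge-written parenthetical, the raw signals Arredondo describes, before any editorial review.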

See also this page for SmartCite product information.