Thomson Reuters has secured a multiyear contract to give Department of Justice personnel access to technology platforms designed for legal research and investigative purposes. More than 25,000 users across the department will have access to the information service provider’s Westlaw Edge, Litigation Analytics, Drafting Assistant, and Practical Law products.

From the abstract of Jennifer L. Behrens, ‘Unknown Symbols’: Online Legal Research in the Age of Emoji, 38 Legal Reference Services Quarterly ___ (forthcoming 2019):

Over the last decade, emoji and emoticons have made the leap from text messaging and social media to legal filings, court opinions, and law review articles. However, emoji and emoticons’ growth in popularity has tested the capability of online legal research systems to properly display and retrieve them in search results, posing challenges for future researchers of primary and secondary sources. This article examines current display practices on several of the most popular online legal research services (including Westlaw Edge, Lexis Advance, Bloomberg Law, Fastcase, HeinOnline, and Gale OneFile LegalTrac), and suggests effective workarounds for researchers.
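
Behrens’s specific workarounds are not reproduced here, but a minimal sketch of one generic approach may help: converting an emoji to its official Unicode name so it can be searched, cited, or indexed as plain text. The example below uses Python’s standard unicodedata module; the sample sentence and bracketing convention are illustrative assumptions, not drawn from the article.

```python
# Minimal sketch (not from the article): translate emoji into searchable text
# using their official Unicode names, a common workaround when a research
# platform cannot display or index the glyphs themselves.
import unicodedata

def emoji_to_search_terms(text: str) -> str:
    """Replace each non-ASCII character with its bracketed Unicode name."""
    out = []
    for ch in text:
        if ord(ch) > 127:
            out.append(f"[{unicodedata.name(ch, 'UNKNOWN SYMBOL')}]")
        else:
            out.append(ch)
    return "".join(out)

print(emoji_to_search_terms("Defendant replied \U0001F600"))
# -> Defendant replied [GRINNING FACE]
```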

The ABA Profile of the Legal Profession survey reports that, when lawyers begin a research project, 37% say they start with a general search engine like Google, 31% with a paid online resource, 11% with a free state bar-sponsored legal research service, and 8% with print resources.

A large majority (72%) use fee-based online resources for research. Westlaw is the most-used paid online legal research service, used by nearly two-thirds of all lawyers (64%) and preferred over other paid online services by nearly half of all lawyers (46%).

When it comes to the free websites used most often for legal research, 19% named Cornell’s Legal Information Institute, followed by FindLaw, Fastcase, and government websites (17% each), Google Scholar (13%), and Casemaker (11%). Despite the popularity of online sources, 44% still use print materials regularly.

The survey also reports that 10% of lawyers say their firms use artificial intelligence-based technology tools, while 36% think artificial intelligence tools will become mainstream in the legal profession in the next three to five years.

In thinking about dual-provider choices for legal information vendors in the BigLaw market, I believe we tend to assume the licensing equation is (Westlaw + Lexis Advance). Why? The answer may be that we tend to divide the marketplace for commercial legal information into two distinct, nearly mutually exclusive segments: general, for core legal search provided by WEXIS, and specialty, for practice-specific legal search provided by Bloomberg BNA and Wolters Kluwer. This perspective assumes that BBNA and WK are adopted only on a practice-group/per-seat basis while WEXIS is adopted on an enterprise/firm-wide basis. In addition to perceptions of editorial quality, where topical deep dives are expected from BBNA and WK but not WEXIS, perceived vendor pricing policies have influenced our take on the structure of this market.

According to Feit Consulting, the reality is quite different. Approximately 89% of AmLaw 200 firms license Wolters Kluwer, and 72% of those WK firms license the service under an enterprise/firm-wide pricing plan, not a practice-group/per-seat plan. That 72% figure means WK’s firm-wide install base in the AmLaw 200 is approximately 64% (0.89 × 0.72 ≈ 0.64), or almost the same as Lexis Advance’s install rate in BigLaw.

The dual-provider licensing equation really appears to be (Westlaw) + (Lexis or Wolters Kluwer). This is reinforced by Feit’s statistics on the likelihood of vendor cancellation: only 14% of Westlaw firms and 12% of WK firms are extremely or moderately likely to eliminate those services, less than half the share of firms extremely or moderately likely to eliminate Lexis (30%) or BBNA (29%). For dual-provider firms, (Westlaw) + (Lexis or Wolters Kluwer) appears to be a well-established equation.

ROSS Intelligence goes after “legacy” search platforms (i.e., WEXIS) in this promotional blog post, How ROSS AI Turns Legal Research On Its Head, Aug. 6, 2019. The post claims that ROSS supplants secondary analytical sources and makes West KeyCite and LexisNexis Shepard’s obsolete because its search function provides all the relevant applied AI search output for the research task at hand. In many respects, Fastcase and Casetext also could characterize their WEXIS competitors as legacy legal search platforms. Perhaps they have and I have just missed that.

To the best of my recollection, Fastcase, Casetext and ROSS have not explicitly promoted competition with each other. WEXIS has always been the primary target in their promotions. So why are Fastcase, Casetext and ROSS competing with each other in the marketplace? What if they joined forces in such a compelling manner that users abandoned WEXIS for core legal search? Two or all three of the companies could merge. In the alternative, they could find a creative way to offer license-one-get-all options.

Perhaps the first step is to reconsider the sole provider option. It’s time to revise the licensing equation; perhaps it should be (Westlaw or Lexis) + (Fastcase or Casetext or ROSS).

New York Attorney General Letitia James announced a multistate settlement with LexisNexis Risk Solutions and several of its affiliates for defrauding state law enforcement agencies out of more than $2.8 million. LexisNexis deliberately failed to pay those agencies agreed-upon fees for the resale of car crash reports. The press release explains how LN Risk violated its contract for this data:

“LexisNexis defrauded law enforcement agencies in New York and other states by paying law enforcement agencies for only the first crash report sold, and not for each subsequent report resold, as their contracts required. In particular, the investigation found that from June 2012 through May 2019, LexisNexis fulfilled customer crash report requests by searching its database and — if it had previously sold the requested report to another customer — would resell the report without paying the contracted agency its agreed-upon fee for the new sale. LexisNexis would then omit the new sale from reports of sales it was contractually obliged to provide to the agencies. As a result, LexisNexis generated monthly reports for the agencies that falsely understated total crash report sales, and deprived New York State law enforcement agencies of sales fees they were entitled to receive.”

Just joking!

Last summer, there were some very visible signs that AALL was ratcheting up its consumer advocacy efforts for the benefit of AALL law firm members because LN was tying ancillary products to the sale of a Lexis Advance contract in the law firm market. Go here for a summary of the tying controversy written about six months ago. Since then, nothing. The controversy is not on the Executive Board’s summer meeting agenda this year and the June 17, 2019 CRIV liaison call notes do not mention it directly. So much for AALL taking “legal or commercial action” against LexisNexis for its anticompetitive tying sales strategy.

Early results from the 25% of the AmLaw 200 participating so far in a Feit Consulting survey indicate that the adoption rates of Westlaw Edge and LexisNexis’s Context are roughly the same, trending at 15%. “Context seems to be getting much more consideration, however, because of its much lower cost. At this point 40% of firms with Lexis are actively considering Context,” according to Feit Consulting’s blog post.

My primary concern is that comparing Westlaw Edge and Context simply because both offer litigation analytics tells only part of the story. Westlaw Edge offers much more than the litigation analytics offered by Context; it also includes WestSearch Plus, KeyCite Overruling Risk, Statutes Compare, and Regulations Compare. And Westlaw Edge will eventually replace Westlaw, whereas Context will not replace Lexis Advance.

The 2019 Edition of Ken Svengalis’ Legal Information Buyer’s Guide & Reference Manual (New England LawPress, June 2019) offers the most significant enhancements since the book was first published in 1996, including:

  • Invaluable introductions to each of 87 subject categories in Chapter 27, providing subject overview, sources of law, and useful Internet sites.
  • One hundred and twenty-eight (128) pages of new material, as well as updating of existing content, now totaling 1,147 pages in all.
  • More than 150 new treatises, reference titles, and other product reviews (Chapter 27).
  • Enhanced bibliographies of legal treatises in 87 subject areas (up from 67 in 2018), including more than 80 titles on Legal Research and Writing, and with new, used, electronic, or West Monthly Assured Print Pricing on more than 2,900 titles in all (Chapter 27).
  • Enhanced bibliography of legal reference titles (Chapter 22).
  • Updated bibliographies of state legal resources and research guides, including the cost of CALR offerings (Chapter 28).
  • Completely updated bibliographic data for all covered titles.
  • Completely updated cost figures through 2019, with supplementation figures through 2018 (and 2019 for Matthew Bender).
  • Completely updated cost spreadsheet for supplemented titles (Appendix G).
  • Completely updated charts and tables reflecting 2018 corporate annual reports and pricing data.
  • Completely updated sample Westlaw and LexisNexis costs (Chapters 4 & 25).
  • Completely updated sample CALR costs for all vendors (Chapter 25).
  • Completely updated spreadsheet of published state statutory codes.
  • Recent industry developments and acquisitions, including profit margins (Chapter 2).
  • Updated information on Fastcase, Law360, and other online providers.
  • Cumulative supplementation cost data going back 26 years — all at your fingertips — to guide your acquisition and de-acquisition decisions.
  • Special alerts of egregious price and supplementation cost increases in recent years.

Highly recommended.

Kudos to Fastcase for launching Case Alerts, a daily report of court decisions in key practice areas, in partnership with the Florida Bar. According to the press release, “the subscription-only service introduced Florida Family Law Alerts, a daily e-mail summary of family law decisions from Florida courts. Fastcase and The Florida Bar will roll out new Florida practice areas throughout the summer, including business law, real property, probate, trusts, and tax law alerts.” I bet we will hear that Fastcase will be extending this service to other state jurisdictions in partnership with additional state bar associations in the not too distant future.

“This white paper is presented by LexisNexis on behalf of the author. The opinions may not represent the opinions of LexisNexis. This document is for educational purposes only.” But the name of the author was not disclosed, the paper is branded with the LexisNexis logo on every page, and the paper is hosted online by LexisNexis. The paper is about as “educational” as anything Trump opines about.

In the whitepaper, Are Free & Low-Cost Legal Resources Worth the Risk?, LexisNexis once again goes after low-cost (but high-tech) legal information vendors, using the paper’s critique of Google Scholar to slip in false claims about Casetext (and Fastcase). This is another instance of the “low cost can cost you” mantra the folks in LN’s C-suite like to chant on the deck of the Titanic of very expensive legal information vendors.

In LexisNexis, scared of competition, lies about Casetext (June 4, 2019), Casetext’s Tara McCarty corrects some of the whitepaper’s falsehoods in a footnote:

“A few examples: (1) They say Casetext’s citator, SmartCite (our alternative to Shepard’s), is “based on algorithms rather than human editors.” While we do use algorithms to make the process more efficient, a team of human editors reviews SmartCite results. By using both, we actually improve accuracy, allowing computers to catch human error and visa versa. (2) They say Casetext doesn’t have slip opinions. Slip opinions are available on Casetext within 24 hours of publication. (3) They say Casetext doesn’t have case summaries. Not only does Casetext have over four million case summaries — those summaries are penned by judges, rather than nameless editors.”

McCarty’s editorial is recommended. The whitepaper, not so much.  Enough said.

Trust is a state of readiness to take a risk in a relationship. Once upon a time, most law librarians were predisposed to trust legal information vendors and their products and services. Think of Shepard’s in print, when Shepard’s was the only available citator and its signals were, by default, the industry standard. Think of computer-assisted legal research in the late 1970s and early 1980s, when the degree of risk a searcher took was partially controlled by the proper use of Boolean operators and Lexis was the only full-text legal search vendor.

Today, output from legal information platforms does not always build confidence in the information provided, be it legal search results or citator output, as comparative studies of each by Mart and Hellyer have demonstrated. What about the output we are now being offered through artificial intelligence applied to legal analytics and predictive technology? As legal information professionals, are we willing to be vulnerable to the actions of our vendors based on some expectation that they will provide actionable intelligence important to our user population, irrespective of our ability to monitor or control their use of artificial intelligence for legal analytics and predictive technology?

Hopefully we are not so naive as to trust our vendors’ applied-AI output at face value. But we won’t be given the opportunity to shine a light into the “black box” because of understandable proprietary concerns. What’s needed is a way to identify the impact of model error and bias. One way is to compare similar legal analytics outputs that identify trends and patterns using data points from past case law, win/loss rates, and even a judge’s history, or similar predictive technology outputs that forecast litigation outcomes, much as Mart did for legal search and Hellyer did for citators. At present, however, our legal information providers do not offer AI tools similar enough for comparative studies, and who knows if they ever will. Early days…
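
To make that kind of comparison concrete, here is a minimal sketch, with entirely invented data, of the head-to-head approach Mart took for search results and Hellyer took for citators: run the same cases through two tools and measure how often their outputs agree. The vendor names, treatment labels, and numbers below are hypothetical.

```python
# Hypothetical comparison of two vendors' citator/analytics output for the
# same set of cases. The data are invented; the point is the method:
# identical inputs, side-by-side outputs, and a simple agreement rate.
from typing import Dict

vendor_a: Dict[str, str] = {   # case citation -> treatment flag
    "Case v. One":   "negative",
    "Case v. Two":   "positive",
    "Case v. Three": "negative",
    "Case v. Four":  "neutral",
}
vendor_b: Dict[str, str] = {
    "Case v. One":   "negative",
    "Case v. Two":   "neutral",
    "Case v. Three": "negative",
    "Case v. Four":  "neutral",
}

shared = vendor_a.keys() & vendor_b.keys()
agree = sum(vendor_a[c] == vendor_b[c] for c in shared)
print(f"Agreement on {len(shared)} cases: {agree / len(shared):.0%}")
# -> Agreement on 4 cases: 75%
```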

Until there is a legitimate certification process that validates each individual AI product for the end user when the end user calls up specific applied-AI output for legal analytics and predictive technology, is there any reason to assume the risk of using these tools? No, not really, but use them our end users will. Trust but (try to) validate; otherwise the output remains opaque to the end user, and that can lead to illusions of understanding.

From the abstract for Karni Chagal, Am I an Algorithm or a Product? When Products Liability Should Apply to Algorithmic Decision-Makers (Stanford Law & Policy Review, Forthcoming):

Over the years mankind has come to rely increasingly on machines. Technology is ever advancing, and in addition to relinquishing physical and mere computational tasks to machines, algorithms’ self-learning abilities now enable us to entrust machines with professional decisions, for instance, in the fields of law, medicine and accounting.

A growing number of scholars and entities now acknowledge that whenever certain “sophisticated” or “autonomous” decision-making systems cause damage, they should no longer be subject to products liability but deserve different treatment from their “traditional” predecessors. What is it that separates “traditional” algorithms and machines that for decades have been subject to traditional product liability legal framework from what I would call “thinking algorithms,” that seem to warrant their own custom-made treatment? Why have “auto-pilots,” for example, been traditionally treated as “products,” while autonomous vehicles are suddenly perceived as a more “human-like” system that requires different treatment? Where is the line between machines drawn?

Scholars who touch on this question have generally referred to the system’s level of autonomy as a classifier between traditional products and systems incompatible with products liability laws (whether autonomy was mentioned expressly or reflected in the specific questions posed). This article, however, argues that a classifier based on autonomy level is not a good one, given its excessive complexity, the vague classification process it dictates, the inconsistent results it might lead to, and the fact that said results mainly shed light on the system’s level of autonomy but not on its compatibility with products liability laws.

This article therefore proposes a new approach to distinguishing traditional products from “thinking algorithms” for determining whether products liability should apply. Instead of examining the vague concept of “autonomy,” the article analyzes the system’s specific features and examines whether they promote or hinder the rationales behind the products liability legal framework. The article thus offers a novel, practical method for decision-makers seeking to decide when products liability should continue to apply to “sophisticated” systems and when it should not.

Based on Edgar Alan Rayo’s assessment of companies’ offerings in the legal field, current applications of AI appear to fall into six major categories:

  1. Due diligence – Litigators perform due diligence with the help of AI tools to uncover background information. We’ve decided to include contract review, legal research and electronic discovery in this section.
  2. Prediction technology – AI software generates results that forecast litigation outcomes.
  3. Legal analytics – Lawyers can mine data points from past case law, win/loss rates, and a judge’s history for trends and patterns (a minimal sketch follows this list).
  4. Document automation – Law firms use software templates to create filled-out documents based on data input.
  5. Intellectual property – AI tools guide lawyers in analyzing large IP portfolios and drawing insights from the content.
  6. Electronic billing – Lawyers’ billable hours are computed automatically.
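
As a concrete illustration of category 3, the sketch below computes a judge’s grant rate for one motion type from a handful of invented past outcomes. Real legal analytics products draw on far larger datasets and richer models; the judges, motions, and results here are fabricated, but the underlying arithmetic is this simple.

```python
# Hypothetical legal-analytics calculation: a judge's grant rate for motions
# to dismiss, computed from invented historical outcomes.
from collections import Counter

past_rulings = [  # (judge, motion type, outcome) -- fabricated sample data
    ("Judge Roe", "motion to dismiss", "granted"),
    ("Judge Roe", "motion to dismiss", "denied"),
    ("Judge Roe", "motion to dismiss", "granted"),
    ("Judge Roe", "summary judgment",  "denied"),
    ("Judge Doe", "motion to dismiss", "denied"),
]

def grant_rate(judge: str, motion: str) -> float:
    """Share of the judge's past rulings on this motion type that were granted."""
    outcomes = Counter(o for j, m, o in past_rulings if j == judge and m == motion)
    total = sum(outcomes.values())
    return outcomes["granted"] / total if total else 0.0

print(f"{grant_rate('Judge Roe', 'motion to dismiss'):.0%}")  # -> 67%
```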

Rayo explores the major areas of current AI applications in law, individually and in depth here.

TechRepublic reports that Microsoft has announced that Word Online will incorporate a feature known as Ideas this fall. Backed by artificial intelligence and machine learning courtesy of Microsoft Graph, Ideas will suggest ways to help you enhance your writing and create better documents. Ideas will also show you how to better organize and structure your documents by suggesting tables, styles, and other features already available in Word.

With continued advances in AI, machine learning, and legal analytics anticipated, we can expect legal information platforms to be supplanted by legal intelligence platforms in the not-too-distant future. But what would a legal intelligence (or “smart law”) platform look like? I can’t describe a prototypical legal intelligence platform in any technical detail, but it will exist at the point where expert analysis and text- and data-driven features converge for core legal search across all market segments. I do, however, see what some “smart law” platform elements would be when looking at what Fastcase and Casetext are offering right now.

In my opinion, the best contemporary perspective on what a legal intelligence platform would be is to imagine that Fastcase and Casetext were one company. The imagined vendor would offer, in integrated fashion, Fastcase’s and Casetext’s extensive collections of primary and secondary resources (including legal news and contemporary analysis from the law blogosphere), Fastcase’s search engine algorithms for keyword searching, Casetext’s CARA A.I. for contextual searching, Casetext’s SmartCite, Fastcase’s Docket Alarm, Fastcase BK, and Fastcase’s install base of some 70-75% of US attorneys, all in the context of the industry’s most transparent pricing model, which both Fastcase and Casetext have already adopted.

Obviously, pricing models are not an essential element of a legal intelligence platform. But wouldn’t most potential “smart law” customers prefer transparent pricing? That won’t happen if WEXIS deploys the first legal intelligence platforms. Neither Fastcase nor Casetext (nor Thomson Reuters, LexisNexis, BBNA, or WK) has a “smart law” platform right now. Who will be the first? Perhaps one possibility is hiding in plain sight.

A snip from Casetext’s blog post, Cite-checking the Smart Way: An Interview about SmartCite with Casetext Co-Founder and Chief Product Officer, Pablo Arredondo (May 15, 2019):

“SmartCite was developed through a combination of cutting-edge machine learning, natural language processing, and experienced editorial review. Let’s start with the technology.

“SmartCite looks for patterns in millions of cases and uses judges’ own words to determine whether a case is good law and how a case has been cited by other cases. There are three key data sources analyzed by SmartCite. First, SmartCite looks at “explanatory parentheticals.” You know how judges will summarize other cases using parentheses? By looking for these phrases in opinions, we were able to extract 4.3 million case summaries and explanations written by judges! These explanatory parentheticals provide what I call “artisanal citator entries”: they are insightful, reliable, judge-written summaries of cases.

“The second key data source leveraged by SmartCite are phrases in judicial opinions that indicate that a case has been negatively treated. For example, when a judicial decision cites to a case that is bad law, the judge will often explain why that case is bad law by saying “overruled by” or “reversed by” or “superseded by statute, as stated in…” The same is true with good law. Judicial opinions will often indicate that a case is “affirmed by” another case.

“The third data source we use are Bluebook signals that judges use to characterize and distinguish cases. Bluebook signals can actually tell us a lot about a case. For example, when a judge introduces a case using “but see” or “cf.” or “contra,” the judge is indicating that this case is contrary authority, or that it has treated a legal issue differently from other cases. These contrary signals are powerful indicators of tension in the case law.

“However, using machine learning to look for judicial phrases and Bluebook signals is only the starting point of SmartCite’s analysis. We also rely on experienced editors to manage that process, review the case law, and make decisions on the ‘edge cases.'”

See also this page for SmartCite product information.
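
Arredondo describes phrase-spotting (treatment language such as “overruled by” and Bluebook signals such as “but see”) as the starting point of SmartCite’s analysis. The sketch below is not Casetext’s code; it is a toy regex pass over an invented sentence that flags such phrases, just to make the underlying idea concrete.

```python
# Toy illustration of phrase-spotting for citator signals. This is NOT
# Casetext's implementation; it simply flags treatment language and Bluebook
# signals of the kind described in the interview.
import re

NEGATIVE = re.compile(r"\b(overruled by|reversed by|superseded by statute)\b", re.I)
POSITIVE = re.compile(r"\baffirmed by\b", re.I)
CONTRARY_SIGNALS = re.compile(r"\bbut see\b|\bcf\.|\bcontra\b", re.I)

def classify(sentence: str) -> list[str]:
    """Return rough treatment labels suggested by the sentence's language."""
    labels = []
    if NEGATIVE.search(sentence):
        labels.append("negative treatment")
    if POSITIVE.search(sentence):
        labels.append("positive treatment")
    if CONTRARY_SIGNALS.search(sentence):
        labels.append("contrary authority signal")
    return labels

# Invented example sentence:
print(classify("Smith was overruled by Jones; but see Doe v. Roe (collecting cases)."))
# -> ['negative treatment', 'contrary authority signal']
```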