Join Steve Lastres, Director of Knowledge Management Services at Debevoise & Plimpton, and Vable Thursday at 11:30 am (Eastern) for the webinar Just Give Me What I Need: Looking at Next-Gen Current Awareness Aggregation Tools. Topics to be covered include why content aggregation is no longer enough and who the market players are in the legal sector. Free registration required. Recommended. — Joe
“Low-cost can cost you” campaign: Does LexisNexis now acknowledge that Fastcase and Casemaker (and even Google Scholar) pose competitive threats?
On Lextalk, LexisNexis makes an obvious distinction: it offers resources low-cost search services do not provide. In a nutshell, Low-Cost Legal Research = Low-Value Results. LexisNexis Equips You with Much, Much More (March 18, 2017) claims that Lexis Advance is simply better because of its offerings (even if you don’t need the resources, tools and other value add-ons). No metrics, no comparison of search engine performance, simply an unsubstantiated warning to lawyers that “cost savings usually equals case-law light,” meaning, as displayed below, “low-cost can cost you.” See also Lextalk’s Low-Cost Legal Research: Go Cheap, Get Gaps. Go LexisNexis, Gain Confidence (Apr. 5, 2017).
In LexisNexis Comes Out Swinging Against Lower-Cost Legal Research Services, the Lawyerist’s Lisa Needham makes a perceptive observation: “What LexisNexis seems to overlook in their eagerness to go after everyone else is that it merely highlights how much they see things like Fastcase and Google Scholar as competition. If you’re scared enough to mount an entire campaign about how great you are and how terrible other services are, you’ve pretty much already acknowledged that they represent a legitimate threat. Fastcase and Casemaker should be nothing but proud to be highlighted in this fashion.”
Competition is definitely increasing in the small law market, with Lexis, Westlaw and Fastcase in a virtual tie, and it does look like Lexis is mounting a campaign to acquire a larger installed base there. Lextalk recently published posts such as Texas Legal Research: 4 Ways to Get It Done Quickly, Thoroughly, Massachusetts Legal Research: 4 Ways to Get It Done Quickly, Thoroughly and Illinois Legal Research: 4 Ways to Get It Done Quickly, Thoroughly, all published on April 25, 2017. Each post offers a discount on Lexis Advance that is limited to attorneys in law firms of 1–50 attorneys. So has Lexis been losing ground to Casemaker and Fastcase in the Texas, Massachusetts and Illinois small law markets? — Joe
End note: Members of the state bars of Texas, Massachusetts and Illinois receive Fastcase access as a membership benefit.
There is a new legal search service on the block and it intends to compete with Lexis Advance and Westlaw
Judicata is a legal search service still evolving into a fully fledged, professional-grade product, but it is getting very close to being ready. The service already claims to be better than WEXIS. According to Judicata’s CEO Itai Gurari, “[w]e’ve focused on building a search engine that returns the best results the fastest, and at this point it mops the floor with Westlaw and Lexis.” Why? Because Judicata is mapping the law with extreme accuracy and granularity. Bob Ambrogi was given an opportunity to test drive Judicata yesterday and reports his findings today in After Five Years in Stealth Mode, Judicata Reveals Its Legal Research Service. Recommended. — Joe
How a network of bills becomes a law: What can be learned from GovTrack’s text incorporation analysis for legislative history research
A new analytical tool incorporated into GovTrack late last year reveals when provisions of bills are incorporated into other bills by way of text incorporation analysis. “Only about 3% of bills will be enacted through the signature of the President or a veto override. Another 1% are identical to those bills, so-called ‘companion bills,’ which are easily identified. Our new analysis reveals almost another 3% of bills which had substantial parts incorporated into an enacted bill in 2015–2016. To miss that last 3% is to be practically 100% wrong about how many bills are being enacted by Congress,” writes GovTrack. For details see GovTrack’s blog post and illustration of this new technique. — Joe
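GovTrack has not published the matching code behind this feature, but the core idea of text incorporation analysis can be loosely illustrated with a short sketch. The `incorporation_ratio` helper and the sample bill texts below are hypothetical, and GovTrack’s actual pipeline is far more sophisticated; this is only a minimal illustration of measuring how much of one bill’s text reappears in another:

```python
from difflib import SequenceMatcher

def incorporation_ratio(provision: str, enacted_bill: str) -> float:
    """Rough estimate of how much of `provision` reappears in `enacted_bill`,
    computed as the share of characters covered by matching blocks."""
    if not provision:
        return 0.0
    matcher = SequenceMatcher(None, provision, enacted_bill, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return matched / len(provision)

# Hypothetical sample texts: a standalone bill whose operative language
# was folded into a larger enacted bill.
standalone = "The Secretary shall establish a grant program for rural broadband."
omnibus = ("TITLE IV--COMMUNICATIONS. "
           "The Secretary shall establish a grant program for rural broadband. "
           "Funds are authorized through fiscal year 2018.")

print(round(incorporation_ratio(standalone, omnibus), 2))  # → 1.0
```

A ratio near 1.0 flags the standalone bill as substantially incorporated into the enacted one, which is how an analysis like GovTrack’s can surface the extra ~3% of bills that become law without being enacted under their own bill numbers.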
About their very interesting essay, New Wine in Old Wineskins: Metaphor and Legal Research, 92 Notre Dame L. Rev. Online 1 (2016), Amy Sloan and Colin Starger write:
This essay examines a different set of metaphors currently doing damage in law. Though not as life-and-death dramatic as the War on Drugs or the struggle against patriarchy, these metaphors affect every law student and practicing lawyer. What’s more, our examination implicates broader philosophical issues that resonate well beyond specifically legal discourse. The metaphors we examine pertain to legal research—how we conceptualize the task of ‘finding law’ to make arguments and solve legal problems. The broader philosophical issues concern changes wrought by technology. When technology radically alters our material world, sometimes our conceptual world fails to adjust. To successfully evolve, we must interrogate and change our deepest metaphors. This Essay undertakes this foundational task in the brave new world of legal research.
This Essay argues that conceptualizing emerging legal technologies using inherited research metaphors is like pouring new wine in old wineskins—it simply doesn’t work. When a primary challenge of research was physically gathering hidden and expensive information, metaphors based on journey, acquisition, and excavation helped make sense of the research process. But new, technologically-driven search methods have burst those conceptual wineskins. The Internet and Big Data make information cheap and easily accessible. The old metaphors fail.
Recommended. — Joe
“ROSS Intelligence, the artificial intelligence legal research platform, outperforms Westlaw and LexisNexis in finding relevant authorities, in user satisfaction and confidence, and in research efficiency, and is virtually certain to deliver a positive return on investment” wrote Bob Ambrogi about the findings of a benchmark report by Blue Hill Research. For details, see ROSS AI Plus Wexis Outperforms Either Westlaw or LexisNexis Alone, Study Finds. — Joe
“For a full understanding of their search needs just taking stock of their wishes is not going to suffice, since legal professionals are not capable of describing the features of a system that does not yet exist,” write Marc van Opijnen and Cristiana Santos in On the Concept of Relevance in Legal Information Retrieval, 25 Artificial Intelligence and Law 65-87 (2017). “To understand the juristic mindset, it is of the utmost importance to follow meticulously their day-to-day retrieval quests.” Here’s the paper’s abstract:
The concept of ‘relevance’ is crucial to legal information retrieval, but because of its intuitive understanding it goes undefined too easily and unexplored too often. We discuss a conceptual framework on relevance within legal information retrieval, based on a typology of relevance dimensions used within general information retrieval science, but tailored to the specific features of legal information. This framework can be used for the development and improvement of legal information retrieval systems.
In the abstract for Judging Ordinary Meaning, Thomas R. Lee and Stephen C. Mouritsen write:
We identify theoretical and operational deficiencies in our law’s attempts to credit the ordinary meaning of the law and present linguistic theories and tools to assess it more reliably. Our framework examines iconic problems of ordinary meaning — from the famous “no vehicles in the park” hypothetical to two Supreme Court cases (United States v. Muscarello and Taniguchi v. Kan Pacific Saipan) and a Seventh Circuit opinion of Judge Richard Posner (in United States v. Costello). We show that the law’s conception of ordinary meaning implicates empirical questions about language usage. And we present linguistic tools from a field known as corpus linguistics that can help to answer these empirical questions.
When we speak of ordinary meaning we are asking an empirical question — about the sense of a word or phrase that is most likely implicated in a given linguistic context. Linguists have developed computer-aided means of answering such questions. We propose to import those methods into the law of interpretation. And we consider and respond to criticisms of their use by lawyers and judges.
Interesting. — Joe
Google Search Engine Results Pages (SERPs) have changed dramatically over the past 20 years. In A visual history of Google SERPs: 1996 to 2017 (Search Engine Watch), Clark Boyd writes:
The original lists of static results, comprised of what we nostalgically term ‘10 blue links’, have evolved into multi-media, cross-device, highly-personalized interfaces that can even adapt as we speak to them. There are now images, GIFs, news articles, videos, and podcasts in SERPs, all powered by algorithms that grow evermore sophisticated through machine learning.
Search Engine Watch’s infographic identifies the evolution of Google Search Engine’s results pages here. Recommended. It could be used in a teachable moment about the consequences of algorithmic change generally before moving to the great unknowing of algorithmic changes engineered by WEXIS and displayed in WEXIS search output. — Joe
Check out the below TED Institute video. Dario Gil presents “a jaw dropping scenario in which a Watson enabled computer is responding to the types of complex business research questions which are fairly routine in a ‘big law’ research environment,” writes Jean O’Grady in Augmented Intelligence as a Reference Librarian? Twice as Fast Half as Good… For Now… . — Joe
Want to search an out-of-date version of the National Survey of State Laws? There’s a very expensive online legal search service for that!
Westlaw carries the full text of the sixth edition of the National Survey of State Laws online. Therein lies the problem. In addition to not stating online that the sixth edition has been superseded by the much more recent seventh edition (which Westlaw is going to publish online), the compilers of the National Survey of State Laws have released two updates and four (or is it five?) new chapters that are not online. Bottom line: if you are using the National Survey of State Laws on Westlaw, you are searching eight-year-old topical state law surveys. Make a note of that, researchers, at least until the seventh edition is online.
Time for the folks in the Land of 10,000 Invoices to get the seventh edition of this valuable resource uploaded and to keep it updated once it is. Perhaps Lexis or BNA can do a better publishing job for this title. — Joe
PS: A reader has commented that the seventh edition of the National Survey of State Laws is available, apparently since Jan. 12, 2016, on HeinOnline.
As a quick follow-up to my earlier post titled 10,000 documents: Is there a flaw in West Search? (March 20, 2017), it appears that a West reference attorney has confirmed my conclusion that Westlaw does not offer as comprehensive a searching capability as Lexis.
Mary Whisner (reference librarian, Gallagher Law Library, University of Washington School of Law) records in Appendix B of her research paper Comparing Case Law Retrievals, Mid-1990s and Today an exchange with a West reference attorney. Here are the pertinent parts:
11/18/2016 01:14:35PM Agent (Anna Wiles): “Those searches seem to be maxing out.”
11/18/2016 01:14:51PM Agent (Anna Wiles): “Sometimes, the algorithm will cut off before 10,000.”
11/18/2016 01:23:26PM Agent (Anna Wiles): “If you run the search in all states and all federal, it will max out because it is a broad search.”
11/18/2016 01:23:53PM Agent (Anna Wiles): “If you narrow by a jurisdiction, the results will not max out.”
But Whisner was attempting to perform a fairly comprehensive search. Note that, according to the West staffer, West Search sometimes will max out at under 10,000 documents too.
More evidence that in an attempt to find the Holy Grail of legal research — the uber precise search result — West Search may have sacrificed comprehensiveness. — Joe
10,000 documents is an awful lot. Truly a low-precision, high-recall search. But sometimes one starts off searching very broadly because Westlaw and Lexis Advance provide a “search within results” option to narrow down initial search output. While I do not perform many broad searches in Westlaw, I have never once seen a figure higher than 10,000 documents in my search results. I have, however, seen “10,000+” documents in equally broad Lexis Advance searches on the same topic. Unfortunately, 10,000 documents appears to be a search results limit in Westlaw.
If an initial search pulls up 10,000 documents in Westlaw, there is no reason to believe all Westlaw documents identified by one’s search are really all the potentially relevant documents in the Westlaw database. Searching within the initial 10,000 documents search results would be, therefore, based on a seriously flawed subset of the Westlaw database, one defined by West Search, not one’s search logic. This is not the case in Lexis Advance where a broad search may yield 10,000+ documents for searching within initial results. If this is indeed a flaw in West Search’s output, one must conclude that Lexis Advance offers more comprehensive searching of its database than Westlaw. — Joe
Gorsuch confirmation hearings gear up: “Find as much information about the new Supreme Court nominee as possible.”
“The idea for the Gorsuch Project was born after law librarians from several universities and government offices faced a similar question from their patrons: ‘Find as much information about the new Supreme Court nominee as possible.'” — From the Gorsuch Project.
The Gorsuch Project “is the result of the collaborative efforts of several libraries to research and collect a comprehensive set of materials relating to the Hon. Neil Gorsuch’s career on the 10th Circuit Court of Appeals. Majority opinions, dissents, and concurrences authored or joined by Gorsuch and references to his published work and speeches are presented here.” The academic law libraries involved are located at the Univ. of Illinois College of Law, the Univ. of Richmond School of Law and the Univ. of Virginia School of Law (host site of the Project); the Free Law Project and the US Railroad Retirement Board also contributed.
See also Neil M. Gorsuch, a Law Library of Congress bibliography last updated February 2, 2017.
H/T to Michel-Adrien Sheppard’s Slaw post. — Joe
Keith Lee identifies recent court opinions that cite (or reject) Wikipedia as an authority on Associate’s Mind. He writes:
Every Circuit has judicial opinions that cite Wikipedia as a reliable source for general knowledge. Who Ludacris is. Explaining Confidence Intervals. But some courts within the same Circuit will be dismissive of Wikipedia as a source of general information. There is no definitive answer. Judges seem to make determinations about Wikipedia’s reliability on a case-by-case basis. If you want to cite Wikipedia in a brief and not have a judge be dismissive of it, it’s probably worth your time running a quick search to see where the judge stands on the topic.
Hat tip to PinHawk’s Librarian News Digest on the PinHawk Blog. — Joe
ILTA’s Beyond the Hype: Artificial Intelligence in Legal Research webinar was conducted last month and features ROSS Intelligence CEO and co-founder Andrew Arruda. The link takes you to the archived webinar. Interesting. — Joe
Lexis, Westlaw, and Fastcase are in a virtual tie in the small law market according to a recent survey conducted by the law practice management firm Clio. The results of the survey revealed the following small law market shares:
- Westlaw, 20.58 percent
- Fastcase, 20.35 percent
- LexisNexis, 20.21 percent
See the below pie chart and table for details.
Hat tip to Bob Ambrogi’s LawSites post. — Joe
In The Best Apps To Track Trump’s Legal Changes, Bob Ambrogi identifies three apps designed to monitor the Trump administration’s actions.
- The goal of Track Trump is “to isolate actual policy changes from rhetoric and political theater and to hold the administration accountable for the promises it made.”
- The Cabinet Center for Administrative Transition (CCAT) from the law firm Cadwalader, Wickersham & Taft collects “pronouncements, position papers, policy statements, and requirements as to legislative and regulatory change related to the financial service agenda of the President, the new administration and the new Congress. It tracks legislative developments, executive orders, policy positions, regulations, the regulators themselves, and relevant Trump administration news.”
- Columbia Law School’s Trump Human Rights Tracker follows the Trump administration’s actions and their implications for human rights.
Here’s the abstract for Opening the Black Box: In Search of Algorithmic Transparency by Rachel Pollack Ichou (University of Oxford, Oxford Internet Institute):
Given the importance of search engines for public access to knowledge and questions over their neutrality, there have been many theoretical debates about the regulation of the search market and the transparency of search algorithms. However, there is little research on how such debates have played out empirically in the policy sphere. This paper aims to map how key actors in Europe and North America have positioned themselves in regard to transparency of search engine algorithms and the underlying political and economic ideas and interests that explain these positions. It also discusses the strategies actors have used to advocate for their positions and the likely impact of their efforts for or against greater transparency on the regulation of search engines. Using a range of qualitative research methods, including analysis of textual material and elite interviews with a wide range of stakeholders, this paper concludes that while discussions around algorithmic transparency will likely appear in future policy proposals, it is highly unlikely that search engines will ever be legally required to share their algorithms due to a confluence of interests shared by Google and its competitors. It ends with recommendations for how algorithmic transparency could be enhanced through qualified transparency, consumer choice, and education.