Category Archives: Legal Research Instruction

Using Congressional Research Service Reports in LR&W instruction

Most readers are probably aware of the legislative research reports produced by the Congressional Research Service, and a couple of its best-known guides likely come readily to mind. But there are plenty more where those came from. Here’s just a sample of CRS reports that can provide useful background reading on the federal legislative process for the legal research component of “how a bill becomes a law.”

Bills and Resolutions: Examples of How Each Kind Is Used (Dec. 2, 2010; 98-706)

The Senate’s Calendar of Business (Apr. 21, 2017; 98-429)

Calendars of the House of Representatives (Mar. 2, 2017; 98-437)

House Floor Activity: The Daily Flow of Business (Apr. 16, 2008; RS20233)

Introducing a Senate Bill or Resolution (Jan. 17, 2017; R44195)

Introducing a House Bill or Resolution (Jan. 12, 2017)

Sponsorship and Cosponsorship of House Bills (Jan. 12, 2017; RS22477)

Types of Committee Hearings (June 28, 2017; 98-317)

Hearings in the U.S. Senate: A Guide for Preparation and Procedure (Mar. 18, 2010; RL30548)

Hearings in the House of Representatives: A Guide for Preparation and Procedure (June 13, 2006; RL30539)

“Holds” in the Senate (Jan. 17, 2017; R43563)

Enrollment of Legislation: Relevant Congressional Procedures (May 18, 2017; RL34480)

Veto Override Procedure in the House and Senate (Feb. 2015; RS22654)

How Bills Amend Statutes (June 24, 2008; RS20617)

— Joe

Kerr: How to Read a Legal Opinion: A Guide for New Law Students

Orin Kerr’s How to Read a Legal Opinion: A Guide for New Law Students, 11 Green Bag 2d 51 (2007), celebrates the 10th anniversary of its publication in The Green Bag. It should be required reading during the first week of 1Ls’ law school careers. — Joe

A more rigorous approach to case briefing

In A More Rigorous Approach to Teaching the Reasoning Portion of Case Analysis: A Key to Developing More Competent Law Students, Edwin S. Fruehwald argues that LR&W programs teach students only superficially how to analyze a holding for case-briefing assignments. Fruehwald presents a more rigorous approach to case analysis by including in the reasoning section the types of reasoning a judge is using (rule-based reasoning, reasoning by analogy, distinguishing cases, policy-based reasoning, synthesis) and showing how the judge employs them to reach the holding. Recommended for 1Ls. — Joe

Should legal research and writing instruction be integrated in law schools, and if so, how?

Check out Liz McCurry Johnson’s brief article, Teaching Legal Research and Writing in a Fully Integrated Way. Here’s the abstract:

This Article outlines the past model of legal research and writing at Wake Forest School of Law, the problems it presented and the redesign methods implemented over the course of two academic years. It details student evaluations and evidence of better student learning when legal writing and research is taught together, in an integrated fashion, rather than in two paralleling courses. Most importantly, this Article provides evidence to support an argument that legal research should never be taught outside the context of a legal writing, or a doctrinal, problem set. Collaboration among faculty is the key to successful student learning.

— Joe

Just in time for incoming 1Ls: Learning how to read and write legal citations

Alexa Chew’s Citation Literacy adds a new chapter to LR&W pedagogy. Here’s the abstract:

New lawyers and law students spend a lot of time worrying about legal citation. But most of that time is spent worrying about the wrong thing—formatting. The primary purpose of legal citation is to communicate information to the reader. Thus, legal citations are integral parts of the legal documents that lawyers read and write. But rather than viewing citation as communication, law students, and the new lawyers they become, tend to view it as a formatting sideshow dictated by the Bluebook or other citation style guides. This view is both inaccurate and counterproductive.

I argue that the reason for this detrimental and misplaced focus on citation form is because law schools do not teach what I call citation literacy: the ability to both read and write legal citations. Despite the pervasive presence of citations in judicial opinions, which first-year law students spend most of their time reading, pondering, and discussing, law schools teach students only how to write citations. This “write-first” approach to teaching citation deprives legal novices of opportunities to learn to make meaning from citations as readers. Were students able to understand as readers how legal citations operate as communication, the tedious task of writing legal citations would be grounded in an intrinsic purpose rather than a seemingly arbitrary style guide. As things stand now, students must be prodded to follow citation norms for extrinsic gains, which satisfies neither student nor professor.

The citation literacy pedagogy I propose would first teach law students to study the citations in the documents they read—the edited judicial opinions they find in casebooks, the unedited judicial opinions they read in legal writing courses, and sample memoranda or briefs. To teach students to make meaning from citations, citation should be introduced concurrently with foundational lessons in legal authority. Once students understand what citations mean, they can more easily write the citation forms set forth in citation style guides and adapt that knowledge to the heterogeneity of citation forms they will encounter in practice.

Here’s the conclusion to this highly recommended article:

Legal citations play an integral role in legal analysis and legal documents, communicating important information from writer to reader about the support for the writer’s claims. Skilled legal readers incorporate that information into their understanding of the legal texts they read, making meaning from them and using them to assess the quality of legal arguments. However, the prevailing write-first citation pedagogy subverts this communicative purpose, focusing almost solely on teaching students to write citations without first teaching them to read citations. Lawyers need to be able to both read and write legal citations—to be citation literate—and law schools can advance this goal by upending the write-first citation pedagogy, which cabins citation instruction into legal writing courses and deprives students of opportunities to practice making meaning from the citations in the legal documents they read. By promoting citation literacy over citation formatting, law schools might even reduce the misery associated with legal citation and produce graduates who can adapt to whatever the future of legal citation holds.

— Joe

Is artificial intelligence causing a premature disruption in legal research?

Here’s the abstract for Jamie Baker’s 2017 A Legal Research Odyssey: Artificial Intelligence as Disruptor:

Cognitive computing is revolutionizing finance through the ability to combine structured and unstructured data and provide precise market analysis. It is also revolutionizing medicine by providing well-informed options for diagnoses. Analogously, ROSS, a progeny of IBM’s Watson, is set to revolutionize the legal field by bringing cognitive computing to legal research. While ROSS is currently being touted as possessing the requisite sophistication to perform effortless legal research, there is a real danger in a technology like ROSS causing premature disruption. As in medicine and finance, cognitive computing has the power to make legal research more efficient. But the technology is not ready to replace the need for law students to learn sound legal research process and strategy. When done properly, legal research is a highly creative skill that requires a deep level of analysis. Law librarians must infuse law students with an understanding of legal research process, as well as instruct on the practical aspects of using artificial intelligence responsibly in the face of algorithmic transparency, the duty of technology competence, malpractice pitfalls, and the unauthorized practice of law.

— Joe

Artificial intelligence and legal research instruction

Jamie J. Baker’s 2017 A Legal Research Odyssey: Artificial Intelligence as Disruptor discusses the current state of artificial intelligence as it applies to law. The article provides background on current technological capabilities, shows how these capabilities are being used in various professions, including finance and medicine, and surveys current natural language processing capabilities to assess how the latest technological advances will realistically be applied to legal research. The article ultimately argues that law librarians must still instill in law students a sound legal research process and understanding so that they have the ability to confidently rely on algorithms in the face of various ethical duties. (The article’s abstract is quoted in the preceding post.)

— Joe

CRS guides for researching federal legislation

Two regularly maintained research guides produced by the Congressional Research Service were updated within the last year.

Researching Current Federal Legislation and Regulations: A Guide to Resources for Congressional Staff (Feb. 6, 2017, RL33895) introduces congressional staff to selected governmental and nongovernmental sources that are useful in tracking and obtaining information on federal legislation and regulations. It includes governmental sources, such as Congress.gov, the Government Publishing Office’s Federal Digital System (FDsys), and U.S. Senate and House websites. Nongovernmental or commercial sources include resources such as HeinOnline and the Congressional Quarterly (CQ) websites.

Legislative History Research: A Guide to Resources for Congressional Staff (July 6, 2016, R41865) provides an overview of federal legislative history research, the legislative process, and where to find congressional documents. The report also summarizes some of the reasons researchers are interested in legislative history, briefly describes the actions a piece of legislation might undergo during the legislative process, and provides a list of easily accessible print and electronic resources.

See also The Framing of the United States Constitution: A Beginner’s Guide on In Custodia Legis.

Useful. — Joe

‘Finding law’: Out-of-date metaphors for 21st century legal research

About their very interesting essay, New Wine in Old Wineskins: Metaphor and Legal Research, 92 Notre Dame L. Rev. Online 1 (2016), Amy Sloan and Colin Starger write:

This essay examines a different set of metaphors currently doing damage in law. Though not as life-and-death dramatic as the War on Drugs or the struggle against patriarchy, these metaphors affect every law student and practicing lawyer. What’s more, our examination implicates broader philosophical issues that resonate well beyond specifically legal discourse. The metaphors we examine pertain to legal research—how we conceptualize the task of ‘finding law’ to make arguments and solve legal problems. The broader philosophical issues concern changes wrought by technology. When technology radically alters our material world, sometimes our conceptual world fails to adjust. To successfully evolve, we must interrogate and change our deepest metaphors. This Essay undertakes this foundational task in the brave new world of legal research.

This Essay argues that conceptualizing emerging legal technologies using inherited research metaphors is like pouring new wine in old wineskins—it simply doesn’t work. When a primary challenge of research was physically gathering hidden and expensive information, metaphors based on journey, acquisition, and excavation helped make sense of the research process. But new, technologically-driven search methods have burst those conceptual wineskins. The Internet and Big Data make information cheap and easily accessible. The old metaphors fail.

Recommended. — Joe

Beyond the traditional view of legal reasoning

Here’s the blurb for Beyond Legal Reasoning: A Critique of Pure Lawyering (Routledge 2017) by Jeffrey Lipshaw (Suffolk University Law School):

The concept of learning to ‘think like a lawyer’ is one of the cornerstones of legal education in the United States and beyond. In this book, Jeffrey Lipshaw provides a critique of the traditional views of ‘thinking like a lawyer’ or ‘pure lawyering’ aimed at lawyers, law professors, and students who want to understand lawyering beyond the traditional warrior metaphor. Drawing on his extensive experience at the intersection of real world law and business issues, Professor Lipshaw presents a sophisticated philosophical argument that the “pure lawyering” of traditional legal education is agnostic to either truth or moral value of outcomes. He demonstrates pure lawyering’s potential both for illusions of certainty and cynical instrumentalism, and the consequences of both when lawyers are called on as dealmakers, policymakers, and counsellors.

This book offers an avenue for getting beyond (or unlearning) merely how to think like a lawyer. It combines legal theory, philosophy of knowledge, and doctrine with an appreciation of real-life judgment calls that multi-disciplinary lawyers are called upon to make. The book will be of great interest to scholars of legal education, legal language and reasoning as well as professors who teach both doctrine and thinking and writing skills in the first year law school curriculum; and for anyone who is interested in seeking a perspective on ‘thinking like a lawyer’ beyond the litigation arena.

Recommended. — Joe

Using linguistics to determine the ordinary meaning of the language of the law empirically

In the abstract for Judging Ordinary Meaning, Thomas R. Lee and Stephan C. Mouritsen write:

We identify theoretical and operational deficiencies in our law’s attempts to credit the ordinary meaning of the law and present linguistic theories and tools to assess it more reliably. Our framework examines iconic problems of ordinary meaning — from the famous “no vehicles in the park” hypothetical to two Supreme Court cases (United States v. Muscarello and Taniguchi v. Kan Pacific Saipan) and a Seventh Circuit opinion of Judge Richard Posner (in United States v. Costello). We show that the law’s conception of ordinary meaning implicates empirical questions about language usage. And we present linguistic tools from a field known as corpus linguistics that can help to answer these empirical questions.

When we speak of ordinary meaning we are asking an empirical question — about the sense of a word or phrase that is most likely implicated in a given linguistic context. Linguists have developed computer-aided means of answering such questions. We propose to import those methods into the law of interpretation. And we consider and respond to criticisms of their use by lawyers and judges.
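As a toy illustration of the corpus-based method Lee and Mouritsen describe, the Python sketch below counts sense frequencies for a word across a tiny tagged corpus. The sentences and sense labels are invented for illustration; a real analysis would draw on a large corpus (such as COCA) and far more careful sense tagging.

```python
from collections import Counter

# A toy stand-in for a real corpus; sentences and sense labels are invented.
# The question echoes Muscarello: does "carry" ordinarily mean carrying on
# one's person, or transporting in a vehicle?
corpus = [
    ("he would carry the pistol in his jacket", "on-person"),
    ("she had to carry her bag up the stairs", "on-person"),
    ("the truck was used to carry the goods to market", "transport"),
    ("soldiers carry rifles on patrol", "on-person"),
    ("the van can carry eight passengers", "transport"),
]

def sense_frequencies(tagged_sentences):
    """Estimate how often each sense of a word appears in context."""
    counts = Counter(sense for _, sense in tagged_sentences)
    total = sum(counts.values())
    return {sense: n / total for sense, n in counts.items()}

freqs = sense_frequencies(corpus)
print(freqs)  # in this toy corpus, "on-person" is the more frequent sense
```

The "ordinary meaning" claim then becomes an empirical statement about which sense dominates in actual usage, rather than a judge's intuition.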

Interesting. — Joe

A trip down memory lane: a visual history of the development of Google Search Engine Results Pages

Google Search Engine Results Pages (SERPs) have changed dramatically over the past 20 years. In A visual history of Google SERPs: 1996 to 2017 (Search Engine Watch), Clark Boyd writes:

The original lists of static results, comprised of what we nostalgically term ‘10 blue links’, have evolved into multi-media, cross-device, highly-personalized interfaces that can even adapt as we speak to them. There are now images, GIFs, news articles, videos, and podcasts in SERPs, all powered by algorithms that grow evermore sophisticated through machine learning.

Search Engine Watch’s infographic charts the evolution of Google’s search results pages. Recommended. It could be used in a teachable moment about the consequences of algorithmic change generally before moving to the great unknown of algorithmic changes engineered by WEXIS and displayed in WEXIS search output. — Joe


Has West Search sacrificed comprehensiveness?

As a quick follow-up to my earlier post titled 10,000 documents: Is there a flaw in West Search? (March 20, 2017), it appears that a West reference attorney has confirmed my conclusion that Westlaw does not offer as comprehensive a searching capability as Lexis.

Appendix B of Mary Whisner’s research paper Comparing Case Law Retrievals, Mid-1990s and Today (Whisner is a reference librarian at the Gallagher Law Library, University of Washington School of Law) records an exchange between Whisner and a West reference attorney. Here are the pertinent parts:

11/18/2016 01:14:35PM Agent (Anna Wiles): “Those searches seem to be maxing out.”

11/18/2016 01:14:51PM Agent (Anna Wiles): “Sometimes, the algorithm will cut off before 10,000.”

11/18/2016 01:23:26PM Agent (Anna Wiles): “If you run the search in all states and all federal, it will max out because it is a broad search.”

11/18/2016 01:23:53PM Agent (Anna Wiles): “If you narrow by a jurisdiction, the results will not max out.”

But Whisner was attempting to perform a fairly comprehensive search. Note that, according to the West staffer, West Search will sometimes max out at fewer than 10,000 documents as well.

More evidence that in an attempt to find the Holy Grail of legal research —  the uber precise search result — West Search may have sacrificed comprehensiveness. — Joe

10,000 documents: Is there a flaw in West Search?

10,000 documents is an awful lot. That is truly a low-precision, high-recall search. But sometimes one starts off searching very broadly because Westlaw and Lexis Advance provide a “search within results” option to narrow down the initial search output. While I do not perform many broad searches in Westlaw, I have never once seen a figure higher than 10,000 documents in my search results. I have, however, seen “10,000+” documents in equally broad Lexis Advance searches on the same topic. Unfortunately, 10,000 documents appears to be a search results limit in Westlaw.

If an initial search pulls up 10,000 documents in Westlaw, there is no reason to believe that the documents identified by one’s search are really all the potentially relevant documents in the Westlaw database. Searching within that initial 10,000-document result set would therefore be based on a seriously flawed subset of the Westlaw database, one defined by West Search rather than by one’s search logic. This is not the case in Lexis Advance, where a broad search may yield 10,000+ documents for searching within initial results. If this is indeed a flaw in West Search’s output, one must conclude that Lexis Advance offers more comprehensive searching of its database than Westlaw. — Joe
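The problem is easy to see with a toy model. In the Python sketch below (the document counts and the "narrowing" predicate are invented purely for illustration), a broad query matches 25,000 documents but the engine caps the result set at 10,000; narrowing "within results" then silently misses every relevant document that fell outside the cap.

```python
# Toy model of a capped search: the database holds many documents matching
# a broad query, but the engine returns at most CAP of them. Searching
# "within results" then operates on the truncated subset, not the full set.
CAP = 10_000

# Hypothetical: 25,000 documents match the broad query; document IDs that
# are multiples of 7 also match the narrowing term.
broad_matches = list(range(25_000))

def matches_narrow(doc_id):
    return doc_id % 7 == 0

capped_results = broad_matches[:CAP]  # what a capped engine returns
within_capped = [d for d in capped_results if matches_narrow(d)]
within_full = [d for d in broad_matches if matches_narrow(d)]

missed = len(within_full) - len(within_capped)
print(f"found {len(within_capped)}, missed {missed} relevant documents")
```

Under these made-up numbers, the narrowed search finds 1,429 documents and misses 2,143 that would have matched, and the researcher has no way to know which 10,000 documents the engine chose to keep.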

Is Wikipedia a reliable legal authority now?

On Associate’s Mind, Keith Lee identifies recent court opinions that cite (or reject) Wikipedia as an authority. He writes:

Every Circuit has judicial opinions that cite Wikipedia as a reliable source for general knowledge. Who Ludacris is. Explaining Confidence Intervals. But some courts within the same Circuit will be dismissive of Wikipedia as a source of general information. There is no definitive answer. Judges seem to make determinations about Wikipedia’s reliability on a case-by-case basis. If you want to cite Wikipedia in a brief and not have a judge be dismissive of it, it’s probably worth your time running a quick search to see where the judge stands on the topic.

Hat tip to PinHawk’s Librarian News Digest on the PinHawk Blog. — Joe

The big three legal research providers in the small law market

Lexis, Westlaw, and Fastcase are in a virtual tie in the small law market according to a recent survey conducted by the law practice management firm Clio. The results of the survey revealed the following small law market shares:

  1. Westlaw, 20.58 percent
  2. Fastcase, 20.35 percent
  3. LexisNexis, 20.21 percent

See the below pie chart and table for details.

Hat tip to Bob Ambrogi’s LawSites post. — Joe


How likely is algorithmic transparency?

Here’s the abstract for Opening the Black Box: In Search of Algorithmic Transparency by Rachel Pollack Ichou (University of Oxford, Oxford Internet Institute):

Given the importance of search engines for public access to knowledge and questions over their neutrality, there have been many theoretical debates about the regulation of the search market and the transparency of search algorithms. However, there is little research on how such debates have played out empirically in the policy sphere. This paper aims to map how key actors in Europe and North America have positioned themselves in regard to transparency of search engine algorithms and the underlying political and economic ideas and interests that explain these positions. It also discusses the strategies actors have used to advocate for their positions and the likely impact of their efforts for or against greater transparency on the regulation of search engines. Using a range of qualitative research methods, including analysis of textual material and elite interviews with a wide range of stakeholders, this paper concludes that while discussions around algorithmic transparency will likely appear in future policy proposals, it is highly unlikely that search engines will ever be legally required to share their algorithms due to a confluence of interests shared by Google and its competitors. It ends with recommendations for how algorithmic transparency could be enhanced through qualified transparency, consumer choice, and education.

— Joe

A psychological perspective on algorithm aversion

Berkeley J. Dietvorst (The University of Chicago Booth School of Business), Joseph P. Simmons (University of Pennsylvania, The Wharton School), and Cade Massey (University of Pennsylvania, The Wharton School), Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err, Journal of Experimental Psychology: General (forthcoming).

Abstract: Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet, when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In five studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

The Algorithm as a Human Artifact: Implications for Legal {Re}Search

Here’s the abstract for Susan Nevelow Mart’s very interesting article The Algorithm as a Human Artifact: Implications for Legal {Re}Search (SSRN):

Abstract: When legal researchers search in online databases for the information they need to solve a legal problem, they need to remember that the algorithms that are returning results to them were designed by humans. The world of legal research is a human-constructed world, and the biases and assumptions the teams of humans that construct the online world bring to the task are imported into the systems we use for research. This article takes a look at what happens when six different teams of humans set out to solve the same problem: how to return results relevant to a searcher’s query in a case database. When comparing the top ten results for the same search entered into the same jurisdictional case database in Casetext, Fastcase, Google Scholar, Lexis Advance, Ravel, and Westlaw, the results are a remarkable testament to the variability of human problem solving. There is hardly any overlap in the cases that appear in the top ten results returned by each database. An average of forty percent of the cases were unique to one database, and only about 7% of the cases were returned in search results in all six databases. It is fair to say that each different set of engineers brought very different biases and assumptions to the creation of each search algorithm. One of the most surprising results was the clustering among the databases in terms of the percentage of relevant results. The oldest database providers, Westlaw and Lexis, had the highest percentages of relevant results, at 67% and 57%, respectively. The newer legal database providers, Fastcase, Google Scholar, Casetext, and Ravel, were also clustered together at a lower relevance rate, returning approximately 40% relevant results.

Legal research has always been an endeavor that required redundancy in searching; one resource does not usually provide a full answer, just as one search will not provide every necessary result. The study clearly demonstrates that the need for redundancy in searches and resources has not faded with the rise of the algorithm. From the law professor seeking to set up a corpus of cases to study, the trial lawyer seeking that one elusive case, the legal research professor showing students the limitations of algorithms, researchers who want full results will need to mine multiple resources with multiple searches. And more accountability about the nature of the algorithms being deployed would allow all researchers to craft searches that would be optimally successful.
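Mart’s overlap figures are easy to reproduce in miniature. The Python sketch below counts, across six hypothetical top-ten result lists, how many cases are unique to a single database and how many appear in all six. The case labels and lists are invented, not drawn from her data; only the overlap arithmetic is the point.

```python
from collections import Counter

# Hypothetical top-10 case lists for the same search run in six databases.
results = {
    "Westlaw":        ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
    "Lexis Advance":  ["A", "B", "C", "K", "L", "M", "N", "O", "P", "Q"],
    "Fastcase":       ["A", "B", "R", "S", "T", "U", "V", "W", "X", "Y"],
    "Google Scholar": ["A", "C", "K", "R", "G1", "G2", "G3", "G4", "G5", "G6"],
    "Casetext":       ["A", "D", "L", "S", "C1", "C2", "C3", "C4", "C5", "C6"],
    "Ravel":          ["A", "E", "M", "T", "R1", "R2", "R3", "R4", "R5", "R6"],
}

# Count how many databases returned each case.
appearances = Counter(case for cases in results.values() for case in cases)

n_databases = len(results)
total_cases = len(appearances)
unique = sum(1 for n in appearances.values() if n == 1)
in_all = sum(1 for n in appearances.values() if n == n_databases)

print(f"{total_cases} distinct cases across the six top-10 lists")
print(f"unique to one database: {unique / total_cases:.0%}")
print(f"returned by all six:    {in_all / total_cases:.0%}")
```

Even in this small invented example, most cases surface in only one database, which is the same pattern of low overlap that motivates Mart’s call for redundant searching across multiple resources.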

Recommended. — Joe

Lexis Advance Show Me How video series

Lexis has 50 short (one- to three-minute) how-to videos covering key Lexis Advance® features on its Show Me How YouTube channel. The videos cover everything from document and navigation tools to segments and terms-and-connectors searching in Lexis Advance. The channel page notes that some of the videos demonstrate how-to tips for the latest enhancements to Lexis Advance. Recommended. — Joe