From the abstract for Giovanni Abramo, Ciriaco Andrea D’Angelo & Emanuela Reale, Peer review vs bibliometrics: which method better predicts the scholarly impact of publications?, Scientometrics, 121(1), 537-554 (2019):

In this work, we try to answer the question of which method, peer review vs bibliometrics, better predicts the future overall scholarly impact of scientific publications. We measure the agreement between peer review evaluations of Web of Science indexed publications submitted to the first Italian research assessment exercise and long-term citations of the same publications. We do the same for an early citation-based indicator. We find that the latter shows stronger predictive power, i.e., it more reliably predicts late citations in all the disciplinary areas examined, and for any citation time window starting one year after publication.
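For readers who want to see what this kind of comparison looks like in practice, here is a minimal sketch (with invented numbers and column names, not the authors' data or code) that pits a peer-review score against an early citation count as predictors of long-term citations, using rank correlation as a rough stand-in for the agreement measures the study uses:

```python
# A minimal sketch of the comparison the abstract describes, on a hypothetical
# dataset with made-up values; it is not the authors' code or data.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical table: one row per publication.
df = pd.DataFrame({
    "peer_review_score": [3, 2, 4, 1, 4, 2],      # assessment exercise rating
    "early_citations":   [5, 1, 9, 0, 12, 2],     # citations in year 1
    "late_citations":    [40, 6, 75, 2, 90, 10],  # long-term citations
})

# Spearman rank correlation with late citations, for each predictor.
rho_peer, _ = spearmanr(df["peer_review_score"], df["late_citations"])
rho_early, _ = spearmanr(df["early_citations"], df["late_citations"])
print(f"peer review vs late citations:     rho = {rho_peer:.2f}")
print(f"early citations vs late citations: rho = {rho_early:.2f}")
```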

From the abstract for Andrew T. Hayashi (Virginia) & Gregory Mitchell (Virginia), Maintaining Scholarly Integrity in the Age of Bibliometrics, 68 J. Legal Educ. ___ (2019):

As quantitative measures of scholarly impact gain prominence in the legal academy, we should expect institutions and scholars to engage in a variety of tactics designed to inflate the apparent influence of their scholarly output. We identify these tactics and identify countermeasures that should be taken to prevent this manipulation. The rise of bibliometrics poses a significant risk to the scholarly endeavor but, with foresight, we can maintain scholarly integrity in the age of bibliometrics.

From the abstract for Joseph Scott Miller, U.S. Supreme Court I.P. Cases, 1810-2018: Measuring & Mapping the Citation Networks (June 7, 2019):

Intellectual property law in the United States, though shaped by key statutes, has long been a common-law field to a great degree. Many decades of decisional law flesh out the meaning of broad-textured, sparely worded statutes. Given the key roles of patent law and copyright law, both federal, the Supreme Court of the United States is i.p. law’s leading apex court. What are the major topical currents in the Supreme Court’s i.p. cases, both now and over the course of the Court’s work? This study uses network-analysis tools to measure and map the entirety of the Court’s i.p. jurisprudence. It goes deeper than existing studies of judicial citation networks by focusing on a topically defined subnetwork. It goes further than existing studies by analyzing, in addition to basic citation networks, a time series of co-citation networks—using techniques developed within bibliometrics, for mapping a scholarly field’s conceptual terrain, to track and describe doctrinal change. Emerging bottom up from the Court’s citations, the co-citation map charted here reveals, surprisingly, a core of antitrust and patent-misuse cases (especially from the 1940s) exerting significant influence on i.p. doctrine.
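To make the co-citation idea concrete, here is a small sketch of how such a network can be built; the case names and citation lists are invented, and the networkx-based approach illustrates the general bibliometric technique rather than Miller's actual pipeline:

```python
# A minimal sketch of building a co-citation network from case-level citation
# lists. Two cases are co-cited when the same later opinion cites both of them;
# the edge weight counts how often that happens. All data below are invented.
from itertools import combinations
import networkx as nx

# Each citing opinion maps to the earlier cases it cites (hypothetical).
citations = {
    "Sony v. Universal (1984)": ["Fortnightly (1968)", "Teleprompter (1974)"],
    "Grokster (2005)":          ["Sony v. Universal (1984)", "Fortnightly (1968)"],
    "Aereo (2014)":             ["Fortnightly (1968)", "Teleprompter (1974)",
                                 "Sony v. Universal (1984)"],
}

G = nx.Graph()
for cited in citations.values():
    for a, b in combinations(sorted(set(cited)), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

for a, b, data in G.edges(data=True):
    print(a, "--", b, "co-cited", data["weight"], "time(s)")
```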

From the abstract for Joe Lawprofblawg and Darren Bush, The Most Important Law Review Article You’ll Never Read: A Hilarious (in the Footnotes) yet Serious (in the Text) Discussion of Law Reviews and Law Professors (Feb. 2019):

No! Stop! Go back! Reading the abstract is like taking the red pill in the Matrix.

In this article we discuss “the game.” “The game” is the quest for measuring scholarship success using metrics such as law review ranking, citation counts, downloads, and other indicia of scholarship “quality.” We argue that this game is rigged, inherently biased against authors from lower ranked schools, women, minorities, and faculty who teach legal writing, clinical, and library courses. As such, playing “the game” in a Sisyphean effort to achieve external validation is a losing proposition for all but a few. Instead, we argue that faculty members should reject this entrenched and virulent hierarchy and focus on the primary purposes of writing, which are to foster innovation in a fashion that is both pleasing to the author and beneficial to society. We discuss this rigged game, and seek to reframe our academic life to focus on enhancing innovation and discourse. We would start by skipping abstract writing.

Now go back to your life. Don’t even think about downloading and reading this. It’s too dangerous.

From the abstract for Oren Perez et al., The Network of Law Reviews: Citation Cartels, Scientific Communities, and Journal Rankings, Modern Law Review, Forthcoming 2018/2019:

Research evaluation is increasingly being influenced by quantitative data. The legal field has not escaped the impact of such metrics. Law schools and legal journals are being ranked by multiple global rankings. The key rankings for law schools are the Times Higher Education and Shanghai University Subject Rankings for law and the SSRN Ranking for U.S. and International law schools. Law journals are measured by four different rankings: Clarivate Analytics Web of Science Journal Citation Reports (JCR), CiteScore from Elsevier, Scimago, and Washington and Lee. Despite the opposition from the scientific community, these metrics continue to flourish.

The article argues that journal rankings (like other metrics) are the consequence of theory-laden choices that can influence their structure, and that their pretense of objectivity is therefore merely illusory. We focus on the influential ranking of law journals in the JCR and critically assess its structure and methodology. In particular, we consider the question of the existence of tacit citation cartels in the U.S. law reviews market and the attentiveness of the JCR to the potential influence of such tacit cartels. To examine this question we studied a sample of 90 journals included in the category of Law in the JCR: 45 U.S. student-edited (SE) and 45 peer-reviewed (PR) journals. We found that PR and SE journals are more inclined to cite members of their own class, forming two separate communities. Close analysis revealed that this phenomenon is more pronounced in SE journals, especially generalist ones. This tendency reflects, we argue, tacit cartelistic behavior, which is a product of deeply entrenched institutional structures. Because U.S. SE journals produce many more citations than PR journals, the fact that their citations are directed almost exclusively to SE journals elevates their ranking in the Journal Citation Reports in a way that distorts the structure of the ranking. This distortion can hamper the production of legal knowledge. We discuss several policy measures that can counter the adverse effects of this situation.
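A rough sketch of the within-class citation share at the heart of the analysis might look like the following; the citation counts are invented placeholders, not the study's data (its sample was 45 SE and 45 PR journals):

```python
# What fraction of each journal class's outgoing citations stays inside the
# class? Invented counts, purely for illustration of the measure.
outgoing = {
    # (citing class, cited class): number of citations
    ("SE", "SE"): 9000, ("SE", "PR"): 600,
    ("PR", "PR"): 1800, ("PR", "SE"): 900,
}

for cls in ("SE", "PR"):
    total = sum(n for (src, _), n in outgoing.items() if src == cls)
    within = outgoing[(cls, cls)]
    print(f"{cls} journals: {within / total:.1%} of citations go to {cls} journals")
```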

Brian Leiter has been publishing a series of most-cited law faculty by specialty. Here’s what’s been published so far on Leiter’s Law School Reports:

More to come? Check Leiter’s Law School Reports. — Joe

From Gregory C. Sisk et al., Scholarly Impact of Law School Faculties in 2018: Updating the Leiter Score Ranking for the Top Third (Aug. 14, 2018):

This updated 2018 study explores the scholarly impact of law faculties, ranking the top third of ABA-accredited law schools. Refined by Brian Leiter, the “Scholarly Impact Score” for a law faculty is calculated from the mean and the median of total law journal citations over the past five years to the work of tenured faculty members. In addition to a school-by-school ranking, we report the mean, median, and weighted score, along with a listing of the tenured law faculty members at each school with the ten highest individual citation counts. The law faculties at Yale, Harvard, Chicago, New York University, and Columbia rank in the top five for Scholarly Impact. The other schools rounding out the top ten are Stanford, the University of California-Berkeley, Duke, Pennsylvania, and Vanderbilt. The most dramatic rises in the 2018 Scholarly Impact Ranking were by four schools that climbed 16 ordinal positions: Kansas (to #48), USC (to #23), the University of St. Thomas (Minnesota) (to #23), and William & Mary (to #28). In addition, two schools rose by 10 spots: Florida State (to #29) and San Francisco (to #54). Several law faculties achieve a Scholarly Impact Ranking in 2018 well above the law school rankings reported by U.S. News for 2019: Vanderbilt (at #10) repeats its appearance within the top ten for Scholarly Impact, but is ranked lower by U.S. News (at #17). Among the top ranked schools, the University of California-Irvine experiences the greatest incongruity, ranking just outside the top ten (#12) for Scholarly Impact, but holding a U.S. News ranking nine ordinal places lower (at #21). In the Scholarly Impact top 25, George Mason rises slightly (to #19), but remains under-valued in U.S. News (at #41). George Washington stands at #16 in the Scholarly Impact Ranking, while falling just inside the top 25 (at #24) in U.S. News. The most dramatically under-valued law faculty remains the University of St. Thomas (Minnesota), which now ranks inside the top 25 (at #23) for Scholarly Impact, while being relegated by U.S. News below the top 100 (at #113)—a difference of 90 ordinal levels.
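As a back-of-the-envelope illustration, here is how the per-school computation described above can be sketched; the citation counts are invented, and the weighting shown (doubling the mean and adding the median, as in earlier Leiter rankings) should be treated as an assumption rather than a quotation of the study's exact formula:

```python
# A sketch of the per-school Scholarly Impact computation: mean and median of
# each tenured faculty member's citation total over the five-year window.
# Citation counts are invented; the weighting is an assumption (2 x mean + median).
from statistics import mean, median

faculty_citations = [412, 380, 290, 265, 240, 180, 150, 120, 95, 60]  # one school

m = mean(faculty_citations)
md = median(faculty_citations)
weighted = 2 * m + md  # assumed weighting, not quoted from the study
print(f"mean = {m:.0f}, median = {md:.0f}, weighted score = {weighted:.0f}")
```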

— Joe

From the about page for Metrics Toolkit:

The Metrics Toolkit provides evidence-based information about research metrics across disciplines, including how each metric is calculated, where you can find it, and how each should (and should not) be applied. You’ll also find examples of how to use metrics in grant applications, CVs, and promotion dossiers.

There are two ways to use the Toolkit. Explore metrics to browse the metrics you want to learn more about. Or, you can choose metrics that will be best for your use case by filtering via our broad discipline, research output, and impact type categories.

Interesting. — Joe

From the abstract of Stefanie Haustein’s Scholarly Twitter Metrics:

Twitter has arguably been the most popular among the data sources that form the basis of so-called altmetrics. Tweets to scholarly documents have been heralded both as early indicators of citations and as measures of societal impact. This chapter provides an overview of Twitter activity as the basis for scholarly metrics from a critical point of view, describing both the potential and the limitations of scholarly Twitter metrics. By reviewing the literature on Twitter in scholarly communication and analyzing 24 million tweets linking to scholarly documents, it aims to provide a basic understanding of what tweets can and cannot measure in the context of research evaluation. Going beyond the limited explanatory power of low correlations between tweets and citations, this chapter considers what types of scholarly documents are popular on Twitter, and how, when, and by whom they are diffused, in order to understand what tweets to scholarly documents measure. Although this chapter is not able to solve the problems associated with the creation of meaningful metrics from social media, it highlights particular issues and aims to provide the basis for advanced scholarly Twitter metrics.

H/T Gary Price’s InfoDocket post. — Joe

In Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data, Lutz Bornmann and Robin Haunschild “address the question whether (and to what extent, respectively) altmetrics are related to the scientific quality of papers (as measured by peer assessments). Only a few studies have previously investigated the relationship between altmetrics and assessments by peers. In the first step, we analyse the underlying dimensions of measurement for traditional metrics (citation counts) and altmetrics, using principal component analysis (PCA) and factor analysis (FA). In the second step, we test the relationship between the dimensions and quality of papers (as measured by the post-publication peer-review system of F1000Prime assessments), using regression analysis. The results of the PCA and FA show that altmetrics operate along different dimensions: Mendeley counts are related to citation counts, while tweets form a separate dimension. The results of the regression analysis indicate that citation-based metrics and readership counts are significantly more related to quality than tweets. This result questions the use of Twitter counts for research evaluation purposes on the one hand, and indicates a potential use for Mendeley reader counts on the other.”
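Here is a compact sketch of that two-step design, PCA followed by regression, on simulated placeholder data; it illustrates the general technique rather than reproducing the authors' analysis or their F1000Prime data:

```python
# Step 1: reduce the metrics to a few dimensions; Step 2: regress the peer
# score on them. All values below are simulated placeholders.
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 200
# Columns: citation counts, Mendeley reader counts, tweet counts (simulated).
metrics = rng.poisson(lam=[20, 30, 5], size=(n, 3))
X = np.log1p(metrics)  # log-scale the skewed counts

# Stand-in for F1000Prime peer scores (1-3), loosely tied to citations here.
f1000_score = 1 + (metrics[:, 0] > 20).astype(int) + rng.integers(0, 2, size=n)

# Step 1: principal component analysis of the metric space.
components = PCA(n_components=2).fit_transform(X)

# Step 2: regression of the quality score on the components.
model = sm.OLS(f1000_score, sm.add_constant(components)).fit()
print(model.summary().tables[1])
```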

H/T to Gary Price’s InfoDocket post. — Joe

Here’s the abstract for Sergey Feldman, Kyle Lo and Waleed Ammar’s Citation Count Analysis for Papers with Preprints (May 14, 2018):

We explore the degree to which papers prepublished on arXiv garner more citations, in an attempt to paint a sharper picture of fairness issues related to prepublishing. A paper’s citation count is estimated using a negative-binomial generalized linear model (GLM) while observing a binary variable which indicates whether the paper has been prepublished. We control for author influence (via the authors’ h-index at the time of paper writing), publication venue, and the overall time that the paper has been available on arXiv. Our analysis only includes papers that were eventually accepted for publication at top-tier CS conferences, and were posted on arXiv either before or after the acceptance notification. We observe that papers submitted to arXiv before acceptance have, on average, 65% more citations in the following year compared to papers submitted after. We note that this finding is not causal, and discuss possible next steps.
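For the curious, here is a minimal sketch of that kind of model, a negative-binomial GLM for citation counts with a prepublication indicator, using simulated placeholder data and omitting the venue controls for brevity; it illustrates the technique, not the authors' code:

```python
# Negative-binomial GLM for citation counts with a prepublication indicator
# and simple controls. All variable values are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "prepub_on_arxiv": rng.integers(0, 2, size=n),   # 1 = posted before acceptance
    "author_h_index":  rng.integers(1, 40, size=n),
    "months_on_arxiv": rng.integers(0, 24, size=n),
})
mu = np.exp(0.5 * df["prepub_on_arxiv"] + 0.03 * df["author_h_index"] + 1.0)
df["citations_next_year"] = rng.poisson(mu)

X = sm.add_constant(df[["prepub_on_arxiv", "author_h_index", "months_on_arxiv"]])
nb = sm.GLM(df["citations_next_year"], X,
            family=sm.families.NegativeBinomial()).fit()
# exp(coefficient) on the indicator approximates the multiplicative citation gap.
print(np.exp(nb.params["prepub_on_arxiv"]))
```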

H/T to Gary Price’s InfoDocket post. — Joe

Here’s the abstract for The Journal Impact Factor: A brief history, critique, and discussion of adverse effects by Vincent Larivière and Cassidy R. Sugimoto:

The Journal Impact Factor (JIF) is, by far, the most discussed bibliometric indicator. Since its introduction over 40 years ago, it has had enormous effects on the scientific ecosystem: transforming the publishing industry, shaping hiring practices and the allocation of resources, and, as a result, reorienting the research activities and dissemination practices of scholars. Given both the ubiquity and impact of the indicator, the JIF has been widely dissected and debated by scholars of every disciplinary orientation. Drawing on the existing literature as well as on original research, this chapter provides a brief history of the indicator and highlights well-known limitations—such as the asymmetry between the numerator and the denominator, differences across disciplines, the insufficient citation window, and the skewness of the underlying citation distributions. The inflation of the JIF and its weakening predictive power are discussed, as well as the adverse effects on the behaviors of individual actors and the research enterprise. Alternative journal-based indicators are described and the chapter concludes with a call for responsible application and a commentary on future developments in journal indicators.
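For reference, the basic two-year JIF computation the chapter critiques looks like this (the numbers are made up); the asymmetry mentioned above arises because the numerator counts citations to everything the journal published, while the denominator counts only “citable items” such as articles and reviews:

```python
# The standard two-year JIF, with invented counts for one hypothetical journal.
citations_in_2019_to_2017_2018_items = 1200  # citations to all item types
citable_items_2017 = 150                     # articles and reviews only
citable_items_2018 = 170

jif_2019 = citations_in_2019_to_2017_2018_items / (citable_items_2017 + citable_items_2018)
print(f"JIF(2019) = {jif_2019:.2f}")
```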

H/T to Gary Price’s InfoDocket post. — Joe

H/T to Legal Skills Prof Blog for the tip to Bradley Areheart’s The Top 100 Law Reviews: A Reference Guide Based on Historical USNWR Data (Aug. 25, 2017). Here’s the abstract:

The best proxy for how other law professors react and respond to publishing in main, or flagship, law reviews is the US News and World Report (USNWR) rankings. This paper utilizes historical USNWR data to rank the top 100 law reviews. The USNWR rankings are important in shaping many – if not most – law professors’ perceptions about the relative strength of a law school (and derivatively, the home law review). This document contains a chart that is sorted by the 10-year rolling average for each school, but it also contains the 5-year and 15-year rolling averages. This paper also describes my methodology and responds to a series of frequently asked questions.
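For anyone wanting to reproduce the basic mechanic, the rolling averages can be computed along these lines; the table of annual ranks below is hypothetical, and the window simply follows the 10-year average the paper reports:

```python
# 10-year rolling average of annual USNWR ranks (year x school), hypothetical data.
import pandas as pd

ranks = pd.DataFrame(
    {"Yale": [1] * 12, "Harvard": [2] * 12, "Stanford": [3, 3, 2] + [3] * 9},
    index=range(2007, 2019),  # hypothetical years
)

rolling_10yr = ranks.rolling(window=10).mean()
print(rolling_10yr.iloc[-1].sort_values())  # latest 10-year rolling average per school
```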

And the top 10 law schools based on USNWR data, sorted by 10-year rolling averages, are:

  1. Yale
  2. Harvard
  3. Stanford
  4. Columbia
  5. Chicago
  6. NYU
  7. Pennsylvania
  8. Berkeley
  9. Virginia
  10. Michigan

— Joe

In Legal Research in Search of Attention: A Quantitative Assessment, 27 King’s Law Journal 170 (2016), Mathias M. Siems writes:

[T]he Social Science Research Network (SSRN) is a good platform to test which research is more or less appealing. In the study reported in this article, 1107 papers of SSRN’s Legal Scholarship Network were analysed in order to identify the main determinants of SSRN downloads, abstract views, and downloads per abstract views. This analysis fills a gap in the growing literature that deals with the impact of published research. It is also suggested that examining SSRN is important because its open nature reflects the general trend from offline publications in domestic journals to global availability of publications online.

Here’s the abstract:

In today’s world it is easy to make research publicly available by putting it online. But this improved availability raises the question of how to produce research that actually gets attention. Bibliometrics can contribute to this debate. Based on a sample of 1107 papers of SSRN’s Legal Scholarship Network, this article finds that a short title, a top-20 university affiliation, US authorship, and writing about topics of corporate law and international law have a positive effect on downloads and/or abstract views. The article also reflects on the implications of these findings, in particular how they may be related to contentious attempts to identify what is “good” legal research through metrics and peer review.
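A sketch of the kind of regression behind such findings might look like the following; the data and the covariate names (title length, top-20 affiliation, US authorship, corporate-law topic) are invented stand-ins, not the article's actual specification:

```python
# Regress log downloads on simple paper characteristics; simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "title_words":   rng.integers(3, 20, size=n),
    "top20_school":  rng.integers(0, 2, size=n),
    "us_author":     rng.integers(0, 2, size=n),
    "corporate_law": rng.integers(0, 2, size=n),
})
df["log_downloads"] = (3 - 0.05 * df["title_words"] + 0.4 * df["top20_school"]
                       + 0.3 * df["us_author"] + 0.2 * df["corporate_law"]
                       + rng.normal(0, 0.5, size=n))

fit = smf.ols("log_downloads ~ title_words + top20_school + us_author + corporate_law",
              data=df).fit()
print(fit.params)
```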

— Joe

The purpose of Citation Performance Indicators — A Very Short Introduction by Phil Davis, Scholarly Kitchen (May 15, 2017) “is to provide a brief summary of the main citation indicators used today. It is not intended to be comprehensive, nor is it intended to opine on which indicator is best. It is geared for casual users of performance metrics and not bibliometricians.”

H/T to Gary Price’s InfoDocket post. — Joe

Interbrand released its annual survey of the top 100 most valuable brands. Apple and Google hold the number 1 and 2 spots respectively. Barnes & Noble is nowhere to be found, but Amazon comes in at number 10. Lego broke into the list for the first time at number 82 (Ninja Go!!!!). Facebook is listed as a top riser at number 23. I guess having 1 billion users helps with brand awareness. My old friend Jack Daniels makes the list at number 84. Thomson Reuters comes in at number 63, though that represents a drop of 12% in brand value. There must be some people out there still pining for Westlaw Classic, I imagine.

Mark

Lex Machina issued a report last Tuesday that analyzes copyright litigation trends over the last five years. The report is impressive for the level of detail in the statistical analysis and charts presented in the 37-page document. The report is designed to highlight legal analytics in copyright litigation. The target audience appears to be plaintiffs with a heavy interest in protecting their media assets, firms that are considering taking on copyright cases, and those with an interest in the mechanics of copyright litigation. As the report indicates, it is the first survey of its kind. I’ve followed file sharing and other IP cases, which I have reported on in this forum from time to time. I found the report interesting for its snapshot of how litigation progresses through the courts.

Highlights from the press release include:

  • Top plaintiffs include music (Broadcast Music, Sony/ATV Songs, Songs of Universal, UMG Records, EMI, and more), software (Microsoft), fashion (Coach), and textile patterns (Star Fabrics) industries.
  • Top defendants include retailers (Ross Stores, TJX (TJ Maxx), Amazon, Burlington Coat Factory, Rainbow USA, J.C. Penney, Sears, Forever 21, Wal-Mart, and Nordstrom), music labels (Universal Music, Sony Music Entertainment, UMG Recordings), and publishing/education (Pearson Education and John Wiley and Sons).
  • Doniger Burroughs, a California fashion, art, and entertainment boutique, leads among plaintiffs’ firms with 741 cases, more than double the next firm.
  • Copyright litigation is heavily concentrated in the Central District of California (2,496 cases, 26.2% of all since 2009) and the Southern District of New York (1,061 cases, 11.1%).
  • Fair use is usually decided on summary judgment.
  • The majority of infringement findings happen as a result of default, and almost all default findings are for infringement.
  • Top parties winning damages include companies in movies and entertainment (Disney, Twentieth Century Fox, Columbia Pictures, Warner Brothers, Universal, Paramount Pictures, and more), software (Quantlab, Foundry Networks), and music (UMG Recording).
  • About 90% of file-sharing cases settle. Top plaintiffs include movie production companies. And an erotic website leads the list of Internet file-sharing plaintiffs with 4,238 cases – about 15 times as many cases as the next most litigious plaintiff.

The report registration and download link is here.

Mark

What are we talking about? The Blog Emperor is comparing the differential between law schools’ US News overall rankings and their academic reputation rankings. In this blog post, he listed 53 law schools that are over-performing or under-performing their overall rankings because, well, academic reputation is very, very important.

How about the US News judges-attorneys reputational rankings? No, those aren’t important. Only peer assessment scores are. Considering the low sample sizes and, in some years, low response rates for both US News reputational surveys, the annual reputational findings are absurd (unless one might be fishing to increase human and robot traffic because law prof blog traffic dips during Winter Break; see today’s earlier post about web communications traffic stats).

For reaction to the nonsense, see the comment trail for Staci Zaretsky’s ATL post. My favorite, so far, is

I’m sorry, but who gives a shit what law professors and law deans think of the school? IF they count as part of the legal community (which I don’t really think they do), it is a small, insular, largely irrelevant portion.

Tell me what real lawyers think about the schools.

— Joe

According to Incapsula, bots went from 51% of web traffic in 2012 to 61% of web traffic this year, a 21% year-over-year increase. The cloud computing firm found that most of the increase in bot traffic was due to increased activity by “good bots” like search engines. Spam bots, comparatively, are on the decline. However, the fact remains that any self-congratulatory remarks about a blog reaching a visit or page view milestone, and any blog rankings based on those metrics, rest on wildly inflated numbers. If the bot traffic trend continues at this pace, pretty soon one will have to divide web traffic stats by four to come up with a reasonable estimate of human mouse clicks and eyeballs.
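The arithmetic behind that last remark, spelled out (the 75% figure is simply an extrapolation of the trend, not an Incapsula number):

```python
# With bots at 61% of traffic, raw stats overstate human visits by a factor of
# about 2.6; if the bot share kept climbing toward an assumed 75%, the divisor
# would indeed approach four.
for bot_share in (0.51, 0.61, 0.75):
    human_share = 1 - bot_share
    print(f"bots {bot_share:.0%}: divide raw traffic by {1 / human_share:.1f} "
          f"to estimate human visits")
```

— Joe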
