Brian Leiter has been publishing a series of lists of the most-cited law faculty by specialty. Here’s what has been published so far on Leiter’s Law School Reports:

More to come? Check Leiter’s Law School Reports. — Joe

From Gregory C. Sisk et al., Scholarly Impact of Law School Faculties in 2018: Updating the Leiter Score Ranking for the Top Third (Aug. 14, 2018):

This updated 2018 study explores the scholarly impact of law faculties, ranking the top third of ABA-accredited law schools. Refined by Brian Leiter, the “Scholarly Impact Score” for a law faculty is calculated from the mean and the median of total law journal citations over the past five years to the work of tenured faculty members. In addition to a school-by-school ranking, we report the mean, median, and weighted score, along with a listing of the tenured law faculty members at each school with the ten highest individual citation counts. The law faculties at Yale, Harvard, Chicago, New York University, and Columbia rank in the top five for Scholarly Impact. The other schools rounding out the top ten are Stanford, the University of California-Berkeley, Duke, Pennsylvania, and Vanderbilt. The most dramatic rises in the 2018 Scholarly Impact Ranking were by four schools that climbed 16 ordinal positions: Kansas (to #48), USC (to #23), the University of St. Thomas (Minnesota) (to #23), and William & Mary (to #28). In addition, two schools rose by 10 spots: Florida State (to #29) and San Francisco (to #54). Several law faculties achieve a Scholarly Impact Ranking in 2018 well above the law school rankings reported by U.S. News for 2019: Vanderbilt (at #10) repeats its appearance within the top ten for Scholarly Impact, but is ranked lower by U.S. News (at #17). Among the top ranked schools, the University of California-Irvine experiences the greatest incongruity, ranking just outside the top ten (#12) for Scholarly Impact, but holding a U.S. News ranking nine ordinal places lower (at #21). In the Scholarly Impact top 25, George Mason rises slightly (to #19), but remains under-valued in U.S. News (at #41). George Washington stands at #16 in the Scholarly Impact Ranking, while falling just inside the top 25 (at #24) in U.S. News. The most dramatically under-valued law faculty remains the University of St. Thomas (Minnesota), which now ranks inside the top 25 (at #23) for Scholarly Impact, while being relegated by U.S. News below the top 100 (at #113)—a difference of 90 ordinal levels.
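
For readers who want to see the mechanics, here is a minimal sketch of how a faculty-level score along these lines could be computed. The abstract only says the score is “calculated from the mean and the median” of five-year citation counts, so the 2:1 mean-to-median weighting below is an assumption for illustration, not necessarily the authors’ actual formula.

```python
from statistics import mean, median

def scholarly_impact_score(faculty_citation_counts, mean_weight=2, median_weight=1):
    """Toy faculty-level impact score blending the mean and median of
    five-year citation counts across tenured faculty members.

    The 2:1 mean-to-median weighting is an illustrative assumption; the
    abstract describes the score only as calculated from the mean and median.
    """
    m = mean(faculty_citation_counts)
    md = median(faculty_citation_counts)
    return (mean_weight * m + median_weight * md) / (mean_weight + median_weight)

# Hypothetical five-year citation totals for a small tenured faculty
print(scholarly_impact_score([420, 310, 150, 90, 60]))
```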

— Joe

From the about page for Metrics Toolkit:

The Metrics Toolkit provides evidence-based information about research metrics across disciplines, including how each metric is calculated, where you can find it, and how each should (and should not) be applied. You’ll also find examples of how to use metrics in grant applications, CVs, and promotion dossiers.

There are two ways to use the Toolkit. Explore metrics to browse the metrics you want to learn more about. Or, you can choose metrics that will be best for your use case by filtering via our broad discipline, research output, and impact type categories.

Interesting. — Joe

From the abstract of Stefanie Haustein’s Scholarly Twitter Metrics:

Twitter has arguably been the most popular among the data sources that form the basis of so-called altmetrics. Tweets to scholarly documents have been heralded as both early indicators of citations as well as measures of societal impact. This chapter provides an overview of Twitter activity as the basis for scholarly metrics from a critical point of view and equally describes the potential and limitations of scholarly Twitter metrics. By reviewing the literature on Twitter in scholarly communication and analyzing 24 million tweets linking to scholarly documents, it aims to provide a basic understanding of what tweets can and cannot measure in the context of research evaluation. Going beyond the limited explanatory power of low correlations between tweets and citations, this chapter considers what types of scholarly documents are popular on Twitter, and how, when and by whom they are diffused in order to understand what tweets to scholarly documents measure. Although this chapter is not able to solve the problems associated with the creation of meaningful metrics from social media, it highlights particular issues and aims to provide the basis for advanced scholarly Twitter metrics.

H/T Gary Price’s InfoDocket post. — Joe

In Do altmetrics correlate with the quality of papers? A large-scale empirical study based on F1000Prime data, Lutz Bornmann and Robin Haunschild “address the question whether (and to what extent, respectively) altmetrics are related to the scientific quality of papers (as measured by peer assessments). Only a few studies have previously investigated the relationship between altmetrics and assessments by peers. In the first step, we analyse the underlying dimensions of measurement for traditional metrics (citation counts) and altmetrics–by using principal component analysis (PCA) and factor analysis (FA). In the second step, we test the relationship between the dimensions and quality of papers (as measured by the post-publication peer-review system of F1000Prime assessments)–using regression analysis. The results of the PCA and FA show that altmetrics operate along different dimensions, whereas Mendeley counts are related to citation counts, and tweets form a separate dimension. The results of the regression analysis indicate that citation-based metrics and readership counts are significantly more related to quality, than tweets. This result on the one hand questions the use of Twitter counts for research evaluation purposes and on the other hand indicates potential use of Mendeley reader counts.”
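
As a rough illustration of the two-step design described in the abstract (dimension reduction over the metrics, followed by a regression of peer-assessment scores on the extracted dimensions), something like the following sketch would do; the data and variable names are invented, not the authors’.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical per-paper metrics: citations, Mendeley readers, tweets
metrics = rng.poisson(lam=(20, 15, 3), size=(500, 3)).astype(float)

# Step 1: extract underlying dimensions (the paper uses both PCA and FA)
pca = PCA(n_components=2)
dimensions = pca.fit_transform(np.log1p(metrics))

# Hypothetical stand-in for F1000Prime peer-assessment scores (1-3)
quality = rng.integers(1, 4, size=500).astype(float)

# Step 2: regress quality on the dimensions to see which ones relate to it
model = LinearRegression().fit(dimensions, quality)
print(pca.explained_variance_ratio_, model.coef_)
```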

H/T to Gary Price’s InfoDocket post. — Joe

Here’s the abstract for Sergey Feldman, Kyle Lo and Waleed Ammar’s Citation Count Analysis for Papers with Preprints (May 14, 2018):

We explore the degree to which papers prepublished on arXiv garner more citations, in an attempt to paint a sharper picture of fairness issues related to prepublishing. A paper’s citation count is estimated using a negative-binomial generalized linear model (GLM) while observing a binary variable which indicates whether the paper has been prepublished. We control for author influence (via the authors’ h-index at the time of paper writing), publication venue, and overall time that paper has been available on arXiv. Our analysis only includes papers that were eventually accepted for publication at top-tier CS conferences, and were posted on arXiv either before or after the acceptance notification. We observe that papers submitted to arXiv before acceptance have, on average, 65% more citations in the following year compared to papers submitted after. We note that this finding is not causal, and discuss possible next steps.
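
For those curious how such a model is fit in practice, here is a minimal sketch of a negative-binomial GLM with a prepublication indicator and the controls named in the abstract. The data, column names, and formula are mine, not the authors’; the effect of interest would be read off as exp(coefficient) on the prepublished term.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000

# Hypothetical paper-level data; every column is invented for illustration
papers = pd.DataFrame({
    "citations": rng.negative_binomial(2, 0.3, n),  # citations in the following year
    "prepublished": rng.integers(0, 2, n),           # 1 if posted to arXiv before acceptance
    "author_h_index": rng.poisson(10, n),            # control: author influence
    "venue": rng.choice(["A", "B", "C"], n),         # control: publication venue
    "months_on_arxiv": rng.integers(1, 36, n),       # control: time available on arXiv
})

# Negative-binomial GLM with a log link; exp(coef) on `prepublished` is the
# multiplicative effect on expected citations, holding the controls fixed.
model = smf.glm(
    "citations ~ prepublished + author_h_index + C(venue) + months_on_arxiv",
    data=papers,
    family=sm.families.NegativeBinomial(),
).fit()
print(model.summary())
```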

H/T to Gary Price’s InfoDocket post. — Joe

Here’s the abstract for The Journal Impact Factor: A brief history, critique, and discussion of adverse effects by Vincent Larivière and Cassidy R. Sugimoto:

The Journal Impact Factor (JIF) is, by far, the most discussed bibliometric indicator. Since its introduction over 40 years ago, it has had enormous effects on the scientific ecosystem: transforming the publishing industry, shaping hiring practices and the allocation of resources, and, as a result, reorienting the research activities and dissemination practices of scholars. Given both the ubiquity and impact of the indicator, the JIF has been widely dissected and debated by scholars of every disciplinary orientation. Drawing on the existing literature as well as on original research, this chapter provides a brief history of the indicator and highlights well-known limitations—such as the asymmetry between the numerator and the denominator, differences across disciplines, the insufficient citation window, and the skewness of the underlying citation distributions. The inflation of the JIF and the weakening predictive power is discussed, as well as the adverse effects on the behaviors of individual actors and the research enterprise. Alternative journal-based indicators are described and the chapter concludes with a call for responsible application and a commentary on future developments in journal indicators.
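
For readers who have never seen it spelled out, the standard two-year JIF for year Y divides the citations a journal received in Y to anything it published in Y-1 and Y-2 by the number of “citable items” (articles and reviews) it published in those two years; the numerator counts all document types while the denominator does not, which is the asymmetry the chapter criticizes. A trivial sketch:

```python
def two_year_jif(citations_to_prior_two_years, citable_items_prior_two_years):
    """Standard two-year Journal Impact Factor for year Y.

    Numerator: citations received in year Y to anything published in Y-1 and Y-2
    (including editorials, letters, and other front matter).
    Denominator: only "citable items" (articles and reviews) from Y-1 and Y-2.
    """
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 1,200 citations in 2018 to 2016-17 content,
# 400 citable items published in 2016-17 -> JIF of 3.0
print(two_year_jif(1200, 400))
```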

H/T to Gary Price’s InfoDocket post. — Joe

H/T to Legal Skills Prof Blog for the tip to Bradley Areheart’s The Top 100 Law Reviews: A Reference Guide Based on Historical USNWR Data (Aug. 25, 2017). Here’s the abstract:

The best proxy for how other law professors react and respond to publishing in main, or flagship, law reviews is the US News and World Report (USNWR) rankings. This paper utilizes historical USNWR data to rank the top 100 law reviews. The USNWR rankings are important in shaping many – if not most – law professors’ perceptions about the relative strength of a law school (and derivatively, the home law review). This document contains a chart that is sorted by the 10-year rolling average for each school, but it also contains the 5-year and 15-year rolling averages. This paper also describes my methodology and responds to a series of frequently asked questions.
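
Here is a minimal sketch of the rolling-average calculation Areheart describes, using invented ranks for a single hypothetical school (his actual figures come from historical USNWR tables):

```python
import pandas as pd

# Invented USNWR overall ranks for one hypothetical school
ranks = pd.Series([7, 6, 8, 7, 7, 6, 8, 9, 7, 6], index=range(2008, 2018))

# The paper's chart is sorted by each school's 10-year rolling average rank;
# the 5- and 15-year averages are computed the same way with other windows.
ten_year_average = ranks.rolling(window=10).mean().iloc[-1]
print(ten_year_average)  # 7.1 for this made-up school
```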

And the top 10 law schools, based on USNWR data and sorted by 10-year rolling averages, are:

  1. Yale
  2. Harvard
  3. Stanford
  4. Columbia
  5. Chicago
  6. NYU
  7. Pennsylvania
  8. Berkeley
  9. Virginia
  10. Michigan

— Joe

In Legal Research in Search of Attention: A Quantitative Assessment, 27 King’s Law Journal 170 (2016), Mathias M. Siems writes:

[T]he Social Science Research Network (SSRN) is a good platform to test which research is more or less appealing. In the study reported in this article, 1107 papers of SSRN’s Legal Scholarship Network were analysed in order to identify the main determinants of SSRN downloads, abstract views, and downloads per abstract views. This analysis fills a gap in the growing literature that deals with the impact of published research. It is also suggested that examining SSRN is important because its open nature reflects the general trend from offline publications in domestic journals to global availability of publications online.

Here’s the abstract:

In today’s world it is easy to make research publicly available by putting it online. But this improved availability raises the question how to produce research that actually gets attention. Bibliometrics can contribute to this debate. Based on a sample of 1107 papers of SSRN’s Legal Scholarship Network, this article finds that a short title, a top-20 university affiliation, US authorship, and writing about topics of corporate law and international law have a positive effect on downloads and/or abstract views. The article also reflects on the implications of these findings, in particular how they may be related to contentious attempts to identify what is “good” legal research through metrics and peer review.
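
A back-of-the-envelope version of that kind of analysis might regress a transformed download count on the determinants the abstract names. The sketch below uses invented data and variable names; Siems’s actual specification may well differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1107  # sample size borrowed from the study; the data itself is invented

papers = pd.DataFrame({
    "downloads": rng.negative_binomial(1, 0.01, n),
    "title_words": rng.integers(3, 20, n),       # shorter titles did better
    "top20_school": rng.integers(0, 2, n),       # top-20 university affiliation
    "us_author": rng.integers(0, 2, n),          # US authorship
    "corporate_or_intl": rng.integers(0, 2, n),  # corporate or international law topic
})

# OLS on log(1 + downloads); abstract views could be modeled the same way.
model = smf.ols(
    "np.log1p(downloads) ~ title_words + top20_school + us_author + corporate_or_intl",
    data=papers,
).fit()
print(model.params)
```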

— Joe

The purpose of Citation Performance Indicators — A Very Short Introduction by Phil Davis, Scholarly Kitchen (May 15, 2017) “is to provide a brief summary of the main citation indicators used today. It is not intended to be comprehensive, nor is it intended to opine on which indicator is best. It is geared for casual users of performance metrics and not bibliometricians.”

H/T to Gary Price’s InfoDocket post. — Joe

Interbrand released its annual survey of the top 100 most valuable brands. Apple and Google hold the number 1 and 2 spots, respectively. Barnes & Noble is nowhere to be found, but Amazon comes in at number 10. Lego broke into the list for the first time at number 82 (Ninja Go!!!!). Facebook is listed as a top riser at number 23. I guess having 1 billion users helps with brand awareness. My old friend Jack Daniels makes the list at number 84. Thomson Reuters comes in at number 63, though that represents a drop of 12% in brand value. There must be some people out there still pining for Westlaw Classic, I imagine.

Mark

Lex Machina issued a report last Tuesday that analyzes copyright litigation trends over the last five years. The report is impressive for the level of detail in the statistical analysis and charts presented in the 37-page document. The report is designed to highlight legal analytics in copyright litigation. The target audience appears to be plaintiffs with a heavy interest in protecting their media assets, firms that are considering taking on copyright cases, and those with an interest in the mechanics of copyright litigation. As the report indicates, it is the first survey of its kind. I’ve followed file-sharing and other IP cases, which I have reported on in this forum from time to time. I found the report interesting for its snapshot of how litigation progresses through the courts.

Highlights from the press release include:

  • Top plaintiffs include music (Broadcast Music, Sony/ATV Songs, Songs of Universal, UMG Records, EMI, and more), software (Microsoft), fashion (Coach), and textile patterns (Star Fabrics) industries.
  • Top defendants include retailers (Ross Stores, TJX (TJ Maxx), Amazon, Burlington Coat Factory, Rainbow USA, J.C. Penney, Sears, Forever 21, Wal-Mart, and Nordstrom), music labels (Universal Music, Sony Music Entertainment, UMG Recordings), and publishing/education (Pearson Education and John Wiley and Sons).
  • Doniger Burroughs, a California fashion, art, and entertainment boutique, leads among plaintiffs’ firms with 741 cases, more than double the next firm.
  • Copyright litigation is heavily concentrated in the Central District of California (2,496 cases, 26.2% of all since 2009) and the Southern District of New York (1,061 cases, 11.1%).
  • Fair use is usually decided on summary judgment.
  • The majority of infringement findings happen as a result of default, and almost all default findings are for infringement.
  • Top parties winning damages include companies in movies and entertainment (Disney, Twentieth Century Fox, Columbia Pictures, Warner Brothers, Universal, Paramount Pictures, and more), software (Quantlab, Foundry Networks), and music (UMG Recording).
  • In file sharing cases, about 90% of cases settle. Top plaintiffs include movie production companies. And an erotic website leads the list of Internet file-sharing plaintiffs with 4,238 cases – about 15 times as many cases as the next most litigious plaintiff.

The report registration and download link is here.

Mark

What are we talking about? The Blog Emperor is comparing the differential between the US News law school overall rankings and academic reputation rankings. In this blog post, he listed 53 law schools that are over-performing or under-performing their overall rankings because, well, academic reputation is very, very important.

How about the US News judges-attorneys reputational rankings? No, those aren’t important. Only peer assessment scores are. Considering the low sample sizes and, in some years, low response rates for both US News reputational surveys, the annual reputational findings are absurd (unless one is fishing to increase human and robot traffic because law prof blog traffic dips during Winter Break; see today’s earlier post about web communications traffic stats).

For reaction to the nonsense, see the comment trail for Staci Zaretsky’s ATL post. My favorite, so far, is

I’m sorry, but who gives a shit what law professors and law deans think of the school? IF they count as part of the legal community (which I don’t really think they do), it is a small, insular, largely irrelevant portion.

Tell me what real lawyers think about the schools.

Joe

According to Incapsula, bots went from 51% of web traffic in 2012 to 61% of web traffic this year, a 21% year-over-year increase. The cloud computing firm found that most of the increase in bot traffic was due to increased activity by “good bots” like search engine crawlers. Spam bots, by comparison, are on the decline. Still, the fact remains that any self-congratulatory remarks about a blog reaching a visit or page-view milestone, or blog rankings based on those metrics, rest on figures wildly inflated by this form of “social media.” If the bot traffic trend continues at this pace, pretty soon one will have to divide web traffic stats by four to come up with a reasonable estimate of human mouse clicks and eyeballs. — Joe
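
To make the arithmetic explicit: if bots make up a share b of reported traffic, human traffic is the reported figure times (1 - b), i.e. the total divided by 1/(1 - b). At the current 61% that divisor is about 2.6; it only reaches four if bots climb to 75% of traffic. A quick sketch:

```python
def human_traffic(reported_hits, bot_share):
    """Estimate human visits from a raw traffic figure and an assumed bot share."""
    return reported_hits * (1 - bot_share)

for bot_share in (0.51, 0.61, 0.75):
    divisor = 1 / (1 - bot_share)
    print(f"bots at {bot_share:.0%}: divide reported traffic by {divisor:.1f} "
          f"({human_traffic(10_000, bot_share):,.0f} human hits per 10,000 reported)")
```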

[Chart: bot vs. human web traffic, 2012-2013]

I wrote a post earlier this week on LG’s data collection practices on its smart TVs. LG televisions apparently have a feature that collects channel-viewing information, as well as file names from a connected USB stick, and transmits it to LG. There is a setting that supposedly allows a user to turn the feature off, though even when it was selected the information was still collected and transmitted. The rationale was to customize ads sent to the TV based on viewing habits, even though (allegedly) no personally identifiable information was collected.

LG figured out it had a publicity problem on its hands. The company issued a statement saying it will fix the problem with a firmware update that will actually turn off the feature when a user turns it off. The same update will stop the reading of file names from attached devices as well. LG admitted that data was transmitted, though never stored on its servers. What? Then what was the point of transmitting it in the first place? I think there is more to this than the company wants to admit. Perhaps the information was sent to a third party that supplies the customized ads. Who knows. I think LG needs to be a bit more forthcoming about just what is going on here.

Details are in a report on CNET.

Mark

There seems to be a convergence of stories lately about privacy and tracking. If privacy isn’t dead, it certainly seems to be fighting a losing battle while on life support. Where to start? There is a report in CNET on Vint Cerf’s statement, “Privacy may be an anomaly.” The reason for that is the level of detail people are sharing through social media. Another Cerf quote: “Technology has outraced our social intellect.” I find that hard to argue with. There are multitudes of ways to track people and their habits down to fine details.

An older story in Ars Technica reports that Facebook is working on a way to collect mouse movements. As the story points out, it’s not uncommon for web sites to track where someone clicks on a page. That’s one way to determine an ad’s effectiveness. What Facebook intends to do is watch the mouse itself. How does someone move along the page? Where does the mouse hover, and for how long? Mobile views obviously do not use mice, but tracking in this context extends to noting when a newsfeed is visible. My understanding is that the Facebook Like button is its own tracking device across sites, whether one has a Facebook account or not.

The next item concerns the humble toothbrush, though it is symbolic of the so-called “Internet of things.”  The concept is promoted as a social good in that all of the dumb devices we use will become smart at some point and our interactivity with them will come with new conveniences.  Consider this statement from Salesforce CEO Marc Benioff as reported by ZDNET:

“Everything is on the Net. And we will be connected in phenomenal new ways,” said Benioff. Benioff highlighted how his toothbrush of the future will be connected. The new Philips toothbrush is Wi-Fi based and has GPS. “When I go into the dentist he won’t ask if I brushed. He will say what’s your login to your Philips account. There will be a whole new level of transparency with my dentist,” gushed Benioff.

Any marketer would gush over this level of personal detail.  It may benefit the doctor-patient relationship, but who else would have access to this information and how will it be used?  I’m not sure I would be comforted by doctor-patient confidentiality in these circumstances.  I’m sure it will all be in the terms and conditions for the device, or not, at least if the next story’s details are accurate.

A blogger in the U.K. has discovered that his LG smart TV sends details about his viewing habits back to LG servers. Those habits also include the file names of items viewed from a connected USB stick. There is a setting in the TV that purports to turn this behavior off (it’s on by default). It doesn’t work: data is forwarded to LG no matter what the setting. LG responded to this disclosure, as reported in the story on Ars Technica:

“The advice we have been given is that unfortunately as you accepted the Terms and Conditions on your TV, your concerns would be best directed to the retailer,” the representatives wrote in a response to the blogger. “We understand you feel you should have been made aware of these T’s and C’s at the point of sale, and for obvious reasons LG are unable to pass comment on their actions.”

Or putting it another way, we don’t care if you’re put out by these practices.  Life’s good, as they say, depending on who has the power in these relationships.

When I think of Marc Benioff’s toothbrush scenario I can imagine smart devices coming with embedded chips that connect to the web automatically and upload information.  As of now the choice is ours as to whether to connect our devices to the web.  I have a DVD player that is web-enabled though I have not turned on that feature.  My TV set is huge, but also not connected to the web.  My choice, of course, and I may not be typical.  In fact, I’m sure I’m not.

I can predict that there will be a time when a web connection will be mandatory for some devices to even work out of the box. It would be in every marketer’s interest if that came to be. Or, if I wanted to be exotic, I could predict another pervasive wireless Internet that overlays the one we know and love. It will just be for smart devices that will connect automatically for our “convenience.” There may just be enough moneyed interests to make that happen. Terms and conditions may or may not apply.

Mark

Sounds like a simple question that can be easily answered, right? Well, not according to a review of a recent “report” provided to our elected leaders at their November board meeting. See Membership Statistics 2009-2013 (Numbers as of May 31 of each year) behind AALL’s paywall.

The report includes a table for the “number of entities with AALL members” and itemizes AALL member entities in the following categories:

  • Law School
  • Private Firm
  • Government & Court
  • Corporation
  • Other
  • Non-Affiliated

A couple of data definition questions. Did any member of the E-board seek clarification about the categories used? For example:

  • Does the “Corporation” category report data just for member corporate legal departments, etc., or does it include vendors?

Whatever it includes, “Corporation” membership declined from 80 in 2008-09 to 52 in 2012-13.

  • “Other” probably includes a couple of library consortia and some non-profit, non-library types, but God knows what else. Vendors here?

Whatever this category’s stats capture, “Other” declined from 169 in 2008-09 to 133 in 2012-13.

  • As for “Non-Affiliated,” a footnote explains that the category covers those who “have not indicated an affiliation.”

Does that mean individual human beings are being included as institutions or entities in this head count? It’s kind of hard to draw any other conclusion.

Just the “facts”, please. Excluding the mysterious categories a/k/a “Other” and “Non-Affiliated,” but including “Corporations” under the assumption, right or wrong, that it captures corporate legal departments and the like, total law school + private firm + government and courts + corporations membership declined by 191 institutions, from 1,595 in 2008-09 to 1,404 in 2012-13. That’s only a 12% decline. Not bad. Not bad at all.

Oh wait, that’s about half the percentage decline for similar reporting periods reported in  “Table 5: AALL Libraries Estimated Information Budgets” published in the online editions of AALL’s Biennial Salary and Organizational Characteristics Survey.

There also is a substantial difference in the absolute number of AALL member libraries, institutions, entities, whatever, for similar reporting periods when the above reported stats are compared to stats used to estimate AALL member libraries total information budgets. Compare the below chart sourced with the data supplied to the E-board this month (which includes “Corporations” in the Private Sector category)

[Chart: AALL member entities, 2008-09 to 2012-13]

with the below chart compiled from AALL biennial survey data that was reported at Has AALL lost more than 50% of its institutional membership since 2001? (Nov. 4, 2013):

[Chart: AALL member libraries, from AALL biennial survey data]

What’s up with this? Hell if I know. I lean toward having more confidence in the committee that has been responsible for collecting and reporting AALL’s biennial survey findings. But if the data reported to the E-Board is correct, then AALL’s estimated total information budget stats for AALL member libraries are wildly inaccurate, unless someone recently decided to count “affiliations” at some sort of internal local level, like, for example, counting each branch office or each functional unit of a law firm as a unique institution, entity, whatever.

— Joe

Oh, wait, that’s the null hypothesis. “Law libraries and their librarians have no value” also demonstrates that thinking or rethinking about value is a double-edged sword. What if the null hypothesis is proven to be true?

From this perspective it is clear that library associations will ignore the null hypothesis by publishing reports that identify value. See, for example, the FT-SLA report entitled The Evolving Value of Information Management And Five Essential Attributes of the Modern Information Professional (free registration required to download). This recent study identifies the perception gap between executives and information professionals in special libraries and identifies ways the latter group can demonstrate their value to the former by means of always-illuminating case studies and an equally useful to-do list of recommendations.

And then, in the specific context of law libraries and their information professions, there is this:

The last several years have brought fundamental changes to the legal profession and business of law. These changes have served as an impetus for law libraries to transform their operations and services in varied and profound ways—and it is now imperative that law libraries demonstrate the value they bring in concise, measurable ways.

Instead of attempting to test the null hypothesis, AALL appears intent on spending money to prove the value proposition once someone offers an empirical methodology that only does half of the empirically sound task. For more, see the republished text of AALL’s Oct. 28, 2013 press release (source of the above quote) and commentary at 3 Geeks’ AALL’s RFP on Law Library Value Report.

Frankly, I think value is like porn. One knows it when one sees it by the impact it stimulates. Got a ruler to measure the null? — Joe