“The central problem,” writes Robert Parnell, “is that not all samples of legal data contain sufficient information to be usefully applied to decision making. By the time big data sets are filtered down to the type of matter that is relevant, sample sizes may be too small and measurements may be exposed to potentially large sampling errors. If Big Data becomes ‘small data’, it may in fact be quite useless.”

Parnell adds:

“In practice, although the volume of available legal data will sometimes be sufficient to produce statistically meaningful insights, this will not always be the case. While litigants and law firms would no doubt like to use legal data to extract some kind of informational signal from the random noise that is ever-present in data samples, the hard truth is that there will not always be one. Needless to say, it is important for legal professionals to be able to identify when this is the case.

“Overall, the quantitative analysis of legal data is much more challenging and error-prone than is generally acknowledged. Although it is appealing to view data analytics as a simple tool, there is a danger of neglecting the science in what is basically data science. The consequences of this can be harmful to decision making. To draw an analogy, legal data analytics without inferential statistics is like legal argument without case law or rules of precedent — it lacks a meaningful point of reference and authority.”

For more, see When Big Legal Data isn’t Big Enough: Limitations in Legal Data Analytics (Settlement Analytics, 2016). Recommended.
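Parnell’s point about filtering big data down to “small data” is easy to see quantitatively: the sampling error of an estimated rate grows as the sample shrinks. A minimal sketch, using entirely hypothetical case counts and win rates (none of these numbers come from the report):

```python
import math

def win_rate_standard_error(p: float, n: int) -> float:
    """Standard error of a win-rate estimate p based on n observed cases."""
    return math.sqrt(p * (1 - p) / n)

# Hypothetical scenario: a large corpus of cases is filtered down to the
# narrow matter type actually relevant to the decision at hand.
p = 0.6  # observed win rate in the filtered sample
for n in (100_000, 1_000, 50, 10):
    se = win_rate_standard_error(p, n)
    # A rough 95% margin of error is about 1.96 standard errors.
    print(f"n = {n:>6}: win rate {p:.0%} +/- {1.96 * se:.1%}")
```

With 100,000 relevant cases the margin of error is a fraction of a percentage point; with 10, it swamps the estimate itself, which is Parnell’s “informational signal” disappearing into noise.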

Bob Ambrogi is reporting that Thomson Reuters is rolling out Precedent Analytics for Westlaw Edge users today. “Precedent Analytics lets users see the citation patterns of individual judges, revealing the cases, courts, judges and citation language they rely on in deciding different legal issues. It also shows the frequency with which judges have dealt with different issues,” writes Bob. Details on LawSites. See also this Dewey B Strategic post.

From the abstract for Daniel L. Chen, Judicial Analytics and the Great Transformation of American Law, Journal of Artificial Intelligence and the Law, Forthcoming:

Predictive judicial analytics holds the promise of increasing efficiency and fairness of law. Judicial analytics can assess extra-legal factors that influence decisions. Behavioral anomalies in judicial decision-making offer an intuitive understanding of feature relevance, which can then be used for debiasing the law. A conceptual distinction between inter-judge disparities in predictions and inter-judge disparities in prediction accuracy suggests another normatively relevant criterion with regards to fairness. Predictive analytics can also be used in the first step of causal inference, where the features employed in the first step are exogenous to the case. Machine learning thus offers an approach to assess bias in the law and evaluate theories about the potential consequences of legal change.
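The abstract’s distinction between inter-judge disparities in predictions and in prediction accuracy can be made concrete with a toy sketch. The data below is entirely invented for illustration: two hypothetical judges may differ both in their predicted decision rates and in how predictable they are.

```python
# Each pair is (model's predicted decision, actual decision), 1 = grant.
# Invented data; not from the paper.
predictions = {
    "Judge A": [(1, 1), (1, 1), (0, 0), (1, 0)],
    "Judge B": [(0, 0), (0, 1), (1, 0), (1, 1)],
}

for judge, pairs in predictions.items():
    # Disparity in predictions: how often the model expects a grant.
    pred_rate = sum(p for p, _ in pairs) / len(pairs)
    # Disparity in prediction accuracy: how often the model is right.
    accuracy = sum(p == a for p, a in pairs) / len(pairs)
    print(f"{judge}: predicted grant rate {pred_rate:.0%}, accuracy {accuracy:.0%}")
```

Here Judge A is both predicted to grant more often and easier to predict than Judge B; the abstract’s point is that these are separate quantities, each normatively relevant to fairness in its own way.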

H/T beSpacific.