From Oleksii Kharkovyna’s A Beginner’s Guide To Data Science (June 10, 2019): “[T]he popularity of Data Science lies in the fact it encompasses the collection of large arrays of structured and unstructured data and their conversion into human-readable format, including visualization, work with statistics and analytical methods–machine and deep learning, probability analysis and predictive models, neural networks and their application for solving actual problems.” Read more about it.
“The central problem,” writes Robert Parnell, “is that not all samples of legal data contain sufficient information to be usefully applied to decision making. By the time big data sets are filtered down to the type of matter that is relevant, sample sizes may be too small and measurements may be exposed to potentially large sampling errors. If Big Data becomes ‘small data’, it may in fact be quite useless.”
“In practice, although the volume of available legal data will sometimes be sufficient to produce statistically meaningful insights, this will not always be the case. While litigants and law firms would no doubt like to use legal data to extract some kind of informational signal from the random noise that is ever-present in data samples, the hard truth is that there will not always be one. Needless to say, it is important for legal professionals to be able to identify when this is the case.
“Overall, the quantitative analysis of legal data is much more challenging and error-prone than is generally acknowledged. Although it is appealing to view data analytics as a simple tool, there is a danger of neglecting the science in what is basically data science. The consequences of this can be harmful to decision making. To draw an analogy, legal data analytics without inferential statistics is like legal argument without case law or rules of precedent — it lacks a meaningful point of reference and authority.”
For more, see When Big Legal Data Isn’t Big Enough: Limitations in Legal Data Analytics (Settlement Analytics, 2016). Recommended.
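Parnell’s point about filtering big data down to small samples can be sketched numerically. The snippet below is a hypothetical illustration with simulated case outcomes (not real legal data): it shows how the sampling error of an estimated win rate grows roughly as 1/√n when the relevant sample shrinks.

```python
import math
import random

random.seed(0)

# Hypothetical illustration: a "big" pool of 100,000 simulated case
# outcomes (1 = favorable outcome), of which only a small subset would
# match any particular matter type in practice.
population = [1 if random.random() < 0.55 else 0 for _ in range(100_000)]

def standard_error(sample):
    """Standard error of the sample proportion (win rate)."""
    n = len(sample)
    p = sum(sample) / n
    return math.sqrt(p * (1 - p) / n)

# Filtering "big data" down to the relevant matter type shrinks n and
# inflates the sampling error; the 95% interval half-width is ~1.96 * SE.
for n in (10_000, 100, 25):
    sample = random.sample(population, n)
    p = sum(sample) / n
    half_width = 1.96 * standard_error(sample)
    print(f"n={n:>6}  estimated win rate={p:.3f}  ±{half_width:.3f}")
```

At n = 10,000 the interval is tight; at n = 25 it is wide enough that the estimate may carry little decision-making value, which is the “Big Data becomes ‘small data’” problem in miniature.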