The Law Society’s Technology and Law Public Policy Commission was created to explore the role of, and concerns about, algorithms in the justice system. Among its recommendations: the UK should create a ‘national register of algorithms’ used in the criminal justice system, including a record of the datasets used to train them. Interesting. Read the report.

To get AI systems off the ground, training data must be voluminous and accurately labeled and annotated. With AI a growing enterprise priority, data science teams are under tremendous pressure to deliver projects, yet they frequently struggle to produce training data at the required scale and quality. Nearly eight out of 10 organizations engaged in AI and machine learning said their projects have stalled, according to Dimensional Research’s report, Artificial Intelligence and Machine Learning Projects Obstructed by Data Issues. Nearly all (96%) of these organizations said they have run into problems with data quality, with the data labeling needed to train AI, and with building model confidence.
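As a concrete illustration of the labeling-quality problem the survey points to, here is a minimal sketch (not from the report) of one routine check, inter-annotator agreement measured with Cohen’s kappa; the annotator labels below are made up.

```python
# Hypothetical quality check for a labeling effort: how well do two
# annotators agree on the same items? Cohen's kappa corrects for chance.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["spam", "ham", "spam", "spam", "ham", "ham", "spam", "ham"]
annotator_b = ["spam", "ham", "ham", "spam", "ham", "spam", "spam", "ham"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1.0 indicate strong agreement
```

Low agreement on a sample like this is usually a sign that the labeling guidelines, not the model, need work first.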

Aryan Pegwar asks and answers the question posed in his post’s title. “Today, modern technologies like artificial intelligence, machine learning, and data science have become buzzwords. Everybody talks about them, but no one fully understands them. They seem very complex to a layman. People often get confused by words like AI, ML and data science. In this article, we explain these technologies in simple words so that you can easily understand the difference between them.” Details here.

Related:

TechRepublic reports that Microsoft has announced that Word Online will incorporate a feature known as Ideas this fall. Backed by artificial intelligence and machine learning courtesy of Microsoft Graph, Ideas will suggest ways to help you enhance your writing and create better documents. Ideas will also show you how to better organize and structure your documents by suggesting tables, styles, and other features already available in Word.

Terms of service of online platforms too often contain clauses that are potentially unfair to the consumer. The developers of “CLAUDETTE” present an experimental study in which machine learning is employed to automatically detect such potentially unfair clauses. Results show that the proposed system could provide a valuable tool for lawyers and consumers alike. Details here.
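For readers curious what such a detector looks like under the hood, here is a minimal sketch of a clause classifier; the toy clauses, labels, and the TF-IDF-plus-linear-SVM choice are illustrative assumptions, not the CLAUDETTE team’s actual corpus or model.

```python
# Illustrative sketch: classify contract clauses as potentially unfair (1) or not (0).
# The clauses and labels below are made up for demonstration purposes.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

clauses = [
    "We may terminate your account at any time without notice.",
    "You can cancel your subscription at any time from your settings.",
    "Any dispute must be resolved by arbitration chosen solely by the provider.",
    "We will notify you at least 30 days before changes to these terms take effect.",
]
labels = [1, 0, 1, 0]  # 1 = potentially unfair, 0 = fair

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(clauses, labels)

print(model.predict(["The provider may change these terms at its sole discretion without notice."]))
```

A real system would of course need a far larger annotated corpus of clauses, but the pipeline shape is the same: vectorize the clause text, then train a classifier on lawyer-provided labels.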

From the abstract for Daniel L. Chen, Machine Learning and the Rule of Law, Computational Analysis of Law, Santa Fe Institute Press, ed. M. Livermore and D. Rockmore, Forthcoming:

“Predictive judicial analytics holds the promise of increasing the fairness of law. Much empirical work observes inconsistencies in judicial behavior. By predicting judicial decisions—with more or less accuracy depending on judicial attributes or case characteristics—machine learning offers an approach to detecting when judges are most likely to allow extra-legal biases to influence their decision making. In particular, low predictive accuracy may identify cases of judicial “indifference,” where case characteristics (interacting with judicial attributes) do not strongly dispose a judge in favor of one or another outcome. In such cases, biases may hold greater sway, implicating the fairness of the legal system.”
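One way to picture the abstract’s low-accuracy signal: train a decision-prediction model and flag the cases where its predicted probability hovers near 0.5, i.e., where the features do not strongly dispose it toward either outcome. The sketch below uses synthetic placeholder data and a plain logistic regression; it is not the paper’s method.

```python
# Hedged sketch of the idea in the abstract: cases where a decision-prediction
# model is least certain (probability near 0.5) may be the ones where case
# characteristics least constrain the outcome. Data here are random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))  # stand-in case/judge features
y = (X[:, 0] + rng.normal(scale=1.5, size=500) > 0).astype(int)  # stand-in outcomes

model = LogisticRegression().fit(X, y)
proba = model.predict_proba(X)[:, 1]

# Flag "low-disposition" cases: predicted probability close to 0.5.
uncertain = np.where(np.abs(proba - 0.5) < 0.05)[0]
print(f"{len(uncertain)} of {len(X)} cases fall in the uncertain band.")
```

On the paper’s account, it is in that uncertain band that extra-legal influences would have the most room to operate.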