Category Archives: Information Technology

On the malicious use of artificial intelligence

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Feb. 2018) “surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.” — Joe

The law of cyber interference in elections

The Law of Cyber Interference in Elections by Jacqueline Van De Velde “explores the international legal framework that applies to cyber interference in elections. It makes the normative argument that stretching countermeasures to encompass cyber episodes is not only wrong, but also dangerous. Unless modern understandings of sovereignty and the norm of non-intervention are updated for a networked age, countermeasures represent an impermissible expansion of the use of force.” — Joe

How YouTube’s algorithm distorts reality: The Guardian’s video explainer

The 2016 presidential race was fought online in a swamp of disinformation, conspiracy theories and fake news. A Guardian investigation has uncovered evidence suggesting YouTube’s recommendation algorithm was disproportionately prompting users to watch pro-Trump and anti-Clinton videos. See Paul Lewis’s ‘Fiction is outperforming reality’: how YouTube’s algorithm distorts truth, The Guardian, Feb. 2, 2018 for details. — Joe

Stewardship in the age of algorithms

Stewardship in the “Age of Algorithms” by Clifford Lynch, First Monday, 4 December 2017: “This paper explores pragmatic approaches that might be employed to document the behavior of large, complex socio-technical systems (often today shorthanded as “algorithms”) that centrally involve some mixture of personalization, opaque rules, and machine learning components. Thinking rooted in traditional archival methodology — focusing on the preservation of physical and digital objects, and perhaps the accompanying preservation of their environments to permit subsequent interpretation or performance of the objects — falls far short of addressing this problem. The approaches presented here are clearly imperfect, unproven, labor-intensive, and sensitive to the often hidden factors that the target systems use for decision-making (including personalization of results, where relevant); but they are a place to begin, and their limitations are at least outlined. Numerous research questions must be explored before we can fully understand the strengths and limitations of what is proposed here. But it represents a way forward. This is essentially the first paper I am aware of which tries to effectively make progress on the stewardship challenges facing our society in the so-called “Age of Algorithms;” the paper concludes with some discussion of the failure to address these challenges to date, and the implications for the roles of archivists as opposed to other players in the broader enterprise of stewardship — that is, the capture of a record of the present and the transmission of this record, and the records bequeathed by the past, into the future. It may well be that we see the emergence of a new group of creators of documentation, perhaps predominantly social scientists and humanists, taking the front lines in dealing with the “Age of Algorithms,” with their materials then destined for our memory organizations to be cared for into the future.” Interesting.

h/t to beSpacific. — Joe

Conventional and computational methods for topic modeling

Here’s the abstract for Topic Modeling the President: Conventional and Computational Methods, George Washington Law Review, Forthcoming, by J. B. Ruhl, John Nay and Jonathan M. Gilligan:

Law is generally represented through text, and lawyers have for centuries classified large bodies of legal text into distinct topics — they “topic model” the law. But large bodies of legal documents present challenges for conventional topic modeling methods. The task of gathering, reviewing, coding, sorting, and assessing a body of tens of thousands of legal documents is a daunting proposition. Recent advances in computational text analytics, a subset of the field of “artificial intelligence,” are already gaining traction in legal practice settings such as e-discovery by leveraging the speed and capacity of computers to process enormous bodies of documents. Differences between conventional and computational methods, however, suggest that computational text modeling has its own limitations, but that the two methods used in unison could be a powerful research tool for legal scholars.

To explore that potential — and to do so critically rather than under the “shiny rock” spell of artificial intelligence — we assembled a large corpus of presidential documents to assess how computational topic modeling compares to conventional methods and evaluate how legal scholars can best make use of the computational methods. The presidential documents of interest comprise presidential “direct actions,” such as executive orders, presidential memoranda, proclamations, and other exercises of authority the president can take alone, without congressional concurrence or agency involvement. Presidents have been issuing direct actions throughout the history of the republic, and while they have often been the target of criticism and controversy in the past, lately they have become a tinderbox of debate. Hence, although long ignored by political scientists and legal scholars, direct actions have attracted a surge of interest in their scope, content, and impact.

Legal and policy scholars modeling direct actions into substantive topic classifications thus far have not employed computational methods. This gives us an opportunity to compare results of the two methods. We generated computational topic models of all direct actions over time periods other scholars have studied using conventional methods, and did the same for a case study of environmental policy direct actions. Our computational model of all direct actions closely matched one of the two comprehensive empirical models developed using conventional methods. By contrast, our environmental case study model differed markedly from the only other empirical topic model of environmental policy direct actions, revealing that the conventional methods model included trivial categories and omitted important alternative topics.

Our findings support the assessment that computational topic modeling, provided a sufficiently large corpus of documents is used, can provide important insights for legal scholars in designing and validating their topic models of legal text. To be sure, computational topic modeling used alone has its limitations, some of which are evident in our models, but when used along with conventional methods, it opens doors towards reaching more confident conclusions about how to conceptualize topics in law. Drawing from these results, we offer several use cases for computational topic modeling in legal research. At the front-end, researchers can use the method to generate better and more complete model hypotheses. At the back-end, the method can effectively be used, as we did, to validate existing topic models. And at a meta-scale, the method opens windows to test and challenge conventional legal theory. Legal scholars can do all of these without “the machines,” but there is good reason to believe we can do it better with them in the toolkit.
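For readers curious what “computational topic modeling” looks like in practice, here is a minimal sketch using latent Dirichlet allocation (LDA), one common technique, via scikit-learn. The tiny three-document corpus and the parameters are hypothetical placeholders, not the authors’ actual model or data:

```python
# A minimal sketch of computational topic modeling with LDA. This is NOT
# the article's model; the corpus and parameters are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus: one string per presidential direct action.
documents = [
    "Executive order establishing a commission on environmental quality",
    "Presidential memorandum on federal land management and conservation",
    "Proclamation on national security and export controls",
]

# Convert the documents to word-count vectors, dropping common stop words.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

# Fit an LDA model; the number of topics is a modeling choice the
# researcher must validate, which is exactly the article's point.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top words per topic so a human can label each topic.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```

The human labeling step at the end is where the article’s comparison of conventional and computational methods bites: the machine proposes word clusters, but a scholar still decides what they mean.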

Interesting. — Joe

Algorithmic authorship through the lens of copyright and First Amendment law

Here’s the abstract for Margot E. Kaminski’s Authorship, Disrupted: AI Authors in Copyright and First Amendment Law, 51 UC Davis Law Review 589 (2017):

Technology is often characterized as an outside force, with essential qualities, acting on the law. But the law, through both doctrine and theory, constructs the meaning of the technology it encounters. A particular feature of a particular technology disrupts the law only because the law has been structured in a way that makes that feature relevant. The law, in other words, plays a significant role in shaping its own disruption. This Essay is a study of how a particular technology, artificial intelligence, is framed by both copyright law and the First Amendment. How the algorithmic author is framed by these two areas illustrates the importance of legal context and legal construction to the disruption story.

— Joe

“Training data:” Artificial intelligence’s fair use crisis

From the abstract of Artificial Intelligence’s Fair Use Crisis by Benjamin L. W. Sobel:

As automation supplants more forms of labor, creative expression still seems like a distinctly human enterprise. This may someday change: by ingesting works of authorship as “training data,” computer programs can teach themselves to write natural prose, compose music, and generate movies. Machine learning is an artificial intelligence (AI) technology with immense potential and a commensurate appetite for copyrighted works. In the United States, the copyright law mechanism most likely to facilitate machine learning’s uses of protected data is the fair use doctrine. However, current fair use doctrine threatens either to derail the progress of machine learning or to disenfranchise the human creators whose work makes it possible.

This Article addresses the problem in three parts: using popular machine learning datasets and research as case studies, Part I describes how programs “learn” from corpora of copyrighted works and catalogs the legal risks of this practice. It concludes that fair use may not protect expressive machine learning applications, including the burgeoning field of natural language generation. Part II explains that applying today’s fair use doctrine to expressive machine learning will yield one of two undesirable outcomes: if US courts reject the fair use defense for machine learning, valuable innovation may move to another jurisdiction or halt entirely; alternatively, if courts find the technology to be fair use, sophisticated software may divert rightful earnings from the authors of input data. This dilemma shows that fair use may no longer serve its historical purpose. Traditionally, fair use is understood to benefit the public by fostering expressive activity. Today, the doctrine increasingly serves the economic interests of powerful firms at the expense of disempowered individual rightsholders. Finally, in Part III, this Article contemplates changes in doctrine and policy that could address these problems. It concludes that the United States’ interest in avoiding both prongs of AI’s fair use dilemma offers a novel justification for redistributive measures that could promote social equity alongside technological progress.
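To make the “training data” point concrete, here is a toy word-level Markov chain that “learns” to generate prose from ingested text. Real machine learning systems are vastly more sophisticated, but the toy shows the mechanism the abstract describes: the output is derived statistically from the works the program ingests. The training snippet is invented for illustration:

```python
# Toy illustration of learning to generate text from "training data":
# a word-level Markov chain. The output is derived statistically from
# the ingested text, which is the nub of the copyright question.
import random
from collections import defaultdict

training_text = (
    "the court held that the use was fair because the use was "
    "transformative and the use did not harm the market"
)

# Build a table mapping each word to the words observed to follow it.
words = training_text.split()
transitions = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    transitions[current].append(nxt)

# Generate new text by walking the table.
random.seed(0)
word = "the"
output = [word]
for _ in range(10):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)
print(" ".join(output))
```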

— Joe

What is push research?

Back in the good old days of the 1980s, when I was a big law firm research librarian (instead of the bean-counting library administrator I have become), I would “push,” or proactively supply, attorneys resources unsolicited because I thought they might need the information for pending matters we had worked on together. Usually, the material provided was relevant and welcomed. “Push research” as explained by Casetext’s Jake Heller applies artificial intelligence to what amounts to the electronic footprints of researchers to alert, update and supply resources to the end user, sometimes unsolicited, as I had done. By the sound of it, AI-engineered “push research” would do a far better job of providing unsolicited pertinent information than the typical legal research librarian of the 1980s. On ATL, see Jake Heller, Push Research: How AI Is Fundamentally Changing The Way We Research The Law. Recommended. — Joe
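End note: For a rough sense of what AI-driven “push research” might look like under the hood, here is a minimal sketch that matches incoming documents against a researcher’s recent searches using TF-IDF similarity. It assumes scikit-learn and NumPy; the searches, documents, and threshold are all hypothetical, and this illustrates the idea, not Casetext’s actual system.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# The researcher's recent queries and the day's incoming documents
# (all hypothetical).
recent_searches = [
    "assumption of risk recreational activity",
    "landowner liability trespasser ATV",
]
incoming_documents = {
    "Hypothetical ATV opinion": "assumption of risk bars recovery for ATV injury",
    "Tax newsletter": "new guidance on partnership basis adjustments",
}

# Put searches and documents into one TF-IDF vector space.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(recent_searches + list(incoming_documents.values()))

# The researcher's interest profile is the mean of their search vectors.
profile = np.asarray(matrix[: len(recent_searches)].mean(axis=0))
doc_vectors = matrix[len(recent_searches):]

# "Push" any document whose similarity to the profile clears a threshold.
scores = cosine_similarity(profile, doc_vectors)[0]
for (title, _), score in zip(incoming_documents.items(), scores):
    if score > 0.1:
        print(f"Pushing: {title} (similarity {score:.2f})")
```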

Digital innovation benefits the 1% by giving rise to “winner-take-all” markets

Here’s the abstract for Dominique Guellec and Caroline Paunov’s Digital Innovation and the Distribution of Income (Nov. 2017 NBER Working Paper No. 23987):

Income inequalities have increased in most OECD countries over the past decades, particularly the income share of the top 1%. In this paper we argue that the growing importance of digital innovation – new products and processes based on software code and data – has increased market rents, which benefit disproportionately the top income groups. In line with Schumpeter’s vision, digital innovation gives rise to “winner-take-all” market structures, characterized by higher market power and risk than was the case in the previous economy of tangible products. The cause for these new market structures is digital non-rivalry, which allows for massive economies of scale and reduces costs of innovation. The latter stimulates higher rates of creative destruction, leading to higher risk as only marginally superior products can take over the entire market, hence rendering market shares unstable. Instability commands risk premia for investors. Market rents accrue mainly to investors and top managers and less to the average workers, hence increasing income inequality. Market rents are needed to incentivize innovation and compensate for its costs, but beyond a certain level they become detrimental. Public policy may stimulate innovation by reducing ex ante the market conditions which favor rent extraction from anti-competitive practices.
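To see why digital non-rivalry produces such scale economies, consider a back-of-the-envelope comparison. The numbers below are hypothetical, chosen only to show the shape of the argument:

```python
# Toy, hypothetical numbers illustrating digital non-rivalry: once the
# fixed cost of writing software is sunk, each extra copy costs almost
# nothing, so average cost collapses with scale and the largest player
# can underprice everyone, a "winner-take-all" dynamic.
fixed_cost = 10_000_000       # cost to develop the product once
marginal_cost_digital = 0.01  # cost to serve one more user
marginal_cost_tangible = 50   # cost to build one more physical unit

for users in (1_000, 100_000, 10_000_000):
    avg_digital = fixed_cost / users + marginal_cost_digital
    avg_tangible = fixed_cost / users + marginal_cost_tangible
    print(f"{users:>10} users: digital ${avg_digital:,.2f}/user, "
          f"tangible ${avg_tangible:,.2f}/user")
```

At ten million users the digital product costs about a dollar per user while the tangible one is still dominated by its $50 marginal cost, which is the scale advantage the abstract describes.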

— Joe

Facebook’s impact on American democracy

“Tech journalists covering Facebook had a duty to cover what was happening before, during, and after the election,” wrote Alexis Madrigal in What Facebook Did to American Democracy, The Atlantic, Oct. 12, 2017.

Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to see how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic, and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and the Russian information ops agency.

But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how.

In What Facebook Did to American Democracy, Alexis Madrigal traces Facebook’s impact on American democracy. — Joe

Balkin on free speech in the algorithmic society

Here’s the abstract for Yale Law Prof Jack Balkin’s Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, UC Davis Law Review (2018 Forthcoming):

We have now moved from the early days of the Internet to the Algorithmic Society. The Algorithmic Society features the use of algorithms, artificial intelligence agents, and Big Data to govern populations. It also features digital infrastructure companies, large multi-national social media platforms, and search engines that sit between traditional nation states and ordinary individuals, and serve as special-purpose governors of speech.

The Algorithmic Society presents two central problems for freedom of expression. First, Big Data allows new forms of manipulation and control, which private companies will attempt to legitimate and insulate from regulation by invoking free speech principles. Here First Amendment arguments will likely be employed to forestall digital privacy guarantees and prevent consumer protection regulation. Second, privately owned digital infrastructure companies and online platforms govern speech much as nation states once did. Here the First Amendment, as normally construed, is simply inadequate to protect the practical ability to speak.

The first part of the essay describes how to regulate online businesses that employ Big Data and algorithmic decision making consistent with free speech principles. Some of these businesses are “information fiduciaries” toward their end-users; they must exercise duties of good faith and non-manipulation. Other businesses who are not information fiduciaries have a duty not to engage in “algorithmic nuisance”: they may not externalize the costs of their analysis and use of Big Data onto innocent third parties.

The second part of the essay turns to the emerging pluralist model of online speech regulation. This pluralist model contrasts with the traditional dyadic model in which nation states regulated the speech of their citizens.

In the pluralist model, territorial governments continue to regulate speech directly. But they also attempt to coerce or co-opt owners of digital infrastructure to regulate the speech of others. This is “new school” speech regulation. Digital infrastructure owners, and especially social media companies, now act as private governors of speech communities, creating and enforcing various rules and norms of the communities they govern. Finally, end users, civil society organizations, hackers, and other private actors repeatedly put pressure on digital infrastructure companies to regulate speech in certain ways and not to regulate it in others. This triangular tug of war — rather than the traditional dyadic model of states regulating the speech of private parties — characterizes the practical ability to speak in the algorithmic society.

The essay uses the examples of the right to be forgotten and the problem of fake news to illustrate the emerging pluralist model — and new school speech regulation — in action.

As private governance becomes central to freedom of speech, both end-users and nation states put pressure on private governance. Nation states attempt to co-opt private companies into becoming bureaucracies for the enforcement of hate speech regulation and new doctrines like the right to be forgotten. Conversely, end users increasingly demand procedural guarantees, due process, transparency, and equal protection from private online companies.

The more that end-users view businesses as governors, or as special-purpose sovereigns, the more end-users will expect — and demand — that these companies should conform to the basic obligations of governors towards those they govern. These obligations include procedural fairness in handling complaints and applying sanctions, notice, transparency, reasoned explanations, consistency, and conformity to rule of law values — the “law” in this case being the publicly stated norms and policies of the company. Digital infrastructure companies, in turn, will find that they must take on new social obligations to meet these growing threats and expectations from nation states and end-users alike.

Interesting. — Joe

Cambridge Analytica and the rise of weaponized AI propaganda in political election campaigns

Cambridge Analytica, a data mining firm known for being a leader in behavioral microtargeting for election processes (and for bragging about its contribution to the successful Trump presidential campaign), is being investigated by the House Permanent Select Committee on Intelligence. See April Glaser, Congress Is Investigating Trump Campaign’s Voter Targeting Firm as Part of the Russia Probe, Slate Oct. 11, 2017. Jared Kushner, who ran the Trump campaign’s data operations, eventually may be implicated. See Jared Kushner In His Own Words On The Trump Data Operation The FBI Is Reportedly Probing, Forbes, May 26, 2017 and Did Russians Target Democratic Voters, With Kushner’s Help? Newsweek, May 23, 2017.

Before joining the Trump campaign, Steve Bannon was on the board of Cambridge Analytica. The company’s primary financier is hedge fund billionaire and Breitbart investor Robert Mercer. Here’s a presentation at the 2016 Concordia Summit by Alexander Nix, CEO, Cambridge Analytica. Nix discusses the power of big data in global elections and Cambridge Analytica’s revolutionary approach to audience targeting, data modeling, and psychographic profiling for election processes around the world.

The Rise of the Weaponized AI Propaganda Machine discusses how this new automated propaganda machine is driving global politics. This is where big data meets computational psychology, where automated engagement scripts prey on human emotions in a propaganda network that accelerates ideas in minutes with political bots policing public debate. Highly recommended. See also Does Trump’s ‘Weaponized AI Propaganda Machine’ Hold Water? Forbes, March 5, 2017. — Joe

End note: In a separate probe, the UK’s Information Commissioner is investigating Cambridge Analytica for its role in the successful Leave.eu campaign in the UK.

IEEE Spectrum special report: Blockchain World

Sections of the report, which can be found here, include how smart contracts work, how blockchains work, blockchain terminology, how Wall Street firms plan to move trillions to blockchains in 2018, two experiments bringing blockchains to government (Illinois vs. Dubai), and more. — Joe
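End note: The report’s “how blockchains work” explainer boils down to blocks that each commit to the hash of their predecessor. Here is a minimal sketch of that hash-chaining idea; the payloads are hypothetical, and real blockchains add consensus, signatures, and much more.

```python
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Create a block whose hash commits to its contents and its parent."""
    block = {"timestamp": time.time(), "data": data, "previous_hash": previous_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

# Build a three-block chain; the payloads are hypothetical.
chain = [make_block("genesis", "0" * 64)]
for payload in ("alice pays bob 5", "bob pays carol 2"):
    chain.append(make_block(payload, chain[-1]["hash"]))

# Tampering with an earlier block breaks every later link in the chain.
chain[1]["data"] = "alice pays bob 500"
fields = {k: chain[1][k] for k in ("timestamp", "data", "previous_hash")}
recomputed = hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()
print("link still valid?", recomputed == chain[2]["previous_hash"])  # prints False
```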

The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control

Here’s the abstract for Richard Warner and Robert H. Sloan’s The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control (July 22, 2017):

David Mindell notes in Our Robots, Ourselves, “For any apparently autonomous system, we can always find the wrapper of human control that makes it useful and returns meaningful data. In the words of a recent report by the Defense Science Board, ‘there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines.’”

Designing and using “the wrapper of human control” means making moral decisions — decisions about what ought to happen. The point is not new, as the “soldiers, sailors, airmen or Marines” reference shows. What is new is the rise of predictive analytics, the process of using large data sets in order to make predictions.

Predictive analytics greatly exacerbates the long-standing problem of how to balance the benefits of data collection and analysis against the value of privacy, and its pervasive and ever-increasing use gives the tradeoff problems an urgency that can no longer be ignored. In tackling the tradeoff issues, it is not enough merely to address obviously invidious uses like a recent face photo-editing app from a company called FaceApp. When users asked the app to increase the “hotness” of the photo, the app made skin tones lighter. We focus on widely accepted — or at least currently tolerated — uses of predictive analytics in credit rating, targeted advertising, navigation apps, search engine page ranking, and a variety of other areas. These uses yield considerable benefits, but they also impose significant costs through misclassification in the form of a large number of false positives and false negatives. Predictive analytics not only looks into your private life to construct its profiles of you, it often misrepresents who you are.

How should society respond? Our working assumption is that predictive analytics has significant benefits and should not be eliminated, and moreover, that it is now utterly politically infeasible to eliminate it. Thus, we propose making tradeoffs between the benefits and the costs by constraining the use and distribution of information. The constraints would have to apply across a wide range of complex situations. Is there an existing system that makes relevant tradeoffs by constraining the distribution and use of information across a highly varied range of contexts? Indeed, there is: informational norms. Informational norms are social norms that constrain not only the collection, but also the use and distribution of information. We focus on the use and distribution constraints. Those constraints establish an appropriately selective flow of information in a wide range of cases.

We contend they provide an essential “wrapper of human control” for predictive analytics. The obvious objection is that the relevant norms do not exist. Technology-driven economic, social, and political developments have far outpaced the slow evolution of norms. New norms will nonetheless evolve, and existing norms will adapt to condone surveillance. Reasonable public policy requires controlling the evolution and adaptation of norms to reach desirable outcomes.
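To make the misclassification cost concrete, here is a back-of-the-envelope illustration with hypothetical numbers, showing how even a fairly accurate model can wrongly flag tens of thousands of people:

```python
# Hypothetical numbers showing how even an "accurate" predictive model
# misclassifies many people: the false positives and false negatives
# the authors identify as a social cost.
population = 1_000_000
base_rate = 0.02     # 2% of people actually belong to the target class
sensitivity = 0.90   # the model catches 90% of true positives
specificity = 0.95   # the model clears 95% of true negatives

actual_positive = population * base_rate
actual_negative = population - actual_positive

false_negatives = actual_positive * (1 - sensitivity)  # missed entirely
false_positives = actual_negative * (1 - specificity)  # wrongly flagged
print(f"False negatives: {false_negatives:,.0f}")      # 2,000 people
print(f"False positives: {false_positives:,.0f}")      # 49,000 people
```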

— Joe

Are corporate legal departments ready for AI technology?

According to Thomson Reuters’ new report, Ready or Not: Artificial Intelligence and Corporate Legal Departments, “corporate counsel believe they are tech savvy but acknowledge that their comfort level and confidence with technology have limitations, specifically around artificial intelligence (AI).” From the press release:

The report notes that more than half (56%) of in-house attorneys either perceive that AI technology is not used or are not yet familiar with the use of AI technology in their legal department. And for others, there is skepticism about its reliability and cost-effectiveness. Despite the unknown, some in-house attorneys surveyed envision AI as being beneficial in increasing efficiency (17%), reducing costs (13%), minimizing risk (7%) and supporting document review (6%).

The top concern among respondents in using AI was cost (19%), as the mantra of doing more with less and budget constraints were key factors to adoption. Reliability (15%) was another concern, especially in areas of ethical considerations and confidentiality. A third concern is a constant with any new technology or process: change management (9%).

H/T to Bob Ambrogi’s LawSites post. — Joe

Russia’s interference in the 2016 US presidential election and information warfare

According to the declassified report, Assessing Russian Activities and Intentions in Recent US Elections: The Analytic Process and Cyber Incident Attribution, the CIA, FBI and NSA have “high confidence” that Russian President Vladimir Putin “ordered an influence campaign in 2016 aimed at the US presidential election” in order to “undermine public faith in the US democratic process, denigrate Clinton, and harm her electability and potential presidency.” The report also contends the Russian government “aspired to help President-elect Trump’s election chances when possible by discrediting Secretary Clinton and publicly contrasting her unfavorably to him.” See Russia and the U.S. Presidential Election (Jan. 17, 2017 IN10635) for the Congressional Research Service’s backgrounder.

Russian information warfare activities are the topic of Information Warfare: Russian Activities (Sept. 2, 2016 IN10563). From the report:

Russian doctrine typically refers to a holistic concept of “information war,” which is used to accomplish two primary aims:
•To achieve political objectives without the use of military force.
•To shape a favorable international response to the deployment of its military forces, or military forces with which Moscow is allied.

Tactics used to accomplish these goals include damaging information systems and critical infrastructure; subverting political, economic, and social systems; instigating “massive psychological manipulation of the population to destabilize the society and state”; and coercing targets to make decisions counter to their interests. Recent events suggest that Russia may be employing a mix of propaganda, misinformation, and deliberately misleading or corrupted disinformation in order to do so. And while Russian organizations appear to be using cyberspace as a primary medium through which these goals are achieved, the government also appears to potentially be using the physical realm to conduct more traditional influence operations including denying the deployment of troops in conflict areas and the use of online “troll armies” to propagate pro-Russian rhetoric.

These activities are placed in the larger context of US policy towards Russia in Russia: Background and U.S. Policy (Aug. 21, 2017 R44775). — Joe

Law Library of Congress launches legal reference Chatbot

One of the highlights of the American Association of Law Libraries (AALL) conference in Austin this year was the Innovation Tournament which pitted three librarians’ tech innovations against each other. With two prizes, each worth $2,500, up for grabs, the competition was pretty tough. There was a scanning project management innovation, a Virtual Reality presentation preparedness tool, and an innovative ChatBot for legal information assistance. The ChatBot really caught my attention as something that I would love to test out on a local level. — Greg Lambert, Now I want a Chatbot, 3 Geeks and a Law Blog, July 27, 2017

Now Greg, you and I can test drive a new chatbot that walks a user through a basic reference interview. According to In Custodia Legis, the new chatbot can connect a user to primary legal sources, law library research guides and foreign law reports. The chatbot can also respond to a limited number of text commands. Go to the Law Library of Congress Facebook page to try the chatbot.
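For the curious, the basic mechanics of such a bot can be surprisingly simple. Here is a minimal rule-based sketch of a reference-interview bot; it is an illustration only, not the Law Library’s implementation, and the keyword table and referral text are hypothetical:

```python
# A minimal rule-based reference-interview bot (hypothetical referrals;
# NOT the Law Library of Congress's actual implementation).
RESOURCES = {
    "case": "Try published court reports and opinion databases for case law.",
    "statute": "Try uscode.house.gov for the U.S. Code.",
    "foreign": "See the Law Library's foreign law reports and research guides.",
}

def reply(message: str) -> str:
    """Match keywords in the user's message to canned referrals."""
    text = message.lower()
    for keyword, answer in RESOURCES.items():
        if keyword in text:
            return answer
    # No keyword matched: ask the classic reference-interview question.
    return "Do you need a case, a statute, or foreign law?"

print(reply("I need a case about trespass"))
print(reply("where do I even start?"))
```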

H/T to Gary Price’s InfoDocket report. — Joe

Bloomberg Law’s New Feature, Points Of Law

Bloomberg Law announced a new research feature, Points of Law, a little over a week ago.  I’ve been playing around with it using the ATV injury problem I created for teaching online legal research concepts.  In summary, an ATV rider was injured while riding on someone else’s private property without permission.  The problem called for the researcher to identify relevant cases where assumption of risk was a viable defense and collect them for later analysis.  The jurisdiction is New York.

Let me explain a little about Points of Law before I dive into my experience with it.  Bloomberg’s press release describes the feature:

Points of Law offers a more efficient way to conduct case law research.  Through the application of machine learning to Bloomberg Law’s database of 13 million court opinions, Points of Law highlights language critical to the court’s holding, links this language to governing statements of law and relevant on-point case law.

Bloomberg Law provides context – connecting keyword search results to governing statements of law – and unparalleled breadth of coverage, generating one million Points of Law from our state and federal court opinion database.
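As a thought experiment, one conceptual ingredient of such a feature might be spotting statements that recur nearly verbatim across many opinions. The toy sketch below does only that; Bloomberg’s actual system applies machine learning at a vastly larger scale, and the sample opinions here are invented:

```python
# Toy sketch of one conceptual ingredient of "points of law" extraction:
# sentences that recur across opinions are candidate statements of law.
# This is NOT Bloomberg's method, and the opinions are invented.
import re
from collections import Counter

opinions = [
    "Generally, the issue of assumption of risk is a question of fact "
    "for the jury. The rider was injured in June.",
    "Generally, the issue of assumption of risk is a question of fact "
    "for the jury. Defendant owns the land.",
    "Summary judgment was denied. Generally, the issue of assumption of "
    "risk is a question of fact for the jury.",
]

counts = Counter()
for opinion in opinions:
    for sentence in re.split(r"(?<=[.!?])\s+", opinion):
        counts[sentence.strip()] += 1

# Report sentences that appear in at least two opinions.
for sentence, n in counts.most_common():
    if n >= 2:
        print(f"[{n} opinions] {sentence}")
```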

I found the press release accurate.  I used one of the sample searches I set up for the research problem, <all-terrain vehicle and assumption of risk>.  The case law I expected to see in the list of results was there.  Some of the cases, not all, had a Points of Law icon on the right side of the text.  Clicking that highlights text that the AI in the database considers to be significant.  My search highlighted what I would describe as a combination of black letter law on a keyword-related topic or significant points on how the courts treat that topic.  The focus here was on assumption of risk, obviously, as “all-terrain vehicle” is not a legal concept.

Here are some example results extracted from Marcano v. City of New York, 296 A.D.2d 43, 743 N.Y.S.2d 456 (App Div, 1st Dept 2002):

“Generally, the issue of assumption of risk is a question of fact for the jury.” (Lamey v Foley, 188 AD2d 157, 163-164.)

“The policy underlying this tort rule is intended to facilitate free and vigorous participation in athletic activities.” (Benitez v New York City Bd. of Educ., 73 NY2d 650, 657.)  [Discussing how assumption of the risk in sports is handled by the courts. – MG]

“Because of the factual nature of the inquiry, whether a danger is open and obvious is most often a jury question * * *.”

What I found most interesting about using Points of Law is how viewing multiple extracts informed me about assumption of risk without requiring a lot of lengthy analysis.  Now, not all cases in the search results were useful in my context where an ATV rider was injured.  At the same time, a researcher will find what they need to know conceptually about assumption of the risk as treated by the New York Courts.  I assume that applies to other legal doctrines as well.

Another feature worth mentioning is that clicking on the highlighted phrase will open a side window that cites other cases expressing the same point of law (up to 10).  There is also a button that shows a citation map of the Point:

Bloomberg Cite Map.

Another button shows a list of opinions that expressed related concepts along with the Point text:

Bloomberg Related Points of Law

All in all, I think this is a nifty feature that researchers and litigators will actually use.  I wonder if it will integrate with any of the current general search products on the market, as in “Hey Google, find me cases in New York State that discuss assumption of risk in the context of recreational activities.”  If we now think that first year law students take the lazy route in legal research based on their Google use, just wait for the future to show up.

In the Not Everything is Perfect category, one case, Bierach v. Nichols, 248 A.D.2d 916, 669 N.Y.S.2d 988 (App Div, 3d Dept 1998), had one Point of Law listed but not highlighted in the text.  It was short enough that I was able to guess what was the likely text that would have been highlighted.  Oh well.  –Mark

What sort of information transparency is necessary for reliance on analytical tools?

According to RNRMarketResearch.com, the legal analytics market is growing at a compound annual growth rate of 32.7% and is projected to grow from $451.1 million in 2017 to $1.858 billion by 2022. But not all legal analytical products are created equal. On the AALL CS-SIS blog, Jonathan Germann demonstrates this by comparing Docket Navigator with Bloomberg Law. The latter comes up lacking. “As information professionals become regular users and gatekeepers of analytics tools, what information transparency is necessary for reliance?” asks Germann. He proceeds to provide a transparency checklist. — Joe

LexisNexis tests research assistant chatbot

Here’s how LN chief product officer Jamie Buckley described this development in this Venture Beat story.

“Something that we’re playing with in the lab, we actually have an internal chatbot where you can start asking it questions. It replies with either an answer or what it thinks might be what you’re looking for, and it also helps you filter the results,” Buckley said. “So you might get 100,000 results on the return, but it can help to understand where are some of the differences between the results and then ask you clarifying questions based on that.”
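The interaction pattern Buckley describes, narrowing a huge result set by asking clarifying questions about the ways the results differ, can be sketched in a few lines. This is a hypothetical illustration, not LexisNexis’s code; the results and facets are invented:

```python
# Hypothetical sketch of the clarify-and-filter loop described above.
# NOT LexisNexis's code; the results and facets are invented.
results = [
    {"title": "Smith v. Jones", "court": "NY", "topic": "assumption of risk"},
    {"title": "Doe v. Roe", "court": "CA", "topic": "assumption of risk"},
    {"title": "In re Estate", "court": "NY", "topic": "probate"},
]

def clarify(results, facet):
    """If the results differ on this facet, ask about it and filter."""
    values = sorted({r[facet] for r in results})
    if len(values) < 2:
        return results  # nothing to clarify on this facet
    answer = input(f"Which {facet}? {values}: ")
    return [r for r in results if r[facet] == answer]

# Ask about each facet in turn, shrinking the result set as we go.
for facet in ("court", "topic"):
    results = clarify(results, facet)
print("Remaining results:", [r["title"] for r in results])
```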

— Joe