Category Archives: Information Technology

“Training data:” Artificial intelligence’s fair use crisis

From the abstract of Artificial Intelligence’s Fair Use Crisis by Benjamin L. W. Sobel:

As automation supplants more forms of labor, creative expression still seems like a distinctly human enterprise. This may someday change: by ingesting works of authorship as “training data,” computer programs can teach themselves to write natural prose, compose music, and generate movies. Machine learning is an artificial intelligence (AI) technology with immense potential and a commensurate appetite for copyrighted works. In the United States, the copyright law mechanism most likely to facilitate machine learning’s uses of protected data is the fair use doctrine. However, current fair use doctrine threatens either to derail the progress of machine learning or to disenfranchise the human creators whose work makes it possible.

This Article addresses the problem in three parts: using popular machine learning datasets and research as case studies, Part I describes how programs “learn” from corpora of copyrighted works and catalogs the legal risks of this practice. It concludes that fair use may not protect expressive machine learning applications, including the burgeoning field of natural language generation. Part II explains that applying today’s fair use doctrine to expressive machine learning will yield one of two undesirable outcomes: if US courts reject the fair use defense for machine learning, valuable innovation may move to another jurisdiction or halt entirely; alternatively, if courts find the technology to be fair use, sophisticated software may divert rightful earnings from the authors of input data. This dilemma shows that fair use may no longer serve its historical purpose. Traditionally, fair use is understood to benefit the public by fostering expressive activity. Today, the doctrine increasingly serves the economic interests of powerful firms at the expense of disempowered individual rightsholders. Finally, in Part III, this Article contemplates changes in doctrine and policy that could address these problems. It concludes that the United States’ interest in avoiding both prongs of AI’s fair use dilemma offers a novel justification for redistributive measures that could promote social equity alongside technological progress.

— Joe

What is push research?

Back in the good old days of the 1980s, when I was a big law firm research librarian (instead of the bean-counting library administrator I have become), I would “push,” or proactively supply, resources to attorneys unsolicited because I thought they might need the information for pending matters we had worked on together. Usually, the material provided was relevant and welcomed. “Push research,” as explained by Casetext’s Jake Heller, applies artificial intelligence to what amounts to the electronic footprints of researchers to alert, update, and supply resources to the end user, sometimes unsolicited, as I had done. By the sound of it, AI-engineered “push research” would do a far better job of providing unsolicited pertinent information than the typical legal research librarian of the 1980s did. On ATL, see Jake Heller, Push Research: How AI Is Fundamentally Changing The Way We Research The Law. Recommended. — Joe

Digital innovation benefits the 1% by giving rise to “winner-take-all” markets

Here’s the abstract for Dominique Guellec and Caroline Paunov’s Digital Innovation and the Distribution of Income (Nov. 2017 NBER Working Paper No. 23987):

Income inequalities have increased in most OECD countries over the past decades; particularly the income share of the top 1%. In this paper we argue that the growing importance of digital innovation – new products and processes based on software code and data – has increased market rents, which benefit disproportionately the top income groups. In line with Schumpeter’s vision, digital innovation gives rise to “winner-take-all” market structures, characterized by higher market power and risk than was the case in the previous economy of tangible products. The cause for these new market structures is digital non-rivalry, which allows for massive economies of scale and reduces costs of innovation. The latter stimulates higher rates of creative destruction, leading to higher risk as only marginally superior products can take over the entire market, hence rendering market shares unstable. Instability commands risk premia for investors. Market rents accrue mainly to investors and top managers and less to the average workers, hence increasing income inequality. Market rents are needed to incentivize innovation and compensate for its costs, but beyond a certain level they become detrimental. Public policy may stimulate innovation by reducing ex ante the market conditions which favor rent extraction from anti-competitive practices.

— Joe

Facebook’s impact on American democracy

“Tech journalists covering Facebook had a duty to cover what was happening before, during, and after the election,” wrote Alexis Madrigal in What Facebook Did to American Democracy, The Atlantic, Oct. 12, 2017.

Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to see how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic, and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and the Russian information ops agency.

But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how.

In What Facebook Did to American Democracy, Alexis Madrigal traces Facebook’s impact on American democracy. — Joe

Balkin on free speech in the algorithmic society

Here’s the abstract for Yale Law Prof Jack Balkin’s Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, UC Davis Law Review (2018 Forthcoming):

We have now moved from the early days of the Internet to the Algorithmic Society. The Algorithmic Society features the use of algorithms, artificial intelligence agents, and Big Data to govern populations. It also features digital infrastructure companies, large multi-national social media platforms, and search engines that sit between traditional nation states and ordinary individuals, and serve as special-purpose governors of speech.

The Algorithmic Society presents two central problems for freedom of expression. First, Big Data allows new forms of manipulation and control, which private companies will attempt to legitimate and insulate from regulation by invoking free speech principles. Here First Amendment arguments will likely be employed to forestall digital privacy guarantees and prevent consumer protection regulation. Second, privately owned digital infrastructure companies and online platforms govern speech much as nation states once did. Here the First Amendment, as normally construed, is simply inadequate to protect the practical ability to speak.

The first part of the essay describes how to regulate online businesses that employ Big Data and algorithmic decision making consistent with free speech principles. Some of these businesses are “information fiduciaries” toward their end-users; they must exercise duties of good faith and non-manipulation. Other businesses who are not information fiduciaries have a duty not to engage in “algorithmic nuisance”: they may not externalize the costs of their analysis and use of Big Data onto innocent third parties.

The second part of the essay turns to the emerging pluralist model of online speech regulation. This pluralist model contrasts with the traditional dyadic model in which nation states regulated the speech of their citizens.

In the pluralist model, territorial governments continue to regulate speech directly. But they also attempt to coerce or co-opt owners of digital infrastructure to regulate the speech of others. This is “new school” speech regulation. Digital infrastructure owners, and especially social media companies, now act as private governors of speech communities, creating and enforcing various rules and norms of the communities they govern. Finally, end users, civil society organizations, hackers, and other private actors repeatedly put pressure on digital infrastructure companies to regulate speech in certain ways and not to regulate it in others. This triangular tug of war — rather than the traditional dyadic model of states regulating the speech of private parties — characterizes the practical ability to speak in the algorithmic society.

The essay uses the examples of the right to be forgotten and the problem of fake news to illustrate the emerging pluralist model — and new school speech regulation — in action.

As private governance becomes central to freedom of speech, both end-users and nation states put pressure on private governance. Nation states attempt to co-opt private companies into becoming bureaucracies for the enforcement of hate speech regulation and new doctrines like the right to be forgotten. Conversely, end users increasingly demand procedural guarantees, due process, transparency, and equal protection from private online companies.

The more that end-users view businesses as governors, or as special-purpose sovereigns, the more end-users will expect — and demand — that these companies should conform to the basic obligations of governors towards those they govern. These obligations include procedural fairness in handling complaints and applying sanctions, notice, transparency, reasoned explanations, consistency, and conformity to rule of law values — the “law” in this case being the publicly stated norms and policies of the company. Digital infrastructure companies, in turn, will find that they must take on new social obligations to meet these growing threats and expectations from nation states and end-users alike.

Interesting. — Joe

Cambridge Analytica and the rise of weaponized AI propaganda in political election campaigns

Cambridge Analytica, a data mining firm known for being a leader in behavioral microtargeting for election processes (and for bragging about its contribution to the successful Trump presidential campaign), is being investigated by the House Permanent Select Committee on Intelligence. See April Glaser, Congress Is Investigating Trump Campaign’s Voter Targeting Firm as Part of the Russia Probe, Slate, Oct. 11, 2017. Jared Kushner, who ran the Trump campaign’s data operations, may eventually be implicated. See Jared Kushner In His Own Words On The Trump Data Operation The FBI Is Reportedly Probing, Forbes, May 26, 2017, and Did Russians Target Democratic Voters, With Kushner’s Help?, Newsweek, May 23, 2017.

Before joining the Trump campaign, Steve Bannon was on the board of Cambridge Analytica. The company’s primary financier is hedge fund billionaire and Breitbart investor Robert Mercer. Here’s a presentation at the 2016 Concordia Summit by Alexander Nix, CEO of Cambridge Analytica. Nix discusses the power of big data in global elections and Cambridge Analytica’s revolutionary approach to audience targeting, data modeling, and psychographic profiling for election processes around the world.

The Rise of the Weaponized AI Propaganda Machine discusses how this new automated propaganda machine is driving global politics. This is where big data meets computational psychology, where automated engagement scripts prey on human emotions in a propaganda network that accelerates ideas in minutes with political bots policing public debate. Highly recommended. See also Does Trump’s ‘Weaponized AI Propaganda Machine’ Hold Water? Forbes, March 5, 2017. — Joe

End note: In a separate probe, the UK’s Information Commissioner is investigating Cambridge Analytica for its role in the successful Leave.eu campaign in the UK.

IEEE Spectrum special report: Blockchain World

Sections of the report, which can be found here, include how smart contracts work, how blockchains work, blockchain terminology, how Wall Street firms plan to move trillions to blockchains in 2018, two experiments bringing blockchains to government in Illinois and Dubai, and more. — Joe

The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control

Here’s the abstract for Richard Warner and Robert H. Sloan’s The Ethics of the Algorithm: Autonomous Systems and the Wrapper of Human Control (July 22, 2017):

David Mindell notes in Our Robots, Ourselves, “For any apparently autonomous system, we can always find the wrapper of human control that makes it useful and returns meaningful data. In the words of a recent report by the Defense Science Board, ‘there are no fully autonomous systems just as there are no fully autonomous soldiers, sailors, airmen or Marines.’”

Designing and using “the wrapper of human control” means making moral decisions — decisions about what ought to happen. The point is not new, as the “soldiers, sailors, airmen or Marines” reference shows. What is new is the rise of predictive analytics, the process of using large data sets in order to make predictions.

Predictive analytics greatly exacerbates the long-standing problem of how to balance the benefits of data collection and analysis against the value of privacy, and its pervasive and ever-increasing use gives the tradeoff problem an urgency that can no longer be ignored. In tackling the tradeoff issues, it is not enough merely to address obviously invidious uses, like a recent face photo-editing app from a company called FaceApp: when users asked the app to increase the “hotness” of a photo, the app made skin tones lighter. We focus on widely accepted — or at least currently tolerated — uses of predictive analytics in credit rating, targeted advertising, navigation apps, search engine page ranking, and a variety of other areas. These uses yield considerable benefits, but they also impose significant costs through misclassification in the form of a large number of false positives and false negatives. Predictive analytics not only looks into your private life to construct its profiles of you; it often misrepresents who you are.

How should society respond? Our working assumption is that predictive analytics has significant benefits and should not be eliminated, and moreover, that it is now utterly politically infeasible to eliminate it. Thus, we propose making tradeoffs between the benefits and the costs by constraining the use and distribution of information. The constraints would have to apply across a wide range of complex situations. Is there an existing system that makes relevant tradeoffs by constraining the distribution and use of information across a highly varied range of contexts? Indeed, there is: informational norms. Informational norms are social norms that constrain not only the collection, but also the use and distribution of information. We focus on the use and distribution constraints. Those constraints establish an appropriately selective flow of information in a wide range of cases.

We contend they provide an essential “wrapper of human control” for predictive analytics. The obvious objection is that the relevant norms do not exist. Technology-driven economic, social, and political developments have far outpaced the slow evolution of norms. New norms will nonetheless evolve, and existing norms will adapt to condone surveillance. Reasonable public policy requires controlling the evolution and adaptation of norms to reach desirable outcomes.
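The misclassification point is worth pausing on. A quick back-of-the-envelope illustration (the numbers below are invented, not drawn from the article) shows why even an accurate predictive model can produce a large number of false positives when the predicted trait is rare:

    # Illustration with invented numbers: the base-rate effect behind false positives.
    population = 100_000
    base_rate = 0.01          # assume 1% of people actually have the predicted trait
    sensitivity = 0.95        # the model's true positive rate
    false_positive_rate = 0.05

    true_members = population * base_rate                                  # 1,000
    true_positives = true_members * sensitivity                            # 950
    false_positives = (population - true_members) * false_positive_rate    # 4,950
    precision = true_positives / (true_positives + false_positives)
    print(f"Share of positive calls that are correct: {precision:.0%}")    # ~16%

In this made-up scenario more than four out of five people the model flags are flagged wrongly, which is exactly the kind of cost the authors have in mind.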

— Joe

Are corporate legal departments ready for AI technology?

According to Thomson Reuters’ new report, Ready or Not: Artificial Intelligence and Corporate Legal Departments, “corporate counsel believe they are tech savvy but acknowledge that their comfort level and confidence with technology have limitations, specifically around artificial intelligence (AI).” From the press release:

The report notes that more than half (56%) of in-house attorneys either perceive that AI technology is not used or are not yet familiar with the use of AI technology in their legal department. And for others, there is skepticism about its reliability and cost-effectiveness. Despite the unknown, some in-house attorneys surveyed envision AI as being beneficial in increasing efficiency (17%), reducing costs (13%), minimizing risk (7%) and supporting document review (6%).

The top concern among respondents in using AI was cost (19%), as the mantra of doing more with less and budget constraints were key factors to adoption. Reliability (15%) was another concern, especially in areas of ethical considerations and confidentiality. A third concern is a constant with any new technology or process: change management (9%).

H/T to Bob Ambrogi’s LawSites post. — Joe

Russia’s interference in the 2016 US presidential election and information warfare

According to the declassified report, Assessing Russian Activities and Intentions in Recent US Elections: The Analytic Process and Cyber Incident Attribution, the CIA, FBI and NSA have “high confidence” that Russian President Vladimir Putin “ordered an influence campaign in 2016 aimed at the US presidential election” in order to “undermine public faith in the US democratic process, denigrate Clinton, and harm her electability and potential presidency.” The report also contends the Russian government “aspired to help President-elect Trump’s election chances when possible by discrediting Secretary Clinton and publicly contrasting her unfavorably to him.” See Russia and the U.S. Presidential Election (Jan. 17, 2017 IN10635) for the Congressional Research Service’s backgrounder.

Russian information warfare activities are the topic of Information Warfare: Russian Activities (Sept. 2, 2016 IN10563). From the report:

Russian doctrine typically refers to a holistic concept of “information war,” which is used to accomplish two primary aims:
• To achieve political objectives without the use of military force.
• To shape a favorable international response to the deployment of its military forces, or military forces with which Moscow is allied.

Tactics used to accomplish these goals include damaging information systems and critical infrastructure; subverting political, economic, and social systems; instigating “massive psychological manipulation of the population to destabilize the society and state”; and coercing targets to make decisions counter to their interests. Recent events suggest that Russia may be employing a mix of propaganda, misinformation, and deliberately misleading or corrupted disinformation in order to do so. And while Russian organizations appear to be using cyberspace as a primary medium through which these goals are achieved, the government also appears to potentially be using the physical realm to conduct more traditional influence operations including denying the deployment of troops in conflict areas and the use of online “troll armies” to propagate pro-Russian rhetoric.

These activities are placed in the larger context of US policy towards Russia in Russia: Background and U.S. Policy (Aug. 21, 2017 R44775). — Joe

Law Library of Congress launches legal reference Chatbot

One of the highlights of the American Association of Law Libraries (AALL) conference in Austin this year was the Innovation Tournament which pitted three librarians’ tech innovations against each other. With two prizes, each worth $2,500, up for grabs, the competition was pretty tough. There was a scanning project management innovation, a Virtual Reality presentation preparedness tool, and an innovative ChatBot for legal information assistance. The ChatBot really caught my attention as something that I would love to test out on a local level. — Greg Lambert, Now I want a Chatbot, 3 Geeks and a Law Blog, July 27, 2017

Now Greg, you and I can test drive a new chatbot that walks a user through a basic reference interview. According to In Custodia Legis, the new chatbot can connect a user to primary legal sources, law library research guides, and foreign law reports. The chatbot can also respond to a limited number of text commands. Go to the Law Library of Congress Facebook page to try the chatbot.
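For the curious, the mechanics behind a simple reference chatbot can be surprisingly modest. Here is a minimal, purely hypothetical sketch of keyword-based intent matching in Python; the intents and canned responses are invented for illustration and are not the Law Library’s actual implementation:

    # Hypothetical sketch of keyword-based intent matching for a reference chatbot.
    # The intents and responses are invented; this is not the Law Library's code.
    INTENTS = [
        (("research guide", "guides", "how do i start"),
         "Take a look at the Law Library's research guides."),
        (("foreign law", "another country", "international"),
         "The Law Library publishes foreign law reports; a librarian can point you to the right one."),
        (("statute", "case", "regulation", "primary source"),
         "For primary sources, start with the federal and state materials linked from the Law Library's site."),
    ]

    FALLBACK = "I'm not sure about that one. Try the Ask a Librarian service."

    def reply(message: str) -> str:
        """Return the canned response for the first intent whose keyword appears."""
        text = message.lower()
        for keywords, response in INTENTS:
            if any(keyword in text for keyword in keywords):
                return response
        return FALLBACK

    print(reply("Where can I find foreign law materials?"))

Production bots layer natural language understanding and usage analytics on top of this, but the basic pattern, match the question and hand back a curated pointer, is the same.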

H/T to Gary Price’s InfoDocket report. — Joe

Bloomberg Law’s New Feature, Points Of Law

Bloomberg Law announced a new research feature, Points of Law, a little over a week ago.  I’ve been playing around with it using the ATV injury problem I created for teaching online legal research concepts.  In summary, an ATV rider was injured while riding on someone else’s private property without permission.  The problem called for the researcher to identify relevant cases where assumption of risk was a viable defense and collect them for later analysis.  The jurisdiction is New York.

Let me explain a little about Points of Law before I dive into my experience with it.  Bloomberg’s press release describes the feature:

Points of Law offers a more efficient way to conduct case law research.  Through the application of machine learning to Bloomberg Law’s database of 13 million court opinions, Points of Law highlights language critical to the court’s holding, links this language to governing statements of law and relevant on-point case law.

Bloomberg Law provides context – connecting keyword search results to governing statements of law – and unparalleled breadth of coverage, generating one million Points of Law from our state and federal court opinion database.

I found the press release accurate.  I used one of the sample searches I set up for the research problem, <all-terrain vehicle and assumption of risk>.  The case law I expected to see in the list of results was there.  Some of the cases, not all, had a Points of Law icon on the right side of the text.  Clicking that highlights text that the AI in the database considers to be significant.  My search highlighted what I would describe as a combination of black letter law on a keyword-related topic or significant points on how the courts treat that topic.  The focus here was on assumption of risk, obviously, as an all-terrain vehicle is not a legal concept.

Here are some example results extracted from Marcano v. City of New York, 296 A.D.2d 43, 743 N.Y.S.2d 456 (App Div, 1st Dept 2002):

“Generally, the issue of assumption of risk is a question of fact for the jury.” (Lamey v Foley, 188 AD2d 157, 163-164.)

“The policy underlying this tort rule is intended to facilitate free and vigorous participation in athletic activities.” (Benitez v New York City Bd. of Educ., 73 NY2d 650, 657.)  [Discussing how assumption of the risk in sports is handled by the courts. – MG]

“Because of the factual nature of the inquiry, whether a danger is open and obvious is most often a jury question * * *.”

What I found most interesting about using Points of Law is how viewing multiple extracts informed me about assumption of risk without requiring a lot of lengthy analysis.  Now, not all cases in the search results were useful in my context where an ATV rider was injured.  At the same time, a researcher will find what they need to know conceptually about assumption of the risk as treated by the New York Courts.  I assume that applies to other legal doctrines as well.
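Out of curiosity about what might power a feature like this, here is a rough, hypothetical sketch of one way to surface candidate points of law: find sentences that recur nearly verbatim across many opinions.  This is only my guess at the general idea, not Bloomberg Law’s actual method, and every name in the sketch is invented:

    # Hypothetical sketch: flag sentences that recur nearly verbatim across
    # opinions as candidate "points of law." Not Bloomberg Law's actual method.
    import re
    from collections import defaultdict

    def normalize(sentence):
        """Lowercase and strip citations/punctuation so near-duplicates match."""
        s = re.sub(r"\(.*?\)", "", sentence)      # drop parenthetical citations
        s = re.sub(r"[^a-z ]", "", s.lower())     # keep letters and spaces only
        return " ".join(s.split())

    def candidate_points(opinions, min_opinions=3):
        """opinions maps an opinion id to its list of sentences; return sentences
        that appear, in normalized form, in at least min_opinions opinions."""
        seen = defaultdict(set)                   # normalized sentence -> opinion ids
        wording = {}                              # remember one original wording
        for opinion_id, sentences in opinions.items():
            for sentence in sentences:
                key = normalize(sentence)
                if len(key.split()) < 6:          # ignore short fragments
                    continue
                seen[key].add(opinion_id)
                wording.setdefault(key, sentence)
        return [(wording[k], sorted(ids)) for k, ids in seen.items()
                if len(ids) >= min_opinions]

A production system would obviously use machine learning over the full 13-million-opinion corpus rather than string matching, but the intuition, the same proposition of law restated and cited across many opinions, matches what the related-cases side window suggests.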

Another feature worth mentioning is that clicking on the highlighted phrase will open a side window that cites other cases expressing the same point of law (up to 10).  There is also a button that shows a citation map of the Point:

Bloomberg Cite Map.

Another button shows a list of opinions that expressed related concepts along with the Point text:

Bloomberg Related Points of Law

All in all, I think this is a nifty feature that researchers and litigators will actually use.  I wonder if it will integrate with any of the current general search products on the market, as in “Hey Google, find me cases in New York State that discuss assumption of risk in the context of recreational activities.”  If we now think that first year law students take the lazy route in legal research based on their Google use, just wait for the future to show up.

In the Not Everything is Perfect category, one case, Bierach v. Nichols, 248 A.D.2d 916, 669 N.Y.S.2d 988 (App Div, 3d Dept 1998), had one Point of Law listed but not highlighted in the text.  It was short enough that I was able to guess what the likely highlighted text would have been.  Oh well.  — Mark

What sort of information transparency is necessary for reliance on analytical tools?

According to RNRMarketResearch.com, the legal analytics market is growing at a compound annual growth rate of 32.7% and is projected to grow from $451.1 million in 2017 to $1.858 billion by 2022 (a quick check of that arithmetic is sketched below). But not all legal analytics products are created equal. On the AALL CS-SIS blog, Jonathan Germann demonstrates this by comparing Docket Navigator with Bloomberg Law. The latter comes up lacking. “As information professionals become regular users and gatekeepers of analytics tools, what information transparency is necessary for reliance?” asks Germann. He proceeds to provide a transparency checklist.
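For what it’s worth, the two dollar figures are consistent with the stated growth rate, assuming annual compounding over the five years from 2017 to 2022:

    # Sanity check of the projection: $451.1M growing at 32.7% per year
    # for five years (assumption: annual compounding, 2017 base year).
    base = 451.1                      # millions of dollars, 2017
    cagr = 0.327                      # 32.7% compound annual growth rate
    years = 5                         # 2017 through 2022
    projected = base * (1 + cagr) ** years
    print(round(projected, 1))        # about 1856.2, i.e. roughly $1.86 billion

— Joe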

LexisNexis tests research assistant chatbot

Here’s how LN chief product officer Jamie Buckley described the development in this VentureBeat story.

“Something that we’re playing with in the lab, we actually have an internal chatbot where you can start asking it questions. It replies with either an answer or what it thinks might be what you’re looking for, and it also helps you filter the results,” Buckley said. “So you might get 100,000 results on the return, but it can help to understand where are some of the differences between the results and then ask you clarifying questions based on that.”

— Joe

Algorithmic challenges to autonomous choice

Here’s the abstract for Michal Gal’s very interesting article, Algorithmic Challenges to Autonomous Choice (May 20, 2017):

Human choice is a foundational part of our social, economic and political institutions. This focus is about to be significantly challenged. Technological advances in data collection, data science, artificial intelligence, and communications systems are ushering in a new era in which digital agents, operated through algorithms, replace human choice with regard to many transactions and actions. While algorithms will be given assignments, they will autonomously determine how to carry them out. This game-changing technological development goes to the heart of autonomous human choice. It is therefore time to determine whether and, if so, under which conditions, are we willing to give up our autonomous choice.

To do so, this article explores the rationales that stand at the basis of human choice, and how they are affected by autonomous algorithmic assistants; it conscientiously contends with the “choice paradox” which arises from the fact that the decision to turn over one’s choices to an algorithm is, itself, an act of choice. As shown, while some rationales are not harmed – and might even be strengthened – by the use of autonomous algorithmic assistants, others require us to think hard about the meaning and the role that choice plays in our lives. The article then examines whether the existing legal framework is sufficiently potent to deal with this brave new world, or whether we need new regulatory tools. In particular, it identifies and analyzes three main areas which are based on choice: consent, intent and laws protecting negative freedom.

— JH

Computer Generated Works and Copyright: Selfies, Traps, Robots, AI and Machine Learning

From the abstract for Paul Lambert’s Computer Generated Works and Copyright: Selfies, Traps, Robots, AI and Machine Learning (2017):

Since the first generation of computer generated works protected by copyright, the types of computer generated works have multiplied further. This article examines some of the scenarios involving new types of computer generated works and recent claims for copyright protection. This includes contextual consideration and comparison of monkey selfies, camera traps, robots, artificial intelligence (AI) and machine learning. While often commercially important, questions arise as to whether these new manifestations of copyright works are actually protected under copyright at all.

— Joe

When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies

Here’s the abstract for Steven Goldfeder, Harry Kalodner, Dillon Reisman & Arvind Narayanan’s When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies:

We show how third-party web trackers can deanonymize users of cryptocurrencies. We present two distinct but complementary attacks. On most shopping websites, third party trackers receive information about user purchases for purposes of advertising and analytics. We show that, if the user pays using a cryptocurrency, trackers typically possess enough information about the purchase to uniquely identify the transaction on the blockchain, link it to the user’s cookie, and further to the user’s real identity. Our second attack shows that if the tracker is able to link two purchases of the same user to the blockchain in this manner, it can identify the user’s entire cluster of addresses and transactions on the blockchain, even if the user employs blockchain anonymity techniques such as CoinJoin. The attacks are passive and hence can be retroactively applied to past purchases. We discuss several mitigations, but none are perfect.
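The first attack is easy to picture. If a tracker observes the purchase amount and an approximate time of payment, a few lines suffice to shortlist matching transactions on the public ledger. The sketch below is purely illustrative; the record format, field names, and tolerances are invented, not the authors’ code:

    # Illustrative sketch of the linking idea: shortlist on-chain transactions
    # that match a purchase amount and time observed by a tracker.
    # The transaction format and tolerances here are invented.
    from dataclasses import dataclass

    @dataclass
    class Tx:
        txid: str
        amount_btc: float
        timestamp: int            # Unix time, seconds

    def candidate_txs(transactions, paid_btc, paid_at, amount_tol=1e-8, window=900):
        """Return transactions within `window` seconds and `amount_tol` BTC of the purchase."""
        return [tx for tx in transactions
                if abs(tx.amount_btc - paid_btc) <= amount_tol
                and abs(tx.timestamp - paid_at) <= window]

If the shortlist collapses to a single transaction, which the authors report is typically the case, the tracker can tie the shopper’s cookie, and therefore identity, to a blockchain address, and the paper’s second attack then clusters the rest of that user’s addresses.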

H/T Freedom to Tinker post. — Joe

Data’s intangibility challenges traditional international law on jurisdiction

Here’s the abstract for Kristen Eichensehr’s Data Extraterritoriality:

Data’s intangibility poses significant difficulties for determining where data is located. The problem is not that data is located nowhere, but that it may be located anywhere, and at least parts of it may be located nearly everywhere. And access to data does not depend on physical proximity.

These implications of data’s intangibility challenge traditional international law on jurisdiction. International jurisdictional rules rest in large part on States’ sovereignty over a particular territory and authority over people and things within it, and they presuppose that the location of people and things are finite and knowable. The era of cloud computing — where data crosses borders seamlessly, parts of a single file may exist in multiple jurisdictions, and data’s storage location often depends on choices by private companies — raises new and difficult questions for States exercising enforcement authority, companies receiving requests from law enforcement agencies, and individuals seeking to protect their privacy.

As a part of the Texas Law Review’s symposium on the Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, this Essay critiques Tallinn 2.0’s rules and commentary on the international law governing jurisdiction, especially its treatment of extraterritorial jurisdiction. The Essay first describes the Manual’s rules and commentary on extraterritorial jurisdiction, and then raises a procedural objection to the Manual’s approach, namely that ongoing debates about how to determine data’s location make the law too unsettled for a restatement project. The Essay then highlights several substantive concerns with and questions raised by the Manual’s approach. In light of these critiques, the Essay concludes with some suggestions on how to make progress in resolving conflicting international claims to jurisdiction over data going forward.

— Joe

Would curbing social bots be effective in reducing the spread of fake news?

Here’s the abstract for Chengcheng Shao, Giovanni Luca Ciampaglia, Onur Varol, Alessandro Flammini, and Filippo Menczer’s The Spread of Fake News by Social Bots:

The massive spread of fake news has been identified as a major global risk and has been alleged to influence recent elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of digital misinformation and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. However, to date, these reports have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots play a key role in the spread of fake news. Accounts that actively spread misinformation are significantly more likely to be bots. Automated accounts are particularly active in the early spreading phases of viral claims, and tend to target influential users. Humans are vulnerable to this manipulation, retweeting bots who post false news. Successful sources of false and biased claims are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.

H/T to Gary Price’s InfoDocket post. — Joe

Should Law library ChatBots join AALL?

In a recent 3 Geeks post Greg Lambert promotes the use of law library ChatBots to answer routine service questions from library users, like “what’s my password?” See Now I Want a ChatBot. Not a bad idea as long as law library ChatBots are required to join AALL (and pay the cost of full membership). 😉 — Joe