Category Archives: Information Technology

The Trump Bubble? Tracking Trump’s impact on your investments

From the Sentieo Blog’s post titled Introducing the Sentieo Trump Tracker: Follow The President’s Impact on Your Investments: “Today, we are excited to introduce the [Sentieo] Trump Tracker. It’s a bot that constantly scans new public financial documents for mentions of President Trump. These documents include all SEC filings, conference call transcripts, investor presentations, press releases, and more. The bot instantly surfaces new mentions of Trump as soon as they’re published, while intelligent queries automatically sort them into topics like Obamacare, Mexico, and NAFTA. … Anyone interested in following the administration’s impact on public companies can engage with the Trump Tracker by checking the dedicated website, following the @trumptrackerbot Twitter account, or signing up for a daily email alert on the site.” Will we call the stock market’s recent performance the “Trump Bubble”?
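Mechanically, a tracker like this pairs a mention scan with topic-sorting queries. Here is a minimal sketch of that shape in Python; the function names and keyword lists are illustrative assumptions, not Sentieo's actual implementation:

```python
# Hypothetical sketch: surface sentences mentioning a tracked term,
# then sort each mention into topics by simple keyword queries.
TOPIC_KEYWORDS = {
    "Obamacare": ["obamacare", "affordable care act"],
    "Mexico": ["mexico", "border wall"],
    "NAFTA": ["nafta", "trade agreement"],
}

def scan_document(doc: str, term: str = "trump") -> list[str]:
    """Return the sentences in a filing that mention the tracked term."""
    return [s.strip() for s in doc.split(".") if term in s.lower()]

def classify_mention(passage: str) -> list[str]:
    """Tag a mention with every topic whose keywords appear in it."""
    text = passage.lower()
    return [topic for topic, words in TOPIC_KEYWORDS.items()
            if any(w in text for w in words)]

doc = "We expect Trump to renegotiate NAFTA. Quarterly revenue rose."
for hit in scan_document(doc):
    print(hit, "->", classify_mention(hit))
```

A production system would sit on real document feeds and query infrastructure; the point is only the scan-then-classify shape.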

On a related note, Sentieo recently analyzed over 9 million financial documents from 35,000+ companies globally for mentions of Trump and Obama during their respective campaigns. Details here. — Joe

Webinar: Principles and practices of machine learning

Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. SAS data scientist Patrick Hall discusses the principles of machine learning, the multidisciplinary nature of data analysis and the traditional methods used in machine learning applications in this on-demand webinar. — Joe
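As a toy illustration of that "learn from and make predictions on data" idea (our example, not the webinar's), here is a one-variable least-squares fit in plain Python: `fit` learns parameters from observed pairs, and `predict` applies them to new inputs.

```python
# Minimal "learn from data, then predict" example: ordinary
# least-squares for a single variable, written in plain Python.
def fit(xs, ys):
    """Learn the slope and intercept that minimize squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def predict(model, x):
    """Apply the learned parameters to a new input."""
    slope, intercept = model
    return slope * x + intercept

model = fit([1, 2, 3, 4], [2, 4, 6, 8])   # training data follows y = 2x
print(predict(model, 5))                  # → 10.0
```

Real machine-learning pipelines add feature engineering, regularization, and validation, but the train-then-predict loop is the same.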

What have we learned from the Web so far?

“For a researcher in the twenty-second century, it will seem unimaginable that someone studying the twenty-first century would do anything but draw heavily on the online world to tell them about peoples’ changing lives. Currently, however, the web remains an almost untapped source for research. This book aims to make a start in this direction,” write Niels Brügger and Ralph Schroeder in the Introduction to their compilation titled The Web as History (UCL Press, 2017). Here’s the blurb:

The World Wide Web has now been in use for more than 20 years. From early browsers to today’s principal source of information, entertainment and much else, the Web is an integral part of our daily lives, to the extent that some people believe ‘if it’s not online, it doesn’t exist.’ While this statement is not entirely true, it is becoming increasingly accurate, and reflects the Web’s role as an indispensable treasure trove. It is curious, therefore, that historians and social scientists have thus far made little use of the Web to investigate historical patterns of culture and society, despite making good use of letters, novels, newspapers, radio and television programmes, and other pre-digital artefacts.

This volume argues that now is the time to question what we have learnt from the Web so far. The 12 chapters explore this topic from a number of interdisciplinary angles – through histories of national web spaces and case studies of different government and media domains – as well as an introduction that provides an overview of this exciting new area of research.

An open access PDF version of the book is available here. Recommended. — Joe

Holding algorithms accountable

Here’s the abstract of Accountable Algorithms, 165 University of Pennsylvania Law Review ___ (2017) (forthcoming). This very interesting article was written by Joshua Kroll, Joanna Huey, Solon Barocas, Edward Felten, Joel Reidenberg, David Robinson and Harlan Yu.

Many important decisions historically made by people are now made by computers. Algorithms count votes, approve loan and credit card applications, target citizens or neighborhoods for police scrutiny, select taxpayers for an IRS audit, and grant or deny immigration visas.

The accountability mechanisms and legal standards that govern such decision processes have not kept pace with technology. The tools currently available to policymakers, legislators, and courts were developed to oversee human decision-makers and often fail when applied to computers instead: for example, how do you judge the intent of a piece of software? Additional approaches are needed to make automated decision systems — with their potentially incorrect, unjustified or unfair results — accountable and governable. This Article reveals a new technological toolkit to verify that automated decisions comply with key standards of legal fairness.

We challenge the dominant position in the legal literature that transparency will solve these problems. Disclosure of source code is often neither necessary (because of alternative techniques from computer science) nor sufficient (because of the complexity of code) to demonstrate the fairness of a process. Furthermore, transparency may be undesirable, such as when it permits tax cheats or terrorists to game the systems determining audits or security screening.

The central issue is how to assure the interests of citizens, and society as a whole, in making these processes more accountable. This Article argues that technology is creating new opportunities — more subtle and flexible than total transparency — to design decision-making algorithms so that they better align with legal and policy objectives. Doing so will improve not only the current governance of algorithms, but also — in certain cases — the governance of decision-making in general. The implicit (or explicit) biases of human decision-makers can be difficult to find and root out, but we can peer into the “brain” of an algorithm: computational processes and purpose specifications can be declared prior to use and verified afterwards.

The technological tools introduced in this Article apply widely. They can be used in designing decision-making processes from both the private and public sectors, and they can be tailored to verify different characteristics as desired by decision-makers, regulators, or the public. By forcing a more careful consideration of the effects of decision rules, they also engender policy discussions and closer looks at legal standards. As such, these tools have far-reaching implications throughout law and society.

Part I of this Article provides an accessible and concise introduction to foundational computer science concepts that can be used to verify and demonstrate compliance with key standards of legal fairness for automated decisions without revealing key attributes of the decision or the process by which the decision was reached. Part II then describes how these techniques can assure that decisions are made with the key governance attribute of procedural regularity, meaning that decisions are made under an announced set of rules consistently applied in each case. We demonstrate how this approach could be used to redesign and resolve issues with the State Department’s diversity visa lottery. In Part III, we go further and explore how other computational techniques can assure that automated decisions preserve fidelity to substantive legal and policy choices. We show how these tools may be used to assure that certain kinds of unjust discrimination are avoided and that automated decision processes behave in ways that comport with the social or legal standards that govern the decision. We also show how algorithmic decision-making may even complicate existing doctrines of disparate treatment and disparate impact, and we discuss some recent computer science work on detecting and removing discrimination in algorithms, especially in the context of big data and machine learning. And lastly in Part IV, we propose an agenda to further synergistic collaboration between computer science, law and policy to advance the design of automated decision processes for accountability.

Recommended. — Joe

Does Facebook’s personalization of social media pages through the use of machine-learning algorithms constitute the “development” of content under the Communications Decency Act?

Catherine A. Tremble attempts to answer that question in her forthcoming Fordham Law Review note, Wild Westworld: The Application of Section 230 of the Communications Decency Act to Social Networks’ Use of Machine-Learning Algorithms.

Here’s the abstract:

On August 10th, 2016, a complaint filed in the Eastern District of New York formally accused Facebook of aiding the execution of terrorist attacks. The complaint depicted user-generated posts and groups promoting and directing the incitement of terrorist activities. Under section 230 of the Communications Decency Act (CDA), Interactive Service Providers (ISPs), such as Facebook, cannot be held liable for user-generated content where the ISP did not create or develop the information. However, this case stands out because it seeks to hold Facebook liable not only for the content of third parties, but also for the effect its personalized machine-learning algorithms — or “services” — have had on the ability of terrorists to orchestrate and execute attacks. By alleging that Facebook’s conduct goes beyond the mere act of publication, and includes the actual services’ effect on terrorists’ abilities to more effectively execute attacks, the complaint seeks to prevent the court from granting section 230 immunity to Facebook.

This Note argues that Facebook’s services — specifically the personalization of social media pages through the use of machine-learning algorithms — constitute the “development” of content and as such do not qualify for immunity under section 230 of the CDA. Recognizing the challenge of applying a static statute to a shifting technological landscape, this Note analyzes recent jurisprudential evolutions in section 230 doctrine to revise the original analytical framework applied in early cases. This Framework is guided by congressional and public policy goals but evolves to demonstrate awareness of technological evolution and ability. It specifically tailors section 230 immunity to account for behavioral data mined for ISP use, and the effect the use of that data has on users — two issues that courts have yet to confront. This Note concludes that, under the updated section 230 framework, personalized machine-learning algorithms made effective through the collection of individualized behavioral data make ISPs co-developers of content and as such bar them from section 230 immunity.

— Joe

Artificial intelligence in legal research

ILTA’s Beyond the Hype: Artificial Intelligence in Legal Research webinar was conducted last month and features ROSS Intelligence CEO and co-founder Andrew Arruda. The link takes you to the archived webinar. Interesting. — Joe

The ethics of algorithms

From the abstract of The Ethics of Algorithms: Mapping the Debate, Big Data & Society, Vol. 3(2)(2016) by Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter and Luciano Floridi (all Oxford Internet Institute, Univ. of Oxford):

In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences affecting individuals as well as groups and whole societies. This paper makes three contributions to clarify the ethical importance of algorithmic mediation. It provides a prescriptive map to organise the debate. It reviews the current discussion of ethical aspects of algorithms. And it assesses the available literature in order to identify areas requiring further work to develop the ethics of algorithms.

— Joe

Risk assessment algorithm used in bail hearings

“As part of a bold effort at bail reform,” writes Ephrat Livni, “the state of New Jersey replaced bail hearings with algorithmically informed risk assessments this year. Anyone’s eligible for release, no money down, if they meet certain criteria. To ensure unbiased, scientific decisions, judges use a machine-generated score. … The Public Safety Assessment (PSA) formula that New Jersey is now using for bail—along with about 20 other jurisdictions that employ it less extensively—aims to make risk calculations neutral, purely evidence-based, and premised on data. It compares risks and outcomes in a database of 1.5 million cases from 300 jurisdictions nationwide, producing a score of one to six for the defendant based on the information.”
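Mechanically, a points-based assessment of this kind can be sketched as follows. The factors, weights, and caps below are illustrative assumptions, not the actual PSA formula:

```python
# Hypothetical points-based risk score on the 1-6 scale Livni describes.
# The factors and weights here are invented for illustration only.
def risk_score(prior_convictions: int, prior_failures_to_appear: int,
               pending_charge: bool) -> int:
    """Map case facts to a score from 1 (lowest risk) to 6 (highest)."""
    points = min(prior_convictions, 3)           # cap each factor's weight
    points += min(prior_failures_to_appear, 2)
    points += 1 if pending_charge else 0
    return 1 + min(points, 5)                    # clamp to the 1-6 scale

print(risk_score(0, 0, False))  # → 1
print(risk_score(4, 3, True))   # → 6
```

Note how every design choice (which factors count, how they are weighted, where the caps sit) is a policy judgment baked into the arithmetic; this is exactly where the neutrality questions arise.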

“The automated recommendation”, Livni adds, “serves as a guide and doesn’t replace judicial discretion. Still, the program raises questions about the claimed neutrality of machine reasoning, and the wisdom of reliance on mechanical judgment.” For more, see In the US, some criminal court judges now use algorithms to guide decisions on bail. — Joe

Balkin’s three basic laws of robotics for the algorithmic society

From the abstract of Yale Law prof Jack Balkin’s The Three Laws of Robotics in the Age of Big Data, 78 Ohio State Law Journal ___ (2017) (forthcoming):

This essay introduces these basic legal principles using four key ideas: (1) the homunculus fallacy; (2) the substitution effect; (3) the concept of information fiduciaries; and (4) the idea of algorithmic nuisance.

The homunculus fallacy is the attribution of human intention and agency to robots and algorithms. It is the false belief there is a little person inside the robot or program who has good or bad intentions.

The substitution effect refers to the multiple effects on social power and social relations that arise from the fact that robots, AI agents, and algorithms substitute for human beings and operate as special-purpose people.

The most important issues in the law of robotics require us to understand how human beings exercise power over other human beings mediated through new technologies. The “three laws of robotics” for our Algorithmic Society, in other words, should be laws directed at human beings and human organizations, not at robots themselves.

Behind robots, artificial intelligence agents and algorithms are governments and businesses organized and staffed by human beings. A characteristic feature of the Algorithmic Society is that new technologies permit both public and private organizations to govern large populations. In addition, the Algorithmic Society also features significant asymmetries of information, monitoring capacity, and computational power between those who govern others with technology and those who are governed.

With this in mind, we can state three basic “laws of robotics” for the Algorithmic Society:

First, operators of robots, algorithms and artificial intelligence agents are information fiduciaries who have special duties of good faith and fair dealing toward their end-users, clients and customers.

Second, privately owned businesses who are not information fiduciaries nevertheless have duties toward the general public.

Third, the central public duty of those who use robots, algorithms and artificial intelligence agents is not to be algorithmic nuisances. Businesses and organizations may not leverage asymmetries of information, monitoring capacity, and computational power to externalize the costs of their activities onto the general public. The term “algorithmic nuisance” captures the idea that the best analogy for the harms of algorithmic decision making is not intentional discrimination but socially unjustified “pollution” – that is, using computational power to make others pay for the costs of one’s activities.

Obligations of transparency, due process and accountability flow from these three substantive requirements. Transparency – and its cousins, accountability and due process – apply in different ways with respect to all three principles. Transparency and/or accountability may be an obligation of fiduciary relations, they may follow from public duties, and they may be a prophylactic measure designed to prevent unjustified externalization of harms or in order to provide a remedy for harm.

Very interesting. — Joe

Primer on artificial intelligence in legal services

Here’s the blurb for Joanna Goodman’s Robots in Law: How Artificial Intelligence is Transforming Legal Services:

Although 2016 was a breakthrough year for artificial intelligence (AI) in legal services in terms of market awareness and significant take-up, legal AI represents evolution rather than revolution. Since the first ‘robot lawyers’ started receiving mainstream press coverage, many law firms, other legal service providers and law colleges are being asked what they are doing about AI. Robots in Law: How Artificial Intelligence is Transforming Legal Services is designed to provide a starting point in the form of an independent primer for anyone looking to get up to speed on AI in legal services. The book is organized into four distinct sections: Part I: Legal AI – Beyond the hype; Part II: Putting AI to work; Part III: AI giving back – Return on investment; and Part IV: Looking ahead. The first three present an in-depth overview, and analysis, of the current legal AI landscape; the final section includes contributions from AI experts with connections to the legal space, on the prospects for legal AI in the short-term future. Along with the emergence of New Law and the burgeoning lawtech start-up economy, AI is part of a new dynamic in legal technology and it is here to stay. The question now is whether AI will find its place as a facilitator of legal services delivery, or whether it will initiate a shift in the value chain that will transform the legal business model.

For more, see Bob Ambrogi’s This Week In Legal Tech column (ATL). — Joe

Tracking Trump: there are apps for that

In The Best Apps To Track Trump’s Legal Changes, Bob Ambrogi identifies three apps designed to monitor the Trump administration’s actions.

  1. The goal of Track Trump is “to isolate actual policy changes from rhetoric and political theater and to hold the administration accountable for the promises it made.”
  2. The Cabinet Center for Administrative Transition (CCAT) from the law firm Cadwalader, Wickersham & Taft collects “pronouncements, position papers, policy statements, and requirements as to legislative and regulatory change related to the financial service agenda of the President, the new administration and the new Congress. It tracks legislative developments, executive orders, policy positions, regulations, the regulators themselves, and relevant Trump administration news.”
  3. Columbia Law School’s Trump Human Rights Tracker follows the Trump administration’s actions and their implications for human rights.

— Joe

How likely is algorithmic transparency?

Here’s the abstract for Opening the Black Box: In Search of Algorithmic Transparency by Rachel Pollack Ichou (University of Oxford, Oxford Internet Institute):

Given the importance of search engines for public access to knowledge and questions over their neutrality, there have been many theoretical debates about the regulation of the search market and the transparency of search algorithms. However, there is little research on how such debates have played out empirically in the policy sphere. This paper aims to map how key actors in Europe and North America have positioned themselves in regard to transparency of search engine algorithms and the underlying political and economic ideas and interests that explain these positions. It also discusses the strategies actors have used to advocate for their positions and the likely impact of their efforts for or against greater transparency on the regulation of search engines. Using a range of qualitative research methods, including analysis of textual material and elite interviews with a wide range of stakeholders, this paper concludes that while discussions around algorithmic transparency will likely appear in future policy proposals, it is highly unlikely that search engines will ever be legally required to share their algorithms due to a confluence of interests shared by Google and its competitors. It ends with recommendations for how algorithmic transparency could be enhanced through qualified transparency, consumer choice, and education.

— Joe

Seven major themes about the algorithm era

Pew Internet’s report, Code-Dependent: Pros and Cons of the Algorithm Age, identifies seven major themes of the algorithm era. See also NPR’s Will Algorithms Erode Our Decision-Making Skills? — Joe

Baker & Hostetler licenses ROSS Intelligence’s AI product for bankruptcy matters

In what is reported to be ROSS Intelligence’s first BigLaw client, Baker & Hostetler has licensed ROSS for use by its Bankruptcy, Restructuring and Creditors’ Rights team. Here’s the joint press release. — Joe

Protecting the public from unregulated, non-traditional legal service providers

Legal software publishing companies and legal application developers that serve the public directly, beware. A discussion paper from the ABA Commission on the Future of Legal Services invites comments on a proposed regulatory scheme that would impose restrictions on currently unregulated, non-traditional legal service providers. See Issues Paper Concerning Unregulated LSP Entities (March 31, 2016). Is the ABA protecting the “public interest” or attempting to expand its control over competitive threats to the organized bar’s hegemony? — Joe

Cognitive computing at ROSS (now) and Thomson Reuters Legal (forthcoming)

“When you pair the computer with the human, you get something way better than either the human or the computer. If you look at it from that formula, humans will always be on the winning side.” — Andrew Arruda, CEO and co-founder of ROSS.

Ed Sohn, Senior Director at Thomson Reuters Legal Managed Services (formerly Pangea3), reviews recent developments in cognitive computing at Thomson Reuters and ROSS in Can Computers Beat Humans At Law? (Above the Law, March 23, 2016). One snip from the very interesting blog post is displayed above. — Joe

The Digital Future of the Oldest Information Profession

Recently, Ray Worthy Campbell (Peking University School of Transnational Law) uploaded The Digital Future of the Oldest Information Profession to SSRN. Very interesting. From the essay’s introduction:

This article will look at three ways legal practice is being disrupted by the digital information revolution, and then examine how education for legal service providers might evolve to best serve society in light of those disruptions.

First, from outside legal practice have come and will come changes in how white collar work is performed that affect law practice along with other occupations. For example, the digitization of documents and the development of digitally monitored business process management both arose outside of law practice, but have combined to change how documents get reviewed and processed in major litigation and corporate deals. Digital documents are easy to ship worldwide and susceptible to machine review, and technology enables higher levels of planning and performance tracking than were possible in the era of legal pads. While not limited to law practice, such exogenous business process changes have had and will continue to have a significant impact on how traditional legal businesses operate.

Second, digital products and processes will arise or be modified specifically to solve legal problems without resort to traditional legal practice or analysis. An example of this type of innovation would be LexMachina or IBM’s legal application for its Watson product, ‘Ross’, which apply Big Data techniques to legal issues. Other examples would be rule-based document assembly systems, which assess client needs and deliver appropriate legal documents. Some of these digitized systems will replace lawyers as software-only solutions, while others will assist lawyers. Still others – and perhaps the most economically significant, if regulation allows – will enable non-lawyers to serve as the interface between client needs and digitized expert knowledge, delivering an acceptable level of problem solving without recourse to traditionally trained lawyers.

Third, and not least important, will be changes in the law itself to adapt to a digital environment – that is, the ways in which legal rules and processes will need to evolve to function effectively and justly in a digital world. Many of the new digital technologies rely on massive data sets, and the justice system does not – and perhaps should not – create data in the same way Internet sites or retail supply chains do. Just as businesses and government bureaucracies have had to adjust workflows and information capture to take advantage of digital possibilities, pressure will be brought on legal systems to restructure in order to be digital friendly. As rules become embedded in software code, perhaps even removing the option for choice, legal thinkers will have to address how such embedded directives fit into a system of rules formerly captured only in text.
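The rule-based document assembly systems Campbell mentions can be sketched, very roughly, like this; the templates and intake fields are hypothetical, not any vendor's product:

```python
# Hypothetical rule-based document assembly: intake answers select a
# template and fill in its blanks. The templates are invented examples.
TEMPLATES = {
    "will": "I, {name}, being of sound mind, leave my estate to {beneficiary}.",
    "lease": "{name} leases the premises at {address} for {term} months.",
}

def assemble(doc_type: str, answers: dict) -> str:
    """Pick the template matching the client's need and fill in the answers."""
    try:
        return TEMPLATES[doc_type].format(**answers)
    except KeyError as exc:
        raise ValueError(f"missing template or answer: {exc}") from None

print(assemble("will", {"name": "Jane Doe", "beneficiary": "her children"}))
```

This is the software-only end of Campbell's spectrum; the harder design questions arise when the rules must encode genuine legal judgment rather than fill-in-the-blank drafting.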

— Joe

Docket-based research needed to find “submerged precedent”

“[S]ubmerged precedent pushes docketology in an uncharted direction by identifying a mass of reasoned opinions—putative precedent and not mere evidence of decision-making—that exist only on dockets,” writes Elizabeth McCuskey (Toledo) in Submerged Precedent, 16 Nevada Law Journal ___ (2016) (forthcoming) [SSRN]. Professor McCuskey adds “[s]ubmerged precedent thus raises the specter that docket-based research may be necessary in some areas to ascertain an accurate picture of the law itself, not just trial courts’ administration of it.” Here’s the abstract for this very interesting article:

This article scrutinizes the intensely individual, yet powerfully public nature of precedent, inquiring which decisions are made available for posterity in the body of precedent and which remain solely with their authors and the instant parties. At its broadest level, this article investigates the intricate relationships among precedent, access, and technology in the federal district courts, examining how technology can operationalize precedent doctrine.

Theory and empiricism inform these inquiries. Drawing from a sample of district court decisions on Grable federal question jurisdiction, the study presented here identifies and explores the phenomenon of “submerged precedent” – reasoned opinions hidden on court dockets, and not included in the Westlaw or Lexis databases. The study detailed here found that submergence may obscure as much as 30% of reasoned law on Grable federal questions from the view of conventional research.

This article investigates the structural and institutional forces behind submergence, as well as its doctrinal implications. By effectively insulating some reasoned opinions from future use by judges and practitioners, the phenomenon of submerged precedent threatens to skew substantive law and erode the precedential system’s animating principles of fairness, efficiency, and legitimacy. Most urgently, the existence of submerged precedent suggests that Congress’s mandate for public access to federal precedents in the E-Government Act of 2002 lies unfulfilled in important respects. The application of precedent theory informed by empirical observation suggests a more thoughtful approach to technology and public access to precedent.
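As a back-of-the-envelope illustration of the submergence rate the study measures (the sample counts below are invented, chosen only to reproduce the roughly 30% figure):

```python
# Toy illustration: the submergence rate is the share of reasoned
# opinions found only on dockets, absent from the commercial databases.
sampled_opinions = 200       # reasoned opinions pulled from court dockets
found_in_databases = 140     # of those, also retrievable in Westlaw/Lexis
submerged = sampled_opinions - found_in_databases
rate = submerged / sampled_opinions
print(f"{rate:.0%} of sampled opinions are submerged")  # → 30% ...
```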

— Joe

Commercializing AI: TR’s Watson Initiative to launch global financial regulation product by year’s end

Among several other product announcements, Thomson Reuters Legal recently disclosed that it will release in beta the first legal product using Watson’s cognitive computing technologies by year’s end. On Dewey B Strategic, Jean O’Grady writes

Ever since TR announced their collaboration with IBM Watson last October, the legal community has been impatient to learn how this alliance will manifest in a legal product. We still don’t know, but TR did promise that they will be the first company to build a legal product using Watson technology. The alliance will combine IBM’s cognitive computing with TR’s deep domain expertise. A panel of executives from TR and Watson revealed that there will be a beta product available by the end of 2016. Their first collaboration will focus on taming the complexities of global financial regulation.

Bob Ambrogi adds “The product will help users untangle the sometimes-confusing web of global legal and regulatory requirements and will be targeted at customers in corporate legal, corporate compliance and law firms. Initially, it will focus on financial services, [Erik Laughlin, managing director, Legal Managed Services and Corporate Segment, and head of the Watson Initiative] suggested, but will also address other domains important to corporations.”

Very interesting. Wouldn’t it be something if TR were prepared to demonstrate how this product will work at AALL in Chicago this year? — Joe

Librarians Are Here To Stay

I get asked every now and then about the future of librarians.  I work in an academic environment.  I get questions from students, faculty members, the general public, other librarians, you name ’em.  The types of questions I get are contrasted, to some extent, with statements that with everything on the Internet we will be obsolete.  I’m sure many librarians, not just law librarians, hear that.  Those with that attitude tend to think that because they never use a librarian’s services, no one else would need that assistance either.

All of you should know, for example, that Google offers free case law extending back approximately 50 years for state cases and 80 years for federal cases.  I have found unreported cases and slip opinions in the archive.  My point is that Google is hardly a secret to the Internet-going world.  At the same time, I get calls from non-law libraries about case law, and the librarian or patron at the other end seems to have no idea that this archive exists.  They are delighted once they find out it exists.  Public patrons in particular seem happy to know that they don’t have to trudge to downtown Chicago to find accurate case law that isn’t behind a paywall.

I encounter students almost every day who seem not to have a clue as to how to read a catalog search result.  They’ll flash their phone or tablet screens at me and ask me what to do to get a copy.  Sometimes the answer is as simple as pointing out the location on a paper map.  Other times it can be pointing out that there is a link on the record that gives instant access as an e-book.

Let me state categorically that I do not think these circumstances or the people asking about them are dumb.  They obviously either do not know that the resources exist or have not thought about how to get the information on their own.  That is where we come in.  The public Internet has been around for at least 25 years, if not longer.  There is so much out there, and there are so many strategies for locating information that may or may not be behind a paywall.  There are scams to avoid.  I remember a phone call where an individual said she had been contacted by phone by someone claiming to be from the IRS demanding a tax payment.  I looked up the IRS page and read the statement detailing how the Service contacts individuals.  It noted that the Service never contacts people by phone demanding money.  For those pondering the “unauthorized practice of law” angle, I read the text verbatim and let her draw her own conclusions.

Information is power.  We know how to find it and put it in context.  I would never claim to know everything there is to know about content online.  At the same time, there is no shortage of people who draw upon that experience and that of my colleagues.  For those who claim they don’t need us, fine.  But don’t assume that no one needs us.  Librarians will be here for a long time to come, if my experience is any indication.