Category Archives: Information Technology

Algorithms as illegal agreements

Here’s the abstract for Michal Gal’s Algorithms as Illegal Agreements, Berkeley Technology Law Journal, Forthcoming:

Despite the increased transparency, connectivity, and search abilities that characterize the digital marketplace, the digital revolution has not always yielded the bargain prices that many consumers expected. What is going on? Some researchers suggest that one factor may be coordination between the algorithms used by suppliers to determine trade terms. Simple coordination-facilitating algorithms are already available off the shelf, and such coordination is only likely to become more commonplace in the near future. This is not surprising. If algorithms offer a legal way to overcome obstacles to profit-boosting coordination, and create a jointly profitable status quo in the market, why should suppliers not use them? In light of these developments, seeking solutions – both regulatory and market-driven – is timely and essential. While current research has largely focused on the concerns raised by algorithmic-facilitated coordination, this article takes the next step, asking to what extent current laws can be fitted to effectively deal with this phenomenon.

To meet this challenge, this article advances in three stages. The first part analyzes the effects of algorithms on the ability of competitors to coordinate their conduct. While this issue has been addressed by other researchers, this article seeks to contribute to the analysis by systematically charting the technological abilities of algorithms that may affect coordination in the digital ecosystem in which they operate. Special emphasis is placed on the fact that the algorithm is a “recipe for action”, which can be directly or indirectly observed by competitors. The second part explores the promises as well as the limits of market solutions. In particular, it considers the use of algorithms by consumers and off-the-grid transactions to counteract some of the effects of algorithmic-facilitated coordination by suppliers. The shortcomings of such market solutions lead to the third part, which focuses on the ability of existing legal tools to deal effectively with algorithmic-facilitated coordination, while not harming the efficiencies they bring about. The analysis explores three interconnected questions that stand at the basis of designing a welfare-enhancing policy: What exactly do we wish to prohibit, and can we spell this out clearly for market participants? What types of conduct are captured under the existing antitrust laws? And is there justification for widening the regulatory net beyond its current prohibitions in light of the changing nature of the marketplace? In particular, the article explores the application of the concepts of plus factors and facilitating practices to algorithms. The analysis refutes the Federal Trade Commission’s acting Chairwoman’s claim that current laws are sufficient to deal with algorithmic-facilitated coordination.
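
The “recipe for action” idea can be made concrete with a purely hypothetical sketch (not drawn from the article; the price floor and starting prices below are invented for illustration). When every seller runs the same simple, observable repricing rule, prices can freeze at a jointly profitable level without any explicit agreement:

```python
def reprice(rival_prices, floor):
    """Match the lowest rival price, but never go below a common floor.

    A deliberately simple, observable rule: any competitor who inspects
    the algorithm (or just its behavior over time) can predict its response.
    """
    return max(min(rival_prices), floor)

# Two sellers running the identical off-the-shelf rule: neither ever
# undercuts the other, so prices stabilize at the lowest opening price
# rather than being competed down toward the floor.
floor = 9.99
a, b = 12.00, 11.50
for _ in range(5):
    a = reprice([b], floor)
    b = reprice([a], floor)
print(a, b)  # both settle at 11.50, well above the 9.99 floor
```

The point of the sketch is that nothing here looks like a cartel meeting; the coordination is baked into the algorithm itself, which is the policy difficulty the abstract describes.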

— Joe

Ben-Shahar on Data Pollution

Omri Ben-Shahar (University of Chicago Law School) has posted Data Pollution on SSRN. Here is the abstract:

Digital information is the fuel of the new economy. But like the old economy’s carbon fuel, it also pollutes. Harmful “data emissions” are leaked into the digital ecosystem, disrupting social institutions and public interests. This article develops a novel framework – data pollution – to rethink the harms the data economy creates and the way they have to be regulated. It argues that social intervention should focus on the external harms from collection and misuse of personal data. The article challenges the hegemony of the prevailing view – that the harm from digital data enterprise is to the privacy of the people whose information is used. It claims that a central problem has been largely ignored: how the information individuals give affects others, and how it undermines and degrades public goods and interests. The data pollution metaphor offers a novel perspective on why existing regulatory tools – torts, contracts, and disclosure law – are ineffective, mirroring their historical futility in curbing the external social harms from environmental pollution. The data pollution framework also opens up a rich roadmap for new regulatory devices – an environmental law for data protection – that focus on controlling these external effects. The article examines whether the general tools society has long used to control industrial pollution – production restrictions, carbon tax, and emissions liability – could be adapted to govern data pollution.

H/T Legal Theory Blog. — Joe

The law school innovation index

The Law School Innovation Index was launched in November 2017 as a prototype that highlights 38 law school legal-service delivery innovation and technology programs of which the creators were aware as of October 31, 2017. In this prototype, the creators endeavored to build a framework for the index so that they could receive feedback before adding each of the 200+ U.S. law schools.

The objectives of this study are:

  • Create a measure of the extent to which each of the 200+ U.S. law schools prepares students to deliver legal services in the 21st century.
  • Create a taxonomy of law school legal-service delivery innovation and technology programs.
  • Differentiate between programs and courses focused on “legal-service delivery innovation and technology” and those focused on the intersection of law and technology (e.g., “law and [technology] courses”).
  • Raise public awareness of law schools that are educating students about legal-service delivery innovation and technology, including awareness among employers, prospective and current law students, and alumni.
  • Raise prospective and current law students’ awareness of the disciplines and skills needed to be successful in the 21st century.

To have made this prototype list, a law school must offer a course with instruction in at least one of these legal-service delivery disciplines:

  • Business of law.
  • Process improvement.
  • Leadership for lawyers.
  • Project management.
  • Innovative/entrepreneurial lawyering.
  • Computational law.
  • Empirical methods.
  • Data analytics.
  • Technology basics.
  • Applied technology.

Only two law schools teach all 10 disciplines: MSU Law, which is home to LegalRnD, and Chicago-Kent College of Law, home to The Law Lab and the Center for Access to Justice and Technology. Northwestern University Pritzker School of Law, Stanford Law School, Suffolk University Law School, and the University of Miami School of Law topped the index as well.

What do you think? — Joe

What’s the difference between legal analytics and legal research?

From the conclusion of Law Technology Today’s Legal Analytics vs. Legal Research: What’s the Difference?:

Technology is transforming the legal services industry. Some attorneys may resist this transformation out of fear that new technologies might change how they practice law or even make their jobs obsolete. Similar concerns were voiced when legal research moved from books to computers. But that transition did not reduce the need for attorneys skilled in legal research. Instead, it made attorneys better and more effective at their jobs.

Similarly, legal analytics will not make the judgment and expertise of seasoned lawyers obsolete. It will, however, enable those who employ it to provide better and more cost-effective representation for their clients and better compete with their opponents.

— Joe

Systemic social media regulation

Here’s the abstract for Frank Fagan’s Systemic Social Media Regulation, Duke Law & Technology Review, Forthcoming:

Social media platforms are motivated by profit, corporate image, long-term viability, good citizenship, and a desire for friendly legal environments. These managerial interests stand in contrast to the gubernatorial interests of the state, which include the promotion of free speech, the development of e-commerce, various counterterrorism initiatives, and the discouragement of hate speech. Inasmuch as managerial and gubernatorial interests overlap, a self-regulation model of platform governance should prevail. Inasmuch as they diverge, regulation is desirable when its benefits exceed its costs. An assessment of the benefits and costs of social media regulation should account for how social facts, norms, and falsehoods proliferate. This Article sketches a basic economic model. What emerges from the analysis is that the quality of discourse cannot be controlled through suppression of content, or even disclosure of source. A better approach is to modify, in a manner conducive to discursive excellence, the structure of the forum. Optimal platform architecture should aim to reduce the systemic externalities generated by the social interactions that they enable, including the social costs of unlawful interference in elections and the proliferation of hate speech. Simultaneously, a systemic approach to social media regulation implies fewer controls on user behavior and content creation, and attendant First Amendment complications. Several examples are explored, including algorithmic newsfeeds, online advertising, and invited campus speakers.

— Joe

How we learned to talk to computers, and how they learned to answer back

TechRepublic’s Charles McLellan explains how the combination of automatic speech recognition, natural-language understanding and text-to-speech has come to mainstream attention in virtual assistants such as Apple’s Siri, Google Now, Microsoft’s Cortana, and Amazon’s Alexa. For details, see his How we learned to talk to computers, and how they learned to answer back. Recommended. — Joe

Case study on information war: Examination of recent Russian information warfare activities and capabilities

Three snips from the conclusion of Volodymyr Lysenko and Catherine Brooks, Russian information troops, disinformation, and democracy, 23 First Monday no. 7 (May 7, 2018):

This work illuminates some of the activities, investments, and strategies behind a case of contemporary information war, an approach that will be ever more prevalent in this increasingly digital world. We provide evidence showing these kinds of patterns emanating from Russia, given the potential effects Russia’s information-based strategies may be having around the globe, and especially in electoral processes (e.g., in the U.S., France, and Germany). Indeed these findings show that in this exemplary case of Russian information-based activities, digital hacking is so far an “easy and cheap road” for Russia to deploy the kinds of disruptions that can interrupt democratic processes or governing efforts around the world. We investigate Russian information-based global influences or “hacks” in order to generate new ideas about disruptive digital activities that can emanate from any country and bring effects that are potentially global in size.

[W]e can see an important chain of command worth reviewing. Based on our findings, we argue that Putin’s geopolitical advisors point to areas of concern and political tension, and those get translated into hacking assignments taking place in the FSB, GRU, possibly the SVR (Sluzhba vneshney razvedki, Foreign Intelligence Service), or by paid civil trolls or “unpaid” cyber-patrol “volunteers”. These assignments are sent via curators in these contexts who, in turn, distribute assignments to their subordinate hackers and trolls. Such a chain of command may explain why the DNC was independently and simultaneously hacked by the APT 29 (FSB) and APT 28 (GRU). That is, the assignments were likely passed along to the FSB and GRU independently, to increase the likelihood of the successful hack.

Putin admitted in May 2017 that there may exist some “patriotic” hackers who may fight for Russia globally on their own, and may have interfered in a recent U.S. election. At the same time, he denied state-level interference. We assert that this kind of reference to volunteer patriots is similar to his reasoning about Russian involvement in Ukrainian disruptions, that attacks were simply activities of average citizens and not of state-sponsored employees and troops. There’s a blurring of lines we find in the case of Russia between state-sponsored workers and those who can be viewed as average citizens being encouraged and rewarded for hacking activities.

As hybrid war is on the rise — that is, war involving both physical military strategies and information/cyber tactics — new kinds of information/cyber strategies will continue to emerge. The type of attacks or disinformation efforts will shift over time, by country, and with rapid advancements in digital life. With this work, we offer an in-depth investigation of a case of hybrid war, focusing on information/cyber strategies in particular. From this case we can consider other cases underway and ideally, begin to consider the kinds of peace-keeping strategies in an information era in order to maintain a healthy geopolitical climate.

Recommended. — Joe

Ready for GDPR compliance?

The General Data Protection Regulation (GDPR) is a regulation in EU law on data protection and privacy for all individuals within the European Union. It also addresses the export of personal data outside the EU. The GDPR aims primarily to give control to citizens and residents over their personal data and to simplify the regulatory environment for international business by unifying the regulation within the EU. It was adopted on April 14, 2016, and after a two-year transition period, becomes enforceable on May 25, 2018. Any company that stores or processes personal information about EU citizens within EU states must comply with the GDPR, even if it does not have a business presence within the EU.

What types of privacy data does the GDPR protect?

  • Basic identity information such as name, address and ID numbers
  • Web data such as location, IP address, cookie data and RFID tags
  • Health and genetic data
  • Biometric data
  • Racial or ethnic data
  • Political opinions
  • Sexual orientation

Kelly LeBlanc’s Europe’s GDPR to Set New Standards in Data Protection and Privacy Law focuses on the GDPR’s over-arching purpose and mission, common misconceptions, and the road to compliance. Recommended. — Joe

Police use of Amazon facial recognition service draws criticism

Ars Technica and the Washington Post report that Amazon is actively courting law-enforcement agencies to use a cloud-based facial-recognition service called Rekognition that can identify people in real time. Rekognition is already being used by the Orlando Police Department and the Washington County Sheriff’s Office in Oregon, according to documents the ACLU obtained under Freedom of Information requests. The ACLU and more than two dozen other civil rights organizations called on Amazon CEO Jeff Bezos to stop selling the face-recognition services to government agencies. — Joe

A practical guide to the European Union’s GDPR for American businesses

American businesses operating or serving customers in the EU must comply with the EU’s GDPR which becomes effective on May 25. A recent survey found that 91 percent of American businesses lack awareness surrounding the details of the GDPR, while 84 percent don’t understand the GDPR’s implications for their specific business. On Recode, Nancy Harris offers a practical guide to the European Union’s GDPR for American businesses. — Joe

Buyer’s Guide suggests law-related AI companies have grown by 65% in the last year

According to the In-House Counsel’s LegalTech Buyer’s Guide 2018, the number of artificial intelligence companies catering to the legal field has grown by 65 percent in the last year, from 40 to 66. In his LawSites post, Bob Ambrogi offers some caveats:

First, its listing of AI companies is not complete. Most notably, it omits Thomson Reuters, whose Westlaw, with its natural-language processing, was one of the earliest AI products in legal. Thomson Reuters Labs and, within it, the Center for Cognitive Computing, are major initiatives devoted to the study of AI and data science. Just in January, TR rolled out an AI-powered product for data privacy law.

In addition, there are a number of small legal tech startups that are using AI but that are not included on this list.

Second, when the guide suggests that established players such as LexisNexis are joining the field, it should be pointed out, for the sake of accuracy, that LexisNexis, like TR, was using AI in its research platform well before most of these other players came along.

— Joe

How law made Silicon Valley

Here’s the abstract for Anupam Chander’s How Law Made Silicon Valley, ___ Emory Law Journal ___:

Explanations for the success of Silicon Valley focus on the confluence of capital and education. In this article, I put forward a new explanation, one that better elucidates the rise of Silicon Valley as a global trader. Just as nineteenth century American judges altered the common law in order to subsidize industrial development, American judges and legislators altered the law at the turn of the Millennium to promote the development of Internet enterprise. Europe and Asia, by contrast, imposed strict intermediary liability regimes, inflexible intellectual property rules, and strong privacy constraints, impeding local Internet entrepreneurs. The study challenges the conventional wisdom that holds that strong intellectual property rights undergird innovation. While American law favored both commerce and speech enabled by this new medium, European and Asian jurisdictions attended more to the risks to intellectual property rights-holders and, to a lesser extent, ordinary individuals. Innovations that might be celebrated in the United States could lead to jail in Japan. I show how American companies leveraged their liberal home base to become global leaders in cyberspace. Nations seeking to incubate their own Silicon Valley must focus not only on money and education, but also on a law that embraces innovation.

— Joe

CRS report on artificial intelligence and national security

From the summary of Artificial Intelligence and National Security (R45178, Apr. 26, 2018):

The U.S. Department of Defense (DOD) is developing AI applications for a range of military functions. AI research is underway in the fields of intelligence collection and analysis, logistics, cyberspace operations, command and control, and a variety of military autonomous vehicles. AI applications are already playing a role in operations in Iraq and Syria, with algorithms designed to speed up the target identification process. Congressional action has the potential to shape the technology’s trajectory, with fiscal and regulatory decisions potentially influencing growth of national security applications and the standing of military AI development versus international competitors.

Military AI development presents a number of potential issues for Congress:

  • What is the right balance of commercial and government funding for AI development?
  • How might Congress influence Defense Acquisition reform initiatives that ease military AI adaptation?
  • What changes, if any, are necessary in Congress and DOD to implement effective oversight of AI development?
  • What regulatory changes are necessary for military AI applications?
  • What measures can be taken to protect AI from exploitation by international competitors and preserve a U.S. advantage in the field?

— Joe

Weekend reading: Re-Engineering Humanity

Here’s the blurb for Brett Frischmann and Evan Selinger’s Re-Engineering Humanity (Cambridge UP, Apr. 19, 2018):

Every day, new warnings emerge about artificial intelligence rebelling against us. All the while, a more immediate dilemma flies under the radar. Have forces been unleashed that are thrusting humanity down an ill-advised path, one that’s increasingly making us behave like simple machines? In this wide-reaching, interdisciplinary book, Brett Frischmann and Evan Selinger examine what’s happening to our lives as society embraces big data, predictive analytics, and smart environments. They explain how the goal of designing programmable worlds goes hand in hand with engineering predictable and programmable people. Detailing new frameworks, provocative case studies, and mind-blowing thought experiments, Frischmann and Selinger reveal hidden connections between fitness trackers, electronic contracts, social media platforms, robotic companions, fake news, autonomous cars, and more. This powerful analysis should be read by anyone interested in understanding exactly how technology threatens the future of our society, and what we can do now to build something better.

— Joe

AALL’s 2018 New Product Award goes to BLaw’s Points of Law

According to today’s AALL eBriefing, Bloomberg Law’s Points of Law artificial intelligence solution has been awarded AALL’s 2018 New Product Award. For a review of the product, see Mark Giangrande’s LLB post. — Joe

Google launches “Talk to Books” semantic-search tool

Ask Google’s new semantic-search tool “Talk to Books” a question and the tool will return a list of books that include information on that specific question. How? An AI-powered tool will scan every sentence in 100,000 volumes in Google Books and generate a list of likely responses with the pertinent passage bolded. Give the new tool a test drive here. — Joe
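
Under the hood this is a semantic-similarity search: represent the query and every candidate sentence as vectors, score each candidate against the query, and return the best matches. Google uses learned neural sentence embeddings; the toy sketch below substitutes crude word-count vectors purely to show the ranking mechanics (the example passages are invented):

```python
import math
import re
from collections import Counter

def vec(text):
    """Crude stand-in for a sentence embedding: a bag-of-words count vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def search(query, passages, top_k=1):
    """Score every passage against the query and return the best matches."""
    q = vec(query)
    return sorted(passages, key=lambda p: cosine(q, vec(p)), reverse=True)[:top_k]

passages = [
    "The treaty was signed after years of negotiation.",
    "Neural networks learn vector representations of sentences.",
    "The court held that the statute was unconstitutional.",
]
print(search("how do neural networks understand sentences", passages))
# the neural-networks passage ranks first
```

At Google Books scale the same idea is applied with neural embeddings over roughly 100,000 volumes, which is what lets the tool surface passages that answer a question without sharing its exact words.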

Weekend reading: Cyber Mercenaries: The State, Hackers, and Power

Cyber Mercenaries: The State, Hackers, and Power (Cambridge UP, Jan. 18, 2018) by Tim Maurer “explores the secretive relationships between states and hackers. As cyberspace has emerged as the new frontier for geopolitics, states have become entrepreneurial in their sponsorship, deployment, and exploitation of hackers as proxies to project power. Such modern-day mercenaries and privateers can impose significant harm undermining global security, stability, and human rights. These state-hacker relationships therefore raise important questions about the control, authority, and use of offensive cyber capabilities. While different countries pursue different models for their proxy relationships, they face the common challenge of balancing the benefits of these relationships with their costs and the potential risks of escalation. This book examines case studies in the United States, Iran, Syria, Russia, and China for the purpose of establishing a framework to better understand and manage the impact and risks of cyber proxies on global politics.” — Joe

Bots in the Twittersphere

Pew Internet estimates that two-thirds of tweeted links to popular websites are posted by automated accounts. Among the key findings of this research:

  • Of all tweeted links to popular websites, 66% are shared by accounts with characteristics common among automated “bots,” rather than human users.
  • Among popular news and current event websites, 66% of tweeted links are made by suspected bots – identical to the overall average. The share of bot-created tweeted links is even higher among certain kinds of news sites. For example, an estimated 89% of tweeted links to popular aggregation sites that compile stories from around the web are posted by bots.
  • A relatively small number of highly active bots are responsible for a significant share of links to prominent news and media sites. This analysis finds that the 500 most-active suspected bot accounts are responsible for 22% of the tweeted links to popular news and current events sites over the period in which this study was conducted. By comparison, the 500 most-active human users are responsible for a much smaller share (an estimated 6%) of tweeted links to these outlets.
  • The study does not find evidence that automated accounts currently have a liberal or conservative “political bias” in their overall link-sharing behavior. This emerges from an analysis of the subset of news sites that contain politically oriented material. Suspected bots share roughly 41% of links to political sites shared primarily by conservatives and 44% of links to political sites shared primarily by liberals – a difference that is not statistically significant. By contrast, suspected bots share 57% to 66% of links from news and current events sites shared primarily by an ideologically mixed or centrist human audience.
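
The “not statistically significant” conclusion for the 41% vs. 44% gap is the standard two-proportion z-test. As a back-of-the-envelope check (Pew’s sample sizes are not reported in these findings, so the 1,000-link samples below are assumed purely for illustration), a gap of that size indeed falls short of the conventional 0.05 threshold:

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Two-sided p-value from the standard normal CDF, via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 410 of 1,000 links to conservative-leaning sites and
# 440 of 1,000 links to liberal-leaning sites shared by suspected bots.
z, p = two_prop_ztest(410, 1000, 440, 1000)
print(z, p)  # z is about 1.36, p is about 0.17 -> not significant at 0.05
```

With larger (real) samples the same three-point gap could become significant, which is why the assumed sample size matters and why the calculation here is only illustrative.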

H/T to Gary Price’s InfoDocket post. — Joe

Zuckerberg’s prepared statement for Congress [text]

Ahead of two days of congressional testimony, Facebook CEO Mark Zuckerberg’s prepared statement can be read here. — Joe

Computer science and law: a new paradigm

Here’s the abstract for James Miller’s The Emergence of ‘Computer Science and Law’: The New Legal Paradigm for Law and Policy Practice in the Computational Age of Algorithmic Reasoning and Big Data Practice:

Some thirty years ago “law and economics” emerged as a new paradigm of legal reasoning by providing new legal resolutions to a set of problems that were particularly suited to the application of economics in the legal process. Today algorithms and data, software-based systems, and technology solutions like blockchain both stress existing legal practice and offer new avenues for solving legal problems. This paper proposes that the rise of “computer science and law” as a new legal paradigm is emerging in ways that leverage and respond to the application and ability of computer science knowledge and reasoning to answer novel and venerable legal problems.

The paper’s analytic approach maps the boundaries of law and computer science in this new paradigm, against the stressors that necessitate new approaches with the value of technology solutions already revolutionizing other sectors. The paper answers questions such as what is persuasive or explanatory about law, what social function does it serve, and how is legal reasoning distinctive from philosophy, sociology, economics, and computer science? Following this analytic approach, the paper presents the current evolution of legal pedagogy, practice, and expectations and contributes to a deeper comparative understanding of how law can serve important social goals.

The paper begins with a definitional section. Descriptions from jurisprudence and legal theory provide a baseline of how philosophy and social sciences differentiate “law” from other disciplines, based on the nature of the reasoning, justifications, outcomes and knowledge that law entails. Leveraging what is distinctive about legal reasoning and knowledge, a historical review of computer and data science and artificial intelligence provides a view of how the evolution of reasoning and knowledge is modeled using software to accomplish tasks relevant to law.

The paper explores how legal practice is evolving in response to challenges and opportunities posed by computational systems. The paper reviews the “legalhacker” movement that began as a software programming and policy advocacy effort, along with other “computational law” examples of innovations in law and policy practice, with a focus on technology policy issues. A survey of new legal pedagogy focused on teaching data science, software programming, and other technical skills reveals a roadmap of computer science skillsets and techniques that are a current focus for legal educators. A review and comparison of the “legaltech” information technology response with “fintech” IT innovations focused on finance and other sectors reveals the relative trends and strengths observed in the space.

Finally, two analytic approaches are proposed for evaluating the strength of new technology tools and law and policy practice approaches. A set of key features identifies metrics for evaluating automated legal reasoning systems’ ability to predict, explain, and defend legal decisions. A roadmap of technical skills and areas of focus for new law and policy practitioners provides a useful rubric for development of new practice groups, outsourcing and IT strategies, and legal training focused on “computer science and law” practice.

Whether the challenge is legal practice in administrative law with comment dockets numbering in the tens of millions, protecting fundamental legal principles when complex software systems control the fate of defendants, or improving and expanding access to law and policy services, the paper describes the expanding role of computer science and law and a path forward for legal practitioners in the computational age.

Interesting. — Joe