Category Archives: Information Technology

When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies

Here’s the abstract for Steven Goldfeder, Harry Kalodner, Dillon Reisman & Arvind Narayanan’s When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies:

We show how third-party web trackers can deanonymize users of cryptocurrencies. We present two distinct but complementary attacks. On most shopping websites, third party trackers receive information about user purchases for purposes of advertising and analytics. We show that, if the user pays using a cryptocurrency, trackers typically possess enough information about the purchase to uniquely identify the transaction on the blockchain, link it to the user’s cookie, and further to the user’s real identity. Our second attack shows that if the tracker is able to link two purchases of the same user to the blockchain in this manner, it can identify the user’s entire cluster of addresses and transactions on the blockchain, even if the user employs blockchain anonymity techniques such as CoinJoin. The attacks are passive and hence can be retroactively applied to past purchases. We discuss several mitigations, but none are perfect.
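For the technically minded, here is a rough Python sketch of the first attack’s core step: a tracker that learns the BTC amount and approximate time of a purchase searches the public ledger for matching transactions. The Tx record, the amount tolerance, and the one-hour window are invented for illustration; the paper’s actual linkage handles exchange rates, fees, and multiple candidates.

```python
# Hypothetical sketch of the paper's first attack: match a leaked
# purchase (amount, time) against transactions on the public ledger.
from dataclasses import dataclass

@dataclass
class Tx:
    txid: str
    btc_amount: float
    timestamp: int  # Unix time

def candidate_transactions(purchase_btc, purchase_time, ledger,
                           amount_tol=1e-8, window=3600):
    """Return on-chain transactions consistent with the leaked purchase."""
    return [tx for tx in ledger
            if abs(tx.btc_amount - purchase_btc) <= amount_tol
            and abs(tx.timestamp - purchase_time) <= window]

ledger = [
    Tx("a1", 0.0742, 1500000000),
    Tx("b2", 0.0742, 1500090000),  # same amount, but a day later
    Tx("c3", 1.2500, 1500001000),
]

matches = candidate_transactions(0.0742, 1500000600, ledger)
if len(matches) == 1:
    # The tracker can now tie its cookie to this transaction, and from
    # there (the paper's second attack) to the user's address cluster.
    print("unique match:", matches[0].txid)
else:
    print(len(matches), "candidates; more side information needed")
```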

H/T Freedom to Tinker post. — Joe

Data’s intangibility challenges traditional international law on jurisdiction

Here’s the abstract for Kristen Eichensehr’s Data Extraterritoriality:

Data’s intangibility poses significant difficulties for determining where data is located. The problem is not that data is located nowhere, but that it may be located anywhere, and at least parts of it may be located nearly everywhere. And access to data does not depend on physical proximity.

These implications of data’s intangibility challenge traditional international law on jurisdiction. International jurisdictional rules rest in large part on States’ sovereignty over a particular territory and authority over people and things within it, and they presuppose that the locations of people and things are finite and knowable. The era of cloud computing — where data crosses borders seamlessly, parts of a single file may exist in multiple jurisdictions, and data’s storage location often depends on choices by private companies — raises new and difficult questions for States exercising enforcement authority, companies receiving requests from law enforcement agencies, and individuals seeking to protect their privacy.

As a part of the Texas Law Review’s symposium on the Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations, this Essay critiques Tallinn 2.0’s rules and commentary on the international law governing jurisdiction, especially its treatment of extraterritorial jurisdiction. The Essay first describes the Manual’s rules and commentary on extraterritorial jurisdiction, and then raises a procedural objection to the Manual’s approach, namely that ongoing debates about how to determine data’s location make the law too unsettled for a restatement project. The Essay then highlights several substantive concerns with and questions raised by the Manual’s approach. In light of these critiques, the Essay concludes with some suggestions on how to make progress in resolving conflicting international claims to jurisdiction over data going forward.

— Joe

Would curbing social bots be effective in reducing the spread of fake news?

Here’s the abstract for Chengcheng Shao, Giovanni Luca Ciampaglia, Onur Varol, Alessandro Flammini, and Filippo Menczer’s The Spread of Fake News by Social Bots:

The massive spread of fake news has been identified as a major global risk and has been alleged to influence recent elections and threaten democracies. Communication, cognitive, social, and computer scientists are engaged in efforts to study the complex causes for the viral diffusion of digital misinformation and to develop solutions, while search and social media platforms are beginning to deploy countermeasures. However, to date, these reports have been mainly informed by anecdotal evidence rather than systematic data. Here we analyze 14 million messages spreading 400 thousand claims on Twitter during and following the 2016 U.S. presidential campaign and election. We find evidence that social bots play a key role in the spread of fake news. Accounts that actively spread misinformation are significantly more likely to be bots. Automated accounts are particularly active in the early spreading phases of viral claims, and tend to target influential users. Humans are vulnerable to this manipulation, retweeting bots who post false news. Successful sources of false and biased claims are heavily supported by social bots. These results suggest that curbing social bots may be an effective strategy for mitigating the spread of online misinformation.
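As a rough illustration of the kind of measurement behind the early-phase finding, here is a Python sketch comparing bot scores of early versus later spreaders of a claim. The mini-dataset and the 0.5 bot threshold are invented; in the study itself, scores come from a bot-detection system such as Botometer.

```python
# Invented mini-dataset standing in for the paper's 14 million tweets:
# (seconds since a claim first appeared, bot score of the account).
tweets = [
    (60, 0.91), (300, 0.88), (900, 0.77),        # first hour
    (5000, 0.30), (9000, 0.22), (20000, 0.15),   # later spreaders
    (40000, 0.41),
]

EARLY_PHASE = 3600   # hypothetical cutoff for "early spreading"
BOT_THRESHOLD = 0.5  # hypothetical score above which we call it a bot

def bot_share(scores):
    """Fraction of accounts whose score exceeds the bot threshold."""
    return sum(s > BOT_THRESHOLD for s in scores) / len(scores)

early = [s for t, s in tweets if t <= EARLY_PHASE]
late = [s for t, s in tweets if t > EARLY_PHASE]

print(f"bot share among early spreaders: {bot_share(early):.0%}")
print(f"bot share among later spreaders: {bot_share(late):.0%}")
```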

H/T to Gary Price’s InfoDocket post. — Joe

Should law library ChatBots join AALL?

In a recent 3 Geeks post, Greg Lambert promotes the use of law library ChatBots to answer routine questions from library users, like “what’s my password?” See Now I Want a ChatBot. Not a bad idea, as long as law library ChatBots are required to join AALL (and pay the cost of full membership). 😉 — Joe

Is artificial intelligence causing a premature disruption in legal research?

Here’s the abstract for Jamie Baker’s 2017 A Legal Research Odyssey: Artificial Intelligence as Disruptor:

Cognitive computing is revolutionizing finance through the ability to combine structured and unstructured data and provide precise market analysis. It is also revolutionizing medicine by providing well-informed options for diagnoses. Analogously, ROSS, a progeny of IBM’s Watson, is set to revolutionize the legal field by bringing cognitive computing to legal research. While ROSS is currently being touted as possessing the requisite sophistication to perform effortless legal research, there is a real danger in a technology like ROSS causing premature disruption. As in medicine and finance, cognitive computing has the power to make legal research more efficient. But the technology is not ready to replace the need for law students to learn sound legal research process and strategy. When done properly, legal research is a highly creative skill that requires a deep level of analysis. Law librarians must infuse law students with an understanding of legal research process, as well as instruct on the practical aspects of using artificial intelligence responsibly in the face of algorithmic transparency, the duty of technology competence, malpractice pitfalls, and the unauthorized practice of law.

— Joe

A sneak peek inside Microsoft’s AI research lab (video)

Hat tip to Gary Price’s InfoDocket post for this BBC report. — Joe

Delaware passes blockchain legislation

The Delaware General Assembly has passed a historic bill that authorizes corporations to track stock ownership on a blockchain. The legislation is expected to be signed into law this month. Here’s the bill. — Joe

How technology could challenge our concept of law

Here’s the abstract for Warming Up to Inscrutability: How Technology Could Challenge Our Concept of Law by Brian Sheppard:

In the article, I describe the trajectory of legal technology and discuss how developments in that area might change the way that we think about the essential features of legality. In particular, I focus on the strengths and weaknesses of machine learning in the context of legislation and adjudication. I argue that the content of those essential features could depend upon our willingness to make tradeoffs between intelligibility and results. These tradeoffs might lead us to reject a concept that requires critical officials (Hart), reason-based tests of legitimacy (Raz), or deep justifications for coercion (Dworkin). I conclude that our concept of law will likely be shaped by our willingness to accept a growing disconnect between the way that we decide and the way that the system does. This is a pre-edit draft of an article.

Interesting. — Joe

On the explanation problem for algorithms

In What does it mean to ask for an “explainable” algorithm?, Ed Felten discusses the explanation problem for algorithms in terms of (1) claims of confidentiality; (2) complexity; (3) unreasonableness; and (4) injustice. See Felten’s Freedom to Tinker blog post for details. — Joe

Are you ready for the Third Wave of AI for Law?

An Artificial Lawyer post, The Third Wave of Legal AI by Kripa Rajshekhar, the founder of legal AI company Metonymy Labs, has three goals: (1) introduce the Third Wave of AI; (2) outline, in broad strokes, what this means for AI and Law; and (3) illustrate the path forward with a specific application of the approach: Metonymy Labs’ work to augment diligence with AI. Interesting. — Joe

A DARPA perspective on artificial intelligence

According to John Launchbury, director of DARPA’s Information Innovation Office, the development of artificial intelligence is progressing in three waves: handcrafted knowledge, statistical learning and contextual adaptation. In the video below, Launchbury explains his theory. From the YouTube description:

John Launchbury … attempts to demystify AI–what it can do, what it can’t do, and where it is headed. Through a discussion of the “three waves of AI” and the capabilities required for AI to reach its full potential, John provides analytical context to help understand the roles AI already has played, does play now, and could play in the future.

The video is a companion to Launchbury’s slide deck, A DARPA Perspective on Artificial Intelligence. Recommended. — Joe

Learning analytics implicates ALA’s Code of Ethics

In Learning Analytics and the Academic Library: Professional Ethics Commitments at a Crossroad, College & Research Libraries, Forthcoming, Kyle Jones and Dorothea Salo discuss learning analytics and the ways academic libraries are beginning to participate in wider institutional learning analytics initiatives. The authors address how learning analytics implicates professional commitments to promote intellectual freedom; protect patron privacy and confidentiality; and balance intellectual property interests between library users, their institution, and content creators and vendors. From the article’s conclusion:

Though pursuing LA [learning analytics] may lead to good outcomes for students and their institutions, higher education and the library profession still face an ethical crossroads. LA practices present significant conflicts with the ALA’s Code of Ethics with respect to intellectual privacy, intellectual freedom, and intellectual property rights. We recommend that librarians respond by strategically embedding their values in LA through actively participating in the conversations, governance structures, and policies that ultimately shape the use of the technology on their respective campuses.

— Joe

On the call for algorithmic transparency

Here’s the abstract for Deven R. Desai and Joshua A. Kroll’s very interesting article, Trust But Verify: A Guide to Algorithms and the Law, Harvard Journal of Law & Technology, Forthcoming:

The call for algorithmic transparency as a way to manage the power of new data-driven decision-making techniques misunderstands the nature of the processes at issue and underlying technology. Part of the problem is that the term, algorithm, is broad. It encompasses disparate concepts even in mathematics and computer science. Matters worsen in law and policy. Law is driven by a linear, almost Newtonian, view of cause and effect where inputs and defined process lead to clear outputs. In that world, a call for transparency has the potential to work. The reality is quite different. Real computer systems use vast data sets not amenable to disclosure. The rules used to make decisions are often inferred from these data and cannot be readily explained or understood. And at a deep and mathematically provable level, certain things, including the exact behavior of an algorithm, can sometimes not be tested or analyzed. From a technical perspective, current attempts to expose algorithms to the sun will fail to deliver critics’ desired results and may create the illusion of clarity in cases where clarity is not possible.

At a high-level, the recent calls for algorithmic transparency follow a pattern that this paper seeks to correct. Policy makers and technologists often talk past each other about the realities of technology and the demands of policy. Policy makers may identify good concerns but offer solutions that misunderstand technology. This misunderstanding can lead to calls for regulation that make little to no sense to technologists. Technologists often see systems as neutral tools, with uses to be governed only when systems interact with the real world. Both sides think the other simply “does not get it,” and important problems receive little attention from either group. By setting out the core concerns over the use of algorithms, offering a primer on the nature of algorithms, and a guide on the way in which computer scientists deal with the inherent limits of their field, this paper shows that there are coherent ways to manage algorithms and the law.
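One example of the “coherent ways” computer scientists manage the limits of disclosure, sketched here in Python, is a cryptographic commitment: publish a hash of the decision policy so an auditor can later verify that the code it reviews is the code that ran, without the code being public. This is a generic technique from the accountability literature, not necessarily the paper’s own proposal, and the policy string is invented.

```python
# Toy "trust but verify" illustration: a hash commitment to a decision
# policy, checkable later without up-front public disclosure.
import hashlib

def commit(policy_source: str) -> str:
    """Digest published at deployment time."""
    return hashlib.sha256(policy_source.encode()).hexdigest()

deployed = "def decide(applicant): return applicant['score'] > 620"
public_commitment = commit(deployed)

# Later, under a confidentiality agreement, an auditor receives the code
# and verifies it matches the public commitment byte for byte.
audited = "def decide(applicant): return applicant['score'] > 620"
assert commit(audited) == public_commitment
print("audit passed; commitment:", public_commitment[:16])
```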

Recommended. — Joe

Will well-prepared workers be able to keep up in the race with AI tools?

“Machines are eating humans’ jobs talents. And it’s not just about jobs that are repetitive and low-skill. Automation, robotics, algorithms and artificial intelligence (AI) in recent times have shown they can do equal or sometimes even better work than humans who are dermatologists, insurance claims adjusters, lawyers, seismic testers in oil fields, sports journalists and financial reporters, crew members on guided-missile destroyers, hiring managers, psychological testers, retail salespeople, and border patrol agents. Moreover, there is growing anxiety that technology developments on the near horizon will crush the jobs of the millions who drive cars and trucks, analyze medical tests and data, perform middle management chores, dispense medicine, trade stocks and evaluate markets, fight on battlefields, perform government functions, and even replace those who program software – that is, the creators of algorithms” is how Lee Rainie and Janna Anderson start their Pew expert survey report titled The Future of Jobs and Jobs Training (May 3, 2017). As machines and technology continue to transform the workplace, the Pew Research Center says technologists, futurists and scholars are predicting a surge of interest in artificial-intelligence training programs. Will that be enough to satisfy the labor demand?

The report identifies five major themes about the future of jobs training in the tech age. — Joe

Like having a personal assistant in your pocket: When AI-first becomes the new mobile-first

Two snips from David Skerrett’s very interesting EContent post, The Year AI-First Will Become the New Mobile-First:

For 6 years, achieving a mobile-first ethos has been a badge of honor within organizations and a genuine philosophy to solve problems for—or preview content on—the device users are most likely to turn to first. Now, true AI is progressing quickly, while there is already a proliferation of practical AI on our mobiles. Contextual assistants, bots, and digital concierges are right at our fingertips—notably Siri, Alexa, Cortana, and Google Now. … However, these intelligent helpers aren’t so helpful when it comes to playing well together. Once these assistants are able to “talk” to each other, it will create a more connected experience for us all, enabling even greater business transformation in the next few years.

Basically, AI will be completely and seamlessly woven into the fabric of our lives and how we get things done. Microsoft can already scan your emails and see you have a flight coming up. Cortana then adds it to your diary and helps get things done via reminders and prompts. Depending on how much trust we put in our devices and into AI, it will be like having a personal assistant in your pocket.

— Joe

Synthetic crowdsourcing models designed to aid legal decision making

In Synthetic Crowdsourcing: A Machine-Learning Approach to the Problems of Inconsistency and Bias in Adjudication, Hannah Laqueur and Ryan Copus present an algorithmic approach to the problems of inconsistency and bias in legal decision making. From the abstract:

First, we propose a new tool for reducing inconsistency: “Synthetic Crowdsourcing Models” (“SCMs”) built with machine learning methods. By providing judges with recommendations generated from statistical models of themselves, such models can help those judges make better and more consistent decisions. To illustrate these advantages, we build an SCM of release decisions for the California Board of Parole Hearings. Second, we describe a means to address systematic biases that are embedded in an algorithm (e.g., disparate racial treatment). We argue for making direct changes to algorithmic output based on explicit estimates of bias. Most commentators concerned with embedded biases have focused on constructing algorithms without the use of bias-inducing variables. Given the complex ways that variables may correlate and interact, that approach is both practically difficult and harmful to predictive power. In contrast, our two-step approach can address bias with minimal sacrifice of predictive performance.
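A toy Python sketch of the two steps, with invented data: fit a model to past decisions, then correct its output by an explicit bias estimate rather than deleting correlated variables. The features, the additive correction, and the bias_estimate value are hypothetical simplifications of the authors’ approach.

```python
# Step 1: model past release decisions. Step 2: adjust the output by an
# explicitly estimated bias penalty. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Six past cases: [risk factor, program participation], and the decision.
X = np.array([[0.2, 1], [0.8, 0], [0.5, 1], [0.9, 0], [0.3, 1], [0.7, 0]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = board granted release

scm = LogisticRegression().fit(X, y)  # Step 1: model the decision makers

def recommend(case, in_disfavored_group, bias_estimate=0.05):
    """Step 2: correct the modeled score by an estimated bias penalty
    instead of dropping bias-correlated input variables."""
    p = scm.predict_proba([case])[0, 1]
    if in_disfavored_group:
        p = min(1.0, p + bias_estimate)  # hypothetical additive correction
    return p

print(f"recommended release probability: {recommend([0.4, 1], True):.2f}")
```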

— Joe

Automated suspicion algorithms and the Fourth Amendment

Here’s the abstract for Michael Rich’s (Elon Law) Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment:

At the conceptual intersection of machine learning and government data collection lie Automated Suspicion Algorithms, or ASAs, algorithms created through the application of machine learning methods to collections of government data with the purpose of identifying individuals likely to be engaged in criminal activity. The novel promise of ASAs is that they can identify data-supported correlations between innocent conduct and criminal activity and help police prevent crime. ASAs present a novel doctrinal challenge, as well, as they intrude on a step of the Fourth Amendment’s individualized suspicion analysis previously the sole province of human actors: the determination of when reasonable suspicion or probable cause can be inferred from established facts. This Article analyzes ASAs under existing Fourth Amendment doctrine for the benefit of courts who will soon be asked to deal with ASAs. In the process, the Article reveals how that doctrine is inadequate to the task of handling these new technologies and proposes extra-judicial means of ensuring that ASAs are accurate and effective.
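To see the doctrinal puzzle concretely, here is a hypothetical Python sketch in which an ASA emits a suspicion probability and a court must map that output to a Fourth Amendment standard. The numeric cutoffs are invented for illustration; courts have deliberately avoided fixing such numbers.

```python
# Hypothetical mapping from an ASA's output to suspicion standards.
REASONABLE_SUSPICION = 0.25  # invented cutoff
PROBABLE_CAUSE = 0.50        # invented cutoff

def fourth_amendment_standard(suspicion_score: float) -> str:
    """Which standard, if any, a given algorithmic score would support."""
    if suspicion_score >= PROBABLE_CAUSE:
        return "probable cause"
    if suspicion_score >= REASONABLE_SUSPICION:
        return "reasonable suspicion"
    return "no individualized suspicion"

for score in (0.10, 0.30, 0.60):
    print(f"ASA score {score:.2f} -> {fourth_amendment_standard(score)}")
```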

— Joe

On the rise of the digital regulator in the administrative state

Here’s the abstract for Rory Van Loo’s (Boston Univ. School of Law) interesting article, Rise of the Digital Regulator, 66 Duke Law Journal 1267 (2017):

The administrative state is leveraging algorithms to influence individuals’ private decisions. Agencies have begun to write rules to shape for-profit websites such as Expedia and have launched their own online tools such as the Consumer Financial Protection Bureau’s mortgage calculator. These digital intermediaries aim to guide people toward better schools, healthier food, and more savings. But enthusiasm for this regulatory paradigm rests on two questionable assumptions. First, digital intermediaries effectively police consumer markets. Second, they require minimal government involvement. Instead, some for-profit online advisers such as travel websites have become what many mortgage brokers were before the 2008 financial crisis. Although they make buying easier, they can also subtly advance their interests at the expense of those they serve. Publicly run alternatives lack accountability or—like the Affordable Care Act health-insurance exchanges—are massive undertakings. The unpleasant truth is that creating effective digital regulators would require investing heavily in a new oversight regime or sophisticated state machines. Either path would benefit from an interdisciplinary uniform process to modernize administrative, antitrust, commercial, and intellectual property laws. Ideally, a technology meta-agency would then help keep that legal framework updated.

— Joe

A visual demonstration of blockchain (video)

Following up on an earlier LLB post, this is a detailed visual introduction to the concepts behind a blockchain. The creator introduces the idea of an immutable ledger using an interactive web demonstration; a minimal code sketch of the same idea follows below.
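For readers who prefer code to video, here is a toy Python hash chain using nothing beyond the standard library. Real blockchains add proof-of-work, signatures, and peer-to-peer consensus; this only shows why the ledger is tamper-evident: each block commits to its predecessor’s hash, so editing any earlier record breaks every later link.

```python
# Minimal hash-chain sketch of an immutable ledger.
import hashlib, json

def block_hash(block):
    """Hash a block's full contents, including its link to the past."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev": prev_hash}

genesis = make_block("genesis", "0" * 64)
b1 = make_block("Alice pays Bob 5", block_hash(genesis))
b2 = make_block("Bob pays Carol 2", block_hash(b1))
ledger = [genesis, b1, b2]

def chain_valid(chain):
    """Recompute each predecessor's hash and compare to the stored link."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print("valid before tampering:", chain_valid(ledger))  # True
genesis["data"] = "genesis (edited)"                   # rewrite history
print("valid after tampering:", chain_valid(ledger))   # False
```

See also Blockgeeks’ What is Blockchain Technology? A Step-by-Step Guide For Beginners. — Joe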

Using public blockchain to keep public data accessible

From Brian Forde’s Harvard Business Review article, Using Blockchain to Keep Public Data Public:

The public blockchain would fundamentally change the way we govern and do business. Rather than asking companies and consumers to downgrade their digital interactions in order to comply with the law, the government would create an adaptable system that would reduce the amount of paperwork and compliance for businesses and consumers. Rather than force emerging technologies and business models into legal gray areas, the government would use algorithmic regulation to create a level playing field for incumbent companies in their respective industries.

H/T to Gary Price’s InfoDocket post. — Joe