Category Archives: Information Technology

AALL’s 2018 New Product Award goes to BLaw’s Points of Law

According to today’s AALL eBriefing, Bloomberg Law’s Points of Law artificial intelligence solution has been awarded AALL’s 2018 New Product Award. For a review of the product, see Mark Giangrande’s LLB post. — Joe

Google launches “Talk to Books” semantic-search tool

Ask Google’s new semantic-search tool “Talk to Books” a question and it will return a list of books containing passages that respond to that question. How? An AI-powered tool scans every sentence in 100,000 volumes in Google Books and generates a list of likely responses, with the pertinent passage bolded. Give the new tool a test drive here.
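Google hasn’t said much about the internals, but sentence-level semantic search is easy to sketch with open-source tools. In the sketch below, the sentence-transformers model and the three-sentence “library” are stand-ins, not Google’s actual stack:

```python
# Minimal sketch of sentence-level semantic search in the spirit of
# "Talk to Books." The open-source model is a stand-in for Google's
# proprietary one; the "library" here is three toy sentences.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

book_sentences = [
    "The common law develops one case at a time.",
    "Photosynthesis converts sunlight into chemical energy.",
    "A contract requires offer, acceptance, and consideration.",
]

question = "How do plants turn light into food?"

# Embed the question and every candidate sentence, then rank by similarity.
question_vec = model.encode(question, convert_to_tensor=True)
sentence_vecs = model.encode(book_sentences, convert_to_tensor=True)
scores = util.cos_sim(question_vec, sentence_vecs)[0]

for score, sentence in sorted(zip(scores.tolist(), book_sentences), reverse=True):
    print(f"{score:.2f}  {sentence}")
```

At Google’s scale the book embeddings would be precomputed and searched with an approximate-nearest-neighbor index rather than scored one at a time. — Joe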

Weekend reading: Cyber Mercenaries: The State, Hackers, and Power

Cyber Mercenaries: The State, Hackers, and Power (Cambridge UP, Jan. 18, 2018) by Tim Maurer “explores the secretive relationships between states and hackers. As cyberspace has emerged as the new frontier for geopolitics, states have become entrepreneurial in their sponsorship, deployment, and exploitation of hackers as proxies to project power. Such modern-day mercenaries and privateers can impose significant harm undermining global security, stability, and human rights. These state-hacker relationships therefore raise important questions about the control, authority, and use of offensive cyber capabilities. While different countries pursue different models for their proxy relationships, they face the common challenge of balancing the benefits of these relationships with their costs and the potential risks of escalation. This book examines case studies in the United States, Iran, Syria, Russia, and China for the purpose of establishing a framework to better understand and manage the impact and risks of cyber proxies on global politics.” — Joe

Bots in the Twittersphere

Pew Internet estimates two-thirds of tweeted links to popular websites are posted by automated accounts. Among the key findings of this research:

  • Of all tweeted links to popular websites, 66% are shared by accounts with characteristics common among automated “bots,” rather than human users.
  • Among popular news and current event websites, 66% of tweeted links are made by suspected bots – identical to the overall average. The share of bot-created tweeted links is even higher among certain kinds of news sites. For example, an estimated 89% of tweeted links to popular aggregation sites that compile stories from around the web are posted by bots.
  • A relatively small number of highly active bots are responsible for a significant share of links to prominent news and media sites. This analysis finds that the 500 most-active suspected bot accounts are responsible for 22% of the tweeted links to popular news and current events sites over the period in which this study was conducted. By comparison, the 500 most-active human users are responsible for a much smaller share (an estimated 6%) of tweeted links to these outlets. (A toy sketch of this concentration calculation appears after the list.)
  • The study does not find evidence that automated accounts currently have a liberal or conservative “political bias” in their overall link-sharing behavior. This emerges from an analysis of the subset of news sites that contain politically oriented material. Suspected bots share roughly 41% of links to political sites shared primarily by conservatives and 44% of links to political sites shared primarily by liberals – a difference that is not statistically significant. By contrast, suspected bots share 57% to 66% of links from news and current events sites shared primarily by an ideologically mixed or centrist human audience.
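Pew’s underlying data and bot-detection pipeline aren’t public, but the concentration figure in the third bullet is simple arithmetic once each tweeted link is attributed to an account. A toy sketch, with invented counts and a top-1 rather than top-500 cutoff:

```python
# Toy sketch of the concentration statistic above: what share of ALL tweeted
# links comes from the N most-active accounts of a given type? Pew's actual
# data and bot-detection method aren't public; these rows are invented.
from collections import Counter

# One row per tweeted link: (account_id, is_suspected_bot).
tweeted_links = [
    ("bot_A", True), ("bot_A", True), ("bot_A", True), ("bot_B", True),
    ("human_1", False), ("human_2", False), ("human_2", False),
]

def top_n_share(rows, n, bots):
    """Share of all links posted by the n most-active accounts of one type."""
    counts = Counter(acct for acct, is_bot in rows if is_bot == bots)
    return sum(c for _, c in counts.most_common(n)) / len(rows)

print(f"Top-1 bot account's share:   {top_n_share(tweeted_links, 1, True):.0%}")
print(f"Top-1 human account's share: {top_n_share(tweeted_links, 1, False):.0%}")
```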

H/T to Gary Price’s InfoDocket post. — Joe

Zuckerberg’s prepared statement for Congress [text]

Ahead of two days of congressional testimony, Facebook CEO Mark Zuckerberg’s prepared statement can be read here. — Joe

Computer science and law: a new paradigm

Here’s the abstract for James Miller’s The Emergence of ‘Computer Science and Law’: The New Legal Paradigm for Law and Policy Practice in the Computational Age of Algorithmic Reasoning and Big Data Practice:

Some thirty years ago “law and economics” emerged as a new paradigm of legal reasoning by providing new legal resolutions to a set of problems that were particularly suited to the application of economics in the legal process. Today algorithms and data, software-based systems, and technology solutions like blockchain both stress existing legal practice and offer new avenues for solving legal problems. This paper proposes that the rise of “computer science and law” as a new legal paradigm is emerging in ways that leverage and respond to the application and ability of computer science knowledge and reasoning to answer novel and venerable legal problems.

The paper’s analytic approach maps the boundaries of law and computer science in this new paradigm against both the stressors that necessitate new approaches and the value of technology solutions already revolutionizing other sectors. The paper answers questions such as: what is persuasive or explanatory about law, what social function does it serve, and how is legal reasoning distinct from philosophy, sociology, economics, and computer science? Following this analytic approach, the paper presents the current evolution of legal pedagogy, practice, and expectations, and contributes to a deeper comparative understanding of how law can serve important social goals.

The paper begins with a definitional section. Descriptions from jurisprudence and legal theory provide a baseline of how philosophy and social sciences differentiate “law” from other disciplines, based on the nature of the reasoning, justifications, outcomes and knowledge that law entails. Leveraging what is distinctive about legal reasoning and knowledge, a historical review of computer and data science and artificial intelligence provides a view of how reasoning and knowledge have come to be modeled using software to accomplish tasks relevant to law.

The paper explores how legal practice is evolving in response to the challenges and opportunities posed by computational systems. The paper reviews the “legal hacker” movement that began as a software programming and policy advocacy effort, along with other “computational law” examples of innovations in law and policy practice that focus on technology policy issues. A survey of new legal pedagogy focused on teaching data science, software programming, and other technical skills reveals a roadmap of the computer science skillsets and techniques that legal educators currently emphasize. A review and comparison of the “legaltech” information technology response with “fintech” and other sector-focused IT innovations reveals the relative trends and strengths observed in the space.

Finally, two analytic approaches are proposed for evaluating the strength of new technology tools and law and policy practice approaches. A set of key features identifies metrics for evaluating automated legal reasoning systems’ ability to predict, explain, and defend legal decisions. A roadmap of technical skills and areas of focus for new law and policy practitioners provides a useful rubric for the development of new practice groups, outsourcing and IT strategies, and legal training focused on “computer science and law” practice.

Whether the challenge is legal practice in administrative law with comment dockets numbering in the tens of millions, protecting fundamental legal principles when complex software systems control the fate of defendants, or improving and expanding access to law and policy services, the paper describes the expanding role of computer science and law and a path forward for legal practitioners in the computational age.

Interesting. — Joe

Blockchain and the Law: The Rule of Code

From the blurb for Primavera De Filippi and Aaron Wright’s Blockchain and the Law: The Rule of Code (Harvard UP, Apr. 9, 2018):

A general-purpose tool for creating secure, decentralized, peer-to-peer applications, blockchain technology has been compared to the Internet itself in both form and impact. Some have said this tool may change society as we know it. Blockchains are being used to create autonomous computer programs known as “smart contracts,” to expedite payments, to create financial instruments, to organize the exchange of data and information, and to facilitate interactions between humans and machines. The technology could affect governance itself, by supporting new organizational structures that promote more democratic and participatory decision making.

Primavera De Filippi and Aaron Wright acknowledge this potential and urge the law to catch up. That is because disintermediation―a blockchain’s greatest asset―subverts critical regulation. By cutting out middlemen, such as large online operators and multinational corporations, blockchains run the risk of undermining the capacity of governmental authorities to supervise activities in banking, commerce, law, and other vital areas. De Filippi and Wright welcome the new possibilities inherent in blockchains. But as Blockchain and the Law makes clear, the technology cannot be harnessed productively without new rules and new approaches to legal thinking.

Recommended. — Joe

Pasquale on the automated public sphere

The Automated Public Sphere (Nov. 8, 2017) by Frank Pasquale “first describes the documented, negative effects of online propagandists’ interventions (and platforms’ neglect) in both electoral politics and the broader public sphere (Part I). It then proposes several legal and educational tactics to mitigate platforms’ power, or to encourage or require them to exercise it responsibly (Part II). The penultimate section (Part III) offers a concession to those suspicious of governmental intervention in the public sphere: some regimes are already too authoritarian or unreliable to be trusted with extensive powers of regulation over media (whether old or new media), or intermediaries. However, the inadvisability of extensive media regulation in disordered societies only makes this agenda more urgent in well-ordered societies, lest predictable pathologies of the automated public sphere degrade their processes of democratic will formation.”

— Joe

Online political microtargeting

From the abstract of Frederik Zuiderveen Borgesius, et al., Online Political Microtargeting: Promises and Threats for Democracy, 14 Utrecht Law Review 82 (2018):

Online political microtargeting involves monitoring people’s online behaviour, and using the collected data, sometimes enriched with other data, to show people targeted political advertisements. Online political microtargeting is widely used in the US; Europe may not be far behind. This paper maps microtargeting’s promises and threats to democracy. For example, microtargeting promises to optimise the match between the electorate’s concerns and political campaigns, and to boost campaign engagement and political participation. But online microtargeting could also threaten democracy. For instance, a political party could, misleadingly, present itself as a different one-issue party to different individuals. And data collection for microtargeting raises privacy concerns. We sketch possibilities for policymakers if they seek to regulate online political microtargeting. We discuss which measures would be possible, while complying with the right to freedom of expression under the European Convention on Human Rights.

— Joe

An evolving interpretative framework for corpus linguistics in legal interpretation

Here’s the abstract for Stephen Mouritsen’s Corpus Linguistics in Legal Interpretation: An Evolving Interpretative Framework, Journal of Language and Law 6 (2017): 67-89:

When called upon to interpret the undefined words in a legal text, U.S. judges will often invoke a rule (or canon) of interpretation called the “plain meaning rule,” which holds that if the language of the text is clear and unambiguous, courts cannot consider any extrinsic evidence to determine what the text means. But U.S. courts have no uniform definition of what “plain meaning” actually means and no systematic method for discovering and resolving ambiguities in legal texts. Faced with these challenges, some U.S. judges and academics have recently begun to consider the use of corpus linguistics to resolve uncertainties in the interpretation of legal texts. A corpus-based approach to legal interpretation promises to increase the objectivity and predictability of decisions about the meanings of legal texts. However, such an approach also presents a number of theoretical problems that must be addressed before corpus methods can be fully incorporated into a theory of legal interpretation. This article documents this recent turn to corpus linguistics in legal interpretation and outlines some of the challenges facing the corpus-based approach to legal interpretation.
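To make the method concrete: a corpus-based inquiry into “plain meaning” typically begins with concordance lines and frequent collocates for the disputed term. A minimal sketch using NLTK, with its Brown corpus standing in for a purpose-built legal corpus:

```python
# Minimal sketch of a corpus-linguistics query of the kind the article
# describes: concordance lines and frequent neighbors for a disputed term.
# NLTK's Brown corpus stands in for a purpose-built legal corpus.
from collections import Counter

import nltk
from nltk.corpus import brown

nltk.download("brown", quiet=True)

# Concordance lines: the disputed term shown in its surrounding context.
nltk.Text(brown.words()).concordance("vehicle", width=79, lines=5)

# Frequent neighbors (within three words) hint at the term's ordinary senses.
words = [w.lower() for w in brown.words()]
neighbors = Counter()
for i, w in enumerate(words):
    if w == "vehicle":
        neighbors.update(words[max(0, i - 3):i] + words[i + 1:i + 4])
print(neighbors.most_common(10))
```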

— Joe

Was Cambridge Analytica really able to effectively target campaign messages to citizens based on their personality characteristics?

From the introduction to Matthew Hindman’s How Cambridge Analytica’s Facebook targeting model really worked:

The researcher whose work is at the center of the uproar over Cambridge Analytica’s Facebook data analysis and political advertising has revealed that his method worked much like the one Netflix uses to recommend movies.

In an email to me, Cambridge University scholar Aleksandr Kogan explained how his statistical model processed Facebook data for Cambridge Analytica. He claims it works about as well as more traditional voter-targeting methods based on demographics like race, age, and gender.

If confirmed, Kogan’s account would mean the digital modeling Cambridge Analytica used was hardly the virtual crystal ball a few have claimed. Yet the numbers Kogan provides also show what is—and isn’t—actually possible by combining personal data with machine learning for political ends.
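Kogan’s exact model isn’t public, but “like the one Netflix uses” points at standard collaborative filtering: factor a sparse user-by-item matrix into a few latent dimensions, then use those dimensions as predictors. A hypothetical sketch with NumPy, on invented data:

```python
# Hypothetical sketch of the Netflix-style approach Hindman describes:
# factor a sparse user-by-page-like matrix into a few latent dimensions,
# then predict a personality trait from those dimensions. Kogan's real
# model, data, and features are not public; everything here is invented.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 users x 20 Facebook pages (1 = liked), plus a trait score
# (e.g., an "openness" rating from a personality quiz).
likes = (rng.random((100, 20)) > 0.7).astype(float)
trait = rng.normal(size=100)

# Reduce the likes matrix to k latent dimensions via truncated SVD.
k = 5
U, S, Vt = np.linalg.svd(likes, full_matrices=False)
user_factors = U[:, :k] * S[:k]  # each user as a point in k-dim "taste" space

# Fit a linear model predicting the trait from the latent factors.
coef, *_ = np.linalg.lstsq(user_factors, trait, rcond=None)
predictions = user_factors @ coef
print(f"In-sample correlation: {np.corrcoef(predictions, trait)[0, 1]:.2f}")
```

On random data the correlation is near zero; Kogan’s claim, as Hindman reports it, is that on real data the approach performs about as well as conventional demographic targeting, not that it is a crystal ball.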

Interesting. — Joe

Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism

Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, Feb. 20, 2018) by Safiya Umoja Noble “challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem; Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, specifically women of color.

“Through an analysis of textual and media searches as well as extensive research on paid online advertising, Noble exposes a culture of racism and sexism in the way discoverability is created online. As search engines and their related companies grow in importance—operating as a source for email, a major vehicle for primary and secondary school learning, and beyond—understanding and reversing these disquieting trends and discriminatory practices is of utmost importance.”

— Joe

Law and language and the European Union Case Law Corpus

From the abstract for The European Union Case Law Corpus (EUCLCORP): A Multilingual Parallel and Comparative Corpus of EU Court Judgments by Aleksandar Trklja and Karen McAuliffe:

The empirical approach to the study of legal language has recently undergone profound development. Corpus linguistics study has, in particular, revealed previously unnoticed features of legal language at both the lexico-grammatical and discourse levels. Existing resources such as legal databases, however, do not contain functionalities that enable the application of corpus linguistics methodology. To address this gap in the context of EU law, we developed a multilingual corpus of judgments that allows scholars and practitioners to investigate in a systematic way a range of issues, such as the history of the meaning(s) of a legal term, the migration of terms between legal systems, the use of binominals, or the distribution of formulaic expressions in EU legal sub-languages. As well as being the first multilingual corpus of judgments, it is also the largest legal multilingual corpus ever created. Since it contains case law from two sources (the Court of Justice of the European Union and EU national courts), it is also the largest comparable corpus of legal texts. The aim of the corpus is to contribute to the further development of the emerging field of language and law.
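EUCLCORP itself is queried through its own interface, but the core idea of a parallel corpus, the same judgment aligned sentence-by-sentence across language versions, can be sketched with a toy structure. The sentences below are invented, not drawn from EUCLCORP:

```python
# Toy sketch of a parallel-corpus lookup: each record holds one sentence of
# a judgment in several aligned language versions, so a hit in one language
# retrieves its counterparts. The data below are invented examples.
parallel_corpus = [
    {
        "en": "The term 'worker' has an autonomous meaning in EU law.",
        "de": "Der Begriff 'Arbeitnehmer' hat im Unionsrecht eine autonome Bedeutung.",
        "fr": "La notion de 'travailleur' revêt une portée autonome en droit de l'Union.",
    },
]

def find_aligned(term, lang="en"):
    """Return every aligned sentence set whose `lang` version contains `term`."""
    return [s for s in parallel_corpus if term.lower() in s[lang].lower()]

for hit in find_aligned("worker"):
    for lang, sentence in hit.items():
        print(f"[{lang}] {sentence}")
```

Queries of this shape are what let researchers trace how a term’s meaning migrates between legal systems and language versions.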

— Joe

All the data Facebook and Google collect about you

To see what data Google collects about you, go here. To see what Facebook collects about you, go here. For an overview of Google’s and Facebook’s personal data collection, see this Guardian article. Recommended.

H/T to bespacific. — Joe

Should we be concerned about data monopolies?

Should We Be Concerned About Data-Opolies?, Georgetown Law Technology Review (Forthcoming), by Maurice E. Stucke “explores some of the potential harms from data-opolies. Data-opolies, in contrast to the earlier monopolies, are unlikely to exercise their power by charging higher prices to consumers. But this does not mean they are harmless. Data-opolies can raise other significant concerns, including less privacy, degraded quality, a transfer of wealth from consumers to data-opolies, less innovation and dynamic disruption in markets in which they dominate, and political and social concerns. Data-opolies can also be more durable than some earlier monopolies. Moreover, data-opolies at times can more easily avoid antitrust scrutiny when they engage in anticompetitive tactics to attain or maintain their dominance.” — Joe

FTC investigating Facebook for possible violation of 2011 consent decree

The 2011 consent decree was the result of a two-year-long FTC investigation into Facebook’s privacy practices. The current investigation probes Cambridge Analytica’s possible misuse of the personal information of as many as 50 million Facebook users.

— Joe

Track major federal litigation using Big Cases Bot

Big Cases Bot follows major cases in SCOTUS and federal district courts, including Mueller prosecutions, challenges to Trump’s executive orders, and many others. It updates every four minutes.
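The post doesn’t describe the bot’s internals, but the basic pattern is a polling loop: fetch a docket feed on a fixed interval and post anything not seen before. A hypothetical sketch; the feed URL is a placeholder, not the bot’s real source:

```python
# Hypothetical sketch of a docket-watching bot's core loop: poll a feed
# every four minutes and announce entries it hasn't seen before. The real
# Big Cases Bot's sources and posting code aren't public.
import time

import feedparser  # third-party: pip install feedparser

FEED_URL = "https://example.com/big-cases.rss"  # placeholder feed
POLL_SECONDS = 4 * 60
seen = set()

while True:
    for entry in feedparser.parse(FEED_URL).entries:
        uid = entry.get("id") or entry.get("link")
        if uid not in seen:
            seen.add(uid)
            print(f"New filing: {entry.get('title')} ({entry.get('link')})")  # tweet here
    time.sleep(POLL_SECONDS)
```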

h/t Bob Ambrogi’s LawSites post. — Joe

An introduction to computer-assisted legal linguistics

Computer-Assisted Legal Linguistics: Corpus Analysis as a New Tool for Legal Studies, ___ Law & Social Inquiry ___ (2017), by Friedemann Vogel, Hanjo Hamann and Isabelle Gauer introduces computer-assisted legal linguistics, an area of study ranging from computer-supported qualitative analysis of legal texts to legal semantics and legal sociosemiotics based on big data. From the article’s abstract:

Law exists solely in and through language. Nonetheless, systematic empirical analysis of legal language has been rare. Yet, the tides are turning: After judges at various courts (including the US Supreme Court) have championed a method of analysis called corpus linguistics, the Michigan Supreme Court held in June 2016 that this method “is consistent with how courts have understood statutory interpretation.” The court illustrated how corpus analysis can benefit legal casework, thus sanctifying twenty years of previous research into the matter. The present article synthesizes this research and introduces computer-assisted legal linguistics (CAL2) as a novel approach to legal studies. Computer-supported analysis of carefully preprocessed collections of legal texts lets lawyers analyze legal semantics, language, and sociosemiotics in different working contexts (judiciary, legislature, legal academia). The article introduces the interdisciplinary CAL2 research group (www.cal2.eu), its Corpus of German Law, and other related projects that make law more transparent.

— Joe

Mark Zuckerberg has broken his silence on the Cambridge Analytica data scandal [text]

Posted by Mark Zuckerberg on Facebook

I want to share an update on the Cambridge Analytica situation — including the steps we’ve already taken and our next steps to address this important issue.

We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you. I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there’s more to do, and we need to step up and do it.

Here’s a timeline of the events:

In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends’ birthdays, your maps should show where your friends live, and your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.

In 2013, a Cambridge University researcher named Aleksandr Kogan created a personality quiz app. It was installed by around 300,000 people who shared their data as well as some of their friends’ data. Given the way our platform worked at the time this meant Kogan was able to access tens of millions of their friends’ data.

In 2014, to prevent abusive apps, we announced that we were changing the entire platform to dramatically limit the data apps could access. Most importantly, apps like Kogan’s could no longer ask for data about a person’s friends unless their friends had also authorized the app. We also required developers to get approval from us before they could request any sensitive data from people. These actions would prevent any app like Kogan’s from being able to access so much data today.

In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people’s consent, so we immediately banned Kogan’s app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.

Last week, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services. Cambridge Analytica claims they have already deleted the data and has agreed to a forensic audit by a firm we hired to confirm this. We’re also working with regulators as they investigate what happened.

This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.

In this case, we already took the most important steps a few years ago in 2014 to prevent bad actors from accessing people’s information in this way. But there’s more we need to do and I’ll outline those steps here:

First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data Kogan misused here as well.

Second, we will restrict developers’ data access even further to prevent other kinds of abuse. For example, we will remove developers’ access to your data if you haven’t used their app in 3 months. We will reduce the data you give an app when you sign in — to only your name, profile photo, and email address. We’ll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we’ll have more changes to share in the next few days.

Third, we want to make sure you understand which apps you’ve allowed to access your data. In the next month, we will show everyone a tool at the top of your News Feed with the apps you’ve used and an easy way to revoke those apps’ permissions to your data. We already have a tool to do this in your privacy settings, and now we will put this tool at the top of your News Feed to make sure everyone sees it.

Beyond the steps we had already taken in 2014, I believe these are the next steps we must take to continue to secure our platform.

I started Facebook, and at the end of the day I’m responsible for what happens on our platform. I’m serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn’t change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward.

I want to thank all of you who continue to believe in our mission and work to build this community together. I know it takes longer to fix all these issues than we’d like, but I promise you we’ll work through this and build a better service over the long term.

— Joe

Lawfare’s legal primer on the Facebook-Cambridge Analytica controversy

On Lawfare, Andrew Keane Woods asks what laws might apply in the Facebook-Cambridge Analytica kerfuffle. He briefly reviews the Computer Fraud and Abuse Act, state-level computer crime laws, contract and tort law claims, Federal Trade Commission rules, and US securities laws in The Cambridge Analytica-Facebook Debacle: A Legal Primer. — Joe