Category Archives: Web Communications

False information distributed through web and social media platforms: A survey

Here’s the abstract for Srijan Kumar and Neil Shah’s False Information on Web and Social Media: A Survey (Apr. 23, 2018):

False information can be created and spread easily through the web and social media platforms, resulting in widespread real-world impact. Characterizing how false information proliferates on social platforms and why it succeeds in deceiving readers is critical to developing efficient detection algorithms and tools for early detection. A recent surge of research in this area has aimed to address the key issues using methods based on feature engineering, graph mining, and information modeling. The majority of the research has primarily focused on two broad categories of false information: opinion-based (e.g., fake reviews) and fact-based (e.g., false news and hoaxes). Therefore, in this work, we present a comprehensive survey spanning diverse aspects of false information, namely (i) the actors involved in spreading false information, (ii) rationale behind successfully deceiving readers, (iii) quantifying the impact of false information, (iv) measuring its characteristics across different dimensions, and finally, (v) algorithms developed to detect false information. In doing so, we create a unified framework to describe these recent methods and highlight a number of important directions for future research.

H/T to Gary Price’s InfoDocket post. — Joe

Facebook publishes community standards for content publication by users

For the first time, Facebook is disclosing the Community Standards rules and guidelines it uses to decide what users can post on the social network. Facebook is also introducing an appeals process for users who believe their posts were removed in error. — Joe

Jean O’Grady goes old school

And by that I mean Jean is launching what sounds like an annual “best of” selection of Dewey B Strategic blog posts in a print-based “blog-o-zine.” The 2017 compilation is expected to be published on or about April 30th. Jean writes:

Why should someone pay for it? The 2017 Dewey B Strategic Blog-o-zine is intended to be an easy-access reference handbook on the major legal research/technology trends, product releases and enhancements of 2017. The book includes 34 product reviews for cutting-edge legal products incorporating AI, analytics, workflow tools and plain old expert analysis. What have the big players Thomson Reuters, LexisNexis, Bloomberg Law and Wolters Kluwer been up to? Did you catch the release of new features on Fastcase, CARA, Ravel, Lex Machina? Have you heard about the innovative new tools from Judicata, Gavelytics and Voxgov? The blog-o-zine can be seen as a good investment in a tool that will make it easier for you to focus your time and your budgets on the best products for your firm’s research needs.

The book will retail for $99 but is available at the pre-publication price of $79 (plus shipping) through April 30th. If interested, follow this link to Jean’s blog post, which includes a PayPal link for purchasing the compilation.

Interesting. — Joe

Oregon is second state to protect net neutrality by statute [text]

Following Washington State’s landmark move last month, Oregon’s governor signed House Bill 4155 into law in an attempt to protect net neutrality. Legal challenges are expected. — Joe

Bots in the Twittersphere

Pew Research Center estimates that two-thirds of tweeted links to popular websites are posted by automated accounts. Among the key findings of this research (a rough sketch of the bot-share arithmetic follows the list):

  • Of all tweeted links to popular websites, 66% are shared by accounts with characteristics common among automated “bots,” rather than human users.
  • Among popular news and current event websites, 66% of tweeted links are made by suspected bots – identical to the overall average. The share of bot-created tweeted links is even higher among certain kinds of news sites. For example, an estimated 89% of tweeted links to popular aggregation sites that compile stories from around the web are posted by bots.
  • A relatively small number of highly active bots are responsible for a significant share of links to prominent news and media sites. This analysis finds that the 500 most-active suspected bot accounts are responsible for 22% of the tweeted links to popular news and current events sites over the period in which this study was conducted. By comparison, the 500 most-active human users are responsible for a much smaller share (an estimated 6%) of tweeted links to these outlets.
  • The study does not find evidence that automated accounts currently have a liberal or conservative “political bias” in their overall link-sharing behavior. This emerges from an analysis of the subset of news sites that contain politically oriented material. Suspected bots share roughly 41% of links to political sites shared primarily by conservatives and 44% of links to political sites shared primarily by liberals – a difference that is not statistically significant. By contrast, suspected bots share 57% to 66% of links from news and current events sites shared primarily by an ideologically mixed or centrist human audience.
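
These shares are simple proportions: classify each posting account as a suspected bot or not, then compute the fraction of tweeted links that came from the bot group. Pew’s actual methodology applied a bot-detection classifier to the accounts behind a large sample of links; the snippet below is only a minimal sketch of the arithmetic, with invented accounts, scores, and threshold.

```python
"""Rough sketch: estimate the share of tweeted links posted by suspected bots.

Synthetic data and invented names throughout; this is not Pew's pipeline.
"""

# Hypothetical bot-likelihood score per posting account (0 = human-like, 1 = bot-like)
bot_scores = {"acct_a": 0.91, "acct_b": 0.12, "acct_c": 0.77, "acct_d": 0.05}

# Each tweeted link is recorded with the account that posted it
tweeted_links = [
    ("example-news.com/story1", "acct_a"),
    ("example-news.com/story2", "acct_a"),
    ("example-aggregator.com/roundup", "acct_c"),
    ("example-news.com/story3", "acct_b"),
    ("example-sports.com/game", "acct_d"),
]

BOT_THRESHOLD = 0.5  # assumption: accounts scoring above this are "suspected bots"

bot_links = sum(1 for _url, acct in tweeted_links if bot_scores[acct] >= BOT_THRESHOLD)
share = bot_links / len(tweeted_links)
print(f"{bot_links} of {len(tweeted_links)} links ({share:.0%}) posted by suspected bots")
```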

H/T to Gary Price’s InfoDocket post. — Joe

Zuckerberg’s prepared statement for Congress [text]

Ahead of two days of congressional testimony by Facebook CEO Mark Zuckerberg, his prepared statement can be read here. — Joe

LexBlog seeks to expand number of blogs as it prepares to launch news and commentary network

Back in January LexBlog appointed Bob Ambrogi publisher and editor-in-chief. Now Bob announces that LexBlog is opening participation in the network to any legal blogger, without cost and without regard to whether the blog is hosted on the LexBlog platform. LexBlog, by the way, is preparing to launch a global news and commentary network based on content from its legal blogs. — Joe

3 Geeks get a makeover

Say “hello” again to the three intrepid geeks who blog at the freshly redesigned 3 Geeks and a Law Blog, now hosted on the LexBlog platform. — Joe

Pasquale on the automated public sphere

The Automated Public Sphere (Nov. 8, 2017) by Frank Pasquale “first describes the documented, negative effects of online propagandists’ interventions (and platforms’ neglect) in both electoral politics and the broader public sphere (Part I). It then proposes several legal and educational tactics to mitigate platforms’ power, or to encourage or require them to exercise it responsibly (Part II). The penultimate section (Part III) offers a concession to those suspicious of governmental intervention in the public sphere: some regimes are already too authoritarian or unreliable to be trusted with extensive powers of regulation over media (whether old or new media), or intermediaries. However, the inadvisability of extensive media regulation in disordered societies only makes this agenda more urgent in well-ordered societies, lest predictable pathologies of the automated public sphere degrade their processes of democratic will formation.”

— Joe

Online political microtargeting

From the abstract of Frederik Zuiderveen Borgesius, et al., Online Political Microtargeting: Promises and Threats for Democracy, 14 Utrecht Law Review 82 (2018):

Online political microtargeting involves monitoring people’s online behaviour, and using the collected data, sometimes enriched with other data, to show people targeted political advertisements. Online political microtargeting is widely used in the US; Europe may not be far behind. This paper maps microtargeting’s promises and threats to democracy. For example, microtargeting promises to optimise the match between the electorate’s concerns and political campaigns, and to boost campaign engagement and political participation. But online microtargeting could also threaten democracy. For instance, a political party could, misleadingly, present itself as a different one-issue party to different individuals. And data collection for microtargeting raises privacy concerns. We sketch possibilities for policymakers if they seek to regulate online political microtargeting. We discuss which measures would be possible, while complying with the right to freedom of expression under the European Convention on Human Rights.

— Joe

GDPR IQ launched, automates compliance with the EU’s General Data Protection Regulation

Hat tip to Bob Ambrogi’s LawSites post about the launch of GDPR IQ by Parsons Behle & Latimer. This new and very timely tool generates the complete set of required policies, procedures and proof-of-compliance documents under the EU’s General Data Protection Regulation (GDPR). The GDPR takes effect May 25, 2018, and research indicates that 78% of affected US businesses do not yet have a GDPR plan in place, so this new tool may be very helpful in achieving compliance by that date. — Joe

All the data Facebook and Google collect about you

If you want to see what data Google collects about you, go here. To see what Facebook collects, go here. For an overview of Google and Facebook personal data collection, see this Guardian article. Recommended.

H/T to bespacific. — Joe

FTC investigating Facebook for possible violation of 2011 consent decree

The 2011 consent decree was the result of a two-year FTC investigation into Facebook’s privacy practices. The current investigation probes Cambridge Analytica’s possible misuse of the personal information of as many as 50 million Facebook users.

— Joe

Mark Zuckerberg has broken his silence on the Cambridge Analytica data scandal [text]

Posted by Mark Zuckerberg on Facebook

I want to share an update on the Cambridge Analytica situation — including the steps we’ve already taken and our next steps to address this important issue.

We have a responsibility to protect your data, and if we can’t then we don’t deserve to serve you. I’ve been working to understand exactly what happened and how to make sure this doesn’t happen again. The good news is that the most important actions to prevent this from happening again today we have already taken years ago. But we also made mistakes, there’s more to do, and we need to step up and do it.

Here’s a timeline of the events:

In 2007, we launched the Facebook Platform with the vision that more apps should be social. Your calendar should be able to show your friends’ birthdays, your maps should show where your friends live, and your address book should show their pictures. To do this, we enabled people to log into apps and share who their friends were and some information about them.

In 2013, a Cambridge University researcher named Aleksandr Kogan created a personality quiz app. It was installed by around 300,000 people who shared their data as well as some of their friends’ data. Given the way our platform worked at the time this meant Kogan was able to access tens of millions of their friends’ data.

In 2014, to prevent abusive apps, we announced that we were changing the entire platform to dramatically limit the data apps could access. Most importantly, apps like Kogan’s could no longer ask for data about a person’s friends unless their friends had also authorized the app. We also required developers to get approval from us before they could request any sensitive data from people. These actions would prevent any app like Kogan’s from being able to access so much data today.

In 2015, we learned from journalists at The Guardian that Kogan had shared data from his app with Cambridge Analytica. It is against our policies for developers to share data without people’s consent, so we immediately banned Kogan’s app from our platform, and demanded that Kogan and Cambridge Analytica formally certify that they had deleted all improperly acquired data. They provided these certifications.

Last week, we learned from The Guardian, The New York Times and Channel 4 that Cambridge Analytica may not have deleted the data as they had certified. We immediately banned them from using any of our services. Cambridge Analytica claims they have already deleted the data and has agreed to a forensic audit by a firm we hired to confirm this. We’re also working with regulators as they investigate what happened.

This was a breach of trust between Kogan, Cambridge Analytica and Facebook. But it was also a breach of trust between Facebook and the people who share their data with us and expect us to protect it. We need to fix that.

In this case, we already took the most important steps a few years ago in 2014 to prevent bad actors from accessing people’s information in this way. But there’s more we need to do and I’ll outline those steps here:

First, we will investigate all apps that had access to large amounts of information before we changed our platform to dramatically reduce data access in 2014, and we will conduct a full audit of any app with suspicious activity. We will ban any developer from our platform that does not agree to a thorough audit. And if we find developers that misused personally identifiable information, we will ban them and tell everyone affected by those apps. That includes people whose data Kogan misused here as well.

Second, we will restrict developers’ data access even further to prevent other kinds of abuse. For example, we will remove developers’ access to your data if you haven’t used their app in 3 months. We will reduce the data you give an app when you sign in — to only your name, profile photo, and email address. We’ll require developers to not only get approval but also sign a contract in order to ask anyone for access to their posts or other private data. And we’ll have more changes to share in the next few days.

Third, we want to make sure you understand which apps you’ve allowed to access your data. In the next month, we will show everyone a tool at the top of your News Feed with the apps you’ve used and an easy way to revoke those apps’ permissions to your data. We already have a tool to do this in your privacy settings, and now we will put this tool at the top of your News Feed to make sure everyone sees it.

Beyond the steps we had already taken in 2014, I believe these are the next steps we must take to continue to secure our platform.

I started Facebook, and at the end of the day I’m responsible for what happens on our platform. I’m serious about doing what it takes to protect our community. While this specific issue involving Cambridge Analytica should no longer happen with new apps today, that doesn’t change what happened in the past. We will learn from this experience to secure our platform further and make our community safer for everyone going forward.

I want to thank all of you who continue to believe in our mission and work to build this community together. I know it takes longer to fix all these issues than we’d like, but I promise you we’ll work through this and build a better service over the long term.

— Joe

Lawfare’s legal primer on the Facebook-Cambridge Analytica controversy

On Lawfare, Andrew Keane Woods asks what laws might apply in the Facebook-Cambridge Analytica kerfuffle. He briefly reviews the Computer Fraud and Abuse Act, state-level computer crime laws, contract and tort law claims, Federal Trade Commission rules, and US securities laws in The Cambridge Analytica-Facebook Debacle: A Legal Primer. — Joe

Weaponizing your digital footprint because computer-based personality judgments are more accurate than those made by humans

Computer-based personality judgments are more accurate than those made by humans, PNAS 112(4):1036-1040 (Jan. 27, 2015), by Wu Youyou, Michal Kosinski and David Stillwell, was one of the building blocks for the development of products and services sold to clients by firms like Cambridge Analytica. The study compares the accuracy of personality judgment, a ubiquitous and important social-cognitive activity, between computer models and humans. Using several criteria, the authors show that computers’ judgments of people’s personalities based on their digital footprints are more accurate and valid than judgments made by their close others or acquaintances (friends, family, spouse, colleagues, etc.). The study’s findings highlight that people’s personalities can be predicted automatically and without involving human social-cognitive skills. — Joe
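
Methodologically, the study predicts self-reported Big Five scores from participants’ Facebook Likes using cross-validated regularized regression, then treats the correlation between predicted and self-reported scores as the computer’s judgment accuracy. Below is a minimal sketch of that idea on synthetic data; the matrix sizes, variable names, and numbers are invented, not the study’s.

```python
"""Minimal sketch: 'computer judgment' of a personality trait from digital footprints.

Synthetic user-by-Like matrix; the real study used large Facebook Like data
and compared model accuracy against ratings by friends and family.
"""
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_users, n_likes = 500, 200

# X[i, j] = 1 if user i liked page j (sparse, binary, synthetic)
X = rng.binomial(1, 0.05, size=(n_users, n_likes)).astype(float)

# Synthetic "self-reported" trait driven by a handful of Likes plus noise
true_weights = np.zeros(n_likes)
true_weights[:10] = rng.normal(0, 1, 10)
y = X @ true_weights + rng.normal(0, 1, n_users)

# Cross-validated predictions; accuracy = correlation of prediction with self-report
model = LassoCV(cv=10, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=10)
r, _ = pearsonr(y, y_pred)
print(f"computer-judgment accuracy (Pearson r): {r:.2f}")
```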

Search engines, social media and the editorial analogy

A snip from the abstract of Heather M. Whitney’s Search Engines, Social Media, and the Editorial Analogy (Mar. 1, 2018):

Some prominent commentators claim that Facebook is analogous to a newspaper and that its handling of a feature like Trending Topics is analogous to a newspaper’s editorial choices. As a result, these commentators find congressional scrutiny of such matters to be constitutionally problematic. Moreover, the editorial analogy has been a remarkably effective shield for these tech companies in litigation. In a series of lower court cases, Google and others have argued that their decisions concerning their platforms — for example, what sites to list (or delist) and in what order, who can buy ads and where to place them, and what users to block or permanently ban — are analogous to the editorial decisions of publishers. And like editorial decisions, they argue, these decisions are protected “speech” under the First Amendment. While mostly wielded against small-fry, often pro se plaintiffs, courts have tended to accept this analogy wholesale.

Large consequences hinge on whether the various choices companies like Facebook and Google make are indeed analogous to editorial “speech.” The answer will partly determine whether and how the state can respond to current challenges ranging from the proliferation of fake news to high levels of market concentration to the lack of ad transparency. Furthermore, algorithmic discrimination and the discrimination facilitated by these platforms’ structures affect people’s lives today and no doubt will continue to do so. But if these algorithms and outputs are analogous to the decisions the New York Times makes on what to publish, then attempts to extend antidiscrimination laws to deal with such discrimination will face an onslaught of potentially insuperable constitutional challenges. In short, these companies’ deployment of the editorial analogy in the First Amendment context poses a major hurdle to government intervention.

Whether, or to what extent, the editorial analogy should work as a shield against looming legislation and litigation for companies like Facebook and Google is something this historical moment demands we carefully consider. My primary aim in this paper is to do just that. I will engage critically with, and ultimately raise questions about, the near-automatic application of the editorial analogy. The core takeaways are these: (1) we should be cognizant of the inherent limitations of analogical reasoning generally and of the editorial analogy specifically; (2) whether these companies’ various outputs should receive coverage as First Amendment “speech” is far from clear, both descriptively and normatively; (3) the proposition that regulations compelling these companies to add content (disclaimers, links to competitors, and so on) compel the companies to speak is also far from clear; and, finally and most crucially, (4) given the limits of analogical reasoning, our future debates about First Amendment coverage should focus less on analogy and more on what actually matters — the normative commitments that undergird free speech theory and how our choices either help or hinder their manifestations.

Interesting. — Joe

ABA issues formal opinion on law blogging

ABA Formal Opinion 480 concludes that lawyers who blog or engage in other public commentary may not reveal information relating to a representation, including information contained in a public record, unless authorized by a provision of the Model Rules. — Joe

Washington is first state to establish net neutrality rules by legislation

While several states, including Montana, New York and New Jersey, have taken steps to protect net neutrality by executive order, Washington is the first state to enact net neutrality rules by legislation. HB 2282 takes effect 90 days from now (by June 6th) or whenever the FCC’s Restoring Internet Freedom order takes effect, whichever comes first. — Joe

Who falls for fake news?

From the abstract for Gordon Pennycook and David G. Rand’s Who Falls for Fake News? The Roles of Analytic Thinking, Motivated Reasoning, Political Ideology, and Bullshit Receptivity:

Fake news represents a particularly egregious and direct avenue by which inaccurate beliefs have been propagated via social media. Here we investigate the cognitive psychological profile of individuals who fall prey to fake news. We find a consistent positive correlation between the propensity to think analytically – as measured by the Cognitive Reflection Test (CRT) – and the ability to differentiate fake news from real news (“media truth discernment”). This was true regardless of whether the article’s source was indicated (which, surprisingly, also had no main effect on accuracy judgments). Contrary to the motivated reasoning account, CRT was just as positively correlated with media truth discernment, if not more so, for headlines that aligned with individuals’ political ideology relative to those that were politically discordant. The link between analytic thinking and media truth discernment was driven both by a negative correlation between CRT and perceptions of fake news accuracy (particularly among Hillary Clinton supporters), and a positive correlation between CRT and perceptions of real news accuracy (particularly among Donald Trump supporters). This suggests that factors that undermine the legitimacy of traditional news media may exacerbate the problem of inaccurate political beliefs among Trump supporters, who engaged in less analytic thinking and were overall less able to discern fake from real news (regardless of the news’ political valence). We also found consistent evidence that pseudo-profound bullshit receptivity negatively correlates with perceptions of fake news accuracy; a correlation that is mediated by analytic thinking. Finally, analytic thinking was associated with an unwillingness to share both fake and real news on social media. Our results indicate that the propensity to think analytically plays an important role in the recognition of misinformation, regardless of political valence – a finding that opens up potential avenues for fighting fake news.
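
Roughly speaking, “media truth discernment” here boils down to how much more accurate a participant rates real headlines than fake ones, and that difference is what gets correlated with CRT performance. Below is a minimal sketch of that calculation using invented participants and ratings, not the authors’ materials or results.

```python
"""Sketch: media truth discernment and its correlation with analytic thinking (CRT).

Invented data and field names; illustrative only. Requires Python 3.10+ for
statistics.correlation.
"""
from statistics import correlation, mean

# Per participant: CRT score (number of correct items) and mean perceived-accuracy
# ratings for real and fake headlines (hypothetical 1-4 scale)
participants = [
    {"crt": 3, "real_acc": 3.4, "fake_acc": 1.6},
    {"crt": 2, "real_acc": 3.1, "fake_acc": 1.9},
    {"crt": 1, "real_acc": 2.8, "fake_acc": 2.4},
    {"crt": 0, "real_acc": 2.6, "fake_acc": 2.7},
]

crt = [p["crt"] for p in participants]
discernment = [p["real_acc"] - p["fake_acc"] for p in participants]  # real minus fake

print("mean discernment:", round(mean(discernment), 2))
print("CRT vs. discernment (Pearson r):", round(correlation(crt, discernment), 2))
```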

— Joe