Category Archives: Web Communications

Net neutrality comments to FCC gamed

Pew reports that 21.7 million comments were submitted electronically to the FCC in response to the Commission’s call for comments on the FCC’s net neutrality rule. “[Pew] Center’s analysis of these submissions finds that the comments present challenges to anyone hoping to understand the attitudes of the concerned public regarding net neutrality. It also highlights the ways in which individuals and groups are using modern digital tools to engage in the long-standing practice of speaking out in order to influence government policy decisions” according to Public Comments to the Federal Communications Commission About Net Neutrality Contain Many Inaccuracies and Duplicates (Nov. 29, 2017).

Pew’s findings include the following:

  • 57% of the comments used either duplicate email addresses or temporary email addresses created to be used briefly and then discarded
  • Of the 21.7 million comments posted, 6% were unique. The other 94% were submitted multiple times – in some cases, hundreds of thousands of times.
  • On nine different occasions, more than 75,000 comments were submitted at the very same second, often including identical or highly similar comments. Three of these nine instances featured variations of a popular pro-net-neutrality message, while the others promoted several different anti-net-neutrality statements.
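The kind of analysis Pew describes, tallying how many comment texts are unique and flagging same-second submission bursts, can be sketched in a few lines. The records below are hypothetical stand-ins; a real analysis would work from the FCC's bulk comment export.

```python
from collections import Counter

# Hypothetical comment records: (timestamp, email, text).
comments = [
    ("2017-07-19T14:02:11", "a@mailinator.com", "I support net neutrality."),
    ("2017-07-19T14:02:11", "b@mailinator.com", "I support net neutrality."),
    ("2017-07-19T14:02:11", "c@example.com",    "Repeal Title II now."),
    ("2017-07-20T09:15:42", "d@example.com",    "I support net neutrality."),
]

# 1. Share of comments whose text appears exactly once
#    (the basis of Pew's 6% unique / 94% duplicated split).
text_counts = Counter(text for _, _, text in comments)
unique_share = sum(1 for c in text_counts.values() if c == 1) / len(comments)

# 2. Same-second bursts: seconds in which more than one comment arrived
#    (Pew flagged nine seconds with over 75,000 submissions each).
per_second = Counter(ts for ts, _, _ in comments)
bursts = {ts: n for ts, n in per_second.items() if n > 1}

print(f"unique share: {unique_share:.0%}")   # 25% in this toy sample
print(f"same-second bursts: {bursts}")
```

At FCC scale the same two counters, applied to 21.7 million records, surface both the duplication rate and the coordinated burst timestamps Pew reports.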

Interesting. H/T to beSpacific. — Joe

CRS releases report examining the net neutrality debate

From the summary of The Net Neutrality Debate: Access to Broadband Networks (Nov. 22, 2017 R40616):

As congressional policymakers continue to debate telecommunications reform, a major discussion point revolves around what approach should be taken to ensure unfettered access to the Internet. The move to place restrictions on the owners of the networks that compose and provide access to the Internet, to ensure equal access and nondiscriminatory treatment, is referred to as “net neutrality.” While there is no single accepted definition of “net neutrality,” most agree that any such definition should include the general principles that owners of the networks that compose and provide access to the Internet should not control how consumers lawfully use that network, and they should not be able to discriminate against content provider access to that network.

The FCC’s move to reexamine its existing open Internet rules has reopened the debate over whether Congress should consider a more comprehensive measure to amend existing law to provide greater regulatory stability and guidance to the FCC. Whether Congress will choose to address more comprehensive legislation to amend the 1934 Communications Act, to provide a broad-based framework for such regulation, remains to be seen.

— Joe

Google’s Role in Spreading Fake News and Misinformation

H/T to Legal Skills Prof Blog for calling attention to Google’s Role in Spreading Fake News and Misinformation by Danaë Metaxa-Kakavouli and Nicolás Torres-Echeverry. Here’s the abstract:

This paper analyzes Google’s role in proliferating fake news and misinformation in the months leading up to and immediately following the U.S. 2016 national election. It is one section of a longer report, Fake News and Misinformation: The roles of the nation’s digital newsstands, Facebook, Google, Twitter and Reddit, that serves as the first phase of a continuing inquiry over the 2017-18 academic year. This paper reviews the role of Google, and specifically Google Search, in the misinformation landscape. It tracks the problem of misinformation in search engines from the advent of search engine optimization and spam through the present day, focusing on Google’s efforts to curb its role in spreading fake news following the 2016 U.S. elections.

Part 1 describes the “arms race” between search engines and spammers exploiting weaknesses in search algorithms, which contributes to Google’s role in proliferating fake and/or biased news in the 2016 elections. As part of the continuing accounting of the impact of fake news and misinformation on the 2016 elections, this analysis tracks search results for senate and presidential candidates in that election, revealing that up to 30% of these national candidates had their search results affected by potentially fake or biased content.

Part 2 summarizes Google’s recent efforts in 2017 to curb misleading or offensive content through user reporting and human reviewers, along with the opinions of users and experts who are largely supportive of these changes. The section broadly reviews the influence of the Internet on journalism, and then describes Google’s recent efforts to invest in initiatives that bolster investigative journalism and news. It concludes with suggestions for policy and research directions, recommending in particular that Google and other companies increase data transparency, in particular for researchers, to better understand misinformation phenomena online. The study concludes that transparency and civilian oversight are the next critical steps towards a society which benefits fully from the ubiquitous and powerful technologies that surround us.

— Joe

Cybersecurity reports and resources compiled by CRS

Much is written on this topic, and Cybersecurity: Cybercrime and National Security Authoritative Reports and Resources (Nov. 14, 2017 R44408) directs the reader to authoritative sources that address many of the most prominent cybersecurity issues. The annotated descriptions of these sources are listed in reverse chronological order, with an emphasis on material published in the past several years. This report includes resources and studies from government agencies (federal, state, local, and international), think tanks, academic institutions, news organizations, and other sources. — Joe

Trump’s tweets in court

Three Washington Post stories cover judicial notice of President Trump’s tweets on topics litigated in court: the sanctuary cities, travel ban, and transgender military ban cases. — Joe

Facebook’s impact on American democracy

“Tech journalists covering Facebook had a duty to cover what was happening before, during, and after the election,” wrote Alexis Madrigal in What Facebook Did to American Democracy, The Atlantic, Oct. 12, 2017.

Reporters tried to see past their often liberal political orientations and the unprecedented actions of Donald Trump to see how 2016 was playing out on the internet. Every component of the chaotic digital campaign has been reported on, here at The Atlantic, and elsewhere: Facebook’s enormous distribution power for political information, rapacious partisanship reinforced by distinct media information spheres, the increasing scourge of “viral” hoaxes and other kinds of misinformation that could propagate through those networks, and the Russian information ops agency.

But no one delivered the synthesis that could have tied together all these disparate threads. It’s not that this hypothetical perfect story would have changed the outcome of the election. The real problem—for all political stripes—is understanding the set of conditions that led to Trump’s victory. The informational underpinnings of democracy have eroded, and no one has explained precisely how.

In What Facebook Did to American Democracy, Alexis Madrigal traces Facebook’s impact on American democracy. — Joe

CRS report on the dark web

From Dark Web (Mar. 10, 2017 R44101):

The layers of the Internet go far beyond the surface content that many can easily access in their daily searches. The other content is that of the Deep Web, content that has not been indexed by traditional search engines such as Google. The furthest corners of the Deep Web, segments known as the Dark Web, contain content that has been intentionally concealed. The Dark Web may be used for legitimate purposes as well as to conceal criminal or otherwise malicious activities. It is the exploitation of the Dark Web for illegal practices that has garnered the interest of officials and policymakers.

Just as criminals can rely upon the anonymity of the Dark Web, so too can the law enforcement, military, and intelligence communities. They may, for example, use it to conduct online surveillance and sting operations and to maintain anonymous tip lines. Anonymity in the Dark Web can be used to shield officials from identification and hacking by adversaries. It can also be used to conduct a clandestine or covert computer network operation such as taking down a website or a denial of service attack, or to intercept communications. Reportedly, officials are continuously working on expanding techniques to deanonymize activity on the Dark Web and identify malicious actors online.

H/T beSpacific. — Joe

Balkin on free speech in the algorithmic society

Here’s the abstract for Yale Law Prof Jack Balkin’s Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, UC Davis Law Review (2018 Forthcoming):

We have now moved from the early days of the Internet to the Algorithmic Society. The Algorithmic Society features the use of algorithms, artificial intelligence agents, and Big Data to govern populations. It also features digital infrastructure companies, large multi-national social media platforms, and search engines that sit between traditional nation states and ordinary individuals, and serve as special-purpose governors of speech.

The Algorithmic Society presents two central problems for freedom of expression. First, Big Data allows new forms of manipulation and control, which private companies will attempt to legitimate and insulate from regulation by invoking free speech principles. Here First Amendment arguments will likely be employed to forestall digital privacy guarantees and prevent consumer protection regulation. Second, privately owned digital infrastructure companies and online platforms govern speech much as nation states once did. Here the First Amendment, as normally construed, is simply inadequate to protect the practical ability to speak.

The first part of the essay describes how to regulate online businesses that employ Big Data and algorithmic decision making consistent with free speech principles. Some of these businesses are “information fiduciaries” toward their end-users; they must exercise duties of good faith and non-manipulation. Other businesses who are not information fiduciaries have a duty not to engage in “algorithmic nuisance”: they may not externalize the costs of their analysis and use of Big Data onto innocent third parties.

The second part of the essay turns to the emerging pluralist model of online speech regulation. This pluralist model contrasts with the traditional dyadic model in which nation states regulated the speech of their citizens.

In the pluralist model, territorial governments continue to regulate speech directly. But they also attempt to coerce or co-opt owners of digital infrastructure to regulate the speech of others. This is “new school” speech regulation. Digital infrastructure owners, and especially social media companies, now act as private governors of speech communities, creating and enforcing various rules and norms of the communities they govern. Finally, end users, civil society organizations, hackers, and other private actors repeatedly put pressure on digital infrastructure companies to regulate speech in certain ways and not to regulate it in others. This triangular tug of war — rather than the traditional dyadic model of states regulating the speech of private parties — characterizes the practical ability to speak in the algorithmic society.

The essay uses the examples of the right to be forgotten and the problem of fake news to illustrate the emerging pluralist model — and new school speech regulation — in action.

As private governance becomes central to freedom of speech, both end-users and nation states put pressure on private governance. Nation states attempt to co-opt private companies into becoming bureaucracies for the enforcement of hate speech regulation and new doctrines like the right to be forgotten. Conversely, end users increasingly demand procedural guarantees, due process, transparency, and equal protection from private online companies.

The more that end-users view businesses as governors, or as special-purpose sovereigns, the more end-users will expect — and demand — that these companies should conform to the basic obligations of governors towards those they govern. These obligations include procedural fairness in handling complaints and applying sanctions, notice, transparency, reasoned explanations, consistency, and conformity to rule of law values — the “law” in this case being the publicly stated norms and policies of the company. Digital infrastructure companies, in turn, will find that they must take on new social obligations to meet these growing threats and expectations from nation states and end-users alike.

Interesting. — Joe

Cambridge Analytica and the rise of weaponized AI propaganda in political election campaigns

Cambridge Analytica, a data mining firm known for being a leader in behavioral microtargeting for election processes (and for bragging about its contribution to the successful Trump presidential campaign), is being investigated by the House Permanent Select Committee on Intelligence. See April Glaser, Congress Is Investigating Trump Campaign’s Voter Targeting Firm as Part of the Russia Probe, Slate, Oct. 11, 2017. Jared Kushner, who ran the Trump campaign’s data operations, eventually may be implicated. See Jared Kushner In His Own Words On The Trump Data Operation The FBI Is Reportedly Probing, Forbes, May 26, 2017 and Did Russians Target Democratic Voters, With Kushner’s Help? Newsweek, May 23, 2017.

Before joining the Trump campaign, Steve Bannon was on the board of Cambridge Analytica. The company’s primary financier is hedge fund billionaire and Breitbart investor Robert Mercer. Here’s a presentation at the 2016 Concordia Summit by Alexander Nix, CEO, Cambridge Analytica. Nix discusses the power of big data in global elections and Cambridge Analytica’s revolutionary approach to audience targeting, data modeling, and psychographic profiling for election processes around the world.

The Rise of the Weaponized AI Propaganda Machine discusses how this new automated propaganda machine is driving global politics. This is where big data meets computational psychology, where automated engagement scripts prey on human emotions in a propaganda network that accelerates ideas in minutes with political bots policing public debate. Highly recommended. See also Does Trump’s ‘Weaponized AI Propaganda Machine’ Hold Water? Forbes, March 5, 2017. — Joe

End note: In a separate probe, the UK’s Information Commissioner is investigating Cambridge Analytica for its successful Leave.eu campaign in the UK.

Russia’s interference in the 2016 US presidential election and information warfare

According to the declassified report, Assessing Russian Activities and Intentions in Recent US Elections: The Analytic Process and Cyber Incident Attribution, the CIA, FBI and NSA have “high confidence” that Russian President Vladimir Putin “ordered an influence campaign in 2016 aimed at the US presidential election” in order to “undermine public faith in the US democratic process, denigrate Clinton, and harm her electability and potential presidency.” The report also contends the Russian government “aspired to help President-elect Trump’s election chances when possible by discrediting Secretary Clinton and publicly contrasting her unfavorably to him.” See Russia and the U.S. Presidential Election (Jan. 17, 2017 IN10635) for the Congressional Research Service’s backgrounder.

Russian information warfare activities are the topic of Information Warfare: Russian Activities (Sept. 2, 2016 IN10563). From the report:

Russian doctrine typically refers to a holistic concept of “information war,” which is used to accomplish two primary aims:
  • To achieve political objectives without the use of military force.
  • To shape a favorable international response to the deployment of its military forces, or military forces with which Moscow is allied.

Tactics used to accomplish these goals include damaging information systems and critical infrastructure; subverting political, economic, and social systems; instigating “massive psychological manipulation of the population to destabilize the society and state”; and coercing targets to make decisions counter to their interests. Recent events suggest that Russia may be employing a mix of propaganda, misinformation, and deliberately misleading or corrupted disinformation in order to do so. And while Russian organizations appear to be using cyberspace as a primary medium through which these goals are achieved, the government also appears to potentially be using the physical realm to conduct more traditional influence operations including denying the deployment of troops in conflict areas and the use of online “troll armies” to propagate pro-Russian rhetoric.

These activities are placed in the larger context of US policy towards Russia in Russia: Background and U.S. Policy (Aug. 21, 2017 R44775). — Joe

Inspector General reports from across the federal government now available on a single website

The Council of the Inspectors General on Integrity and Efficiency announced the official launch of Oversight.gov. This new website creates a single home for thousands of Inspector General reports from across the federal government.

H/T to Gary Price’s InfoDocket post. — Joe

Google’s mobile search helps users find ebooks at some local public libraries

For details, see Amy Gesenhues’ Search Engine Land post. H/T to Gary Price’s InfoDocket post. — Joe

Search and seizure: law enforcement jurisdiction, international relations and the dark web

The use of hacking tools by law enforcement to pursue criminal suspects who have anonymized their communications on the dark web presents a looming flashpoint between criminal procedure and international law, according to Ahmed Ghappour in Searching Places Unknown: Law Enforcement Jurisdiction on the Dark Web, 69 Stanford Law Review ___ (April 2017). The practical reality of the underlying technologies makes it inevitable that foreign-located computers will be subject to remote searches and seizures. The result may well be the greatest extraterritorial expansion of enforcement jurisdiction in U.S. law enforcement history. From the abstract:

This Article examines how the government’s use of hacking tools on the dark web profoundly disrupts the legal architecture on which cross-border criminal investigations rest. These overseas cyberoperations raise increasingly difficult questions regarding who may authorize these activities, where they may be deployed, and against whom they may lawfully be executed. The rules of criminal procedure fail to regulate law enforcement hacking because they allow these critical decisions to be made by rank-and-file officials despite potentially disruptive foreign relations implications. This Article outlines a regulatory framework that reallocates decisionmaking to the institutional actors who are best suited to determine U.S. foreign policy and avoids sacrificing law enforcement’s ability to identify and locate criminal suspects who have taken cover on the dark web.

In Government Hacking to Light the Dark Web: What Risks to International Relations and International Law?, 70 Stanford Law Review Online 58 (2017), Orin Kerr and Sean D. Murphy challenge Ghappour’s framework in three ways. “First, it questions whether there are real international relations difficulties with the use of NITs to investigate Tor users engaged in criminal activities. Second, it questions whether government use of NITs to investigate crimes on the dark web violates international law. Third, it argues that the use of NITs on the dark web does not occur in a regulatory vacuum. We agree with Ghappour that government use of NITs raises significant technical, legal, and policy challenges. At the same time, we are unpersuaded that the threat to international relations caused by use of NITs to investigate criminal cases on the dark web is among them.” — Joe

Russian-sourced Facebook ads focused on amplifying divisive messages

In An Update On Information Operations On Facebook, Alex Stamos, Chief Security Officer for Facebook noted that “the ads and accounts appeared to focus on amplifying divisive social and political messages across the ideological spectrum — touching on topics from LGBT matters to race issues to immigration to gun rights.”

H/T to Gary Price’s InfoDocket post. — Joe

Access to Broadband Networks: CRS report on the net neutrality debate

From the introduction to The Net Neutrality Debate: Access to Broadband Networks, August 15, 2017, R40616:

As congressional policymakers continue to debate telecommunications reform, a major discussion point revolves around what approach should be taken to ensure unfettered access to the Internet. The move to place restrictions on the owners of the networks that compose and provide access to the Internet, to ensure equal access and nondiscriminatory treatment, is referred to as “net neutrality.” While there is no single accepted definition of “net neutrality,” most agree that any such definition should include the general principles that owners of the networks that compose and provide access to the Internet should not control how consumers lawfully use that network, and they should not be able to discriminate against content provider access to that network.

— Joe

Who falls for fake news?

Here’s the abstract for Gordon Pennycook and David Rand’s Who Falls for Fake News? The Roles of Analytic Thinking, Motivated Reasoning, Political Ideology, and Bullshit Receptivity (August 21, 2017):

Inaccurate beliefs pose a threat to democracy and fake news represents a particularly egregious and direct avenue by which inaccurate beliefs have been propagated via social media. Here we investigate the cognitive psychological profile of individuals who fall prey to fake news. We find a consistent positive correlation between the propensity to think analytically – as measured by the Cognitive Reflection Test (CRT) – and the ability to differentiate fake news from real news (“media truth discernment”). This was true regardless of whether the article’s source was indicated (which, surprisingly, also had no main effect on accuracy judgments). Contrary to the motivated reasoning account, CRT was just as positively correlated with media truth discernment, if not more so, for headlines that aligned with individuals’ political ideology relative to those that were politically discordant. The link between analytic thinking and media truth discernment was driven both by a negative correlation between CRT and perceptions of fake news accuracy (particularly among Hillary Clinton supporters), and a positive correlation between CRT and perceptions of real news accuracy (particularly among Donald Trump supporters). This suggests that factors that undermine the legitimacy of traditional news media may exacerbate the problem of inaccurate political beliefs among Trump supporters, who engaged in less analytic thinking and were overall less able to discern fake from real news (regardless of the news’ political valence). We also found consistent evidence that pseudo-profound bullshit receptivity negatively correlates with perceptions of fake news accuracy; a correlation that is mediated by analytic thinking. Finally, analytic thinking was associated with an unwillingness to share both fake and real news on social media. 
Our results indicate that the propensity to think analytically plays an important role in the recognition of misinformation, regardless of political valence – a finding that opens up potential avenues for fighting fake news.

H/T to beSpacific. — Joe

Harvard’s Berkman Klein Center releases analysis of online media and social media coverage of 2016 presidential campaign

The Berkman Klein Center for Internet & Society released a comprehensive analysis of online media and social media coverage of the 2016 presidential campaign. The report, Partisanship, Propaganda, and Disinformation: Online Media and the 2016 U.S. Presidential Election (Aug. 16, 2017), documents how highly partisan right-wing sources helped shape mainstream press coverage and seize the public’s attention in the 18-month period leading up to the election.

From the Executive Summary:

In this study, we analyze both mainstream and social media coverage of the 2016 United States presidential election. We document that the majority of mainstream media coverage was negative for both candidates, but largely followed Donald Trump’s agenda: when reporting on Hillary Clinton, coverage primarily focused on the various scandals related to the Clinton Foundation and emails. When focused on Trump, major substantive issues, primarily immigration, were prominent. Indeed, immigration emerged as a central issue in the campaign and served as a defining issue for the Trump campaign.

We find that the structure and composition of media on the right and left are quite different. The leading media on the right and left are rooted in different traditions and journalistic practices. On the conservative side, more attention was paid to pro-Trump, highly partisan media outlets. On the liberal side, by contrast, the center of gravity was made up largely of long-standing media organizations steeped in the traditions and practices of objective journalism.

Our data supports lines of research on polarization in American politics that focus on the asymmetric patterns between the left and the right, rather than studies that see polarization as a general historical phenomenon, driven by technology or other mechanisms that apply across the partisan divide.

The analysis includes the evaluation and mapping of the media landscape from several perspectives and is based on large-scale data collection of media stories published on the web and shared on Twitter.

Recommended. — Joe

When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies

Here’s the abstract for Steven Goldfeder, Harry Kalodner, Dillon Reisman & Arvind Narayanan’s When the cookie meets the blockchain: Privacy risks of web payments via cryptocurrencies:

We show how third-party web trackers can deanonymize users of cryptocurrencies. We present two distinct but complementary attacks. On most shopping websites, third party trackers receive information about user purchases for purposes of advertising and analytics. We show that, if the user pays using a cryptocurrency, trackers typically possess enough information about the purchase to uniquely identify the transaction on the blockchain, link it to the user’s cookie, and further to the user’s real identity. Our second attack shows that if the tracker is able to link two purchases of the same user to the blockchain in this manner, it can identify the user’s entire cluster of addresses and transactions on the blockchain, even if the user employs blockchain anonymity techniques such as CoinJoin. The attacks are passive and hence can be retroactively applied to past purchases. We discuss several mitigations, but none are perfect.
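The core of the paper's first attack can be illustrated with a toy sketch: a tracker that learns a purchase's amount and checkout time from the merchant's page scans the public blockchain for the matching payment. Everything below is hypothetical data for illustration; the actual attack also handles exchange-rate conversion, fees, and multi-output transactions.

```python
# Publicly visible blockchain transactions: (txid, unix_time, amount_btc).
chain = [
    ("tx01", 1_500_000_000, 0.0420),
    ("tx02", 1_500_000_100, 0.0731),
    ("tx03", 1_500_000_130, 0.0731),  # same amount, slightly later
]

def link_purchase(amount_btc, paid_at, window=120, tol=1e-6):
    """Return candidate txids whose amount matches within `tol` BTC and
    whose timestamp falls within `window` seconds of the checkout time."""
    return [
        txid for txid, t, amt in chain
        if abs(amt - amount_btc) < tol and abs(t - paid_at) <= window
    ]

# The tracker observed a 0.0731 BTC purchase at the merchant's checkout:
candidates = link_purchase(0.0731, paid_at=1_500_000_090)
print(candidates)  # both tx02 and tx03 match in this toy example
```

When more than one transaction matches, as here, the ambiguity narrows quickly once a second purchase by the same user (linked via the same cookie) is matched, which is exactly the address-clustering step of the authors' second attack.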

H/T Freedom to Tinker post. — Joe

Forecasting global IP traffic growth for mobile and fixed networks by 2021

2021 looks like it will be a very good year if this forecast is on the money. Cisco’s Complete Visual Networking Index (VNI) projects substantial growth in global IP traffic across both mobile and fixed networks through 2021.

H/T to The Tech World Is Convinced 2021 Is Going to Be the Best Year Ever by Rachel Metz, MIT Technology Review, July 26, 2017. — Joe

Supreme Court website redesigned

Take a peek here. On SCOTUSblog, Andrew Hamm writes:

The court’s Public Information Office boasts that the site update includes “a more consistent menu structure, a more interactive calendar, faster access through Quick Links, improved page load times, and reduced page scrolling.” For example, instead of indicating only that the court will hear oral argument on a given day, the updated calendar provides case names for each argument day, with links to the docket entries and the questions at issue in each case.

The homepage also provides access to transcripts, audio and other case information.

Judging from the Twitter reactions of multiple Supreme Court practitioners and commentators, the most appealing element of the update – what John Elwood called a “tantalizing glimpse” – may be the light at the end of this newly-opened tunnel. According to the PIO, “the improvements will better support future digitization and the addition of electronic filing, and will enhance mobile access to information on the site.”

— Joe