Catherine A. Tremble attempts to answer that question in her forthcoming Fordham Law Review note, Wild Westworld: The Application of Section 230 of the Communications Decency Act to Social Networks’ Use of Machine-Learning Algorithms.
Here’s the abstract:
On August 10th, 2016, a complaint filed in the Eastern District of New York formally accused Facebook of aiding the execution of terrorist attacks. The complaint described user-generated posts and groups that promoted and incited terrorist activities. Under section 230 of the Communications Decency Act (CDA), Interactive Service Providers (ISPs), such as Facebook, cannot be held liable for user-generated content that the ISP did not create or develop. This case stands out, however, because it seeks to hold Facebook liable not only for the content of third parties but also for the effect its personalized machine-learning algorithms — or “services” — have had on terrorists’ ability to orchestrate and execute attacks. By alleging that Facebook’s conduct goes beyond the mere act of publication and includes the effect those services have on terrorists’ ability to execute attacks more effectively, the complaint seeks to prevent the court from granting Facebook section 230 immunity.
This Note argues that Facebook’s services — specifically the personalization of social media pages through machine-learning algorithms — constitute the “development” of content and as such do not qualify for immunity under section 230 of the CDA. Recognizing the challenge of applying a static statute to a shifting technological landscape, this Note analyzes recent jurisprudential evolutions in section 230 doctrine to revise the analytical framework applied in early cases. This framework is guided by congressional and public policy goals but evolves to reflect technological change and capability. It specifically tailors section 230 immunity to account for behavioral data mined for ISP use and for the effect the use of that data has on users — two issues that courts have yet to confront. This Note concludes that, under the updated section 230 framework, personalized machine-learning algorithms made effective through the collection of individualized behavioral data render ISPs co-developers of content and as such bar them from section 230 immunity.