While many scholarly articles warn about the potential consequences of AI-based administrative decision-making, the following two recent articles point to potential benefits of algorithmic rulemaking.
Regulating by Robot: Administrative Decision Making in the Machine-Learning Era, by Cary Coglianese and David Lehr (both University of Pennsylvania Law School), Georgetown Law Journal (forthcoming).
Abstract: Machine-learning algorithms are transforming large segments of the economy, underlying everything from product marketing by online retailers to personalized search engines, and from advanced medical imaging to the software in self-driving cars. As machine learning’s use has expanded across all facets of society, anxiety has emerged about the intrusion of algorithmic machines into facets of life previously dependent on human judgment. Alarm bells sounding over the diffusion of artificial intelligence throughout the private sector only portend greater anxiety about digital robots replacing humans in the governmental sphere. A few administrative agencies have already begun to adopt this technology, while others have the clear potential in the near-term to use algorithms to shape official decisions over both rulemaking and adjudication. It is no longer fanciful to envision a future in which government agencies could effectively make law by robot, a prospect that understandably conjures up dystopian images of individuals surrendering their liberty to the control of computerized overlords. Should society be alarmed by governmental use of machine learning applications? We examine this question by considering whether the use of robotic decision tools by government agencies can pass muster under core, time-honored doctrines of administrative and constitutional law. At first glance, the idea of algorithmic regulation might appear to offend one or more traditional doctrines, such as the nondelegation doctrine, procedural due process, equal protection, or principles of reason-giving and transparency. We conclude, however, that when machine-learning technology is properly understood, its use by government agencies can comfortably fit within these conventional legal parameters. We recognize, of course, that the legality of regulation by robot is only one criterion by which its use should be assessed. 
Obviously, agencies should not apply algorithms cavalierly, even if doing so might not run afoul of the law, and in some cases, safeguards may be needed for machine learning to satisfy broader, good-governance aspirations. Yet in contrast with the emerging alarmism, we resist any categorical dismissal of a future administrative state in which key decisions are guided by, and even at times made by, algorithmic automation. Instead, we urge that governmental reliance on machine learning should be approached with measured optimism over the potential benefits such technology can offer society by making government smarter and its decisions more efficient and just.
Regulation by Machine by Benjamin Alarie, Anthony Niblett and Albert Yoon (all University of Toronto, Faculty of Law).
Abstract: Legal scholars investigating artificial intelligence are preoccupied with regulation. The literature has largely focused on the need for humans to regulate the behavior of automated systems. In this paper, we focus on the converse: how artificially intelligent systems can serve to regulate human behavior. The shortcomings of human-led regulation are clear. We argue that machine learning technology can address some of these limitations. We provide examples of how machine learning can predict how courts would decide legal disputes more cheaply and accurately than human regulators. This allows regulators to streamline operations, providing fast, accurate, consistent, and reliable ex ante regulatory advice and rulings. We further explore how machine learning technology might soon be used to refine laws and reduce errors.
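The abstract's central claim, that machine learning could issue fast, consistent ex ante predictions of how courts would decide disputes, can be made concrete with a toy sketch. The following is purely illustrative: the case data, labels, and the bag-of-words scoring are invented for this post and bear no relation to the models the authors actually study.

```python
# Illustrative sketch only: a bag-of-words scorer that "predicts" whether a
# court would rule for the plaintiff, in the spirit of the ex ante regulatory
# advice Alarie, Niblett and Yoon describe. All data below is hypothetical.
from collections import Counter

# Toy training set of (facts summary, outcome) pairs -- entirely invented.
CASES = [
    ("employee fired without notice breach of contract", "plaintiff"),
    ("contract signed voluntarily terms were clear", "defendant"),
    ("employer withheld wages owed under written agreement", "plaintiff"),
    ("claimant missed the statutory filing deadline", "defendant"),
]

def train(cases):
    """Count word frequencies separately for each outcome label."""
    counts = {"plaintiff": Counter(), "defendant": Counter()}
    for facts, outcome in cases:
        counts[outcome].update(facts.split())
    return counts

def predict(counts, facts):
    """Score new facts by summing per-label word counts (add-one smoothing)."""
    scores = {
        label: sum(c[w] + 1 for w in facts.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

model = train(CASES)
print(predict(model, "wages withheld despite written agreement"))  # prints "plaintiff"
```

A real system would of course use far richer features and validated models; the point of the sketch is only that, once past decisions are encoded as data, outcome prediction becomes an ordinary supervised-learning problem, which is what makes the cheap, consistent advance rulings the authors envision plausible.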