🔬 Research Summary by Jeff Johnston, an independent researcher working on envisioning positive futures, AI safety and alignment via law, and Piaget-inspired constructivist approaches to artificial general intelligence.
[Original paper by Jeffrey W. Johnston]
Overview: How to ensure AI systems are safe and aligned with human values is an unsolved problem. This paper makes a case that effective legal systems are the best solution.
When people consider legal approaches to AI safety, Isaac Asimov’s Three Laws of Robotics often come to mind (i.e., robots must not harm humans, robots must obey humans, robots must protect their own existence). Although Asimov believed these simple laws were fundamental to the safety, effectiveness, and durability of all human artifacts (including robots), he and others recognized they were insufficient. Many of Asimov’s stories explored how these laws fail.
Fortunately, considering the application of law to AI safety did not end with Asimov. This paper (written loosely in the form of a legal brief) identifies others who have advocated for law-oriented solutions and provides insights into why Law, if appropriately framed, embraced, and implemented, can facilitate AI safety and value alignment between AIs and humans.
We argue that an essential equivalence between ethics and law makes law a natural solution for aligning human-AI values. The approach requires laws to be formally codified, available in open and authoritative repositories, and (most importantly) supported by effective and full-featured jurisprudence systems.
Law is the standard, time-tested, best practice for maintaining order in societies of intelligent agents
Law has been the primary way of maintaining functional, cohesive societies for thousands of years. It is how humans establish, communicate, and understand what actions are required, permissible, and prohibited in social spheres. Substantial experience exists in drafting, enacting, enforcing, litigating, and maintaining rules in contexts that include public law, private contracts, and the many others noted in the paper. The law will naturally apply to new species of intelligent systems and facilitate safety and value alignment for all.
Law is scrutable to humans and other intelligent agents
Unlike AI safety proposals in which rules are learned from examples and encoded in artificial (or biological) neural networks, laws are written to be understood by humans and machines alike. Although laws can be quite complex, codified rules are significantly more scrutable than rules learned through induction. The transparent (white-box) nature of law provides a critical advantage over opaque (black-box) neural network alternatives.
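The white-box contrast can be pictured with a minimal sketch. Everything here is a hypothetical illustration (the rule IDs, contexts, and repository structure are not from the paper): a codified rule is explicit data that any agent can inspect, whereas a learned rule would be diffused across network weights.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    """A codified, human-readable rule: transparent by construction."""
    rule_id: str
    context: str      # where the rule applies
    prohibition: str  # the prohibited action
    penalty: str

# A tiny open "repository" of codified rules (hypothetical examples).
REPOSITORY = [
    Rule("TR-101", "road", "exceed_posted_speed", "fine"),
    Rule("TR-102", "road", "run_red_light", "fine"),
]

def permitted(action: str, context: str) -> bool:
    """Any agent, human or AI, can trace exactly why an action is barred."""
    return all(not (r.context == context and r.prohibition == action)
               for r in REPOSITORY)

print(permitted("run_red_light", "road"))   # False: barred by TR-102
print(permitted("run_red_light", "chess"))  # True: the rule is out of context
```

Because the repository is plain data, auditing a decision reduces to reading the matching rule, which is the scrutability property the paper emphasizes.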
The law reflects consensus values
Democratically developed law is intimately linked and essentially equivalent to consensus ethics. Both are human inventions intended to facilitate the well-being of individuals and the collective. They represent shared values that are culturally determined through rational consideration and negotiation. They reflect the wisdom of crowds accumulated over time—not preferences that vary from person to person and are often based on emotion, irrational ideologies, confusion, or psychopathy. Ethical values provide the virtue core of legal systems and reflect the “spirit of the law.” Consequentialist shells surround such cores and specify the “letter of the law.” This relationship between law and ethics makes law a natural solution for human-AI value alignment.
Legal systems are responsive to changes in the environment and changes in moral values
By using legal mechanisms to consolidate and update values over time, human and AI values can remain aligned indefinitely as values, technologies, and environmental conditions change. Thus, the law provides a practical implementation of Yudkowsky’s Coherent Extrapolated Volition (2004) by allowing values to evolve that are wise, aspirational, convergent, coherent, suitably extrapolated, and correctly interpreted.
Legal systems restrict overly rapid change
Legal processes provide checks and balances against overly rapid changes to values and laws. Such checks are particularly important when legal change can occur at AI speeds. Legal systems and laws must adapt quickly enough to address the urgent issues that arise but not so quickly as to risk dire consequences. Laws should be based on careful analysis and effective simulation, and the system should quickly detect and correct problems found after implementation. New technologies and methods should be introduced to make legal processing as efficient as possible without removing critical checks and balances.
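One way to picture such a check is a guardrail that gates amendments on both pre-enactment simulation and a mandatory review period. This is a hypothetical sketch, not a mechanism from the paper; the class, field names, and the week-long cooling-off constant are all illustrative assumptions.

```python
import time

# Illustrative cooling-off period: one week between amendments.
MIN_REVIEW_SECONDS = 7 * 24 * 3600

class Corpus:
    """A legal corpus whose rate of change is deliberately throttled."""

    def __init__(self):
        self.rules = {}
        self.last_amended = None  # timestamp of the most recent amendment

    def amend(self, rule_id, text, simulated_ok, now=None):
        """Apply an amendment only if it passed simulation and the
        mandatory review period has elapsed since the last change."""
        now = time.time() if now is None else now
        if not simulated_ok:
            raise ValueError("amendment failed pre-enactment simulation")
        if self.last_amended is not None and now - self.last_amended < MIN_REVIEW_SECONDS:
            raise ValueError("mandatory review period not yet elapsed")
        self.rules[rule_id] = text
        self.last_amended = now

c = Corpus()
c.amend("R1", "no harm to humans", simulated_ok=True, now=0.0)
# A second amendment one hour later would raise ValueError: the
# cooling-off check blocks change at "AI speed".
```

The point of the sketch is only that rate limits and simulation gates can be enforced mechanically, however the real review periods and simulation criteria would be set by legislative process.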
Laws are context-sensitive, hierarchical, and scalable
Laws apply to contexts ranging from international, national, state, and local governance to all manner of other social contracts. Contexts can overlap, be hierarchical, or have other relationships. Humans have lived under this regime for millennia and can understand which laws apply and take precedence over others based on contexts (e.g., jurisdictions, organization affiliations, contracts in force). AI systems will be able to manage the multitude of contexts and applicable laws by identifying, loading, and applying appropriate legal corpora for applicable contexts. For example, AIs (like humans) will understand that cross-checking is permitted in hockey games but not outside the arena. They will know when to apply the rules of the road versus the rules of the sea. They will know when the laws of chess apply versus the rules of Go. They will know their rights relative to every software agent, tool, and service they interface with.
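The load-and-layer idea can be sketched in a few lines. All context names and rule entries below are hypothetical illustrations: contexts are ordered from most general to most specific, and a more specific corpus (the hockey rulebook inside the arena) takes precedence over the general one.

```python
# Hypothetical corpora keyed by context; values map actions to their status.
CORPORA = {
    "public_law":  {"body_checking": "prohibited", "theft": "prohibited"},
    "hockey_game": {"body_checking": "permitted"},  # applies inside the arena only
}

def rules_in_force(active_contexts):
    """Merge corpora general-to-specific; later (more specific) entries
    override earlier ones, modeling precedence between contexts."""
    merged = {}
    for ctx in active_contexts:
        merged.update(CORPORA[ctx])
    return merged

print(rules_in_force(["public_law"])["body_checking"])                 # prohibited
print(rules_in_force(["public_law", "hockey_game"])["body_checking"])  # permitted
```

Real precedence resolution (jurisdictional conflicts, contracts in force) is far richer than a dictionary merge, but the sketch shows why the task is tractable for a machine: contexts select corpora, and an explicit ordering resolves overlaps.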
AI Safety via Law can address the full range of AI safety risks, from systems that are narrowly focused to those having general intelligence or even superintelligence
Enacting and enforcing appropriate laws and instilling them in AIs and humans can mitigate risks spanning all levels of AI capability—from narrow to general to super AI. Effective detection and enforcement must occur if intelligent agents stray from the law.
This legal approach applies to all intelligent systems regardless of their underlying design, cognitive architecture, and technology. Whether an AI is implemented using biology, deep learning, constructivist AI, semantic networks, quantum computers, positronics, or other methods is immaterial. All intelligent systems must comply with applicable laws regardless of their particular values, preferences, beliefs, and how they are wired.
Between the lines
Although its practice has often been flawed, the law is a natural solution for maintaining social safety and value alignment. All intelligent systems, biological and mechanical, must know the law, strive to abide by it, and be subject to effective intervention when they violate it. The essential equivalence and intimate link between consensus ethics and democratic law provide a philosophical and practical basis for legal systems that marry values and norms (“virtue cores”) with rules that address real-world situations (“consequentialist shells”). Unlike other AI safety proposals, this approach requires AIs to “do as we legislate, not as we do.”
For the future safety and well-being of all sentient systems, work should occur in earnest to improve legal processes and laws so that they are more robust, fair, nimble, efficient, consistent, understandable, accepted, and complied with. Technologies are emerging to help, such as Large Language Models capable of understanding existing legal corpora, Personal Agents, and public policy simulators.
Humans must (re)commit to the rule of law (which includes consensus ethics) to counter the dangers we pose to the biosphere and ourselves. It is unclear if advanced AI will be more or less dangerous than humans. Regardless, the law is critical for both.