Legal Alignment for Safe and Ethical AI
Alignment of artificial intelligence encompasses the normative problem of specifying how AI systems should act and the technical problem of ensuring AI systems comply with those specifications. To date, work on AI alignment has generally overlooked an important source of knowledge and practice for grappling with these problems: law.
Legal alignment aims to fill this gap by exploring how legal rules, principles, and methods can be leveraged to address problems of alignment and inform the design of safe and ethical AI systems. This emerging field focuses on three research directions: (1) designing AI systems to comply with the content of legal rules developed through legitimate institutions and processes, (2) adapting methods from legal interpretation to guide how AI systems reason and make decisions, and (3) harnessing legal concepts as a structural blueprint for confronting challenges of reliability, trust, and cooperation in AI systems.
These research directions raise new conceptual, empirical, and institutional questions, including which laws particular AI systems should follow, how to evaluate their legal compliance in real-world settings, and how to develop governance frameworks that support the implementation of legal alignment in practice. Tackling these questions requires expertise across law, computer science, and other disciplines, offering these communities the opportunity to collaborate in designing AI for the better.
► Link to the full paper: https://arxiv.org/abs/2601.04175
Noam Kolt* Hebrew University
Nicholas Caputo* Oxford Martin AI Governance Initiative
Jack Boeglin⁺ University of Pennsylvania
Cullen O’Keefe⁺ Institute for Law & AI, Centre for the Governance of AI
Rishi Bommasani Stanford University
Stephen Casper MIT CSAIL
Mariano-Florentino Cuéllar Carnegie Endowment for International Peace
Noah Feldman Harvard University
Iason Gabriel School of Advanced Study, University of London
Gillian K. Hadfield Johns Hopkins University, Vector Institute for Artificial Intelligence
Lewis Hammond Cooperative AI Foundation, University of Oxford
Peter Henderson Princeton University
Atoosa Kasirzadeh Carnegie Mellon University
Seth Lazar Australian National University, Johns Hopkins University
Anka Reuel Stanford University
Kevin L. Wei Harvard University
Jonathan Zittrain Harvard University, Berkman Klein Center for Internet & Society
* Lead authors. ⁺ Core contributors.
► Correspondence to: noam.kolt@mail.huji.ac.il
BibTeX citation:
@misc{koltcaputo2026legalalignmentsafeethical,
  title={Legal Alignment for Safe and Ethical AI},
  author={Noam Kolt and Nicholas Caputo and Jack Boeglin and Cullen O'Keefe and Rishi Bommasani and Stephen Casper and Mariano-Florentino Cuéllar and Noah Feldman and Iason Gabriel and Gillian K. Hadfield and Lewis Hammond and Peter Henderson and Atoosa Kasirzadeh and Seth Lazar and Anka Reuel and Kevin L. Wei and Jonathan Zittrain},
  year={2026},
  eprint={2601.04175},
  archivePrefix={arXiv},
  primaryClass={cs.CY},
  url={https://arxiv.org/abs/2601.04175},
}