Document Type
Working Paper
Publication Date
2012
Abstract
Lethal autonomous machines will inevitably enter the future battlefield – but they will do so incrementally, one small step at a time. The combination of inevitable and incremental development raises not only complex strategic and operational questions but also profound legal and ethical ones. The inevitability of these technologies comes from both supply-side and demand-side factors. Advances in sensor and computational technologies will supply “smarter” machines that can be programmed to kill or destroy, while the increasing tempo of military operations and political pressures to protect one’s own personnel and civilian persons and property will demand continuing research, development, and deployment.
The process will be incremental because non-lethal robotic systems (already proliferating on the battlefield) can be fitted, in successive generations, with both self-defensive and offensive technologies. As lethal systems are initially deployed, they may include humans in the decision-making loop, at least as a fail-safe – but as both the decision-making power of machines and the tempo of operations increase, that human role will likely, if slowly, diminish. Recognizing the inevitable but incremental evolution of these technologies is key to addressing the legal and ethical dilemmas associated with them; U.S. policy for resolving those dilemmas should be built on these assumptions.
The certain yet gradual development and deployment of these systems, as well as the humanitarian advantages created by the precision of some systems, make some proposed responses – such as prohibitory treaties – unworkable as well as ethically questionable. Those features also make it imperative, though, that the United States resist its own impulses toward secrecy and reticence with respect to military technologies, recognizing that the interests those tendencies serve are counterbalanced here by interests in shaping the normative terrain – the contours of international law as well as international expectations about appropriate conduct – on which it and others will operate militarily as technology evolves. Just as development of autonomous weapon systems will be incremental, so too will development of norms about acceptable systems and uses be incremental. The United States must act, however, before international expectations about these technologies harden around the views of those who would impose unrealistic, ineffective or dangerous prohibitions – or those who would prefer few or no constraints at all.
Disciplines
Law | Law and Philosophy | Military, War, and Peace | Science and Technology Law
Recommended Citation
Kenneth Anderson & Matthew C. Waxman, Law and Ethics for Robot Soldiers, Policy Review, forthcoming; American University, WCL Research Paper No. 2012-32; Columbia Public Law Research Paper No. 12-313 (2012).
Available at: https://scholarship.law.columbia.edu/faculty_scholarship/1742
Comments
This paper was significantly revised for publication in 2013. The final version is available as "Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can," The Hoover Institution National Security & Law Essay Series (2013).