
Biography

Ronald C. Arkin received a B.S. from the University of Michigan, an M.S. from the Stevens Institute of Technology, and a Ph.D. from the University of Massachusetts, Amherst. He is a Regents' Professor in the College of Computing at the Georgia Institute of Technology, Director of the Mobile Robot Laboratory, and Associate Dean for Research and Space Planning of the College. In 1997-98, Prof. Arkin served as STINT Visiting Professor at the Royal Institute of Technology (KTH) in Stockholm. In 2005, he held a Sabbatical Chair at Sony IDL in Tokyo, and in 2005-06 he served as a member of the Robotics and Artificial Intelligence Group at LAAS in Toulouse.
Dr. Arkin's research interests include behavior-based reactive control, action-oriented perception, deliberative/reactive architectures, robot survivability, multiagent systems, biorobotics, human-robot interaction, robot ethics, and learning in autonomous systems. He has over 170 technical publications and has written several books: Behavior-Based Robotics (1998), Robot Colonies (1997), and Governing Lethal Behavior in Autonomous Robots (2009). Dr. Arkin serves as an Associate Editor for numerous journals and is the Series Editor for the MIT Press book series Intelligent Robotics and Autonomous Agents. Prof. Arkin serves on the Board of Governors of the IEEE Society on Social Implications of Technology; served two terms on the AdCom of the IEEE Robotics and Automation Society; served as founding co-chair of the IEEE RAS Technical Committee on Robot Ethics and co-chair of the Society's Human Rights and Ethics Committee; and also served on the National Science Foundation's Robotics Council. In 2001, he received the Outstanding Senior Faculty Research Award from the College of Computing at Georgia Tech, and in 2011 he received the Outstanding Achievement in Research Award from the University of Massachusetts Computer Science Department. He was elected a Fellow of the IEEE in 2003.

Abstract

People Behaving Badly, Robots Behaving Better? Embedding Ethics in Autonomous Robotic Systems

Weaponized robotic systems are being introduced onto the battlefield at an ever-increasing pace, and the consequences of this technological progress need to be examined carefully. In this talk, I outline the philosophical basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and the Rules of Engagement. I further contend that an autonomous robot capable of lethal force can ultimately be more humane on the battlefield than human soldiers. The design addresses issues where human warfighters may fail, including the suppression of unethical behavior, the inculcation of ethical constraints from the outset, the use of affect as an adaptive component in the event of unethical action, and support for identifying and advising operators regarding their responsibility. This research was supported under a grant from the U.S. Army Research Office.
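The constraint-based suppression described above can be sketched, in greatly simplified form, as a filter that vets each proposed lethal action against hard prohibitions before it may be executed. This is a minimal, hypothetical illustration of that idea only; all class names, fields, and rules below are assumptions for exposition, not the actual architecture from the talk.

```python
# Hypothetical sketch: a "governor" that suppresses any proposed action
# violating at least one constraint. Lethal force is forbidden by default
# unless every check passes. Names and rules are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Constraint:
    """A prohibition derived from the Laws of War or Rules of Engagement."""
    name: str
    violated_by: Callable[["ProposedAction"], bool]  # predicate over an action


@dataclass
class ProposedAction:
    target_type: str          # e.g. "combatant" or "civilian" (assumed field)
    collateral_estimate: int  # estimated non-combatant harm (assumed field)


class EthicalGovernor:
    """Permits an action only if no constraint is violated; otherwise
    suppresses it and reports which constraints were violated."""

    def __init__(self, constraints: List[Constraint]):
        self.constraints = constraints

    def permit(self, action: ProposedAction) -> Tuple[bool, List[str]]:
        violations = [c.name for c in self.constraints
                      if c.violated_by(action)]
        return (not violations, violations)


# Two illustrative (and deliberately oversimplified) constraints:
governor = EthicalGovernor([
    Constraint("discrimination", lambda a: a.target_type != "combatant"),
    Constraint("proportionality", lambda a: a.collateral_estimate > 0),
])
```

Here `governor.permit(ProposedAction("civilian", 3))` returns `(False, ["discrimination", "proportionality"])`: the action is suppressed and the violated constraints are named, output that could also feed the operator-advising component mentioned above.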