Staging the future: artificial intelligence and conflict
Two leading defence and international affairs think tanks, the Royal United Services Institute (RUSI) and the Atlantic Council, recently co-produced an evening of theatre, art, futurism and policy debate at London’s Platform Theatre. The central theme was the role of Artificial Intelligence (AI) in future conflict. Leading thinkers from the military, government, the tech sector and academia debated key technical and ethical questions surrounding the legitimate use of artificial intelligence, robotics and automation in war. Each panel was preceded by a ‘mini-play’, an innovative approach to stimulating military and ethical policy debate.
“Decision making and legal responsibility in the age of Artificial Intelligence” was the opening panel. Dr. Ali Hossaini (Visiting Research Fellow at King’s College London), Keith Dear, Oliver Lewis (Head of Defence and Public Policy at Improbable Worlds), Dr. Pippa Malmgren (author and co-founder of H Robotics) and Dr. Conrad Tucker (Associate Professor at Penn State) unpacked and debated key questions of legal responsibility in intelligent automated systems.
Fully autonomous weapons systems are already deployed, protecting warships and other assets from incoming ballistic missiles. The response times required make human consultation infeasible. This contrasts with strikes from Unmanned Aerial Vehicles (UAVs), where weapons release decisions currently rest with human operators. The locus of responsibility, and the agency of algorithms, presents accountability challenges when autonomous decisions ‘go wrong’. How can this be contained?
Algorithm testing using a model adapted from drug trials was suggested as one regulatory response, and could be a glimpse of future governmental intervention. Accountability frameworks that identify ‘programmed in’ ethical and technical requirements for weapons release could also assist. In self-adapting systems, however, the causal chain may be highly complex, and questions such as “who gave the order, what was their intention, who holds responsibility for the outcome” might become lost not only in translation but in algorithmic adaptation. Recently announced Defense Advanced Research Projects Agency (DARPA) projects on Explainable Artificial Intelligence (XAI) are a positive step towards making the workings of AI systems comprehensible.
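To make the accountability idea concrete, here is a minimal sketch in Python of what a ‘programmed in’ decision audit trail might look like. Every name in it (DecisionRecord, trace_responsibility and so on) is a hypothetical illustration, not an API from any system discussed by the panel; the point is simply that each autonomous action is logged with its originating order, stated intention and the version of the self-adapting model that acted, so the causal chain remains traceable after the fact.

```python
# Hypothetical sketch of an audit trail for autonomous decisions.
# Illustrative only: these structures are assumptions, not a real system.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    """One immutable entry in the causal chain behind an autonomous action."""
    actor: str            # who (or what) issued the order
    intention: str        # stated objective at the time of the order
    model_version: str    # snapshot of the self-adapting system's state
    rules_checked: list   # the ethical/technical release criteria evaluated
    outcome: str          # what the system actually did
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def trace_responsibility(log: list, outcome: str) -> list:
    """Walk the log backwards to recover the records that led to an outcome."""
    return [r for r in reversed(log) if r.outcome == outcome]
```

Even a simple record like this answers “who gave the order, what was their intention” at a fixed point in time; the harder problem the panel identified is that in a self-adapting system the model_version that acted may no longer behave like the one that was tested.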
A further panel discussed “Soldiers 2.0: Engineering the Perfect Fighter.” Dr. Conrad Tucker was joined by Grady Booch (IBM Fellow), Chris Lincoln-Jones (former British Army), Major General (Rtd) Andrew Sharpe (Director of the British Army’s Centre for Historical Analysis and Conflict Research) and Nick Yeung (Professor of Cognitive Neuroscience at the University of Oxford).
The opening play for this panel featured an AI ‘therapist’ in session with a traumatised soldier. The psychoanalytic AI is a fascinating concept, building deep emotional rapport with its ‘patient’. A simple ‘tweak’ to ‘AI as therapist’ and we arrive at ‘AI as interrogator’: an interrogator that never sleeps, with deep knowledge of its subject, and offering its handlers plausible deniability. This raises profound ethical questions about the potential misuse of AI in enhanced interrogation and torture.
In a future where AIs fight alongside conventional forces, psychological and emotional bonds may develop in unexpected ways between ‘man’ and machine. The hyperrealism of cutting-edge robots is breathtaking; for examples, see Soul Machines and Hanson Robotics. Is it then conceivable that an AI acquires higher operational and emotional value to its unit than a human counterpart? Would a commander sacrifice human assets to protect an AI?
Looking back through the telescope, Elon Musk’s start-up Neuralink describes itself as “developing ultra-high bandwidth brain-machine interfaces to connect humans and computers.” The scale of ambition is typical of Musk, but it raises further philosophical questions about what it means to be human. As a segue into the religious and philosophical, Simon Jacobson in his weekly Meaningful Life class recently examined the question “AI and the Future of the Human Race: Will Machines Enhance or Replace Us?” We must also ask whether algorithms will fight our wars, become our bosses, or assume the role of Leviathan.
—
Steve Nimmons is a freelance technology journalist, a Chartered Fellow of the British Computer Society, a Chartered Engineer, and a Fellow of the Institution of Engineering and Technology, the Royal Society of Arts, the Linnean Society and the Society of Antiquaries of Scotland. He is a member of the Royal United Services Institute, the Royal Institute of International Affairs and the Chartered Institute of Journalists, and a Life Member of the Aristotelian Society.