At the November meeting of member countries of the Convention on Certain Conventional Weapons (CCW) at the United Nations in Geneva, attempts to regulate lethal autonomous weapon systems (LAWs) ended in a stalemate, owing to disagreements over both ethical and security issues.
Autonomous weapons, known as “killer robots”, are systems driven by artificial intelligence that select and attack targets without human control.
At least 28 countries demand a ban on AI weapons, while the US and Russia both object to the idea. China, Israel, South Korea and the UK are also racing to develop LAWs. China promised it would join the ban group, but clarified that Beijing was not opposed to the weapons’ production.
Last year, the European Parliament passed a resolution calling for an international ban on the development, production and use of weapons that kill without human involvement. According to experts, however, technological development may already have progressed too far for a full ban to be implemented.
At the Paris Peace Forum, UN Secretary-General Antonio Guterres called for an international ban on LAWs, describing them as “politically unacceptable and morally despicable”. Legal experts still do not know whom to hold responsible if such a weapon were to break international law.