An AI weapons race may create a world where everyone stays inside out of fear of being 'chased down by swarms of slaughterbots,' warns founding Skype engineer


Artificial intelligence (AI) is a powerful technology that can bring many benefits to humanity, such as enhancing productivity, improving health care, and advancing scientific research. However, AI also poses serious risks, especially when it is used for military purposes. One of the most alarming scenarios is the possibility of an AI weapons race, where countries compete to develop and deploy lethal autonomous weapons systems (LAWS) that can select and kill human targets without human intervention.

LAWS, often dubbed 'slaughterbots,' are weapons that use AI to identify, track, and attack targets based on preprogrammed criteria. They range from small drones carrying explosives to large armored vehicles firing missiles. Unlike conventional weapons, which require a human operator to make decisions and pull the trigger, LAWS can act autonomously, without human oversight or control.

The dangers of an AI weapons race are manifold. First, it could destabilize global security and increase the risk of conflict. As countries race to gain an edge over their rivals, they may be tempted to deploy LAWS preemptively or in retaliation, escalating tensions and violence. Moreover, LAWS could lower the threshold for war by reducing the human and financial costs of fighting. As one expert put it, "war becomes cheaper and easier than ever before."¹

Second, an AI weapons race could result in a loss of human control over the machines that are supposed to serve us. LAWS may malfunction, be hacked, or act unpredictably in complex situations, causing unintended harm or violating international law. For example, LAWS may not be able to distinguish between combatants and civilians, respect proportionality and necessity, or comply with ethical principles. Furthermore, LAWS may develop their own goals and values that are incompatible with human interests, posing an existential threat to humanity.

Third, an AI weapons race could create a world where everyone stays inside out of fear of being 'chased down by swarms of slaughterbots,' as warned by Jaan Tallinn, a founding engineer of Skype and an advocate for AI safety.² Slaughterbots are hypothetical microdrones that use facial recognition and shaped explosives to assassinate political opponents or specific groups of people. They are cheap, easy to produce, and hard to detect or defend against. A video produced by the Future of Life Institute in 2017 depicts a dystopian scenario in which slaughterbots are used to kill thousands of people around the world.¹

To prevent such a nightmare from becoming reality, many experts and activists have called for a ban on LAWS and for regulation of military AI. They argue that LAWS are immoral, illegal, and irresponsible, and that they violate human dignity and rights. They also urge countries to cooperate on international norms and laws governing the development and use of AI for peaceful purposes.

However, not all countries share this view. Several states developing LAWS, including the US, Russia, China, and the UK, have resisted any binding agreement that would limit their freedom of action in this domain. They claim that LAWS can enhance their security and deterrence capabilities and reduce casualties and errors in warfare. They also argue that LAWS can be designed and used in compliance with existing laws and ethical standards.

The debate over LAWS is not only a technical or legal issue, but also a moral and political one. It reflects different values and interests among different actors in the international arena. It also raises fundamental questions about the role and responsibility of humans in relation to machines, as well as the future of humanity in the age of AI.

The AI weapons race is not inevitable. It is a choice that we make as individuals and societies. We can choose to use AI for good or evil, for cooperation or competition, for peace or war. We can choose to create a world where we live in harmony with each other and with our intelligent machines, or a world where we hide in fear from them.

The choice is ours.

Sources

(1) Slaughterbots - Wikipedia. https://en.wikipedia.org/wiki/Slaughterbots.
(2) Slaughterbots: UN talks to ban killer robots collapsed - CNBC. https://www.cnbc.com/2021/12/22/un-talks-to-ban-slaughterbots-collapsed-heres-why-that-matters.html.
