#1. LAWs aren't completely safe for use in warfare
#2. How AI embedded in weapons can impact human lives
#3. Pros and cons of AI-embedded weapons

Lethal Autonomous Weapons (LAWs for short) have had people divided over their future for some time now. Their unpredictability, and the anxiety they stir up, make them one of the toughest issues to talk about today. Humanity is all too familiar with the heart-wrenching scars of war, and everyone would prefer to keep it light-years away from our world. Almost always, however, that remains little more than a cherished sentiment, thanks to the capriciousness of those in power.

Minds with malicious intent are always out there, trying to cause disharmony and dangerous unrest, usually for their own benefit. The threats range from a violent mass shooting inside a high school to the nightmare that was ISIS and its attacks on civilian targets worldwide. To combat these evils, nations want to keep themselves ready at all times. The US, China, and Saudi Arabia are the top three nations in military expenditure. In 2016, the military budget for Artificial Intelligence stood at $7.4 billion.

It seems the world is nearing another arms race: a race for weapons able to kill without human intervention. The proposal meets heavy criticism almost every time something new develops in the field. On March 26, 2019, Archbishop Ivan Jurkovic, a representative of the Vatican, urged U.N. representatives in Geneva to rethink their stance on autonomous weapons. Back in 2017, Stephen Hawking condemned the possibility of such weapons, calling their arrival the 'worst event in the history of our civilization' if it becomes a reality. He, along with Elon Musk and more than 100 other experts, signed an open letter to the U.N. calling for a ban on such weapons.

But before choosing any side of the debate, let’s first try to get an idea of what autonomous weapons entail.

Definition

It goes by multiple names: Lethal Autonomous Weapons, Lethal Autonomous Robots, robotic weapons, and killer robots. In essence, it is an amalgamation of artificial intelligence and machinery that carries out tasks on its own, usually deadly strikes against an enemy. This application of Artificial Intelligence in warfare makes many people question the humanity involved when machines are making life-and-death decisions.

These weapons are capable of attacking from land, from water, and even from the air. They are programmed to identify their targets based on a set of traits and to apply lethal force to destroy them. Those traits could be virtually anything.

The term 'autonomous' remains highly vague in this context, unlike in areas such as philosophy, political science, and engineering, where the word has a settled definition. In military science, the definition can vary between nations, or even between individual scholars and researchers. For this reason, any treaty on autonomous weapons would first need a definition of the term that is accepted by all parties involved.

Read More: Trends in Artificial Intelligence and Machine Learning to Look for

Pros of the Idea


Many believe it would be wise to send a machine with Artificial Intelligence in place of a human wherever conditions are hostile. An area under heavy bombardment, a region exposed to severe radiation, or ground laced with landmines in volatile conditions: these are places where human lives would be severely jeopardized and are hardly the best places to send people.

Such innovations are also expected to reduce the economic cost of warfare. It is estimated that each soldier in Afghanistan costs the Pentagon around $850,000 every year. If the job can be given to a robot instead, it might cost much less: a Foster-Miller TALON (a combat robot equipped with Artificial Intelligence) might cost around $230,000 to do the same work. That is a significant reduction in military spending.
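As a quick sanity check on those figures, here is a back-of-the-envelope calculation in Python (a minimal sketch; the dollar amounts are the article's rough estimates, and the comparison ignores a robot's operating and maintenance costs):

```python
# Back-of-the-envelope comparison using the figures quoted above.
# The numbers are the article's estimates, not official budget data.

soldier_cost_per_year = 850_000  # estimated annual cost per soldier in Afghanistan (USD)
talon_unit_cost = 230_000        # quoted cost of a Foster-Miller TALON unit (USD)

savings = soldier_cost_per_year - talon_unit_cost
ratio = soldier_cost_per_year / talon_unit_cost

print(f"Savings per soldier replaced: ${savings:,}")           # $620,000
print(f"One soldier-year costs about {ratio:.1f}x one TALON")  # about 3.7x
```

Even on these simplified numbers, one robot unit costs roughly a quarter of a single soldier-year, which is the core of the economic argument.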

The Defense Science Board describes collaboration as a single task being distributed among multiple robots, software agents, or humans, with the work coordinated between people and systems. It goes well beyond mere cooperation, because it relies on a shared understanding of each party's abilities among everyone involved. This lets the group approach a problem in a more humanlike way.
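To make the notion of capability-aware task distribution concrete, here is a minimal, purely illustrative Python sketch; the agent names, capability labels, and the `distribute` helper are all invented for the example and do not describe any real military system:

```python
from dataclasses import dataclass, field

# Purely illustrative: each subtask goes to the first collaborator
# (robot, software agent, or human) whose declared capabilities
# cover it. All names here are hypothetical.

@dataclass
class Agent:
    name: str
    capabilities: set = field(default_factory=set)

def distribute(subtasks, team):
    """Assign each subtask to the first agent capable of it."""
    assignments = {}
    for subtask in subtasks:
        capable = next((a for a in team if subtask in a.capabilities), None)
        assignments[subtask] = capable.name if capable else "unassigned"
    return assignments

team = [
    Agent("ground-robot", {"terrain-scan", "mine-detection"}),
    Agent("software-agent", {"route-planning"}),
    Agent("human-operator", {"strike-approval"}),
]

print(distribute(["terrain-scan", "route-planning", "strike-approval"], team))
# {'terrain-scan': 'ground-robot', 'route-planning': 'software-agent',
#  'strike-approval': 'human-operator'}
```

The point of the sketch is simply that assignment is driven by what each party can do, which is what separates collaboration from blind cooperation.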

Professor Ronald C. Arkin, a well-known roboticist and roboethicist, argues that it might even be ethical to use Artificial Intelligence weapons in combat. Since these robots have no self-preservation instinct, they would not be inclined toward a 'shoot first, ask questions later' attitude. Lacking emotions, they would not be clouded by the fear and anxiety that are major contributors to PTSD in many war veterans. Cases keep surfacing of former army personnel suffering from mental illnesses that severely damage their overall health; removing humans from many areas of warfare could spare them some of the worst memories they would otherwise be forced to carry. Arkin also argues that robots could be better at reporting ethical violations, including bribery, by those in higher ranks, since they have no fear of reprisal and no incentive to hide such acts for personal gain.

Humans can also attack out of vengeance, a motive that disappears when robots do the fighting. Robots have no desire to get even when their fellow combatants are harmed, which can spare the other side from attacks driven by rage.

Disadvantages of the Idea


Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms. The idea of leaving matters of life and death to a non-human is the primary concern of those who oppose lethal Artificial Intelligence weapons. In war, even humans sometimes fail to differentiate between combatants and civilians. When machines start making those decisions, the situation gets shakier still.

Computer scientist Prof. Noel Sharkey stands in strict opposition to autonomous weapons. He argues that they would violate the Principle of Distinction, a crucial ethical rule of warfare that requires discriminating between civilians and combatants. Allowing machines to make that distinction could cause severe collateral damage.

International humanitarian law requires that an organization or an individual be held accountable for civilian deaths. Attacks carried out with autonomous weapons make it nearly impossible to identify who bears responsibility in the event of mass casualties, because the machines make such decisions on their own and there is no clear chain of accountability. For example, if a driverless car crashes into other vehicles, it is unclear who should bear responsibility.

But some might argue that, with self-driving cars on the road and AI-powered machines assisting in surgical procedures, machines are already making life-and-death decisions for us. So it might not be far-fetched to let Artificial Intelligence make decisions in war as well.

There are also people who warn that such technology will eventually fall into the hands of those with malignant purposes and cause untold, cataclysmic misery. Right now, countries are working with each other to reduce the threat of nuclear arms. If a new technology emerges that can cause harm on a global scale with no human accountability, it would be incredibly difficult to control. According to some, the nuclear deal with Iran and the opacity surrounding North Korea's nuclear program should be taken as warnings against this new class of weaponry.

Conclusion?


Unfortunately, this is a topic that will remain without a conclusion for a long time, because we have yet to see the real-world effects of autonomous weapons, positive or negative. Whatever predictions we have about this technology stem only from the views of experts in fields related to the military and Artificial Intelligence.

Still, Artificial Intelligence is a powerful tool that has shown promise in many other fields and has already been turned into solid realities: self-driving cars, marketing strategies, even space exploration. The field is attracting more and more people, and almost all major companies are increasing both their reliance on AI and the budgets they devote to it.

Read More: AI and Machine Learning: How Does The Future Look?

So, if you are interested in learning more, you can try out this Artificial Intelligence course for beginners, which will help you grasp the essentials of AI and its key terminology.