So why is it important that the military (and society in general) not only pursue the development of robots, but also investigate the implications of that development? Apart from the obvious reason that robots look cool. =)
First reason: robots are already a reality, and there is no way they will disappear. Many people still think killer robots are fiction, but in fact they are already well represented on the battlefield. In 2003 the US military began its war in Iraq with just a few drones and zero ground robots. Today it has more than 7,000 drones and 12,000 ground robots. This year the US Air Force will have trained more remote-control pilots than actual pilots of fighter jets and bombers.
Today's robots are still of the first generation. The Predator drones and the PackBot ground robots (used primarily for transporting equipment, inspecting buildings and finding explosives) are to robotics what the Ford Model T was to the automobile industry in the early 20th century - namely, the beginning of a new epoch. While the Predator still attracts the bulk of media attention, the military has long since moved past this prototype to newer versions, ones that fly faster and carry more cargo (i.e. weapons). 30% of all US military aircraft are now drones. At the moment the US aviation industry is reluctant to research new generations of manned aircraft; instead it is focused on drones, because they have proven their effectiveness in Iraq, Afghanistan and Libya. And they will certainly do even better in the next war - which, sadly, I am sure won't be long in coming.
Reason two: robots are already a global phenomenon. The revolution in robotics stopped being a US domain some time ago. Granted, the US is still way ahead - for the time being - especially as far as military research is concerned. And this is no surprise, since the US military budget is 708 billion dollars, roughly 43% of the world's military expenditures.
But the US is no longer a monopolist in this domain. At least 45 other nations are doing research of their own, buying and building robots for military purposes - from the UK, France and Israel, to Iran, Russia and China. Some of the Chinese drones are equipped with German technology. Meanwhile, in Germany military robotics research is usually presented as an American fetish, yet the Heron drones in the Bundeswehr are multiplying rapidly. And we shouldn't believe the claims that these drones are only used for non-assault purposes.
There was a time when NATO members such as Italy, France and the UK kept insisting in official statements that they were using their drones only for surveillance and patrol flights. Now they are openly using them in combat missions. So far in Afghanistan the British have fired over 200 missiles from such drones. Just as in WWI, when manned aircraft were initially used for surveillance and only later included in combat operations.
So where does this development lead? The fact that the US is currently the leader in a technical respect does not mean much. In wartime, as in the world of technology, being the first to produce a ground-breaking innovation is no guarantee that you will be the only one to reap its fruits. After all, very few people still work with Commodore or IBM computers nowadays, and those companies dominated the computer industry in its early stages. The British invented the tank, but the Wehrmacht used it for its Blitzkrieg strategy.
So how will things develop in robotics? What will the consequences be of seeing the "Made in China" label on more and more weapons, including robots assembled in China with software that usually comes from India? Hezbollah is already using drones packed with explosives against Israel, and in Taiwan some robbers stole jewels using a small remote-controlled helicopter.
Third reason: robotics already influences politics in a profound way. There are few Western democracies where military conscription is still in place. There are even fewer parliaments still officially declaring wars. The attitude of most "democracies" to war has changed. There is new technology in place that allows military action without political debate - and the moral side is considered almost moot, since no living military personnel are physically involved in those actions. After all, isn't protecting the sons and daughters of the nation the number one priority of any commander in chief?
The implications of this change are profound. They concern the question of when and where the so-called civilized democracies cross the line of what used to be called "conventional war", and when their actions become atrocity. The US, for example, has performed 300+ air assaults on Pakistani territory during Obama's term alone. That is more than all the operations in the Kosovo war, a decade ago. But no one in Washington looks at these attacks in Pakistan and says "oh, war". Those are unmanned aircraft, after all. There was no congressional approval for those attacks, yet the media is not very interested in them. Besides, most of them are directed by the CIA, not the military.
In the case of Libya, President Obama didn't deem congressional sanction necessary for the bombing of Gaddafi-loyalist targets, since America was only participating with unmanned aircraft... which at the end of the day launched 150+ missiles. The "advantage" of these systems became obvious when Gaddafi's air defence shot down a US drone helicopter. Had it been a manned one, that would possibly have meant the death of Americans, which is a real nightmare for any commander in chief these days. In that case, though, there were no victims on the US side and the incident received only a cursory mention in the news.
Fourth reason: our legislation is not ready. Entire armies of my father's generation and yours possessed less computing capacity than the remote-controlled toy car my little son now plays with in the afternoons and takes for granted. The rate of technical development has been exponential. Meanwhile, our laws and our policies are lagging far behind, struggling to keep up with the pace.
The developed world is not only producing weapons that shoot ever faster and with increasing power; we are creating killer robots, autonomous systems that move on their own and in many cases even take decisions on their own. They are increasingly intelligent, and this raises the issue to a new level. What if my technology takes a decision that harms somebody - would I be responsible for that or not?
After all, there is a widening gap between what is technically possible and what is morally "right". One of the key ethical questions is the distance (both in space and time) between cause and effect, which is inherent to remote-controlled robotic systems, because they separate the moment a decision is implemented from the moment it is taken, both geographically and in time. When we discuss who is responsible for a certain decision and action, we often ask who was at the place of the event at the time it happened. But with robotic systems, one could take decisions whose consequences occur thousands of miles away, and possibly many minutes or hours later. It is also possible that decisions that turn out to be of crucial importance had been built into the system itself by some programmer. Who would be responsible for the results of that system's actions then? We lack a clear and adequate interpretation of all the possible situations stemming from the development of this technology.
And the tough questions are multiplying as time passes and this type of technology develops exponentially. Somehow, it is like opening Pandora's box without caring enough about the consequences. These questions affect the research scientists, who have to decide which type of research matches the ethical criteria and which doesn't (and whether they should care at all); they also affect the military officer who is supposed to be controlling a unit fighting thousands of miles away in Afghanistan, while he is comfortably sitting in his chair in front of a screen in the US. Some white blobs running across his screen - are they the enemy? Should they be killed? How many of them, at what range? What if some of them are civilians? Is my intel about their identity reliable enough? Should I even care about these questions? It all becomes like a videogame; the element of standing face to face with the enemy disappears, and this brings all sorts of implications with it.
It is not that these sorts of questions are insoluble, but it is not easy to find answers at this point, either. The problem is that the framework of existing legislation (and the set of moral notions that underpins it) is out of date, and lagging behind reality - fast.
The reality is that a future with an increasing presence of robots is inevitable, including (and especially) for military purposes. But we could at least try to prepare for that inevitability - unless we want to be served some nasty surprises and end up with a world where "anything goes". What I know for sure is that this would not be a world I would want my progeny to call their own.