Elon Musk, along with some 100+ co-signatories – such as Lisa Randall, Stephen Hawking, Frank Wilczek, Max Tegmark, Noam Chomsky, and Steve Wozniak – has sent an open letter to the United Nations urging a ban on lethal autonomous weapons, i.e. killer robots, which are portrayed as a grave threat.
Except that it's not bad.
Civilized countries deal with lots of trivially weak armed enemies such as Daesh that could be easily defeated, but no one really wants to do it. Some of the actors who could fight Daesh are actually its allies. But even among those who aren't allies of Daesh, no one really wants to sacrifice the lives of our soldiers to kill some pretty much irrelevant savages and bigots. The value of a Western soldier is several million dollars while a Daesh warrior is cheaper by many orders of magnitude. You surely don't want to sacrifice your boys on a man-against-man basis.
Similar situations could be easily solved with the help of some killer robots. If the Iraqi army possessed good enough technology of this sort, it could send killer robots to Tall Afar. Some of these robots would be destroyed but the remaining ones would destroy the warriors of Daesh. And that would be a good thing.
Lives of the people who don't really want to murder or be murdered would be saved. Indeed, a new kind of arms race could be kickstarted. But this arms race would be a good thing, too, for a simple reason: the technologically advanced countries would naturally become the leaders in this new kind of warfare. And that's a good thing because these nations are generally more civilized when it comes to human rights and related social characteristics, too. After this "third revolution in warfare", wars could become a sort of technological and economic competition, which sounds much more humane than the lethal and heartbreaking wars we have known so far.
Israel is more advanced, both in high-tech and in the organization of its society, than the Muslim ocean surrounding her. So if high-tech and the economy mattered more than they do today, the balance of power would move in Israel's favor, and that's a good thing, I think. Needless to say, some readers may be anti-Israel or disagree with my argumentation for related reasons.
But there's another reason why the bans would be counterproductive. Some groups in the world just wouldn't care about them. Bans always affect primarily the law-abiding or even timid individuals and nations. So this ban would really apply "mostly to the good side only". And that's bad. Rogue nations will be trying to get killer robots in the coming years anyway.
Even the Guardian – well, Philip Ball – published a text arguing that we can't ban killer robots because it's already too late. The article enumerates some weapons that already exist or are being developed, funded, and built. Musk's demand is really detached from reality.
Yesterday, an obnoxious Greek reader argued once again that the Greeks should have the world monopoly over the production of, and profits from, Greek yogurt because this kind of yogurt is sometimes called "Greek". Great. So why don't I say something analogously ludicrous? As Ball points out, "robots" were invented by the Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots). Incidentally, the owner's name, Rossum, is an internationalized "Rozum", which means "reason", "mind", or "rationality" in Czech (or the "sense" in "common sense"). More importantly, the word "robot" comes from "robota", which is just "work" or "labor" in many Slavic languages including Slovak and Russian, but specifically means "drudgery" or "forced labor" (under serfdom) in Czech. And as Ball points out, even the original robots in R.U.R. revolted and killed people.
So that's surely what robots should normally do. If they fail to do such things, you shouldn't have the right to call them robots at all! ;-) Also, true robots should be produced in Czechia.
A $10,000 killer robot, a comedy, 5 minutes.
Nothing substantial would change about the logic of wars if killer robots superseded soldiers. If your country is attacked by killer robots, it's still attacked, and you can respond exactly as you would have responded in the past, when human soldiers were doing the dirty work. If a killer robot kills someone, there's still some human or nation responsible for sending the robot to the place where it could do harm. From the viewpoint of the relationships between people and nations, and from a legal and strategic viewpoint, a killer robot is still just a tool, much like any weapon. So the usual rules that apply to tanks or new types of explosives may apply to killer robots, too.
There's no rational reason to ban them. Unlike chemical or nuclear bombs, killer robots could make conflicts much more targeted and spare civilian populations. In the distant enough future, the defending side would probably use killer robots as well, so wars would literally become conflicts between robots, which looks like a good development to me. As the Guardian also mentions, a robot may be much better at distinguishing an innocent civilian from a malicious warrior – by biometrics – than a frightened soldier relying on his instincts.
This portion of technological progress seems unstoppable – the research may be done overtly or covertly – and it's very important for the "good people" and "good nations", whatever they are, not to be left behind. Indeed, as Ball's subtitle says, telling international arms traders they can't make killer robots is like telling soft-drink makers that they can't make orangeade.
Surprisingly, Wired also ran a story saying that the ban isn't practical. The boundary between human-controlled and autonomous weapons is blurry, and pretending that it's sharp can't lead to good regulations. Instead, one should adapt to the fact that these machines will become increasingly widespread and increasingly subtle, and develop good laws saying who is responsible for particular mishaps, e.g. for a robot's mistake when it kills a civilian. Right.