Robert Sparrow

Artificially intelligent weapon systems will thus be capable of making their own decisions — for instance, about their target, or their approach to their target — and of doing so in an ‘intelligent’ fashion. While they will be programmed to make decisions according to certain rules, in important circumstances their actions will not be predictable. This is not to say that they will be random, either: mere randomness provides no support for a claim to autonomy. Instead, the actions of these machines will be based on reasons, but these reasons will be responsive to the internal states — ‘desires’, ‘beliefs’ and ‘values’ — of the system itself. Moreover, these systems will have significant capacity to form and revise these beliefs themselves. They will even have the ability to learn from experience.


1 Response to Robert Sparrow

  1. shinichi says:

    Killer Robots

    by Robert Sparrow

    Journal of Applied Philosophy (2007)

    http://onlinelibrary.wiley.com/doi/10.1111/j.1468-5930.2007.00346.x/abstract

    The United States Army’s Future Combat Systems Project, which aims to manufacture a ‘robot army’ to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of the decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system, the commanding officer who ordered its use, the machine itself. I argue that in fact none of these is ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would therefore be unethical to deploy such systems in warfare.
