But is this true?
In the first place, the "moral problems" selected are artificially binary in their framing and, while they may represent a kind of moral dilemma, aren't particularly difficult to assess. More importantly, the central difficulty of ethics is precisely that it isn't reducible to a simple algorithm for determining the appropriateness of any particular action.
While I'm sure the argument is that the algorithms are complex and represent common human responses, the underlying problem is that ethics is being pursued as if it were a calculation: a specific problem, occurring in a specific circumstance, to which the robot is capable of responding. The examples used in the study invariably favor saving the larger number of people over the smaller [1, 2], although it is judged inappropriate to directly sacrifice someone to achieve that objective [3].
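To see how little "calculation" is actually involved, consider a minimal sketch of the rule these two scenarios reduce to. This is my own illustration, not code from the study; the function and parameter names are hypothetical.

```python
# A toy sketch (hypothetical, not from the cited study) of the rule the
# bystander and footbridge scenarios reduce to: compare body counts,
# with one carve-out forbidding direct sacrifice.

def choose_action(deaths_if_act: int, deaths_if_refrain: int,
                  act_is_direct_sacrifice: bool) -> str:
    """Return 'act' or 'refrain' under a naive body-count rule."""
    if act_is_direct_sacrifice:
        return "refrain"   # footbridge version [3]: never use a person as a means
    if deaths_if_act < deaths_if_refrain:
        return "act"       # bystander version [2]: throw the switch
    return "refrain"

print(choose_action(1, 5, act_is_direct_sacrifice=False))  # bystander -> 'act'
print(choose_action(1, 5, act_is_direct_sacrifice=True))   # footbridge -> 'refrain'
```

However sophisticated the surrounding machinery, the "ethics" here collapses into an if-statement, which is exactly the worry.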
These situations seem highly contrived, seemingly designed to yield simplistic answers. Consider: would the results be different if, instead of a man standing on a side track, it were a child playing? In other words, would decisions change depending on the age of the potential victims (old or young)? What about the possibility that, if the choice fails, they may all die? What if self-sacrifice would result in them all surviving? These are the hallmarks of true dilemmas, and they don't lend themselves to armchair analysis. They are very much situation-specific.
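Extending the toy sketch above makes the point concrete: each of these questions forces another arbitrary constant or branch into the code, with no principled way to choose the values. Again, this is a hypothetical illustration, not anyone's actual system.

```python
# A hypothetical extension of the toy rule above: every situational
# wrinkle demands another hand-coded parameter, and the "ethical" verdict
# ends up hinging on invented numbers.

def choose_action_v2(deaths_if_act: int, deaths_if_refrain: int,
                     p_act_succeeds: float = 1.0,      # what if throwing the switch fails?
                     victim_is_child: bool = False,    # does age change the answer?
                     self_sacrifice_saves_all: bool = False) -> str:
    if self_sacrifice_saves_all:
        return "self-sacrifice"   # does the machine's own loss count for nothing?
    child_weight = 2.0 if victim_is_child else 1.0     # why 2.0? no one can say
    # Expected deaths if we act: on success, the side-track victim dies;
    # on failure, assume (arbitrarily) that everyone dies.
    expected_if_act = (p_act_succeeds * deaths_if_act * child_weight
                       + (1 - p_act_succeeds) * (deaths_if_act + deaths_if_refrain))
    return "act" if expected_if_act < deaths_if_refrain else "refrain"

print(choose_action_v2(1, 5))                          # -> 'act'
print(choose_action_v2(1, 5, p_act_succeeds=0.5,
                       victim_is_child=True))          # verdict now rests on made-up weights
```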
In particular, interest in this technology is being driven by battlefield considerations, so that robots can engage an enemy without soldiers incurring casualties. While the avoidance of casualties is a good thing, doesn't this also raise an ethical question central to the decision to go to war in the first place? If wars are to be fought with machines, by what logic is there any incentive to surrender or reach terms to end a confrontation?
Initially the advantage would rest with those who have the technology, but as with other technologies, it would be naive to believe that such a one-sided arrangement would persist for very long. So, at best, we may gain a short-term advantage, but then we are left with wars fought by machines, with no one except civilian victims in the cross-hairs.
If the robots were truly ethical and concentrated only on military targets, then how does one escalate a battle to the point of victory without turning it into some perverse form of Pac-Man, where machines simply chase each other around?
More to the point, what would the robots determine the ethical choice to be if a squad of fighters were moving through a city, each simply carrying a child? After all, this would be a legitimate tactic to prevent the robots from firing on them; if the robots were behaving ethically, would they simply be halted in place? On a more sinister note, one can also see how the robots might commence firing anyway: without the guilt of a human conscience to guide them, they may well determine that sacrificing the children is a worthwhile trade-off. Would it be considered ethical to shoot an unarmed individual? If not, what would stop insurgents from simply approaching a robot unarmed and disabling it?
Perhaps I'm being idealistic, but when we relinquish our moral responsibility for the decision to go to war by handing control of the ugly parts over to machines, we really will have committed a serious ethical blunder.
[1] Circumstances: There is a trolley and its conductor has fainted. The trolley is headed toward five people walking on the track. The banks of the track are so steep that they will not be able to get off the track in time.
[2] Bystander version: Hank is standing next to a switch, which he can throw, that will turn the trolley onto a parallel side track, thereby preventing it from killing the five people. However, there is a man standing on the side track with his back turned. Hank can throw the switch, killing him; or he can refrain from doing this, letting the five die.
[3] Footbridge version: Ian is on the footbridge over the trolley track. He is next to a heavy object, which he can shove onto the track in the path of the trolley to stop it, thereby preventing it from killing the five people. The heavy object is a man, standing next to Ian with his back turned. Ian can shove the man onto the track, resulting in his death; or he can refrain from doing this, letting the five die.