28 January 2017

The New Yorker: “If Animals have Rights, should Robots?”

Quarrels come at boundary points. Should we consider it immoral to swat a mosquito? If these insects don’t deserve moral consideration, what’s the crucial quality they lack? A worthwhile new book by the Cornell law professors Sherry F. Colb and Michael C. Dorf, Beating Hearts: Abortion and Animal Rights (Columbia), explores the challenges of such border-marking. The authors point out that, oddly, there is little overlap between animal-rights supporters and pro-life supporters. Shouldn’t the rationale for not ending the lives of neurologically simpler animals, such as fish, share grounds with the rationale for not terminating embryos? Colb and Dorf are pro-choice vegans (“Our own journey to veganism began with the experience of sharing our lives with our dogs”), so, although they note the paradox, they do not think a double standard is in play.

The big difference, they argue, is “sentience.” Many animals have it; zygotes and embryos don’t.

Nathan Heller

It always perplexes me how so many people get incredibly worked up protecting animals (like stray dogs, which aren’t even endangered and are at best a nuisance and a source of filth and disease), yet are completely indifferent to the hardships of other human beings, or even take aggressive action against them – case in point: the recent fearmongering about immigrants. The answer is pretty obvious, but telling of our capacity for abstraction and rational thought: people empathize strongly with what is closest to them, while happily ignoring what happens out of sight.

A researcher named Kate Darling, with affiliations at M.I.T., Harvard, and Yale, has recently been trying to understand what is at stake in robo bonds of this kind. In a paper, she names three factors: physicality (the object exists in our space, not onscreen), perceived autonomous movement (the object travels as if with a mind of its own), and social behavior (the robot is programmed to mimic human-type cues). In an experiment that Darling and her colleagues ran, participants were given Pleos—small baby Camarasaurus robots—and were instructed to interact with them. Then they were told to tie up the Pleos and beat them to death. Some refused. Some shielded the Pleos from the blows of others. One woman removed her robot’s battery to “spare it the pain.” In the end, the participants were persuaded to “sacrifice” one whimpering Pleo, sparing the others from their fate.

Robot and animal rights: “In relation to animals, we can conceive of ourselves as peers or protectors. Robots may soon face the same choice about us.” (Illustration by Nishant Choksi)

In my opinion, the arguments for abortion are equally flawed and biased. One argument is that women should have the right to control their bodies – I agree with that, but the fetus is not part of the woman’s body, since half of its genetic makeup comes from the father. If a fetus can be removed with as little ethical dilemma as a tumor, human life is massively devalued. Another argument says that embryos are not sentient, or ‘alive’, so abortions are not really killing. But let’s think about this from another perspective: what does killing effectively mean? The killer denies the victim all the future experiences he or she might have had – isn’t that exactly what happens with a fetus? Left to ‘nature’, it would have developed into a human being and had at least a chance of living a full life; abortion denies it that. A more humane solution for everyone is better sexual education and contraceptives, which are arguably also more effective at curbing abortions than bans.

The classic problem in the programming of self-driving cars concerns accident avoidance. What should a vehicle do if it must choose between swerving into a crowd of ten people or slamming into a wall, killing its owner? The quandary is not just ethical but commercial (would you buy a car programmed to kill you under certain circumstances?), and it holds a mirror to the harsh decisions we, as humans, make but like to overlook. The horrifying edge of A.I. is not really HAL 9000, the rogue machine that doesn’t wish to be turned off. It is the ethically calculating car, the military drone: robots that do precisely what we want, and mechanize our moral behavior as a result.
