Work on Robot Rights Doesn't Conflict with Work on Human Rights
Sometimes I write and speak about robot rights, or more accurately, the moral status of artificial intelligence systems -- or even more accurately, the possible moral status of possible future artificial intelligence systems. I occasionally hear the following objection to this whole line of work: Why waste our time talking about hypothetical robot rights when there are real people, alive right now, whose rights are being disregarded? Let's talk about the rights of those people instead! Some objectors add the further thought that there's a real risk that, under the influence of futurists, our society might eventually treat robots better than some human beings -- ethnic minorities, say, or disabled people.
I feel some of the pull of this objection. But ultimately, I think it's off the mark.
The objector appears to see a conflict between thinking about the rights of hypothetical robots and thinking about the rights of real human beings. I'd argue, in contrast, that there's a synergy, or at least that there can be a synergy. Those of us interested in robot rights can be fellow travelers with those of us advocating better recognition and implementation of human rights.
In a certain limited sense, there is of course a conflict. Every word that I speak about the rights of hypothetical robots is a word I'm not speaking about the rights of disempowered ethnic groups or disabled people, unless I'm making statements so general that they apply to all such groups. In this sense of conflict, almost everything we do conflicts with the advocacy of human rights. Every time you talk about mathematics, or the history of psychology, or the chemistry of fluoride, you're speaking of those things instead of advocating human rights. Every time you chat with a friend about Wordle, or make dinner, or go for a walk, you're doing something that conflicts, in this limited sense, with advocating human rights.
But that sort of conflict can't be the heart of the objection. The people who raise this objection to work on robot rights don't also object in the same way to work on fluoride chemistry or to your going for a walk.
Closer to the heart of the matter, maybe, is that the person working on robot rights appears to have some academic expertise on rights in general -- unlike the chemistry professor -- but chooses to squander that expertise on hypothetical trivia instead of issues of real human concern.
But this can't quite be the right objection either. First, some people's expertise is a much more natural fit for robot rights than for human rights. I come to the issue primarily as an expert on theories of consciousness, applying my knowledge of such theories to the question of the relationship between robot consciousness and robot rights. Kate Darling entered the issue as a roboticist interested in how people treat toy robots. Second, even people who are experts on human rights shouldn't need to spend all of their time working on that topic. You can write about human rights sometimes and other issues at other times, without -- I hope -- being guilty of objectionably neglecting human rights in those moments you aren't writing about them. (In fact, in a couple of weeks at the American Philosophical Association I'll be presenting work on the mistreatment of cognitively disabled people [Session 1B of the main program].)
So what's the root of the objection? I suspect it's an implicit (or maybe explicit) sense that rights are a zero-sum game -- that advocating for the rights of one group means advocating for their rights over the rights of other groups. If you advocate for the rights of Black people, maybe it seems like you care more about Black people than about other groups -- women, or deaf people, for example -- and that you're trying to nudge your favorite group to the front of some imaginary line. If this is the background picture, then I can see how attending to the issue of robot rights might come across as offensive! I completely agree that fighting for the rights of real groups of oppressed and marginalized people is far more important, globally, than wondering under what conditions hypothetical future robots would merit our moral concern.
But the zero-sum game picture is wrong -- backward, even -- and we should reject it. There are synergies between thinking about the rights of women, disempowered ethnic groups, and disabled people. Similar (though of course not identical) dynamics can occur across these cases, so that thinking about one kind of case, or about intersectional cases, can help one think about others; and people who care about one set of issues often find themselves led to care about others. Advocates for one group are more typically partners with, rather than opponents of, advocates for other groups. Think, for example, of the alliance of Blacks and Jews in the 20th-century U.S. civil rights movement.
In the case of robot rights in particular, this is perhaps less so, since the issue remains largely remote and hypothetical. But here's my hope, as the type of analytic philosopher who treasures thought experiments about remote possibilities: Thinking about the general conditions under which hypothetical entities warrant moral concern will broaden and sophisticate our thinking about rights and moral status in general. If you come to recognize that, under some conditions, entities as different from us as robots might deserve serious moral consideration, then when you return to thinking about human rights, you might do so in a more flexible way. If robots would deserve rights despite great differences from us, then of course others in our community deserve rights, even if we're not used to thinking about their situation. Thinking hypothetically about robot rights should, I hope, leave us more thoughtful and open, encouraging us to celebrate the wide diversity of possible ways of being. It should help us crack open our narrow prejudices.
Science fiction has sometimes been a leader in this. Consider Star Trek: The Next Generation, for example. Granting rights to the android named Data (as portrayed in this famous episode) conflicts not at all with recognizing the rights of his human friend Geordi La Forge (who relies on a visor to see and whom viewers would tend to racialize as Black). Thinking about the rights of the one in no way impairs, but instead complements and supports, thinking about the rights of the other. Indeed, from its inception, Star Trek was a leader in U.S. television, aiming to imagine (albeit not always completely successfully) a fair, egalitarian, multi-racial society in which not only do people of different sexes and races interact as equals, but so too do hypothetical creatures, such as aliens, robots, and sophisticated non-robotic A.I. systems.
[Riker removes Data's arm, as part of his unsuccessful argument that Data deserves no rights, being merely a machine]
------------------------------------------
Thanks to the audience at Ruhr University Bochum for helpful discussion (unfortunately not recorded in the linked video), especially Luke Roelofs.