Why We Might Have Greater Obligations to Conscious Robots Than to Human Strangers
A new short piece by me, released today in Aeon Opinions. From the piece:
[Most philosophers and researchers on artificial intelligence agree that] if someday we manage to create robots that have mental lives similar to ours, with human-like plans, desires and a sense of self, including the capacity for joy and suffering, then those robots deserve moral consideration similar to that accorded to natural human beings.
I want to challenge this consensus.... I think that, if we someday create robots with human-like cognitive and emotional capacities, we owe them more moral consideration than we would normally owe to otherwise similar human beings.
Here’s why: we will have been their creators and designers. We are thus directly responsible both for their existence and for their happy or unhappy state. If a robot needlessly suffers or fails to reach its developmental potential, it will be in substantial part because of our failure – a failure in our creation, design or nurturance of it. Our moral relation to robots will more closely resemble the relation that parents have to their children, or that gods have to the beings they create, than the relationship between human strangers.
Continued here.
Mara Garza and I also have a full-length journal article on this topic forthcoming in a special issue of Midwest Studies -- final manuscript version here.