Discussion about this post

William S. Robinson

When animals exceed our power in various physical ways, we think we’re entitled to kill them if they are threats to humans (e.g., wolves in the American west, bacteria, disease-carrying mosquitoes) or confine them in fenced preserves, zoos, etc. (e.g., large African mammals). I don’t see why it would, or why it should, be any different with entities that exceeded our cognitive or sensitive capacities, if we could see that they were threats to human happiness. Imagine a mutation that turned cows into brilliant inventors and conspirators that we feared would allow them to stage an uprising that would amount to trading places in the food chain. Would we think we should morally refrain from removing the threat by extermination (or genetic modification)? I think not. If that’s right, then an obvious response to potential AI systems that we suppose might exceed us in cognitive abilities or conscious sensitivity would be that it’s immoral to bring them into existence, and moral to destroy them if they present a threat. And if the aliens land and we have good grounds for thinking they are a threat to humanity, it seems it would be immoral not to resist by every means possible.

Ronald Raadsen

Interesting argument, but I am resistant to the framework itself. The imago dei tradition locates human dignity not in capacity but in a creative act, equal and non-negotiable: one cannot grade what is given. Ubuntu, Buddhism, and Daoism arrive at the same resistance from entirely different starting points. That convergence across traditions suggests the ranking framework itself may be the wrong move, however carefully constructed.
