Weird Minds Might Destabilize Human Ethics
Intuitive physics works great for picking berries, throwing stones, and walking through light underbrush. It's a complete disaster when applied to the very large, the very small, the very energetic, or the very fast. Similarly for intuitive biology, intuitive cosmology, and intuitive mathematics: They succeed for practical purposes across long-familiar types of cases, but when extended too far they go wildly astray.
How about intuitive ethics?
I incline toward moral realism. I think that there are moral facts that people can get right or wrong. Hitler's moral attitudes were not just different from ours but actually mistaken. The twentieth century "rights revolutions" weren't just change but real progress. I worry that if artificial intelligence research continues to progress, intuitive ethics might encounter a range of cases for which it is as ill prepared as intuitive physics was for quantum entanglement and relativistic time dilation.
Intuitive ethics was shaped in a context where the only species capable of human-grade practical and theoretical reasoning was humanity itself, and where human variation tended to stay within certain boundaries. It would be unsurprising if intuitive ethics were unprepared for utility monsters (capable of superhuman degrees of pleasure or pain), fission-fusion monsters (who can merge and divide at will), AIs of vastly superhuman intelligence, cheerfully suicidal AI slaves, conscious toys with features specifically designed to capture children's affection, giant virtual sim-worlds containing genuinely conscious beings over which we have godlike power, or entities with radically different value systems. We might expect human moral judgment to be baffled by such cases and to deliver wrong or contradictory or unstable verdicts.
For physics and biology, we have pretty good scientific theories by which to correct our intuitive judgments, so it's no problem if we leave ordinary judgment behind in such matters. However, it's not clear that we have, or will have, such a replacement in ethics. There are, of course, ambitious ethical theories -- "maximize happiness", "act on that maxim that you can at the same time will to be a universal law" -- but the development and adjudication of such theories depends, and might inevitably depend, on our intuitive judgments about such cases. It's because we intuitively or pre-theoretically think we shouldn't give all our cookies to the utility monster or kill ourselves to tile the solar system with hedonium (the minimal collection of atoms capable of feeling pleasure) that we reject the straightforward extension of utilitarian happiness-maximizing theory to such cases and reach for a different solution. But if our commonplace ethical judgments about such cases are not to be trusted, because these cases are too far beyond what we can reasonably expect human moral intuition to handle well, what then? Maybe we should kill ourselves to tile the solar system with hedonium, and we're just unable to appreciate this fact with moral theories shaped for our limited ancestral environments?
Or maybe morality is constructed from our judgments and folkways, so that whatever moral facts there are, they are just the moral facts that we (or idealized versions of ourselves) think there are? Much like an object's being red, on a certain view of the nature of color, consists in its being such that ordinary human perceivers in normal conditions would experience it as red, maybe an action's being morally right just consists in its being such that ordinary human beings who considered the matter carefully would regard it as right? (This is a huge, complicated topic in metaethics, e.g., here and here.) If we take this approach, then morality might change as our sense of the world changes -- and as who counts as "we" changes. Maybe we could decide to give fission-fusion monsters some rights but not other rights, and shape future institutions accordingly. The unsettled nature of our intuitions about such cases, then, might present an opportunity for us to shape morality -- real morality, the real (or real enough) moral facts -- in one direction rather than another, by shaping our future reactions and habits.
Maybe different social groups would make different choices with different consequences for group survival, introducing cultural evolution into the mix. Moral confusion might open into a range of choices for moral architecture.
However, the range of legitimate choices is, I'm inclined to think, constrained by certain immovable moral facts, such as that it would be a moral disaster if the most successful future society constructed human-grade AIs, as self-aware as we are, as anxious about their future, and as capable of joy and suffering, simply to torture, enslave, and kill them for no good reason.
----------------------------------------------
Related posts:
Two Arguments for AI (or Robot) Rights (Jan. 16, 2015)
How Robots and Monsters Might Break Human Moral Systems (Feb. 3, 2015)
Cute AI and the ASIMO Problem (Jul. 24, 2015)
----------------------------------------------
Thanks to Ever Eigengrau for extensive discussion.