Ethics, Metaethics, and the Future of Morality
a guest post by Regina Rini
Moral attitudes change over generations. A century ago in America, temperance was a moral crusade, but you’d be hard-pressed to find any mark of it now among the Labor Day beer coolers. Majority views on interracial and same-sex relationships have swung from one pole to the other within the lifetimes of many people reading this. So given what we know of the past, we can say this about the people of the future: their moral attitudes will not be the same as ours. Some ethical efforts that now belong to a minority – vegetarianism, perhaps – will become as ubiquitously upheld as tolerance for interracial partnerships. Other moral matters that seem urgent now will fade from thought just as surely as the temperance movement. We can’t know which attitudes will change and how, but we do know that moral change will happen. Should we try to stop it – or even to control it?
Every generation exercises some control over the moral attitudes of its children, through the natural indoctrination of parenting and the socialization of school. But emerging technologies now give us unprecedented scope to tailor what the future will care about. From social psychology and behavioral economics we increasingly grasp how to design institutions so that the ‘easy’ or ‘default’ choices people tend to adopt coincide with the ones that are socially valuable. And as gene-editing eventually becomes a normal part of reproduction, we will be able to influence the moral attitudes of generations far beyond our own children. Of course, it is not as simple as ‘programming’ a particular moral belief; genetics does not work that way. But we might genetically tinker with brain receptivity for neurotransmitters that affect a person’s readiness to trust, or her preference for members of her own ethnic group. We won’t get to decide precisely how our descendants come to find their moral balance – but we could certainly put our thumb on the scale.
On one way of looking at it, it’s obvious that if we can do this, we should. We are talking about morality here, the stuff made out of ‘should’s. If we can make future generations more caring, more disposed to virtue, more respectful of rational agency, more attentive to achieving the best outcomes – however morality works, we should help future people to do what they should do. That is what ‘should’ means. This thought is especially compelling when we realize that some of our moral goals, like ending racism or addressing the injustices climate change will bring, are necessarily intergenerational projects. The people of the future are the ones who will have to complete the moral journey we have begun. Why not give them a head start?
But there are also reasons to think we should not interfere with whatever course the future of morality might take. For one thing, we ought to be extremely confident that our moral attitudes are the right ones before we risk crimping the possibilities for radical moral change. Perhaps we are that confident about some issues, but it would be unreasonable to be so sure across the board. Think, for example, about moral attitudes toward the idea of ownership of digital media: whether sampling, remixing, and curating count as forms of intellectual theft. Already there appear to be generational splits on this topic, driven by technology that emerged only in the last 30 years. Would you feel confident trying to preordain moral attitudes about forms of media that won’t be invented for a century?
More insidiously, there is the possibility that our existing moral attitudes already reflect the influence of problematic political ideologies and economic forces. If we decide to impose a shape on the moral attitudes of the future, then the technology that facilitates this patterning will likely be in the hands of those who benefit from existing power structures. We may end up creating generations of people less willing to question harmful or oppressive social norms. Finally, we should consider whether any attempt to direct the development of morality is disrespectful to the free agency of future people. They will see our thumbprint on their moral scale, and they may rightly resent our influence even as they cannot escape it.
One thing these reflections bring out is that philosophers are mistaken when they attempt to cleanly separate metaethical questions (what morality is) from normative ethical questions (what we should do). If this clean separation was ever tenable, our technologically expanded control over the future makes it implausible now. We face imminent questions about what we should do – which policies and technologies to employ, to which ends – that depend upon our answers to questions about what morality is. Are there objective moral facts? Can we know them? What is the relationship between morality and human freedom? The idea that metaethical inquiry is for dusty scholars, disconnected from our ordinary social and political lives, is an idea that fades entirely from view when we look to the moral future.
--------------------------------------
I’ve drawn on several philosophers’ work for some of the arguments above. For arguments in favor of using technology to direct the morality of future generations, see Thomas Douglas, “Moral Enhancement” in the Journal of Practical Ethics; and Ingmar Persson and Julian Savulescu, Unfit for the Future. For arguments against doing so, see Bernard Williams, Ethics and the Limits of Philosophy (chapter 9); and Jürgen Habermas, The Future of Human Nature.
image credit: ‘Hope in a better future’ by Massimo Valiani