An action is morally good if a hypothetical congress of a diverse range of developmentally expensive, behaviorally sophisticated social entities would tend to approve of it.
That’s a cool model, but doesn’t it get Euthyphroed? Either the intergalactic council has reasons for accepting a moral claim or they don’t. If they do, then those reasons explain the claim’s rightness. And if they don’t then why should we equate their judgments with moral truth?
This relates to my acknowledgement about apparently reversing the order of explanation (observation 2). I'm okay with Euthyphrizing in a certain way: We can, if we want, use the hypothetical convergence as a tool for getting to the reasons that the observers are responding to, then explain moral goodness in terms of those reasons. As long as (a.) there's nothing non-naturalistic involved, and (b.) the set of actions deemed good and bad are the same, I'd treat the matter (explaining in terms of the convergence or in terms of the reasons for convergence) as a metaphysical nuance. Regarding metaphysical nuance, I'm a pragmatist: We're free to choose the metaphysical frameworks that most effectively serve our purposes.
If moral rightness is grounded in the endorsements of a class of rational creatures, then moral rightness is a response-dependent property.
On no plausible taxonomy of meta-ethical views can this count as a version of moral realism.
Response-dependent properties aren't real?
Of course they are real. That is not the point. They are partly constituted by the responses of subjects, which means that they are not real in the sense intended by genuine moral realists.
Think of the traditional distinction between primary and secondary properties. Instances of the former are response-independent, e.g. ordinary material objects on a realist construal; whereas instances of the latter are partly constituted by the fact that some subject or other responds in a certain fashion or is in a certain mental state, e.g. "material objects" as construed by phenomenalists. (Some are non-naturalists, like Berkeley, others naturalists, like J.S. Mill.)
Any usable taxonomy of meta-ethical positions needs this distinction, so that the whole enterprise can be carried out in an illuminating way.
Of the philosophers you mention, Firth's ideal observer theory offers a response-dependent (or maybe a better phrase is "response-constituted") view of the right and the good; whereas Railton offers his ostensibly similar ideal-advisor theory, not as an account of how moral properties are constituted, but rather as a discovery procedure for detecting the response-independent right and the good (which is why he characterizes his view as a "stark raving realism").
There is a lot more to be said about the distinction between realist and non-realist versions of meta-ethical naturalism, but I won't try to do so here.
Fair enough. I acknowledge that on this view moral properties aren't as robustly real as some moral realists would want. But I think there's a sense of realism in which this is sufficient for moral realism. There are moral facts that don't depend on what attitudes we happen to have (even if they depend on what attitudes a broad range of hypothetical creatures would have).
You say:
"There are moral facts that don't depend on what attitudes we happen to have (even if they depend on what attitudes a broad range of hypothetical creatures would have)."
Quite correct, and virtually no response-dependent meta-ethicist would disagree. Most who claim that the right and the good are to be identified with relational, secondary, response-dependent properties would insist that the pertinent class of responses must be highly constrained by rational principles from epistemology and logic, not just any old responses. R.B. Perry and A. J. Ayer might have been exceptions, but they are in a very small minority. Most would agree with R.B. Brandt that the right and the good are to be identified with preferences, endorsements, and the like that are purified by "the facts and logic," as Brandt put it.
You also say: "...on this view moral properties aren't as robustly real as some moral realists would want." Ok, I guess I see what you mean.... But what is gained by calling your account of moral properties in terms of the hypothetical responses of rational subjects a form of realism? Why not just call it what it is: Rationally constrained, naturalist, response-dependent subjectivism?
It's fine to taxonomize differently for different purposes. But your label invites grouping my view with relativist views, and I am not a relativist. What is morally good does not depend on what people actually judge to be morally good, but only on what a congress of hypothetical beings *would* judge to be morally good. By calling it "realist", I clearly align myself with those who think there are moral facts that don't depend on our actual judgments or even overall patterns of human judgment.
You say:
"What is morally good does not depend on what people actually judge to be morally good, but only on what a congress of hypothetical beings *would* judge to be morally good."
The obvious next question is what the meta-ethical implication would be if that "congress of hypothetical beings" did not converge on a single set of responses, endorsements, preferences.
Is it part of your view that non-convergence could not happen? If so, what is to rule it out? Stipulation? (Firth and Michael Smith, e.g., try this... to no avail, I think.) Some element in the specification of the rational constraints that somehow guarantees convergence? But what could that element be, the inclusion of which in the very specification of the rationality of the hypothetical choosers does not beg the question in favour of convergence?
Without some guarantee of convergence the spectre of "relativism," in a clear enough sense of that oft bruited-about term, does raise its head. For, if one holds that the good and the right are to be identified with the objects of the preferences, endorsements, etc. of epistemically and logically rational subjects (as you do), and if those constraints of non-moral rationality do not guarantee convergence (as you may grant -- I am not sure), then one has to bite the bullet and acknowledge that the good and the right have to be relativized to sets of conflicting hypothetical rational preferences and endorsements. You may not like that implication of your view, but short of that elusive, non-question-begging rational constraint that would guarantee convergence, you have to accept it.
In that sense, they are less contingent than colors, on secondary-quality views of color.
How "less contingent"? Say a bit more, please.
If humans had different senses, then (loosely speaking) green wouldn't be green. But if humans had different patterns of moral judgment, the moral truths would remain the same (because we are only one species in a large congress).
I like this theory. My brother used something fairly similar to provide a legible moral underpinning, about which he could then describe how to do education. https://www.routledge.com/A-Theory-of-Moral-Education/Hand/p/book/9781138898547
My worry about his book, and I'll put it to you, is that it seems very radical, given that for most of human history, we seem to have made morality mostly about issues of sex and ritual. So far as I can see, a consensus (on general wellbeing?) model would probably not include any rules about sex or ritual (except the most general 'don't do sex or ritual that hurts other people' kind), so a model like this does constitute a rejection of most morality to date.
Do you agree, and do you bite this bullet? I think I do, but it still makes me uncomfortable to think how much "wisdom" from ages past I have to just summarily reject.
I'm inclined to disagree that most of morality has been about sex and ritual, though those are aspects of morality that stand out cross-culturally because of their variability. I'd argue that even in the ancient Confucian tradition, most of morality concerns benevolence, kindness, honesty, trustworthiness, and care, and ritual is mainly the *means* to implement these in a well-regulated manner.
That said, I am inclined to agree that it has been moral progress to relax norms of sex and ritual (and also to condemn slavery, aggressive warfare, and jingoism), and my hope/expectation is that a hypothetical congress of the sort described would agree with me on this.
I am very curious how an idea like this ties in with the much broader concepts of “entropy” and “life”. Surely those are a common denominator for almost any being that would be capable of agreeing on moral ideas.
I'm pretty liberal about what counts as "life". I think C-3PO counts as alive, for example, despite being nonbiological, nonevolved, and having no life cycle. So yes, the entities in question would be "alive" in my liberal sense; and that requires maintaining homeostasis, which requires energy use to create local decreases in entropy.
I guess it’s just an oddly fascinating fact to me that being a “lower/neg entropy entity” seems to be a necessary condition for being a moral agent. Intuitively, that puts some wind in naturalism’s sails, for me.
I love this way of thinking. Beyond the thought experiment, I would love to actually find out what moral systems alien species, if they exist, have produced in all their diversity, and to analyze what conditions tend to lead to one system or another. It saddens me that this will probably never happen, or might be the exclusive privilege of infinitesimally few civilizations in the universe.
The closest we can come is probably speculative science fiction. But there may be more weirdness in the world than is imagined in our fictions!
Yes! I'd love to read a nice treatment of this by a philosophy minded fiction writer in a sci-fi book!
A less generic concern is that, by going alien, you create vast and potentially unbridgeable individuation divides. Humans tend to roughly agree on what counts as an organism. But aliens might have radically different ideas. They might think that our attempts to communicate with them would be as weird as me trying to communicate with one of your neurons. They might see us as parts of larger wholes. Or they might see our parts as the relevant moral agents. Or they might not be able to see us at all. For us the problem of personhood is mostly theoretical. But not for the intergalactic council.
One of my research interests right now is in thinking about how the human sense of sharp individuation between cognitive entities is a contingent fact about the organisms we happen to be. What happens to ethics and our understanding of consciousness when we try to think beyond this framework? So I welcome a broader perspective that recognizes this contingency. If an alien species doesn't see us at all, then that's a failure to be well informed that would presumably be repaired upon sufficient inquiry and discussion with us and observers like us -- and thus would be repaired in the course of convergence.
For more on this see:
https://schwitzsplinters.blogspot.com/2024/06/conscious-subjects-neednt-be.html
https://philpapers.org/rec/SCHIIG-3
https://schwitzsplinters.blogspot.com/2020/10/slippery-in-between-persons-growing.html
https://schwitzsplinters.blogspot.com/2014/03/our-moral-duties-to-monsters.html
I’m intrigued. But here’s a little-known fact: your question is basically Kant’s question. He too wanted to know what happens to ethics and our understanding of consciousness when we come to the realization that “reality” is a product of mental individuation. And the categorical imperative, believe it or not, is Kant’s solution to that realization (in essence it tells us to individuate acts and agents coarsely). That’s why he thought his ethics falls out of his metaphysics. It’s also why all three versions of the CI are equivalent. He’s profoundly misunderstood.
I agree that was Kant's *aim*! Whether he succeeded is another question.
I like this, but I'm not sure you need the restrictions you mention on who gets included. If solitary and developmentally cheap species are involved, they won't personally care about deception or about the sacrifice of life, but it seems very reasonable that they would recognize the problems that deception and the sacrifice of life pose for the other species and thus see these as important moral norms, along the lines of supplying oxygen to the weird Earthians.
Maybe so. I'm not committed to denying that. But I worry that a standard of rational convergence among intelligent agents in general might be too stringent a requirement. A bunch of singleton AIs might be sensible knaves, each of whom prefers the destruction of the world to the scratching of their finger (or the calculation of pi, or whatever). I don't want to have to work too hard to include them in the consensus.