Deflating the Moral Thermostat
Ethics professors don't appear to behave any better than non-ethicists. (See here for my most recent work on this topic, in collaboration with Josh Rust.) On what I've been calling the "inert discovery" model of philosophical moral reflection, moral reflection still tends to lead to the discovery of moral truths; it just has no improving effect on one's personal behavior. So, for example, philosophical moral reflection might lead one to think that eating meat is morally bad (as a majority of ethicists do seem to think), but without any material effect on one's cheeseburger consumption rate (as my studies also seem to show).
Now, on the face of it, the inert discovery model has seemed to me empirically somewhat unlikely. If one discovers that something is morally bad, shouldn't that add to one's motivations not to do it? And shouldn't one then, on average, do it a bit less? Shouldn't coming to believe that eating meat is morally bad slightly reduce, at least on average, one's cheeseburger consumption rate?
Here's one possible way, though, to make the inert discovery model work. Plausibly, most people do not -- despite the classic Dixieland song -- actually wish to number among the saints. Instead, maybe, most people have what we might think of as a moral thermostat. We want to be about as morally good as our neighbors, or maybe (in our perhaps somewhat deluded self-conception) somewhat better, or at least not among the first-class jerks. (Different people might have different thermostat settings.) If we do have moral thermostats, that would explain the moral licensing effect -- the tendency to be more likely to do something a little morally bad after having done something a little morally good. Once you have met or exceeded your target level of moral goodness, you can more easily justify permitting yourself some small violation. And to a large extent, for most people, this moral thermostat might be comparative: if other people are cheating, it is easier to justify allowing yourself to cheat too than if no one else is cheating. Who wants to be the sucker saint?
So maybe, then, if an ethicist comes to accept high moral standards -- against eating meat, in favor of donating large amounts of money to famine relief, etc. -- but does not adjust her moral thermostat relative to others, her behavior will not change despite the moral discovery. She wants, say, to be morally better than 80% of those around her, and even before discovering the badness of eating meat, she conceptualized herself that way. Now, if she were to cease eating meat while others did not, she would be (in her own conception) better than, say, 85% of others -- heading too far toward sucker-saint territory for her own tastes.
Thus, the result of an ethicist's accepting high moral standards would be not an improvement in personal behavior but rather the adoption of the view that both she and other people are morally worse, by absolute standards, than she had previously thought.