On Relying on Self-Report: Happiness and Charity
To get published in a top venue in sociology or social or personality psychology, one must be careful about many things -- but not about the accuracy of self-report as a measure of behavior or personality. Concerns about the accuracy of self-report tend to receive merely a token nod, after which they are completely ignored. This drives me nuts.
(Before I go further, let me emphasize that the problem here -- what I see as a problem -- is not universal: Some social psychologists -- Timothy Wilson, Oliver John, and Simine Vazire for example -- are appropriately wary of self-report.)
Although the problem is by no means confined to popular books, two popular books have been irking me acutely in this regard: The How of Happiness, by my UC Riverside colleague Sonja Lyubomirsky, and Who Really Cares, by Arthur Brooks (who has a named chair in Business and Government Policy at Syracuse).
The typical -- but not universal -- methodology in work by Lyubomirsky and those she cites takes one of two forms. Pre/post design: (A1.) Ask some people how happy (or satisfied, etc.) they are. (A2.) Try some intervention. (A3.) Ask them again how happy they are. Between-groups design: (B1.) Randomly assign some people to two or three groups, one of which receives the key intervention. (B2.) Ask the people in the different groups how happy they are. If people report greater happiness at A3 than at A1, conclude that the intervention increases happiness. If people in the intervention group report greater happiness at B2 than people in the other groups, likewise conclude that the intervention increases happiness.
This makes me pull out my hair. (Sorry, Sonja!) What is clear is that, in a context in which people know they are being studied, the intervention increases reports of happiness. Whether it actually increases happiness is a completely different matter. If the intervention is obviously intended to increase happiness, participants may well report more happiness post-intervention simply to conform to their own expectations, or because they endorse a theory on which the intervention should increase happiness, or because they've invested time in the intervention procedure and they'd prefer not to think of their time as wasted, or for any of a number of other reasons. Participants might think something like, "I reported a happiness level of 3 before, and now that I've done this intervention I should report 4" -- not necessarily in so many words.
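The worry can be made concrete with a toy simulation. Everything here is made up for illustration -- the scale, the noise, and especially the DEMAND_EFFECT parameter, which stands in for the hypothesized tendency to report more happiness after an intervention one expects to work. The point is only that a pre/post difference in reports can appear even when true happiness is, by construction, unchanged:

```python
import random

random.seed(0)

N = 1000
TRUE_EFFECT = 0.0     # by assumption, the intervention does nothing to actual happiness
DEMAND_EFFECT = 0.7   # hypothetical bias: participants bump their post-intervention report

def report(true_happiness, bias=0.0):
    """Self-report = true happiness + bias + noise, rounded and clipped to a 1-7 scale."""
    raw = true_happiness + bias + random.gauss(0, 1)
    return min(7, max(1, round(raw)))

pre, post = [], []
for _ in range(N):
    h = random.gauss(4, 1)  # each participant's (unchanging) true happiness
    pre.append(report(h))
    post.append(report(h + TRUE_EFFECT, bias=DEMAND_EFFECT))

print(f"mean pre-intervention report:  {sum(pre) / N:.2f}")
print(f"mean post-intervention report: {sum(post) / N:.2f}")
```

The post-intervention mean comes out higher even though TRUE_EFFECT is zero -- which is just to say that the A1-vs-A3 comparison, by itself, cannot distinguish a real effect from a reporting effect.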
As Dan Haybron has emphasized, the vast majority of the U.S. population describe themselves as happy (despite our high rate of depression and anger problems), and self-reports of happiness are probably driven less by accurate perception of one's level of happiness than by factors like the need to see and to portray oneself as a happy person (otherwise, isn't one something of a failure?). My own background assumption, in looking at people's self-reports of happiness, life-satisfaction, and the like, is that those reports are driven primarily by the need to perceive oneself a certain way, by image management, by contextual factors, by one's own theories of happiness, and by pressure to conform to perceived experimenter expectations. Perhaps there's a little something real underneath, too -- but not nearly enough, I think, to justify conclusions about the positive effects of interventions from facts about differences in self-report.
In Who Really Cares, Brooks aims to determine what sorts of people give the most to charity, basing his conclusions almost (but not quite) entirely on self-reports of charitable giving in large survey studies. His main finding is that self-described political conservatives report giving more to charity (even excluding religious charities) than do self-described political liberals. What he concludes -- as though it were unproblematically the same thing -- is that conservatives give more to charity than liberals do. Now maybe they do; it wouldn't be entirely surprising, and he has a little non-self-report evidence that seems to support that conclusion (though how assiduously he looked for counterevidence is another question). But I doubt that people have an especially accurate sense of how much they really give to charity (even after filling out IRS forms, for the minority who itemize charitable deductions), and even if they did, I doubt that sense would be accurately reflected in self-reports on survey studies.
As with happiness, I suspect self-reports of charitable donation are driven at least as much by the need to perceive oneself, and to have others perceive one, in a particular way as by real rates of charitable giving. Brooks seems to assume that political conservatives and political liberals are equally subject to such distorting pressures in their self-reports, and thus that differences in self-reported charity reflect actual differences in giving. But it would be just as justified -- that is to say, hardly justified at all -- to assume that the real rates of charitable giving are the same and to attribute the differences in reported charity to differences in the degree of distortion in conservatives' and liberals' self-descriptions.
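The underdetermination here is simple arithmetic: a reported average is (roughly) true giving times some report-inflation factor, and from the reports alone the two factors cannot be separated. With purely hypothetical numbers -- the survey means and inflation factors below are invented, not Brooks's data -- the same pair of reports is consistent with opposite underlying stories:

```python
# Hypothetical survey means (invented for illustration)
reported = {"conservatives": 1300.0, "liberals": 1000.0}

# Story 1: both groups inflate their reports equally (25%),
# so the reported gap reflects a real difference in giving.
story1_true = {g: r / 1.25 for g, r in reported.items()}

# Story 2: both groups really give the same amount,
# so the reported gap reflects a difference in inflation.
story2_true = 1000.0
story2_inflation = {g: r / story2_true for g, r in reported.items()}

print(story1_true)       # real gap, equal distortion
print(story2_inflation)  # no real gap, unequal distortion
```

Both stories reproduce the reported numbers exactly; the survey data alone cannot decide between them.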
Underneath sociologists' and social and personality psychologists' tendency to ignore the sources of distortion in self-report is this, I suspect: It's hard to get accurate, real-life measures of things like happiness and overall charitable giving. Such real-life measures will almost always themselves be flawed and partial. In the face of an array of flawed options, it's tempting to choose the easiest of those options. Both the individual researcher and the research community as a whole then become invested in downplaying the shortcomings of the selected methods.