Psychology Research in the Age of Social Media
Popular summaries of fun psychology articles regularly float to the top of my Facebook feed. I was particularly struck by this fact on Monday when these two links popped up:
What Wealth Does to Your Soul (The Week)
Wheat People vs. Rice People (New York Times)
Both links struck me because the studies they reported seemed methodologically rather weak, though interesting and fun. I began to wonder whether social media might be seriously aggravating the perennial plague of sexy but dubious psychological research.
Below, I will present some dubious data on this very issue!
But first, why think these two studies are methodologically dubious? [Skip the next two paragraphs if you like.]
The first article, on whether the rich are jerks, is based largely on a study I critiqued here. Among that study's methodological oddities: The researchers set up a jar of candy in their lab, ostensibly for children in another laboratory, and then measured whether wealthy participants took more candy from the jar than less-wealthy participants. Cute! Clever! But here's the weird thing: although the jar was in their own lab, they measured candy-stealing by asking participants whether they had taken candy rather than by directly observing how much candy was taken. What could justify this methodological decision, which puts the researchers at a needless remove from the behavior being measured and conflates honesty with theft? The linked news article also mentions a study suggesting that expensive cars are more likely to be double-parked. See, the rich really are jerks! Of course, another possibility is that the wealthy are simply more willing to risk the cost of a parking ticket.
The second article highlights a study that examines a few recently popular measures of "individualistic" vs. "collectivistic" thinking, such as the "triad" task (e.g., when given train, bus, and tracks and asked to group two together, whether the participant pairs trains with buses [because of their category membership, supposedly individualistic] or with tracks [because of their functional relation, supposedly collectivistic]). According to the study, the northern Chinese, from wheat-farming regions, are more likely to score as individualistic than are the southern Chinese, from rice-farming regions. A clever theory is advanced: wheat farming is individualistic, rice farming communal! (I admit, this is a cool theory.) But how do we know that this difference is the source of the different performance on the cognitive tasks? Well, two alternative hypotheses are tested and found to be less predictive of "individualistic" performance: pathogen prevalence and regional GDP per capita. But the wheat vs. rice difference is almost a perfect north-south split, and other things also differ between northern and southern China -- other aspects of cultural history, even the spoken language. So although the data fit nicely with the wheat-rice theory, many other possible explanations remain unexplored. A natural next step would be to look at rice vs. wheat regions in other countries to see whether they show the same pattern. At best, the conclusion is premature.
I see the appeal of this type of work: It's fun to think that the rich are jerks, or that there are major social and cognitive differences between people based on the agricultural methods of their ancestors. Maybe, even, the theories are true. But it's a problem if the process by which these kinds of studies trickle into social media has much more to do with how fun the results are than with the quality of the work. I suspect the problem is especially serious if academic researchers who are not specialists in the area take the reports at face value, and if these reports then become a major part of their background sense of what psychological research has recently revealed.
Hypothetically, suppose a researcher measured whether poor people are jerks by judging whether people in more or less expensive clothing were more or less likely to walk into a fast-food restaurant with a used cup and steal soda. This would not survive peer review, and if it did get published, objections would be swift and angry. It wouldn't propagate through Facebook, except perhaps as the butt of critical comments. It's methodologically similar, but the social filters would be against it. I conjecture that we should expect to find studies arguing that the rich are morally worse, or finding no difference between rich and poor, but not studies arguing that the poor are morally worse (though they might be found to have more criminal convictions or other "bad outcomes"). (For evidence of such a filtering effect on studies of the relationship between religion and morality, see here.)
Now I suspect that in the bad old days before Facebook and Twitter, popular media reports about psychology had less influence on philosophers' and psychologists' thinking about areas outside their specialty than they do now. I don't know how to prove this, but I thought it would be interesting to look at the usage statistics on the 25 most-downloaded Psychological Science articles in December 2014 (excluding seven brief articles without links to their summary abstracts).
The article with the most views of its summary abstract was The Pen Is Mightier Than the Keyboard: Advantages of Longhand over Laptop Note Taking. Fun! Useful! The article had 22,389 abstract views in December 2014 and 1,320 full-text or PDF downloads. So it looks like at most 6% of the abstract viewers bothered to glance at the methodology. (I say "at most" because some viewers might go straight to the full text without viewing the abstract separately; see below for evidence that this happens with other articles.) Thirty-seven news outlets picked up the article, according to Psychological Science, tied for highest with Sleep Deprivation and False Memories, which had 4,786 abstract views and 645 full-text or PDF downloads (at most 13% clicking through).
Contrast these articles with the "boring" articles (not boring to specialists!). The most downloaded article, by full-text and PDF views, was The New Statistics: Why and How: 1,870 abstract views and 4,717 full-text and PDF views -- more than twice as many full views as abstract views. Psychological Science reports no media outlets picking this one up. I guess people interested in statistical methods want to see the details of articles about statistical methods. One other article had more full views than abstract views: the ponderously titled Retraining Automatic Action Tendencies Changes Alcoholic Patients’ Approach Bias for Alcohol and Improves Treatment Outcome, with 164 abstract views and 274 full views (67% more full views than abstract views). That article was picked up by only one media outlet. Overall, I found an r = -.49 (p = .01) correlation between the number of news-media pickups and the log of the ratio of full-text views to abstract views.
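For readers who want to check the arithmetic, here is a minimal sketch of the kind of computation described, using only the four articles whose numbers are quoted above. Since the full analysis covered all 25 articles (whose data aren't reproduced here), the r this sketch prints will not match the reported -.49; it just illustrates the log-ratio and correlation steps.

```python
import math
from statistics import mean

# The four articles quoted above: (media pickups, abstract views, full-text/PDF views)
articles = [
    (37, 22389, 1320),  # The Pen Is Mightier Than the Keyboard
    (37, 4786, 645),    # Sleep Deprivation and False Memories
    (0, 1870, 4717),    # The New Statistics: Why and How
    (1, 164, 274),      # Retraining Automatic Action Tendencies ...
]

pickups = [pick for pick, _, _ in articles]
# Log of the ratio of full-text views to abstract views
log_ratio = [math.log10(full / abstract) for _, abstract, full in articles]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(pickups, log_ratio)
print(f"r = {r:.2f}")  # negative: more media pickups, lower full-to-abstract ratio
```

On this four-article subsample the correlation is strongly negative, consistent in direction with the full-sample result, though the magnitude differs because most of the data points are omitted.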
I suppose it's not surprising that articles picked up by the media attract more casual readers who view the abstract only. I have no way of knowing how many of these readers are fellow academics in philosophy, psychology, and the other humanities and social sciences. But if many are, and if my hypothesis is correct that academic researchers are increasingly exposed to psychological research based on Tweetability rather than methodological quality, that's bad news for the humanities and social sciences. Even if the rich really are jerks.