When should we believe social science findings?
Recently, some colleagues of mine forwarded me this article from the Weekly Standard concerning the use of social science to delegitimize conservatism. The author makes some valid points in questioning specific studies. However, I think he fails to appreciate the breadth of evidence that underlies most social science findings.
Social scientists deal with a far more complex subject than scientists who work with rocks or chemicals. Specifically, human beings have free will. They can decide to do or not do things in response to a stimulus. Further, because we care about human beings in a way that we don’t care about rocks, we can’t always design studies perfectly, as we have to respect the wishes of others. As such, all social science has problems of sampling and generalizability.
But the fact that all social science research has flaws doesn’t mean you should ignore it. For example, presidential polls have flaws, even with the author’s preferred sampling method, as question wording, non-response, and weighting to correct for non-response all introduce bias. While each poll is imperfect, each poll still gives us some understanding of what is going on in the population. Perhaps more critically, different polls have different flaws, which means that if you aggregate across measures (e.g. see Nate Silver’s FiveThirtyEight blog), you can get something close to the truth (the same principle underlies the Wisdom of Crowds). Yes, a survey of yourmorals.org volunteers or undergraduates or Mechanical Turk participants or randomly selected households willing to answer a survey is imperfect. Yes, artificial experiments, neuroscience correlations, and self-report are all imperfect. But they are all imperfect in somewhat different ways, and if you find the same thing across each of these samples using a variety of different methodologies, then you can be pretty confident of your findings.
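The arithmetic behind this aggregation argument is simple enough to sketch. Below is a toy illustration in Python with entirely invented numbers: five hypothetical polls of the same question, each off in its own way because of its own sampling flaw. Because the flaws push in different directions, the average lands much closer to the assumed true value than even the best single poll.

```python
# Hypothetical illustration: five imperfect polls of the same question,
# each with its own systematic flaw. All numbers are invented; we take
# the "true" population value to be 0.52 for the sake of the example.
TRUE_VALUE = 0.52

poll_estimates = {
    "landline-only sample": 0.49,  # skews older
    "online opt-in panel":  0.55,  # skews younger
    "undergraduate sample": 0.56,  # skews young and educated
    "door-to-door sample":  0.50,  # skews toward whoever is home
    "weighted phone poll":  0.51,  # weighting corrects only partially
}

# Each poll's error, and the error of the simple average across polls.
individual_errors = {name: abs(v - TRUE_VALUE)
                     for name, v in poll_estimates.items()}
aggregate = sum(poll_estimates.values()) / len(poll_estimates)
aggregate_error = abs(aggregate - TRUE_VALUE)

print(f"aggregate estimate: {aggregate:.3f}")        # 0.522
print(f"aggregate error:    {aggregate_error:.3f}")  # 0.002
print(f"worst single error: {max(individual_errors.values()):.3f}")  # 0.040
```

The cancellation here is built into the invented numbers, but it reflects the real statistical point: averaging helps when the measures’ biases are not all in the same direction, which is exactly why replicating a finding across different methods and samples is more convincing than any single study.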
Personally, I don’t believe any single study or paper, and I wait to see if there is confirmation across research groups, methodologies, and samples before believing any research. This is true in social science and in other sciences as well. Andrew Ferguson, who wrote the Weekly Standard piece, is capitalizing on an intuition we all likely share: so many studies out there report so many facts, many of them contradictory (e.g. is alcohol good for your health?), that we can’t help but question them. And we should. Individual studies and papers are not proof, and we probably shouldn’t report them as such. But much of the research on liberal and conservative differences has many studies, methodologies, and samples behind it, and that is where we can be more confident. It is for this reason that I increasingly find myself drawn to computer scientists and data scientists who work on questions of aggregation, and as technology starts to pervade social science, my guess is that social science will move more toward aggregation and place less emphasis on individual papers.
I agree with Ferguson that pathologizing the other side isn’t helpful, not because the science is wrong, but because the interpretation is often subject to bias. A lack of empathy can be thought of as an ability to make rational, competent decisions or as heartlessness. Loyalty to one’s family can be thought of as noble or as nepotism. Reliance on one’s intuition can be thought of as indicative of common sense or of ignorance. But that these traits differ between liberals and conservatives is indeed a fact, with as much evidence behind it as the fact that cholesterol causes heart disease. The world’s knowledge graph will eventually encompass not just physical facts, but facts like these as well.
- Ravi Iyer