The conclusions I reach on this blog are, like those of all social science research, uncertain, as no research on human psychology can really be conclusive, given the nature of the subject. The best we can do is provide evidence toward a conclusion, and if enough evidence accumulates, the chances of that conclusion being right grow. However, while I don't believe you can prove anything, that does not mean that all evidence is flawed in exactly the same way or to the same degree. The purpose of this page is to be clear about exactly what the flaws of the conclusions I reach are. While some take flaws as an opportunity to discount research, especially if they don't like its conclusions, I think that is just as invalid as believing that research proves things definitively. Evidence about human psychology is valuable, even if it isn't conclusive. Hopefully, with enough evidence (each piece of which ideally has different, non-overlapping flaws), we can begin to understand ourselves better.
Specific flaws and limitations of my research are listed below. Not all posts have the same flaws, but below are some considerations to keep in mind.
- Sampling: The conclusions I reach are often based on results from internet samples, often the yourmorals.org dataset, which includes disproportionately well-educated, liberal, and largely white respondents. This means that any evidence about mean values, or about the percentage of people in the population who believe a given thing, is largely meaningless. As such, I will rarely even mention mean values. However, the relation between variables is usually more robust. For example, consider a question about support for a Democratic or Republican president. If you sample a lot of groups, the percentage who support either will vary wildly, but in each large group, you'll likely find that liberals support the Democrat and conservatives support the Republican. If you do not find that, then even that would be interesting. Height and income will vary wildly in any group sampled, but imagine finding a large group where women were taller than men, or where less educated individuals earned more than those with more education. Autism rates vary, but in almost every group, men will exhibit higher rates than women. Disgust levels vary across groups, but in almost any group, women and conservatives will have higher disgust sensitivity.
It is this fact that allows people to conduct experiments on groups that are far less representative. Experiments are not immune from this issue, as they still concern the relation between variables in a group, with the only difference being that one variable is manipulated instead of measured. For example, if you conducted an experiment on the effects of being exposed to polio on a US sample versus a developing-country sample, you would find vastly different results and conclude very different things. Fortunately for those who conduct experiments, such interactions are relatively rare, and when they do exist, they are interesting in and of themselves, as they tell us something else about the world.
The samples that I often use are better than most samples used in experimental work, in terms of representativeness, as internet samples are broader and more diverse than, for example, college student samples. However, it is always a good idea to consider the sample and whether one can generate a good hypothesis about why the particular sample I am using might differ from other samples. I would welcome such comments, as again, these sampling differences are interesting in and of themselves.
- Measurement: Much of the data I present is self-report data. People aren't always truthful, though studies do find that motivated internet users are more truthful than people reached by random phone calls (e.g. Gallup surveys). Internet users give reasonably good data, but it is always worth asking what the limitations of that data are. For example, asking about opinions on health care policy is very different from asking about personal drug use history. I would welcome comments relating to these issues on any specific post. However, even if a large percentage of people lie, that doesn't necessarily invalidate a study; it just makes it harder to statistically find the signal (truth) obscured by the noise (lying). As such, comments about participants who may not report accurately are most helpful when they point out ways that respondents are systematically untruthful, which is a real problem, rather than randomly untruthful, which generally is not as big an issue.
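The random-versus-systematic distinction above can be made concrete with a small simulation (a hypothetical sketch in Python with NumPy; the variable names and effect sizes are invented for illustration): random misreporting only weakens a correlation, while systematic misreporting by one subgroup manufactures a group difference that does not exist.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# A true underlying trait and an outcome it genuinely drives.
trait = rng.normal(size=n)
outcome = 0.5 * trait + rng.normal(size=n)

# Random untruthfulness: noise in self-reports, unrelated to anything else.
reported_random = trait + rng.normal(size=n)

# Systematic untruthfulness: one (hypothetical) subgroup shades answers down.
group = rng.integers(0, 2, size=n)
reported_systematic = trait - 1.5 * group

r_true = np.corrcoef(trait, outcome)[0, 1]
r_random = np.corrcoef(reported_random, outcome)[0, 1]

# Random noise shrinks the correlation toward zero but keeps its sign.
print(round(r_true, 2), round(r_random, 2))

# Systematic misreporting invents a group difference that isn't really there.
true_gap = trait[group == 1].mean() - trait[group == 0].mean()
reported_gap = (reported_systematic[group == 1].mean()
                - reported_systematic[group == 0].mean())
print(round(true_gap, 2), round(reported_gap, 2))
```

The random-noise case still lets the signal through with enough respondents; the systematic case biases the conclusion no matter how large the sample gets.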
Internet research is generally better at measurement than national polls. One advantage of internet research is that people leave a trail they often do not realize. As such, it is often possible to tell the difference between survey satisficing and real, honest responses by looking at that trail. Also, we hold people's attention long enough to ask full, well-validated measures, using a battery of questions rather than single-item measures. For example, if I want to measure disgust sensitivity, it is helpful to measure it across an array of situations, as otherwise responses will vary much more with question wording. That is one reason why pundits argue so much over question wording in national polls: when you only ask one question, there is much more measurement error tied to the way the question is phrased.
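The gain from a battery of questions over a single item can be sketched with a toy reliability simulation (Python with NumPy; the trait, item count, and noise level are all assumptions for illustration): averaging several noisy items tracks the underlying trait much better than any one item does.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 5_000, 10   # respondents, items in the battery

# Latent trait (e.g., disgust sensitivity) plus item-specific wording noise.
true_score = rng.normal(size=n)
items = true_score[:, None] + rng.normal(size=(n, k))

# A single item vs. the mean of the full battery, each correlated with truth.
r_single = np.corrcoef(items[:, 0], true_score)[0, 1]
r_battery = np.corrcoef(items.mean(axis=1), true_score)[0, 1]

# The battery average is a noticeably cleaner measure than any one item.
print(round(r_single, 2), round(r_battery, 2))
```

Averaging cancels out the wording-specific noise of individual items, which is exactly why single-question polls are so sensitive to phrasing.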
Internet research is vastly inferior to a kind of measurement that often goes unappreciated. Specifically, memoirs and books in which people give in-depth accounts of what they did and why are particularly good measurements of a very limited sample. Generally, one will find that all research makes tradeoffs between good measurement and good sampling, and the best conclusions are reached by combining different research methodologies conducted by different teams of researchers. As such, I welcome comments about books, or even observations from society and personal lives, that either converge with or diverge from the conclusions I reach.
- Correlation and Causality: Any research reporting a relationship between X and Y does not mean that X causes Y. I tend to report more correlational studies, but some experimental ones. There is a very real danger in any X<->Y relationship that some other variable causes both X and Y, especially variables like ideology, so I definitely welcome comments pointing out possible third variables. One of the benefits of having a large dataset is that I can often test such hypotheses.
However, I would also argue against over-interpreting experiments where random assignment shows that X causes Y. Researchers have such free rein in designing experiments that another smart researcher could probably design a study where Y causes X, and with many trying at the same time, some researcher is bound to produce a demonstration that fits their story. If neurons that fire together wire together, then you can cause activation of almost anything by activating an associated concept. As such, there are an infinite number of X-causes-Y priming studies that can be done. Ideally, if a study shows that chicken soup fights loneliness, another researcher should show that people who eat more chicken soup in their daily lives report less loneliness, to show not only that X can cause Y, but that X actually does affect Y in the world. As such, it is my personal belief that surveys are a necessary complement to experiments.
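The third-variable worry, and how a large dataset lets one test it, can be illustrated with a toy simulation (Python with NumPy; the confound and effect sizes are invented): when something like ideology drives both X and Y, the raw X-Y correlation looks substantial, but it vanishes once the confound is controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)        # hypothetical confound, e.g. ideology
x = z + rng.normal(size=n)    # X is driven by Z, not by Y
y = z + rng.normal(size=n)    # Y is driven by Z, not by X

# Raw X-Y correlation looks real even though neither causes the other.
raw_r = np.corrcoef(x, y)[0, 1]

def residualize(v, z):
    """Remove the linear effect of z from v (a simple partial-correlation step)."""
    slope = np.cov(v, z)[0, 1] / np.var(z)
    return v - slope * z

# After controlling for Z, the X-Y relationship essentially disappears.
partial_r = np.corrcoef(residualize(x, z), residualize(y, z))[0, 1]
print(round(raw_r, 2), round(partial_r, 2))
```

This is only a sketch of one testable hypothesis; in practice the candidate confound has to be measured in the dataset before it can be partialed out, which is why suggestions of specific third variables are useful.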
- Free Will: Free will is perhaps an odd limitation of social science research, but I think it's the most important. My experience is that psychologists are actually really good scientists who know their methodology as well as, or better than, their hard science counterparts. However, they are confounded by the fact that human beings are vastly different from rocks and atoms, which means that results are less 'scientific' in the sense that they won't replicate neatly. Human beings may decide not to react the same way to the same stimulus. And even if you average out this "error", you'll find that many people do not care about the average response, but rather about what any given finding means for them personally. Just because a piece of research can tell you what happened in some significant sample doesn't mean that it has any bearing on your life, as you likely differ in very particular ways. So comments about your own particular experience are welcome as well.
In conclusion, my personal opinion is to take social science research with a grain of salt, treating it as a tool to help you think about your life and goals, rather than a manual that tells you explicitly how things work or how to do things better. One professor recently told me about a student in his class who questioned the utility of research, to which he responded that studies can be seen as parables, meant to illustrate a concept. Research, in the human domain, often illustrates what can happen or what does happen to other people. How that bears upon your own life is a question only you can answer.
- Ravi Iyer