Recently, the topic came up of whether values profiles (and Moral Foundations scores more specifically) predict behavior. On the one hand, social and contextual factors often loom larger than individual factors in determining moral behavior. On the other hand, it seemed rather unlikely that something as central as a person’s values would not predict their behavior. While the effects may be small and indirect in many cases, I would expect a person’s value profile to predict almost everything they do in life. As a test case, I decided to examine whether moral foundations scores, which measure how much a person cares about harming others, fairness, obeying authority, being loyal, and being pure in the context of moral judgments, predict whether a visitor to YourMorals.org visited using a Mac vs. a PC. Below is the graph.
The Values Profile of Mac vs. PC Users
While all visitors to YourMorals.org are generally liberal, within this group it looks as if Windows users are more conservative than Mac users: they appear to value Harm less and Purity more. Note that while this isn’t a representative sample, in some ways it is better for answering this question, since the users in this sample have such similar characteristics that many variables are naturally controlled for.
The take-home message for me is that while context certainly matters, so too does a person’s values, even for relatively unrelated decisions, such as which computer to use in daily life.
- Ravi Iyer
Over the past couple of years, Jon Haidt’s work has been covered by various liberal-leaning press organizations, including these articles from ThinkProgress, Alternet, Daily Kos, and the New York Times.
One of the great things about doing internet research is that web servers automatically collect information that makes it very easy to do cross-sample validation. This information can also be used to compare the people who visited us from these articles. Which group is the most liberal and how do they compare on their moral foundations scores?
First, I thought I would do a simple comparison of these groups.
There are fewer people from Daily Kos, so conclusions about them are less certain (hence the larger error bars), but it looks like (unsurprisingly) all of these groups are liberal compared to people who find us via search engines, who tend to be only slightly liberal. Their moral foundations scores show a similarly liberal pattern, with higher Harm/Fairness scores and lower Ingroup/Authority/Purity scores. Daily Kos readers are the most liberal, followed by ThinkProgress and Alternet, then NY Times readers, and finally people who found yourmorals.org via a search engine.
To me, the most interesting results are where groups appear to be equally liberal (ThinkProgress & Alternet) but still differ. ThinkProgress visitors appear especially low on Purity scores, while Alternet visitors score significantly higher on Harm/Fairness.
An even stronger test of the kinds of people who use these websites is to control for how liberal (slight, moderate, or extreme) individuals at these sites report themselves to be and examine individuals within each group of liberals. Those results are below.
This is the graph for people who said they were “very liberal”.
These are the results for people who said they were “liberal”.
These are the results for people who said they were “slightly liberal”. Interestingly, there weren’t enough slight liberals in the Daily Kos sample to include them in this graph.
The pattern seems fairly robust in that ThinkProgress visitors care less about Purity. Perhaps they are less religious? Alternet visitors seem to care more about Harm/Fairness. Perhaps they are more empathically motivated and ThinkProgress visitors are more rationally oriented. I don’t know enough about the liberal blogosphere to theorize well about why these differences exist, but I’m hopeful that by sharing these differences, others will be able to enlighten me. At the very least, I hope readers of these sites will find it interesting.
Would you be interested in seeing how your group compares to others on the moral foundations questionnaire? Or visitors to your website? You may have noticed a small “create a group” link on our explore page of yourmorals.org which lets you create a custom URL, whereby each visitor’s graphs will not only let them compare their individual scores to other liberals/conservatives, but also to members of their group, and to compare their group scores to the average liberal/conservative. Once you create those URLs, you can put them into blog posts, articles, or emails targeting your group. We are still beta testing the feature, but would welcome anyone who wants to try it out and who perhaps has feedback on how we can improve it.
- Ravi Iyer
I was immediately attracted to Moral Foundations Theory (MFT) due to the utility of breaking down partisan and policy differences into questions of what one values. The idea that different people believe in different moral principles is one of those obvious ideas that is still underappreciated in everyday life, where we attribute differences to ignorance, stupidity, or evil, rather than to underlying value differences.
However, I have never been convinced that there are specifically five foundations, or even that thinking of moral concerns as categorically ‘foundational’ is better than thinking of them in some other, less categorical way. Fortunately, those who originally conceived of Moral Foundations Theory do not require such homogeneous thinking and even welcome the idea that the five-foundation model is likely to undergo changes. I have previously outlined a few changes I would make, as well as the criteria that one might use to posit a new moral category. Even if one does not believe in the categorical distinction that some moral concerns are ‘foundations’ while others are not, it would seem clear that some moral concerns are more common, distinct, and important. I would now like to make that case for honesty.
Honesty is common.
One of the distinctive traits of MFT is its evolutionary focus. People moralize various things (e.g. eating pork or driving while using a cellphone) in various cultures, but the purpose is to identify those moral concerns that appear cross-culturally and have an innate quality. Innate, in this instance, means “organized ahead of experience”, such that people can make intuitive judgments beyond their socialization. Put more concretely, if concern about honesty is innate and universal, one might expect individuals to be able to intuitively signal and detect honesty in others, as this study, where participants are fairly successful in figuring out who will cooperate or cheat, shows. The idea that concern about honesty is universal enough that one might posit an evolutionary story is almost self-evident, but this paper provides evolutionary models of how honesty might evolve. If one subscribes to the evolution of groups that out-compete other groups, one can witness the evolution of honesty in modern society, as nations with low levels of corruption tend to have better economies than nations with high levels of corruption, mirroring the theorized evolutionary processes.
Honesty is distinct.
The same paper I cited above has some evidence for this, but from the perspective of Moral Foundations Theory, it would be useful to show that honesty is distinct from the other moral concerns. We asked YourMorals users four questions about honesty (alpha = .69, or .76 if we remove the relevance question) in addition to the standard Moral Foundations Questionnaire, which measures the existing five foundational concerns. Factor analyses tell the same story, but examining the correlations tells it more simply. Specifically, the highest correlation between endorsement of honesty and any other foundation is .31 (with Purity), while the other foundations have fairly high inter-correlations with one another (e.g. Purity/Authority/Ingroup inter-correlate >.5, and the Harm/Fairness inter-correlation = .57). Concern about honesty is empirically distinct from other moral concerns.
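For readers curious how these reliability and correlation statistics are computed, here is a minimal sketch. The data are synthetic, not the YourMorals dataset: the single latent trait, item counts, and noise levels are illustrative assumptions only.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 4 "honesty" items driven by one latent trait,
# plus a "purity" score sharing only a modest part of that trait
rng = np.random.default_rng(0)
latent = rng.normal(size=500)
honesty_items = latent[:, None] + rng.normal(scale=1.0, size=(500, 4))
alpha = cronbach_alpha(honesty_items)

honesty_score = honesty_items.mean(axis=1)
purity_score = 0.3 * latent + rng.normal(scale=1.0, size=500)
r = np.corrcoef(honesty_score, purity_score)[0, 1]
```

With this setup the four items hang together (alpha around .8) while the honesty/purity correlation stays modest, mirroring the pattern described above.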
Honesty is important.
The pragmatic utility of using the moral foundations to predict ideological differences is perhaps the primary contribution of MFT to date. Are questions about honesty also pragmatically useful?
On a 7-point scale, those who are more conservative endorse questions about honesty more than those who are liberal, but the amount of variance in political attitudes predicted by endorsement of honesty is smaller, though significant, compared to other foundations (beta = .10 vs. other foundations, which range from .12 (ingroup) to .33 (purity)). However, if we look at economic conservatism, we do find that endorsing honesty predicts identification as economically conservative (beta = .13), as do authority, ingroup & purity concerns (betas = .10, .09, & .11).
I looked at several political attitude variables, and the predictive power of endorsing honesty was not impressive. However, endorsement of honesty is a strong negative predictor (in a regression equation including the other five foundations) of psychopathy (beta = -.23) and utilitarianism (beta = -.26, e.g. willingness to sacrifice one life to save five others). Measuring endorsement of honesty may thus have important pragmatic utility, just not for political outcomes.
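The betas reported here are standardized regression coefficients, i.e. ordinary least squares run on z-scored variables. A minimal sketch of that computation, using simulated data in which a negative honesty effect is an assumption built in for illustration (these are not our actual variables or effect sizes):

```python
import numpy as np

def standardized_betas(X, y):
    """OLS coefficients after z-scoring predictors and outcome
    (the 'beta' weights conventionally reported in psychology)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    design = np.column_stack([np.ones(len(yz)), Xz])
    coef, *_ = np.linalg.lstsq(design, yz, rcond=None)
    return coef[1:]  # drop the intercept

# Illustrative only: an outcome negatively driven by honesty endorsement
# and weakly positively driven by a second predictor
rng = np.random.default_rng(1)
honesty = rng.normal(size=2000)
harm = rng.normal(size=2000)
outcome = -0.25 * honesty + 0.1 * harm + rng.normal(size=2000)
betas = standardized_betas(np.column_stack([honesty, harm]), outcome)
```

Because both sides are z-scored, each beta can be read as the expected standard-deviation change in the outcome per standard-deviation change in that predictor, holding the others constant.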
- Ravi Iyer
Gratitude has been theorized to be a moral emotion, yet it has largely been studied for its hedonic benefits rather than its effect on moral reasoning. I had done some previous analyses on our data at yourmorals.org where scores on the gratitude quotient scale were positively related to nearly all measures of moral reasoning. By itself, this isn’t particularly interesting, as there are many possible interpretations. People who have nice things happen to them may feel grateful and also be nice people. Nicer, more moral people may do good things in life and receive benefits for them, for which they are grateful. The numerous interpretations make any conclusion difficult.
As such, I ran an experiment testing the effects of gratitude on moral reasoning: before completing the moral foundations questionnaire, participants were asked to write about 5 things they were grateful for, 5 hassles from their life, or 5 neutral events. Below are the results for ~1500 participants. Generally, it seems gratitude makes people more morally liberal: when I examined the standard liberal/conservative moral split (Harm & Fairness minus Authority, Ingroup, & Purity), there was a marginally significant relationship (p=.06) between being in the gratitude condition and having a greater liberal split. The effect sizes are obviously small, but those in the gratitude condition appear to endorse the fairness foundation more (p<.01) and the authority foundation less (p<.05).
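The split score used above is simply the sum of the individualizing foundations minus the sum of the binding ones. A minimal sketch (the example respondent’s scores are hypothetical):

```python
def liberal_split(harm, fairness, authority, ingroup, purity):
    """The standard liberal/conservative moral split: individualizing
    foundations (Harm, Fairness) minus binding foundations
    (Authority, Ingroup, Purity)."""
    return (harm + fairness) - (authority + ingroup + purity)

# A hypothetical liberal-leaning respondent on a 0-5 foundation scale
split = liberal_split(harm=4.0, fairness=4.5, authority=2.0,
                      ingroup=1.5, purity=1.0)  # -> 4.0
```

Higher (more positive) values indicate a more stereotypically liberal profile; conservatives, who endorse all five foundations more evenly, land closer to zero.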
I’m not sure how to interpret this result. It may just be random error. To explore the result further, I looked at the individual fairness questions.
The fact that the gratitude manipulation has a fairly homogeneous effect at the question level is promising. Fairness can be thought of in many different ways: as a concern for equality, or as a concern for people not getting what they deserve. The “RICH” and “TREATED” questions appear to show the biggest effect, and they are the most indicative of a concern for equality (see question text below). I could imagine a theoretical argument for this link, as being grateful and satisfied with a situation allows one the luxury of being generous and worrying about equal treatment. There is research indicating that being grateful motivates prosocial behavior (also see this article).
Here is a list of fairness questions:
TREATED – Whether or not some people were treated differently than others
UNFAIRLY – Whether or not someone acted unfairly
RIGHTS – Whether or not someone was denied his or her rights
FAIRLY – When the government makes laws, the number one principle should be ensuring that everyone is treated fairly.
JUSTICE – Justice is the most important requirement for a society.
RICH – I think it’s morally wrong that rich children inherit a lot of money while poor children inherit nothing.
Still, I’m not 100% convinced of these results given the small effect sizes and will likely have to do more studies to confirm if this effect is replicable or is just an effect of noisy data. Another way to look at the reliability of these effects is to examine whether these effects are consistent across groups. It does appear that the effect is consistent across groups for increasing fairness.
The robustness of this effect is less consistent for the Authority foundation, though it is perhaps worth considering why grateful libertarians may endorse authority less. Perhaps the only reason for libertarians to value authority is out of a sense of insecurity. For example, the libertarian party does espouse the idea that the only role of government is to provide security for property rights. If that security is provided, perhaps libertarians see no need for any authority?
I’m not sure if I have enough evidence for a paper. All research is somewhere between zero and one in terms of its conclusiveness, and these results may be too preliminary to reach the somewhat arbitrary standard of paper-hood. I could clearly strengthen these results with a regression analysis of our large correlational dataset to confirm these patterns. I’ll have to get feedback from more objective parties.
A few years ago, I was fortunate to catch a wonderful talk by Jon Haidt at the Gallup Positive Psychology Summit about moral foundations theory, which seeks to determine the fundamental systems of morality. I sought to use his scale in my work, and using that scale eventually grew into our current collaboration (along with Jesse Graham, Pete Ditto, and Sena Koleva) on yourmorals.org, where the main instrument of moral foundations theory, the moral foundations questionnaire, is available.
The moral foundations questionnaire measures 5 foundations. The below descriptions are taken from the moral foundations theory webpage.
1) Harm/care, related to our long evolution as mammals with attachment systems and an ability to feel (and dislike) the pain of others. This foundation underlies virtues of kindness, gentleness, and nurturance.
2) Fairness/reciprocity, related to the evolutionary process of reciprocal altruism. This foundation generates ideas of justice, rights, and autonomy.
3) Ingroup/loyalty, related to our long history as tribal creatures able to form shifting coalitions. This foundation underlies virtues of patriotism and self-sacrifice for the group. It is active anytime people feel that it’s “one for all, and all for one.”
4) Authority/respect, shaped by our long primate history of hierarchical social interactions. This foundation underlies virtues of leadership and followership, including deference to legitimate authority and respect for traditions.
5) Purity/sanctity, shaped by the psychology of disgust and contamination. This foundation underlies religious notions of striving to live in an elevated, less carnal, more noble way. It underlies the widespread idea that the body is a temple which can be desecrated by immoral activities and contaminants (an idea not unique to religious traditions).
According to Jon Haidt, “Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make social life possible.”
Perhaps one of the most compelling parts of the theory is that it invites people to try and posit a 6th foundation. There was even a prize offered by Jon to those who succeeded and a number of possible candidates are listed here.
How can we determine what is or is not a foundation? Some of the criteria are listed on the above webpage. Borrowing from a recent lecture I attended on approaches to develop foundations of ‘personality’, I would list the below criteria as important.
- Factor analysis/Conceptual Distinction – Factor analysis is the most common way that people empirically determine distinct constructs. The idea is that if two constructs are distinct, questions about one construct should inter-correlate to form a separate factor from questions about the other. So if questions about Harm load on a separate factor from questions about Fairness, we can conclude they are separate constructs. I would argue that this is a necessary, but not sufficient, test of any new foundation. It is possible to ask questions with enough specificity that anything can be a separate factor: five questions about harm using a knife will likely load on a separate factor from five questions about harm by drowning, yet does that mean they are separate foundations? Furthermore, work on moral confabulation and moral intuition leads many researchers to believe that individuals are fundamentally naive about what drives their moral reasoning. As such, direct questions may not be able to illuminate all possible moral systems.
- Cluster analysis – One of the most important applications of moral foundations theory is that it successfully describes the differences between liberals and conservatives in a fairly robust manner. Some personality scale developers take the view that if a question successfully differentiates classes of people, it’s a good question. This is true for the moral foundations questionnaire up to a point, but more work could certainly be done. Five foundations should conceivably posit five classes of people (individuals who value each foundation over the other four), and the co-occurrence of many of these foundations is evidence that some current foundations may share a moral system, or that these clusters have yet to be identified.
- Evolutionary explanation – One of the most important aspects of moral foundation theory is that it contains a plausible evolutionary explanation of all systems. Evolutionary evidence should include both cross-cultural universality and a coherent evolutionary explanation. The current foundations are well described in terms of their evolutionary roots, having grown out of anthropological field work, and future foundation candidates should be equally well described in terms of evolutionary theory and equally universal cross-culturally.
- Beyond Self Interest – I often think that people who are in front of me in traffic are jerks. Why don’t they just get out of the way? If you catch me on a particularly bad day, I may even consider them to be immoral people. But is ‘getting out of my way’ a moral system? Human beings are notoriously clever at moralizing their self-interest and any candidate foundation needs to go beyond self interest. The relevant question would be whether I would judge the other people to be at fault from the perspective of a neutral third party. Given that I don’t routinely chastise drivers for being in the way of other drivers, I would say that my beliefs in this example are not the result of a moral system, but rather my personal self-interest.
- Beyond Harm – There are lots of different ways to harm another person. Some would argue that Harm is too broad a moral category, but as long as Harm is included as a moral foundation, any subsequent candidate foundation will necessarily be forced to answer the question “Is this reducible to harm?”. The question which would need to be asked empirically is whether individuals would judge an act to be wrong even if nobody were harmed. This may seem like an easy test, but consider the case of liberty, an oft-raised criticism of moral foundations theory as something that has been left out. Most people would think it wrong for someone to deprive somebody else of their freedom. It’s conceptually distinct from physical harm, potentially describes a class of people (libertarians), has an evolutionary explanation (the need for groups to encourage explorers?), and is not just self-interest, as I care about other people’s liberty, not just my own. However, would I care about somebody else’s liberty if they didn’t want to be free? It’s a difficult question, as I think the intuitive reaction is to assume that the person doesn’t know any better and really would be better off being free. But what if I were absolutely convinced that they enjoyed captivity, or even benefited from it? Should they be free? It’s a more complex question than one might initially think, and it shows some of the complexity of developing foundations. Ideally, we should be able to find cases where any foundation is generally used, even in cases where the use of that foundation causes harm.
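The first two empirical criteria above (factor analysis and cluster analysis) can be sketched in code. The data below are synthetic, and their structure, two latent factors and two respondent profiles, is an assumption built in for illustration; the factor-count step uses the common Kaiser eigenvalue screen as a simplified stand-in for a full factor analysis, and the cluster step uses a minimal hand-rolled k-means.

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Factor analysis sketch ---
# Six items, three loading on each of two latent factors. The Kaiser
# criterion keeps factors whose correlation-matrix eigenvalue exceeds 1.
f1, f2 = rng.normal(size=(2, 1000))
items = np.column_stack(
    [f1 + rng.normal(scale=0.5, size=1000) for _ in range(3)]
    + [f2 + rng.normal(scale=0.5, size=1000) for _ in range(3)]
)
eigvals = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
n_factors = int((eigvals > 1).sum())

# --- Cluster analysis sketch ---
# Two hypothetical profiles on the five foundations (0-5 scale):
# high individualizing vs. high binding scores.
grp_a = rng.normal([4.0, 4.0, 1.5, 1.5, 1.5], 0.3, size=(100, 5))
grp_b = rng.normal([3.0, 3.0, 3.5, 3.5, 3.5], 0.3, size=(100, 5))
X = np.vstack([grp_a, grp_b])

def kmeans(X, init, iters=25):
    """Minimal k-means: assign to nearest centroid, recompute means."""
    centroids = init.astype(float).copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0)
                              for j in range(len(centroids))])
    return labels

labels = kmeans(X, init=X[[0, -1]])  # seed one centroid in each profile
```

On data with this built-in structure, the eigenvalue screen recovers the two factors and k-means recovers the two profile classes; on real questionnaire data, of course, the interesting question is whether such structure emerges at all.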
With that in mind, I would offer these potential modifications of our initial foundations.
- Fairness is a notoriously ambiguous word and can mean many things to many people. Current questions focus too much on fairness as equality, which is possibly a motivated concern for the harm experienced by those who receive less equal outcomes. In order to separate it further from harm, I would focus this foundation more on the principle of equity, where people get what they deserve. Equity is motivationally tied to the desire for productivity, and so this foundation would then possibly encompass ideas of property rights, sloth, and waste, which have been missing from the current taxonomy.
- Concerns about liberty, equality and rights would be moved to the Harm foundation. All of these constructs are things which could relate to the harm caused to another individual, whether it is the psychological harm due to being controlled, the emotional harm due to receiving an unequal share, or the harm to self-esteem when one does not feel like one has any rights.
- Ingroup and authority foundations have tended to predict similar things and co-occur in individuals such that one might doubt the independence of these two factors. As they are currently measured, respecting authority and being loyal could both be considered subsets of a system that might be labelled “being a good group member”. Some items which measure authority concern the desire for things to stay the same and a resistance to change, which has been shown to be indicative of conservative thought. Changing authority to this conception and labeling it ‘conservation’ while allowing ingroup loyalty to encompass other aspects of being a good group member might improve the discriminant validity of the authority and ingroup foundations.
- Many of the other candidate foundations that have been proposed deal with truth, wisdom, honesty, and authenticity. Telling the truth is a moral principle which might survive all of the above tests as it is conceptually distinct, describes a class of people (see The Dignity of Working Men), has an evolutionary explanation (trustworthiness), and is observed when it is contradictory to self-interest and causes harm to others. In conceptualizing this foundation, I might consider including things like simplicity, directness, and being a stand-up guy. This might explain why conservatives have a disdain for liberal academics who are too complex to be trusted and lack practical intelligence that is indicative of being a ‘stand-up’ guy.
These are merely hypotheses and opinions, so take them for what they’re worth. It is also important to note that the possibility of refining a theory doesn’t reduce that theory’s importance or contribution. In fact, that I (and many others) have posted about refining it shows that this theory has had a significant impact on public discourse and is worthy of refinement.