Thanks to the publicity that moral psychology (and specifically Jon Haidt's work) has begun to receive, along with the average person's insatiable appetite for knowledge about themselves (facilitated by the internet), we have collected a truly unique dataset at yourmorals.org. It is a large community sample and includes some reaction time data. It is non-representative (skewed liberal and educated), but includes individuals from diverse, trackable sources, such that some robustness analysis is possible. However, even if we wanted to (an open question), it would be impossible for those of us who collected this data to formally publish all the results. Hence, we would like to potentially solicit your help.
Academic publishing is not easy. In psychology (though we'd be happy to publish outside of psychology), it's not enough just to have valid results; the results often have to be novel as well. Therefore, many replication studies may not be publishable, or may only be publishable in lesser-known journals or on this blog. That doesn't necessarily make the endeavor unworthwhile, as replication, or the failure to replicate, is an essential part of the scientific method, but we want people to know what they are getting into. We're open to anyone who is motivated to publish in peer-reviewed journals, and there is no inherent reason to limit this to academics. However, it's a labor-intensive process with no monetary reward, so it's quite possible that only those with an eye toward building an academic CV will be interested.
Here is a running list of potentially publishable results in our publication queue, but there are many more possibilities, and we are open to proposals on a variety of topics. If you are interested in a specific topic, this list of measures may help you determine whether we have data on it. Our data might serve as the first study in a three-study package, where a community sample reinforces the results of a lab experiment, or as convergent evidence for something you are already working on. In rare cases, if your ideas are compelling enough, we may even be willing to collect new data using additional measures, including experimental methods. However, our resources are limited, and the effort required is definitely a consideration, balanced against the contribution that could be made. Also bear in mind that a number of papers are already in progress, so it is possible that your idea is already being worked on.
If you are interested, please use this form to contact me, as it asks important questions that need to be answered. Beginning any publication process is a commitment, and we would obviously like to work on projects that reach successful conclusions. Thanks for your potential interest.
- Ravi Iyer
Unfortunately, I think the answer is no. For the last week, I've been attempting to update a 'candidate calculator' website, votehelp.org, that I helped create for the 2008 presidential election. 'Candidate calculator' is the term for quizzes or surveys that ask you questions about issues (sometimes weighted by issue importance) and then match you with candidates. They were extremely popular during the 2008 election, as people do not have the time to track every politician's stance on every issue. Votehelp.org was one of many candidate calculators in 2008, and certainly not the most popular (see also VAJoe and GlassBooth, among others). Even so, we had a lot of traffic and press; below are our traffic stats.
VoteHelp served hundreds of thousands of visitors, so I'm guessing many millions of people took similar surveys when you combine traffic from all 2008 election calculators. Traffic spiked noticeably during decision-making periods (the January-February primaries and the November election) with a low bounce rate, indicating that it served its purpose of educating the electorate. There is clearly demand for such time-saving services.
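The matching step these calculators perform can be sketched in a few lines. The following is a minimal illustration, not votehelp.org's actual algorithm: issue positions are coded on an arbitrary -2 to +2 scale, each issue carries a user-chosen importance weight, and every name and number below is hypothetical.

```python
# Minimal sketch of a candidate-calculator matching score. Positions are
# coded on a -2..+2 scale; each issue carries a user-chosen weight.
# All names and numbers are hypothetical, for illustration only.

def match_score(user_positions, user_weights, candidate_positions):
    """Weighted similarity in [0, 1]; 1 means perfect agreement."""
    total_weight = sum(user_weights[issue] for issue in user_positions)
    if total_weight == 0:
        return 0.0
    max_gap = 4.0  # largest possible distance on a -2..+2 scale
    agreement = 0.0
    for issue, pos in user_positions.items():
        gap = abs(pos - candidate_positions.get(issue, 0))
        agreement += user_weights[issue] * (1 - gap / max_gap)
    return agreement / total_weight

user = {"healthcare": 2, "taxes": -1, "environment": 1}
weights = {"healthcare": 3, "taxes": 1, "environment": 2}  # importance
cand_a = {"healthcare": 2, "taxes": -2, "environment": 0}
cand_b = {"healthcare": -2, "taxes": 1, "environment": -1}

print(round(match_score(user, weights, cand_a), 3))  # 0.875
print(round(match_score(user, weights, cand_b), 3))  # 0.25
```

The hard part, as the rest of this post argues, was never the scoring arithmetic; it was filling in `candidate_positions` for thousands of candidates.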
The irony is that people already know far more about presidential candidates than about candidates in other elections. In 2010, how many people know much about local judges, state senators, or even their congressmen? People have better things to do, even political junkies like me, and it is understandable that they rely on partisanship rather than issue positions when making voting decisions. As useful as VoteHelp was in 2008, it could be even more useful in 2010 if it could change the equation, making it simpler to become informed on individual issues.
However, the task of assembling the data was difficult in 2008. We had some funding, but even for one election, the expense of the research was not small. Repeating those methods, even just for congressional races, would be prohibitively expensive. I was hopeful that the convergence of new data-sharing technologies (APIs, XML, the semantic web) and databases (open government data sources) might facilitate this process. I subscribe to mailing lists about parsing political data, follow the Sunlight Foundation on Facebook, and am aware of a few organizations, such as OnTheIssues and Project VoteSmart, which track issues, some of which have APIs. Could I combine these projects into a mashup of data that would inform 2010 voters?
Unfortunately, a few days later, I have to admit defeat. There is a ton of data out there, but it just isn't complete or meaningful enough. For example, VoteSmart has a wonderful service providing interest group ratings for candidates. Theoretically, these interest groups could take some of the open government data on votes and create composite viewpoints, based on their issue perspective and reflected in their ratings. However, ratings only exist for prominent politicians like Barbara Boxer, and not for challengers like Carly Fiorina (her likely opponent in the California Senate race) or Steve Poizner. Fiorina may not have much of a record as a businesswoman, but Poizner certainly should have some ratings from his other official offices. Further, below is a graph of the interest group ratings that exist for Boxer.
The vast majority of ratings are either 100 or 0, which leaves little room for nuance. The increasing partisanship we see in Washington is reflected in these ratings, such that they have little predictive power beyond whether someone is a Democrat or a Republican. Perhaps interest groups, which are necessarily partisan, aren't the best aggregators of knowledge: their views are necessarily extreme, and so their opinions of legislators are equally extreme.
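The point about predictive power can be made concrete. When ratings cluster at 0 and 100 along party lines, the party label alone accounts for nearly all of the variation, so the rating adds almost no new information for a voter. The sketch below uses invented ratings, not real VoteSmart data, to illustrate the calculation.

```python
# Illustrative sketch of why all-or-nothing interest-group ratings add
# little information beyond the party label. The ratings below are
# invented for illustration; they are not real VoteSmart data.

ratings = {  # legislator -> (party, interest-group rating out of 100)
    "Legislator A": ("D", 100), "Legislator B": ("D", 100),
    "Legislator C": ("D", 95),  "Legislator D": ("R", 0),
    "Legislator E": ("R", 0),   "Legislator F": ("R", 5),
}

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

all_scores = [score for _, score in ratings.values()]
by_party = {"D": [], "R": []}
for party, score in ratings.values():
    by_party[party].append(score)

total_var = variance(all_scores)
# Average within-party variance, weighted by group size.
within_var = sum(len(v) * variance(v) for v in by_party.values()) / len(all_scores)
explained = 1 - within_var / total_var  # share of variance party alone explains

print(f"party explains {explained:.0%} of rating variance")
```

With bimodal ratings like these, `explained` comes out above 99%: once you know the party, the rating tells you almost nothing more, which is exactly the problem with using such ratings to inform issue-by-issue voting.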
I don't think the world needs more open government data, at least not for informing the electorate in voting decisions. Open data may help the press uncover corruption, but what seems more important are objective ways to aggregate that data and create meaning out of the tidal wave of public information. Political scientists and psychologists can play a role in objectively extracting meaning from this data, along with the web developers and data architects who make it available. If anybody has ideas on how I might be able to do this for 2010, I'd love to hear them, as I would love to work with smart, resourceful people on these issues. Please drop me an email or a comment. Until then, it looks like VoteHelp will have to wait until 2012.
- Ravi Iyer