Can open government data inform voters in the 2010 election?

Unfortunately, I think the answer is no. For the last week, I’ve been attempting to update a ‘candidate calculator’ website, votehelp.org, that I helped create for the 2008 presidential election. ‘Candidate calculator’ is the term for a quiz or survey that asks you questions about issues (sometimes weighted by issue importance) and then matches you with candidates. They were extremely popular during the 2008 election because people do not have the time to follow every politician’s stance on every issue. Votehelp.org was one of many candidate calculators in the 2008 election, and certainly not the most popular (see also VAJoe, GlassBooth, and there are more…). Even so, we had a lot of traffic and press; below are our traffic stats.
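The matching idea behind these calculators can be sketched in a few lines. The issue names, position codes, weights, and candidates below are invented for illustration; this is not VoteHelp's actual scoring formula, just a minimal version of the weighted-agreement approach:

```python
# Toy candidate calculator: positions are coded -1 (oppose), 0 (neutral),
# +1 (support); the user weights each issue by how much it matters to them.

def match_score(user_positions, user_weights, candidate_positions):
    """Return a 0-100 weighted agreement score between a user and a candidate."""
    total_weight = sum(user_weights.values())
    if total_weight == 0:
        return 0.0
    agreement = 0.0
    for issue, weight in user_weights.items():
        # Distance on one issue ranges 0 (identical) to 2 (opposite);
        # convert to an agreement value between 1 and 0.
        diff = abs(user_positions[issue] - candidate_positions.get(issue, 0))
        agreement += weight * (1 - diff / 2)
    return 100 * agreement / total_weight

# Hypothetical user and candidates
user = {"healthcare": 1, "taxes": -1}
weights = {"healthcare": 3, "taxes": 1}
candidates = {
    "Candidate A": {"healthcare": 1, "taxes": 1},
    "Candidate B": {"healthcare": -1, "taxes": -1},
}

# Rank candidates by agreement with the user, best match first
ranked = sorted(
    candidates,
    key=lambda c: match_score(user, weights, candidates[c]),
    reverse=True,
)
```

Because "healthcare" carries three times the weight of "taxes" here, Candidate A (who agrees on healthcare but not taxes) outranks Candidate B.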

VoteHelp 2008 Election Visitors

VoteHelp served hundreds of thousands of visitors, so I’m guessing many millions took similar surveys when you combine traffic across all 2008 election calculators. Traffic spiked noticeably during decision-making periods (the Jan-Feb primaries and the November election) with a low bounce rate, indicating that the site served its purpose of educating the electorate. There is clearly demand for such time-saving services.

The ironic thing is that people know far more about presidential candidates than about candidates in other races. In 2010, how many people know much about local judges, state senators, or even their members of Congress? People have better things to do, even political junkies like me, and it is understandable that voters rely on partisanship rather than issue positions when making voting decisions. As useful as votehelp was in 2008, it could be even more useful in 2010 if it could change the equation so that becoming informed on individual issues was simpler.

However, the task of assembling data was difficult in 2008. We had some funding, but even for one election, the expense of the research was substantial. Repeating those methods, even just for congressional races, would be prohibitively expensive. I was hopeful that the convergence of new data-sharing technologies (APIs, XML, the semantic web) and databases (open government data sources) might streamline this process. I subscribe to mailing lists about parsing political data, follow the Sunlight Foundation on Facebook, and am aware of a few organizations, like OnTheIssues and Project VoteSmart, which track issues, some of which offer APIs. Could I combine these projects into a mashup of data that would inform 2010 voters?
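In rough outline, the mashup would pull candidate records from several sources and merge them by candidate. The sketch below stubs out the data sources with invented sample data; the function names and record shapes are placeholders, not the actual interfaces of VoteSmart, OnTheIssues, or any real API:

```python
# Hedged sketch of the mashup idea: merge issue positions and interest
# group ratings from two (stubbed, hypothetical) sources into one
# per-candidate profile.

def fetch_issue_positions():
    # Stand-in for an OnTheIssues-style source of issue stances
    return {"Barbara Boxer": {"healthcare": "supports"}}

def fetch_interest_ratings():
    # Stand-in for a VoteSmart-style source of interest group ratings
    return {"Barbara Boxer": {"Some Interest Group": 100}}

def build_candidate_profiles():
    """Join the two sources on candidate name into unified profiles."""
    positions = fetch_issue_positions()
    ratings = fetch_interest_ratings()
    profiles = {}
    for name in set(positions) | set(ratings):
        profiles[name] = {
            "positions": positions.get(name, {}),
            "ratings": ratings.get(name, {}),
        }
    return profiles

profiles = build_candidate_profiles()
```

The hard part, as it turned out, is not the join step but the fact that the underlying records are incomplete or uninformative.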

Unfortunately, a few days later, I have to admit defeat. There is tons of data out there, but it just isn’t complete or meaningful enough. For example, VoteSmart has a wonderful service that provides interest group ratings for candidates. Theoretically, these interest groups could take some of the open government data on votes and create composite viewpoints, based on their issue perspective and reflected in their ratings. However, ratings only exist for prominent politicians like Barbara Boxer, not for challengers like Carly Fiorina (her likely opponent in the California Senate race) or Steve Poizner. Fiorina may not have much of a record as a businesswoman, but Poizner certainly should have some ratings from his other official offices. Further, below is a graph of the interest group ratings that do exist for Boxer.

Interest Group Ratings for Barbara Boxer

The vast majority of ratings are either 100 or 0, which leaves little room for nuance. The increasing partisanship we see in Washington is reflected in these ratings, such that they have little predictive power beyond whether someone is a Democrat or a Republican. Perhaps interest groups aren’t the best aggregators of knowledge: their views are necessarily partisan and extreme, so their opinions of legislators are equally extreme.
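The redundancy problem can be made concrete with a toy example. The legislators and rating numbers below are invented, not real VoteSmart data; the point is that when ratings cluster at 0 and 100, a single threshold on the rating recovers the party label perfectly, so the rating adds no information beyond party:

```python
# Toy illustration of why all-or-nothing ratings add little beyond party:
# with bimodal 0/100 ratings, thresholding at 50 predicts party exactly.

# Hypothetical (legislator, party) -> rating from a liberal-leaning group
ratings = {
    ("Legislator A", "D"): 100,
    ("Legislator B", "D"): 95,
    ("Legislator C", "R"): 0,
    ("Legislator D", "R"): 5,
}

def predicted_party(rating, threshold=50):
    """Guess party from a single interest group rating."""
    return "D" if rating >= threshold else "R"

# Fraction of legislators whose party the threshold rule gets right
accuracy = sum(
    predicted_party(r) == party for (name, party), r in ratings.items()
) / len(ratings)
```

An accuracy of 1.0 here means the ratings tell a voter nothing they could not already get from the letter after the candidate's name; ratings spread across the middle of the scale would be far more informative.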

I don’t think the world needs more open government data, at least not for informing the electorate in voting decisions. More data may help the press uncover corruption, but what seems more important are objective ways to aggregate data and create meaning out of the tidal wave of public information. Political scientists and psychologists can play a role in objectively extracting meaning from this data, along with the web developers and data architects who make it available. If anybody has ideas on how I might be able to do this for 2010, I’d love to hear them, as I would love to work with smart, resourceful people on these issues. Please drop me an email or a comment. Until then, it looks like votehelp will have to wait until 2012.

- Ravi Iyer
