
CivilPolitics.org comments on Hollande’s Political Strategy for BBC World

Reposted from this post on the Civil Politics Blog

Earlier today, I appeared on BBC World’s Business Edition to comment on Francois Hollande’s efforts to unite union and business interests in working to improve the lagging French economy.  I provided the same advice that I often do to groups that are looking to leverage the more robust findings from social science in conflict resolution, specifically that rational arguments only get you so far and that real progress is often made when our emotions are pushing us toward progress, as opposed to working against us.  Accordingly, it often is better to try to get the relationships working first, in the hopes that that opens minds for agreement on factual issues.  As well, it is often helpful to emphasize super-ordinate goals, such as improving the economy as a whole in this case, as opposed to competitive goals such as hiring mandates.  Lastly, hopefully Hollande, as a socialist who is fighting for business interests, can help muddy the group boundaries that can make conflicts more intractable, providing an example of someone who is indeed focused on shared goals.

Below is the segment, and my appearance is about 2 minutes into the video.

- Ravi Iyer


Ridiculously Good Looking Celebs Are Licking Popsicles, It Must Be Summer!

Reposted from this post on the Ranker Data Blog

The Best Internet Reactions to Jeremy Meeks’s Sexy Mug Shot
Sexy mugshot photos reached a new high with the booking of the ridiculously hot felon Jeremy Meeks. If you somehow missed this gem of a story last week, you need to check these out.

Who Will Win The 2014 World Cup?
It’s official: the Ranker office has World Cup fever. And so do all of you apparently! Thousands have voted on who they think will win. You may be surprised at who’s currently on top.

24 Crazy Sexy Photos of Celebrities Eating Popsicles
It’s finally summer! When it’s hot, here’s the best possible way to cool off: with pictures of ridiculously great looking celebrities sucking down on cold, delicious, refreshing popsicles.

What Guys REALLY Talk About On Boys’ Night Out
Hint: it’s sex. Also, sex.

73 Rare Photos From Behind the Scenes of Star Wars
These leaked photos capture images of Yoda before he was finished, the building of the actual, on-set Millennium Falcon and, of course, the entire cast flirting with Princess Leia.

11 Wedding Themes That Are Just a Bad Idea
You’d assume that if someone is that big of a Game of Thrones fan, they’d know what happens at weddings, right?

The 47 Greatest Pun-tastic Restaurant Names
It doesn’t make any sense, but food that comes from restaurants with funny names always tastes better. Pulled pork sandwich from KFC? Gross. Pulled pork sandwich from Forrest Rump? Awesome.

The 26 Craziest 2014 World Cup Hair Cuts
Bonus: here is the single most important World Cup ranking you will see all day.

The post Ridiculously Good Looking Celebs Are Licking Popsicles, It Must Be Summer! appeared first on The Ranker.com Blog.


Pew Research highlights Social, Political and Moral Polarization among Partisans, but more people are still Moderates

Reposted from this post on the Civil Politics Blog

A recent research study by Pew highlights societal trends that have a lot of people worried about the future of our country.  While many people have highlighted the political polarization that exists and others have pointed to the social and psychological trends underlying that polarization, Pew’s research report is unique for the scope of findings across political, social, and moral attitudes.  Some of the highlights of the report include:

  • Based on a scale of 10 political attitude questions, such as a binary choice between the statements “Government is almost always wasteful and inefficient” and “Government often does a better job than people give it credit for”, the attitudes of the median Democrat and the median Republican are further apart than they were in 2004 and 1994.
  • On the above ideological survey, fewer people, whether Democrat, Republican, or independent, are in the middle than in 1994 and 2004, though it is still worth noting that a plurality (39%) fall in the middle fifth of the scale.
  • More people on each side see the opposing group as a “threat to the nation’s well-being”.
  • Those at the extreme left or extreme right of the ideological survey are more likely to have close friends, and to live in communities, made up of people who agree with them.

 

The study is an important snapshot of current society and clearly illustrates that polarization is getting worse, with the social and moral consequences that moral psychology research would predict when attitudes become moralized.  That being said, I think it is important not to lose sight of the below graph from their study.

 

Pew Survey Shows a Shrinking Plurality holds Moderate Views

 

Specifically, while there is certainly a trend toward moralization and partisanship, the majority of people remain in the middle of the above distributions and hold mixed opinions on political issues.  It is important that those of us who study polarization don’t exacerbate perceived differences, as research has shown that perceptions of differences can become reality.  Most Americans (79%!) still fall somewhere between having consistently liberal and consistently conservative attitudes on political issues, according to Pew’s research.  And even amongst those on the ends of this spectrum, 37% of conservatives and 51% of liberals have close friends who disagree with them.  Compromise between parties is still the preference of most of the electorate.  If those of us who hold a mixed set of attitudes can make our views more prominent, thereby reducing the salience of group boundaries, research suggests that we could mitigate this alarming trend toward social, moral, and political polarization.

- Ravi Iyer


Comparing World Cup Prediction Algorithms – Ranker vs. FiveThirtyEight

Reposted from this post on the Ranker Data Blog

Like most Americans, I pay attention to soccer/football once every four years.  But I think about prediction almost daily, so this year’s World Cup will be especially interesting to me, as I have a dog in this fight.  Specifically, UC-Irvine Professor Michael Lee put together a prediction model based on the combined wisdom of Ranker users who voted on our Who will win the 2014 World Cup list, plus the structure of the tournament itself.  The methodology runs in contrast to the FiveThirtyEight model, which uses entirely different data (national team results, plus the league results of the players who will be playing for each national team) to make predictions.  As such, the battle lines are clearly drawn.  Will the Wisdom of Crowds outperform algorithmic analyses based on match results?  Put another way, this is a test of whether human beings notice things that aren’t picked up in the box scores and statistics that form the core of FiveThirtyEight’s predictions and of sabermetrics generally.

So who will I be rooting for?  Both methodologies agree that Brazil, Germany, Argentina, and Spain are the teams to beat.  But the crowds believe that those four teams are relatively evenly matched while the FiveThirtyEight statistical model puts Brazil as having a 45% chance to win.  After those first four, the models diverge quite a bit with the crowd picking the Netherlands, Italy, and Portugal amongst the next few (both models agree on Colombia), while the FiveThirtyEight model picks Chile, France, and Uruguay.  Accordingly, I’ll be rooting for the Netherlands, Italy, and Portugal and against Chile, France, and Uruguay.

In truth, the best model would combine the signal from both methodologies, similar to how the Netflix prize was won or how baseball teams combine scout and sabermetric opinions.  I’m pretty sure that Nate Silver would agree that his model would be improved by adding our data (or similar data from betting markets that similarly think that FiveThirtyEight is underrating Italy and Portugal) and vice versa.  Still, even as I know that chance will play a big part in the outcome, I’m hoping Ranker data wins in this year’s world cup.
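A minimal sketch of the kind of blending described above: averaging win probabilities from a crowd-based model and a statistics-based model, then renormalizing. The probabilities and the equal weighting below are illustrative placeholders, not the actual Ranker or FiveThirtyEight numbers.

```python
def combine(p_crowd, p_stats, w_crowd=0.5):
    """Weighted average of two win-probability estimates, renormalized to sum to 1."""
    teams = p_crowd.keys() & p_stats.keys()
    raw = {t: w_crowd * p_crowd[t] + (1 - w_crowd) * p_stats[t] for t in teams}
    total = sum(raw.values())
    return {t: raw[t] / total for t in teams}

# Hypothetical inputs: the crowd sees the top four as roughly even,
# while the statistical model heavily favors Brazil.
crowd = {"Brazil": 0.20, "Germany": 0.20, "Argentina": 0.18, "Spain": 0.18}
stats = {"Brazil": 0.45, "Germany": 0.12, "Argentina": 0.13, "Spain": 0.10}

blended = combine(crowd, stats)
print(sorted(blended.items(), key=lambda kv: -kv[1]))
```

With equal weights, the blend still ranks Brazil first but pulls its probability well below the statistical model's estimate, which is the moderating effect one would hope for from combining sources.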

– Ravi Iyer

Ranker’s Pre-Tournament Predictions:

FiveThirtyEight’s Pre-Tournament Predictions:

The post Comparing World Cup Prediction Algorithms – Ranker vs. FiveThirtyEight appeared first on The Ranker.com Blog.


Intuitionism in Practice: How the Village Square puts Relationships First

Reposted from this post on the Civil Politics Blog

Our friends at the Village Square recently wrote an article about how they have been able to bridge partisan divides in their community, based on their experiences at numerous community dinners they put on in their neighborhoods.  Their experience dovetails nicely with what has been found in academic psychology, specifically that any type of attitude change requires appealing to the intuitive side of individuals, in addition to the rational side.  Accordingly, their “irreverently named programs are part civic forum, part entertainment” where they seek first to build relationships to open people’s minds, before attempting to get people to rationally understand the other sides’ arguments.  From the article:

In “The Big Sort: Why the Clustering of Like-minded America is Tearing Us Apart,” Bill Bishop documents how, in nearly all aspects of life, we’ve become less connected to those who don’t share our views – in the churches we go to, the clubs we join, the neighborhoods we live in.

No longer engaging across the aisle with neighbors, there’s little to mitigate the human tendency toward tribalism. Once we’ve demonized each other, the simple act of talking is tantamount to negotiating with evil.

To address this challenge, our irreverently named programs are part civic forum, part entertainment. Each event is casual (the stage is set up to feel like the facilitator’s living room) and involves sharing food. As we begin, we give out two “civility bells,” ask that the audience avoid tribal “team clapping,” and share a quote to inspire our better angels. We welcome fluid audience participation and always try to laugh.

Since we first imagined The Village Square, we have repeatedly returned to the same conclusion: We can’t wait around for Washington to lead on this. It’s in our hometowns, where we carpool to softball games and borrow cups of sugar, where we can most easily have the conversations democracy requires of us.

Recently, there has been a lot of re-examination of social science findings that may or may not replicate, especially in real-world environments.  The fact that social science research that emphasizes the importance of personal relationships in changing attitudes has found real world application and validation is comforting for those of us who would like to leverage this research in reducing morally laden conflicts.  Those of us who would like to mitigate the natural animosity that arises when competing groups are formed would do well to follow the Village Square’s lead and put relationships first.

- Ravi Iyer


Cantor Loss shows Crowdsourcing, not Polling, is the Future of Prediction

Eric Cantor, the second most powerful member of the House of Representatives, lost today’s Republican primary to the relatively unknown Dave Brat.  While others have focused on the historic nature of the loss, given Cantor’s position in his party, or on the political ramifications, I was most intrigued by the fact that recent polls had predicted Cantor would win by 34 points or by 12 points.  In the end, Cantor lost by more than 10 points.

How did the polls get it so wrong?  In an age when people are used to blocking web ads, fast-forwarding through commercials, and screening their calls, it seems unwise to use automated phone technology to ask people who they will vote for and assume you’ll get an unbiased sample (i.e. that people who answer such polls don’t differ from those who don’t).  The first banner ad famously got a 44% clickthrough rate, but banner ads are now clicked on by only a small minority of internet users.  As response rates fall, bias is inevitable.
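The arithmetic behind that worry can be sketched directly. In this toy calculation (all numbers hypothetical), a poll only observes the people who still answer, so its estimate diverges from the electorate's true preference as the responding pool shrinks and skews.

```python
def poll_estimate(support_responders, support_nonresponders, response_rate):
    """Return (true electorate support, what the poll reports).

    The poll only sees responders; the truth mixes both groups
    in proportion to the response rate.
    """
    true_support = (response_rate * support_responders
                    + (1 - response_rate) * support_nonresponders)
    observed = support_responders
    return true_support, observed

# Suppose a candidate polls at 60% among the shrinking pool who still
# answer automated calls, but only 40% among everyone else.
for rate in (0.8, 0.3, 0.05):
    true, seen = poll_estimate(0.60, 0.40, rate)
    print(f"response rate {rate:.0%}: poll says {seen:.0%}, truth is {true:.0%}")
```

At an 80% response rate the poll is close to the truth; at 5%, the poll still says 60% while the true figure is 41%, a gap large enough to call the wrong winner.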

Pollsters may try to weight their samples and adopt new techniques to produce more accurate polls, but non-response bias will only get worse as consumers learn to block out more and more solicitations using technology.  A good crowdsourcing algorithm, on the other hand, such as the algorithm we use to produce Ranker lists, does not require the absence of bias.  Rather, it combines multiple sources of information, with the goal of finding sources of uncorrelated error.  In this case, polling data could have been combined with the GOP convention straw poll, the loss of one of Cantor’s lieutenants in an earlier election, and the lack of support from Republican thought leaders to form a better picture of the election.  The non-response bias in regular polling is a different kind of bias than these other measurements likely have, so aggregating these methods should produce a better answer.
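A toy illustration of why aggregating sources with uncorrelated errors helps: each source below is biased, but the biases point in different directions, so the average lands closer to the truth than the worst source. The vote share and bias values are invented for illustration.

```python
truth = 0.45  # hypothetical true vote share for the incumbent

# Each source's estimate = truth + its own (uncorrelated) bias.
sources = {
    "automated poll":        truth + 0.12,  # non-response bias inflates support
    "straw poll":            truth - 0.05,  # activists over-represented
    "thought-leader signal": truth - 0.03,  # endorsements lag the electorate
}

aggregate = sum(sources.values()) / len(sources)
worst_single = max(abs(v - truth) for v in sources.values())
print(f"aggregate error {abs(aggregate - truth):.3f} "
      f"vs worst single-source error {worst_single:.3f}")
```

The average is off by about a point, versus twelve points for the poll alone. This only works because the errors don't all lean the same way, which is exactly the "uncorrelated error" condition described above.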

This is easy to say in hindsight, and it is doubtful that any crowdsourcing technique could have predicted Cantor’s loss given the available data.  But more and more data is being produced, and more and more bias is being introduced into traditional polling, such that this won’t always be the case.  I would predict that we will increasingly see less accurate polls and more accurate alternative methods of predicting the future.  The arc of history is bending toward a world where intelligently combining many imperfect non-polling measurements is likely to yield a better answer about the future than any one attempt to find the perfect poll.

- Ravi Iyer


Men and Women Both Lie—But They Do It For Different Reasons

Reposted from this post on the Ranker Data Blog


We all tell white lies now and then (yes you do, don’t lie!) but did you know that men and women lie for different reasons? The data from our list of Things People Lie About All the Time shows a pattern that may hint at this difference.

The poll lists 49 common lies and asks respondents to vote “yes” if they’ve lied about that in the past 6 months or “no” if they have not. According to votes cast by over 350 people, women are more likely to lie about things that “keep the peace socially” while men are more likely to lie over matters of “self-preservation.”

On the list, women are 8 times more likely than men to lie about “being too swamped to hang out” and 4 times more likely to claim that their “phone died.” These results imply that women may be more likely to feel guilty about canceling on friends or having alone time.

In contrast, men were 2 times more likely to admit to saying things like “Oh yeah! That makes sense!” when they did not understand something and 5 times more likely to say, “No officer, I do not know why you pulled me over,” when, presumably, they did know why. These types of lies could point to men’s desire to show themselves in the best possible light and cover up wrongdoing.
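For readers curious how a "K times more likely" figure falls out of vote counts, here is the calculation with made-up numbers; the post's actual figures come from Ranker's poll of 350+ respondents, not from these counts.

```python
def likelihood_ratio(yes_a, n_a, yes_b, n_b):
    """Ratio of group A's admission rate to group B's for one lie."""
    return (yes_a / n_a) / (yes_b / n_b)

# Hypothetical example: 80 of 200 women vs. 8 of 160 men say they've
# used "too swamped to hang out" in the past 6 months.
ratio = likelihood_ratio(80, 200, 8, 160)
print(f"women are {ratio:.0f}x more likely to admit this lie")
```

With these invented counts the rates are 40% vs. 5%, an 8x ratio. Note that with only a few hundred voters per item, small-count items can produce large but noisy ratios.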

Differences aside, both men and women voted similarly on many items on this list. In fact, the top 3 most popular lies were the same for both men and women.

The Top 3 Lies for BOTH Men and Women Are:

1. I’m Fine

2. I’m 5 Minutes Away

3. Yeah, I’m Listening.

Which goes to show that men and women may be able to see eye-to-eye after all… just as long as they don’t ask each other how they are doing, where they are and whether or not they are listening.

The post Men and Women Both Lie—But They Do It For Different Reasons appeared first on The Ranker.com Blog.


Prediction: The Replication Crisis will be Solved by Market Forces, not Academics

Social science is in an interesting place, especially my home discipline of social psychology.  Traditionally, it has been practiced by scholars at academic institutions who are relatively unaffected by market forces, which meant that it didn’t really matter to people’s careers whether what was discovered in social science was actually used, but rather that those discoveries were published in the right journals.  A few enterprising scholars used this fact to build lucrative careers by simply inventing discoveries that nobody was going to check.  Still others do things that are not fraud but that certainly increase the chances of finding positive results which may or may not be generally true.  Indeed, I think every scholar engages in some form of this, including practices that aren’t currently seen as biased, like trying different stimuli or materials when the initial ones don’t work (I’ve done this).

There is no top-down cure for this.  We could (and perhaps should) strive for ways to make research more perfect, and worthy organizations are working on that.  But social science research is never perfect, always requiring a sample that is biased in some way and some compromise as far as control, ecological validity, and measurement, and perhaps this is where the metaphor of trying to emulate “hard” sciences like physics fails.  It doesn’t fail because the researchers are less intelligent or careful…indeed working with human subjects requires more ingenuity.  But rather it fails because while an experiment done on one rock most likely replicates on the next rock, human beings vary to much greater degrees.  We don’t all react the same to profound stimuli like the end of Romeo and Juliet, the election of a black President, or the sight of violence, so should we expect us all to react so predictably to subconscious primes or invented tasks?

I work as a data/social scientist because I believe in the utility and power of working with data on human behavior, but it is fundamentally different from experimenting on rocks or chemicals, in that findings are always probabilistic.  You can make a lot of money and do a lot of good based on probability, but it is not the same kind of knowledge as the knowledge that allows my car to start in the morning, my refrigerator to run on electricity, and your computer to translate my words into pixels on your screen, where near-absolute predictability allows those items to function.

Probabilistic knowledge is better dealt with in markets than in the current peer-reviewed journal system.  Social scientists actually study this.  The journal system is not well equipped to deal with things that are not black and white, as all its constituent parts (what gets published, who gets authorship, who gets hired) are black and white.  Markets let people make bets, hedge their bets, and come to some probabilistic version of truth.

Social science, whether social scientists are a part of it or not, is moving toward being a market, as far more data on human thought, communication, and behavior is captured by the tech industry than is captured by academics, and the tech industry is fully responsive to market forces.  Statisticians and data scientists now publish far more knowledge than is available in scholarly journals.  Most tech companies do thousands of experiments each year and bet real money on the outcomes.

There is absolutely a place for the well-designed academic study in this world, as there certainly are gaps in what is understood by industry processes.  But the insertion of market forces into social science is bound to move academics away from publication for publication’s sake and toward creating knowledge that is useful, as there will be real money at stake.  The end accomplishment will no longer be publishing a paper, but rather the productive use of the knowledge gained from one’s research.  In that world, it won’t really matter what you think of the methods, statistics, or claims of another researcher.  If you really don’t believe in a particular phenomenon, you can bet against it, or if you believe in a particular fact about human nature, you can bet on it.  And if there is no market for that bet, then maybe the question wasn’t that important to begin with.

- Ravi Iyer


Predicting the Movie Box Office

Reposted from this post on the Ranker Data Blog

The North American market for films totaled about US$11 billion in 2013, with over 1.3 billion admissions.  The film industry is a big business that not even Ishtar, nor Jaws: The Revenge, nor even the 1989 Australian film “Houseboat Horror” manages to derail.  (Check out Houseboat Horror next time you’re low on self-esteem and need to be reminded that there are many people in the world much less talented than you.)

Given the importance of the film industry, we were interested in using Ranker data to make predictions about box office grosses for different movies. The Ranker list dealing with the Most Anticipated 2013 Films gave us some opinions — both in the form of re-ranked lists, and up and down votes — on which to base predictions. We used the same cognitive modeling approach previously applied to make Football (Soccer) World Cup predictions, trying to combine the wisdom of the Ranker crowd.

Our basic results are shown in the figure below. The movies people had ranked are listed from the heavily anticipated Iron Man 3, Star Trek: Into Darkness, and Thor: The Dark World down to less anticipated films like Simon Killing, The Conjuring, and Alan Partridge: Alpha Papa. The voting information is shown in the middle panel, with the light bar showing the number of up-votes and the dark bar showing the number of down-votes for each movie. The ranking information is shown in the right panel, with the size of the circles showing how often each movie was placed in each ranking position by a user.
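As a rough illustration of how re-rank and vote data might be fused into a single crowd order, here is a simple Borda-style score. To be clear, this is not the cognitive model we actually used; it is a hand-rolled sketch, and the movie names and counts in the example are invented.

```python
def crowd_order(rankings, votes):
    """Fuse ranked lists and up/down votes into one ordering.

    rankings: list of ranked lists (best first)
    votes:    {movie: (up_votes, down_votes)}
    """
    scores = {}
    for ranked in rankings:
        n = len(ranked)
        for pos, movie in enumerate(ranked):
            # Borda points: n for first place, n-1 for second, ...
            scores[movie] = scores.get(movie, 0) + (n - pos)
    for movie, (up, down) in votes.items():
        # Add the raw vote margin as a second signal.
        scores[movie] = scores.get(movie, 0) + (up - down)
    return sorted(scores, key=scores.get, reverse=True)

order = crowd_order(
    rankings=[["Iron Man 3", "Thor", "The Conjuring"],
              ["Iron Man 3", "The Conjuring", "Thor"]],
    votes={"Iron Man 3": (40, 5), "Thor": (20, 10), "The Conjuring": (15, 12)},
)
print(order)
```

Even this crude fusion shows the key property discussed below: rankings and votes are separate signals, and combining them can break ties and correct noise that either signal alone would leave in place.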

This analysis gives us an overall crowd rank order of the movies, but that is still a step away from making direct predictions about the number of dollars a movie will gross. To bridge this gap, we consulted historical data. The Box Office Mojo site provides movie gross totals for the top 100 movies each year for about the last 20 years. There is a fairly clear relationship between the ranking of a movie in a year and the money it grosses. As the figure below shows, the few highest-grossing movies return a lot more than the rest, following a “U-shaped” pattern that is often found in real-world statistics. If a movie is the 5th top grossing in a given year, for example, it grosses between about 100 and 300 million dollars. If it is the 50th highest grossing, it makes between about 10 and 80 million.

We used this historical relationship between ranking and dollars to map our predictions about rankings onto predictions about dollars. The resulting predictions for the 2013 movies are shown below. These predictions are naturally uncertain, and so cover a range of possible values, for two reasons: we do not know exactly where the crowd believed each movie would finish in the ranking list, and we only know a range of possible historical grossed dollars for each rank. Our predictions acknowledge both of those sources of uncertainty, and the blue bars in the figure below show the region in which we predicted the final outcome was 95% likely to lie. To assess our predictions, we looked up the answers (again at Box Office Mojo) and overlaid them as red crosses.
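The rank-to-dollars step can be sketched as a lookup against a historical table. The (low, high) ranges below are stand-ins loosely following the U-shaped pattern described above, not the actual Box Office Mojo figures, and the interval logic is a simplification of the model we used.

```python
# Hypothetical (low, high) gross ranges in $ millions for selected ranks.
historical_range = {1: (300, 700), 5: (100, 300), 20: (50, 150), 50: (10, 80)}

def predict_gross(rank_low, rank_high):
    """Turn an uncertain rank estimate into a dollar interval.

    Looks up the nearest tabulated rank for each end of the rank
    estimate, then returns the widest dollar interval they span,
    so both sources of uncertainty widen the prediction.
    """
    def nearest(r):
        return historical_range[min(historical_range, key=lambda k: abs(k - r))]
    lows = [nearest(rank_low)[0], nearest(rank_high)[0]]
    highs = [nearest(rank_low)[1], nearest(rank_high)[1]]
    return min(lows), max(highs)

# A movie the crowd thinks will finish somewhere between 5th and 20th:
print(predict_gross(5, 20))
```

With this toy table, a movie ranked somewhere between 5th and 20th gets a predicted gross anywhere from 50 to 300 million dollars, which is why the intervals in the figure are wide even for movies the crowd agreed about.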

Many of our predictions are good, for both high grossing (Iron Man 3, Star Trek) and more modest grossing (Percy Jackson, Hansel and Gretel) movies. Forecasting social behavior, though, is very difficult, and we missed a few high grossing movies (Gravity) and over-estimated some relative flops (47 Ronin, Kick Ass 2). One interesting finding came from contrasting an analysis based on ranking and voting data with similar analyses based on just ranking or just voting. Combining both sorts of data led to more accurate predictions than using either alone.

We’re repeating this analysis for 2014, waiting for user re-ranks and votes for the Most Anticipated Films of 2014. The X-men and Hunger Games franchises are currently favored, but we’d love to incorporate your opinion. Just don’t up-vote Houseboat Horror.

The post Predicting the Movie Box Office appeared first on The Ranker.com Blog.
