I am currently the Managing Director of the Psychology of Technology Institute, a project of USC's Neely Center. Before that, I led data science, research, and product teams across Facebook focused on improving the societal impact of social media. I began that work centered on policy-based content moderation, but learned the painful lesson that content moderation was a dead end, and often a distraction from the product design and algorithmic changes that would truly make a difference.
We clearly did not solve everything, but across many teams and individual efforts, we did launch:
• content-neutral algorithmic changes to reduce engagement incentives for important topics
• content-neutral reputation signals
• UI changes and subjective measurement to enable more value-aligned algorithms
• content-neutral break-the-glass measures for elections and crises
• removal of (some) anger from base incentives
• international auditing of viral content
• numerous other research studies, product changes, and experiments that advance our understanding.
Collectively, these changes measurably reduced misinformation, reduced content that experts consider dangerous, and improved the user experience. However, much work remains, and doing it well requires productive engagement from the outside world. The principles behind the changes we made apply not just to Facebook, but to any company that builds online products where engagement and algorithms interact. I write about those principles on Substack.
Before working at Facebook, I had the privilege of a unique career spanning technology and academic roles. I helped co-found and build the initial algorithms for Ranker.com, a profitable publisher of crowdsourced lists that serves tens of millions of unique visitors monthly and employs 125+ people. I have a PhD in Social Psychology from the University of Southern California and have coauthored dozens of empirical articles about individual values, political opinions, polarization, and technology, which have collectively been cited thousands of times. I have worked with numerous non-profits fighting polarization. My work across tech and academia has been featured in the Wall Street Journal, the New York Times, The Atlantic, SXSW, and numerous other venues (see links).
My unique experience as both a technical leader within tech companies and a researcher on social impact is why Facebook recruited me to work on improving the platform. That work made the platform a bit better, but there is far more to be done, so I left to contribute the knowledge and experience I've gained to a wider societal conversation about how we can improve technology's impact on society. That is why I joined the Psychology of Technology Institute, which enables me to be part of this wider conversation. If we share that goal and I can be of help in your efforts, please do not hesitate to get in touch (me at ravi iyer dt cm).
|Selected Press / Links|
|A Wall Street Journal article that details how reducing engagement incentives for political content leads to reduced anger, bullying, and misinformation|
|A podcast appearance on The Gist where I discuss how social media does not have to be divisive|
|A Daily Beast article I wrote using Ranker data to show that Red and Blue America are more alike than they may think|
|A Washington Post article covering changes our team helped push to deprioritize the Angry emoji|
|A Wall Street Journal article on efforts I worked on regarding polarization and divisiveness|
|A KDnuggets interview on how to build better algorithms from crowdsourced data|
|A selection of studies we did with groups bringing people together across divides via Civil Politics|
|A 2012 talk I gave at SXSW on how more and more "Big Data" will be about serving our Values|
|A video I made of my academic work to understand libertarian morality|
|A Wall Street Journal article I co-authored with Jon Haidt on how we may get beyond polarized politics|
|A Wired article that quotes an internal document I wrote: "We are Responsible for Viral Content"|
|A post I wrote on the limits of content moderation|
|New York Times references to some of my academic work|
Use this form if you think we have complementary goals and would like to work together.