Malleability vs Serendipity, Yelp Facebook Twitter, and Distributed Totalitarianism
At the Center for Civic Media and the Berkman Center for Internet and Society, Nathan designs and researches civic technologies for cooperation across diversity. At the Berkman Center, he applies data analysis and design to the topics of peer-based social technologies, civic engagement, journalism, gender diversity, and creative learning.
Nathan's current projects include Open Gender Tracker, Thanks.fm, and NewsPad. A full project list is at natematias.com.
Nathan regularly liveblogs talks and events, publishes data journalism with the Guardian Datablog and PBS IdeaLab, and facilitates #1book140, The Atlantic's Twitter book club, where he frequently hosts live Twitter Q&As with prominent writers. He coordinated the Media Lab Festival of Learning in 2012 and 2013.
Before MIT, Nathan completed an MA in English literature at the University of Cambridge, where he was a Davies Jackson scholar. In earlier years, he was Riddick Scholar and Hugh Cannon Memorial Scholar at the American Institute of Parliamentarians. He won the Ted Nelson award at ACM Hypertext 2005 with a work of tangible scholarly hypermedia. He was made a fellow of the Royal Society for the Arts, Sciences, and Manufacturing in 2013 and was an intern at Microsoft Research Fuse Labs in the summer of 2013.
This is the third session in the conference on Social Media and Behavioral Economics at Harvard.
Session Three: Malleability vs Serendipity in Social Choice
Our moderator, Cass Sunstein, starts out by talking about the food pyramid. "We got rid of the pyramid and replaced it with a simple icon called the food plate." If you make half your plate fruits and vegetables, you're on the right track. If you talk to people in terms of a pyramid, a complicated, vague, and ambiguous framing, you often get stymied and nothing happens. But if you give people clarity and a choice, then you get a response, which may be agreeable, rather than "I don't get it and will go do something else."
Cass talks about a study by Matthew Salganik, Peter Dodds, and Duncan Watts on music downloads, sorting people into groups and seeing who downloads which songs. Does quality win in these groups? The serendipitous fact of the early downloaders mattered hugely. The best songs were never flops and the flops never topped these "mini-charts," but almost anything could happen. Maybe the Mona Lisa is serendipitously the most famous painting in the world. Maybe politicians are serendipitously successful.
Malleability is the sibling to unpredictability, Sunstein tells us. If some songs are getting popular early, or if researchers say that some downloads are getting popular early, then you can sometimes influence what's popular. That's an opportunity.
Sunstein calls false dichotomy on the binary of default rules and active choices. In all of Germany, only two communities show high usage of green energy-- the places where residents have to opt out. In the last session, speakers discussed the difference between defaults and active choices. Whenever we interact with private and public institutions, we are living with default rules in the "choice architecture of life." These are typically mass rules, affecting everyone in the population. In some domains, we have personalised default rules: on the basis of our past choices, or of the choices of people like us, a default is set to fit our situation.
"If it is known that you are pro privacy, then you could be directed toward privacy-enabled defaults," Sunstein argues.
Even without knowing your name, systems can learn what you like and choose, and we can all gain from the judgments of others. A highly privacy-protecting regime has the downside of depriving the public of information-- not about us in particular, but data about what people like and do. While we should have control over our privacy, there is a complicated tradeoff between personal privacy and information flow, which is also an important value. In the best cases, that flow can save lives.
First up is Elliot Schrage, Vice President, Communications and Public Policy, Facebook. Elliot tells us that too little attention has been paid to the opportunities opened by the data collected by social networks.
Remember the beginning of Facebook, Elliot tells us. Five years ago, people saw it as a silly place where people poke each other and throw sheep. Facebook people insisted that the site would be creating a revolution in human society. They had to stop saying that after the Arab Spring. Facebook's vision is to unlock the potential governance and public opportunities created by connections, the knowledge of connections, social power and the power of friends.
Over a billion people have signed up to Facebook, and 600 million return every day. Facebook is now used more on mobile than on desktop. People have mapped online the social graph of their relationships and interests in a manner that lets them share what they want to share when they want to share it. Elliot points out that people can control what they share. In 2007, Facebook created a platform that enables anyone to tap into that social graph. Organ donation is just the first step.
Which of these are good or bad ideas? Now is the time to figure out how to realise these opportunities. He encourages government to procure social technologies, create seed funds, and offer grants that build on the Facebook social graph and other social networks.
Next up is Mike Luca, Assistant Professor at Harvard Business School.
How can government use Yelp.com? Mike starts by asking how consumers learn about product quality in the Internet age. The Internet is full of sites like Yelp, Angie's List, and TripAdvisor, part of a broader crowdsourcing movement in which individuals share recommendations with each other online. Mike argues that since searches for the Michelin Guide are declining while searches for Yelp are rising, Yelp is becoming more and more influential.
Mike recently conducted a study of Seattle restaurant reviews (pdf). Yelp covers 70% of Seattle restaurants, compared to Zagat, the Seattle Times, and the Food and Wine website, which cover far less than 10% of restaurants. Mike shows us Yelp reviews for restaurants near MIT, which often show price, location, contact information, hours, and individual reviews.
Mike collected the entire history of Yelp reviews since 2005. He also has data on all the restaurant characteristics. He worked together with the Washington State Department of Revenue to create a comprehensive dataset of Seattle restaurants from 2003-2009, matching it up with the Yelp data.
Is Yelp affecting behavior? That's not easy to determine: maybe a restaurant has lots of reviews on Yelp because people already like it. Then Mike noticed that Yelp rounds its displayed ratings to the nearest half star. A restaurant averaging 3.24 is rounded down to 3 stars, but one averaging 3.25 is rounded up to 3.5 stars. Using that discontinuity, Mike compared the sales of restaurants whose underlying ratings were very close but whose displayed ratings differed sharply. Mike found that Yelp matters more when there are lots of reviews. It also matters less for chain restaurants than for independent ones: a one-star difference led to a 5-9% increase in revenue for independent restaurants.
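The identification strategy hinges entirely on that rounding rule. A minimal sketch in Python of round-to-nearest-half-star, with halves rounding up (an illustration of the rule as described in the talk, not Yelp's actual code):

```python
import math

def display_rating(avg: float) -> float:
    """Round an average rating to the nearest half star, halves rounding up.

    math.floor(x + 0.5) is used instead of round(), because Python's
    round() sends exact halves to the nearest even value (3.25 -> 3.0).
    """
    return math.floor(avg * 2 + 0.5) / 2

# Two nearly identical averages land on very different displayed ratings:
print(display_rating(3.24))  # 3.0
print(display_rating(3.25))  # 3.5
```

Restaurants on either side of a threshold like 3.245 are statistically near-identical, so any systematic revenue gap between them can be attributed to the displayed star rating itself.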
Could Yelp reviews be used to inform policy around restaurant hygiene? Mike tells us about a successful Los Angeles policy from the 1990s requiring restaurants to post hygiene grades on their front doors. Can we do better than this in 2013? Mike has been working with Yelp and a series of cities to incorporate public hygiene grades into Yelp. Mike argues for better public datasets: if cities make health data more available, it will be easier for companies like Yelp to push it to consumers.
Mike goes further to encourage cities to use Yelp data to allocate hygiene inspectors. In addition to looking at Yelp reviews after there's a problem, might it be possible to predict future violations based on Yelp reviews?
Mike concludes by arguing that governments who plan to work with social media should develop a two-way, nuanced approach. In addition to sharing data, they should also be learning from what companies do with their data.
Alex Macgillivray, General Counsel at Twitter speaks next.
We don't have static preferences; what we choose is up for grabs, depending on how we are asked. This puts tremendous moral pressure on the people who design interfaces. Companies like Twitter and Facebook collect data on people's choices in the knowledge that those choices are not static, which places that pressure on companies and government alike.
How does this play out? Alex talks about the history of the retweet. In the early days of Twitter, they wanted information flow to be democratic, making it possible for information to propagate through the network as easily as possible, so Twitter made retweets as simple as possible. He points us to a recent example in which a new UK Twitter user's video of a helicopter crash spread widely.
Next up is Jonathan Zittrain, Professor, Harvard Law School, and Faculty Co-Director, Berkman Center for Internet and Society.
When Jonathan thinks about behavioural economics, he often associates it with the thought "Who knew!?": a world where we are influenced in ways that we don't know.
Social Media is harder to encapsulate in a catchphrase, but it's roughly grounded in the idea of peer-to-peer, the idea that you're drawing information, advice, and insight from people who are on the same plane as you in the same way-- a fellow little guy or gal who's giving you information, when you might traditionally get it from broadcasters.
There's a tension between "who knew?" and peer-to-peer. What intermediating institution should we talk about? In the early days of the Internet, that institution was un-owned. People would create things like protocols and give them away for free. Anyone could join the community of non-nations that is the collective hallucination of the Internet. Email is un-owned: there is no one you can talk to in order to change email. Instant messaging, the Internet, and the WWW itself were un-owned phenomena with no solid institution running them.
JZ fast forwards to Twitter. Alex acknowledges a vast responsibility beyond those held by most companies. The function and algorithm of Twitter are very simple, and Twitter mostly stays out of that conversation. Then it starts getting complicated. JZ shows us Top Tweets, Featured Tweets, and other ways that the information flows are shaped by companies. He points out the UVA study which encourages companies like Facebook to protect the privacy of users. As soon as a company takes that gatekeeping power, they take on very large responsibilities.
If he had a single "Take Down This Facebook Wall, Mr Gorbachev" request, JZ says, it would be the ability to unfriend multiple people at once. Companies make this very hard, because they want people to keep connections.
Jonathan next talks about "the least subtle product placement ever," where Barnes & Noble replaced every instance of the word "Kindle" with the word "Nook" in ebooks.
Zappos recently found that people don't respect reviews which aren't written well. So they have been paying people to correct the grammar and spelling of their reviews. This is another subtle change in information flows which we will probably see more frequently.
Jonathan next talks about the Facebook study on voting records. What would happen if someone at Facebook wanted to get people to vote in a particular way? Might they be paid, regulated, or forced to shape their platform to influence that behaviour? What advice do we give those companies? If we use algorithms, do we release the function of those algorithms?
Jonathan responds to the claims of The Surprising Power of Neighborly Advice. Not only can we give people good advice, but we can also use Rotten Neighbor to tell everyone that you're a terrible neighbor. He also directs us to research by Strahilevitz on the possibility of linking neighbor reports of driving behaviour with insurance rates. Maybe that works, but maybe neighbors just dislike your politics. Perhaps we all need a tricorder which tells us who to talk to based on what people think. If it's bad enough that people change their behaviour to become Mayor on Foursquare or to earn extra airline miles, what will happen if we're always rated on friendliness, helpfulness, or behaviour in bars?
Jonathan also points out reports that Facebook monitors chats for possible criminal activity, and a Microsoft patent for a "consumer detector" that uses the Kinect to charge more based on how many people are watching a show.
One fear is that systems would misjudge us. Jonathan shows us a screenshot of Amazon recommending the Official Lego Creator Kit with a book on American Jihad.
Another fear is the influence of SEO-style manipulation. Jonathan shows us a Craigslist listing offering to pay people to show up at a meeting wearing pro-coal T-shirts. The power of such social media campaigns gets lost, JZ tells us, once we understand why they work.
How can we deal with these worries about the institutions that mediate between the public and social media? Jonathan tells us about the Online Interest-Based Advertising Accountability Program, which Facebook has joined. Jonathan suggests that we offer something like social bankruptcy-- an option to reset our data every once in a while. As we consider the malleability of society, we should also value and preserve the surprise.
Cass talks about the social architecture of choice versus the social architecture of serendipity. On one hand, you might have a network which you entirely control. You can also imagine one, perhaps inspired by Jane Jacobs, which is designed for serendipity. Sunstein suggests systems where control is the norm, with occasional space for serendipity.
While sunlight is a great disinfectant, one of the downsides is entrenchment. When people are presented with their behaviour, they can double down. Finally, Sunstein talks about the power of choice architecture that we experience in supermarkets and cafeterias. Even those who try to maintain neutrality are always making choices.
An audience member imagines a world of augmented reality, where people are always giving feedback about their peers. Aren't we all becoming big brother unwittingly? How do we avoid unwitting totalitarianism?
"It's a weird form of distributed totalitarianism," JZ responds. The presence of cameras changes things. By the time you ask people, they will say they don't mind and they do desire attention. Talking about it may dispel the effects, as we reflect on who we want to be. We should also make it easy for people to shut down and turn off. This is different from local gossip in communities because it's data, and because something can go viral in a way that sparks continuous outrage against a person.
Elliot is fascinated by the tension between neutrality and intervention. The organ donor issue is a clear example of intervention. Facebook struggles very much to maintain a neutral platform. If there are opportunities for intervention, who should be the intervenors?
JZ argues for something different from neutrality: the need for transparency on how algorithms work. He also encourages social networks to offer an opportunity for the person who appears in media to share a message about their preferences for it to be shared.
Alex talks about the way that Internet companies stepped out of their own neutrality to participate in the SOPA/PIPA protests. Platforms like Facebook and Twitter stayed out of it, while many people in LA took the lesson that Internet platforms could turn against them. The panelists conclude with some discussion of the fairness doctrine in broadcast and whether it has any relevance to the Internet.