Creating Technology for Social Change

Iyad Rahwan on Scalable Civics


Iyad Rahwan is giving a special talk today at the MIT Media Lab. Here’s his bio from the event announcement:

 

Iyad Rahwan, a native of Aleppo, Syria, is an associate professor at Masdar Institute, a research institute in Abu Dhabi established in cooperation with MIT. He is an Honorary Fellow at the University of Edinburgh, and holds a PhD from the University of Melbourne. Iyad won the US State Department’s Tag Challenge, in which he mobilized thousands of people world-wide to find target individuals in under 12 hours using only mug shots. This led The Economist magazine to coin the term “6 degrees of mobilization” to describe how social media is making Milgram’s “6 degrees of separation” actionable.

 

Liveblogging contributed by Catherine D’Ignazio, Erhardt Graeff and Dalia Othman.

 

Deb Roy introduces Iyad Rahwan, who has previously spent time at the MIT Media Lab.

 

Iyad introduces the idea that there is a crisis in civics: citizens don’t feel empowered to effect change. While some of this has been attributed to a lack of civic education, he quotes Ethan Zuckerman, who says it is really a lack of agency, of not feeling empowered enough to bring about change. Iyad believes this is caused by a lack of scalability.

 

He notes the irony of talking about civics while hailing from one of the most screwed up countries in the world—Syria.

 

He wants to return to the roots of civic participation. The earliest record of this is from the city of Ebla in Syria—50km south of Iyad’s hometown of Aleppo—where archeologists discovered tablets that described how the city was run. There was an elected kingship, a popular assembly, and a council of elders; there was even civic education recorded on the tablets.

 

He fast-forwards thousands of years to Athens, where these original ideas produced what is often thought of as the ideal version of direct democracy.

 

He shows a slide of contemporary Athens and says that it is ironic that today people there feel disempowered to effect change. We have high population concentrations in megacities, and we have a financial system so complex that no individual can understand it, let alone regulate it.

We also have systems of governance that are themselves much more complex [as a process] than the problems they are trying to solve.

There are cognitive limits to our ability to cooperate with others. He references Hobbes, who says that large-scale cooperation is only possible if you surrender your rights to the Leviathan, the ultimate power, which will mediate that cooperation for you.

 

He argues that technology enables us, for the first time, to scale up our ancient civic practices. In his mind this comprises three elements:

  1. Scalable Mobilization

  2. Scalable Cooperation

  3. Scalable Sensemaking

 

His main question is, “What are the laws and design principles that help or hinder scalable civics?”

 

Scalable Mobilization

He shows examples from his work that teach us how to do this. He starts by describing the US State Department’s Tag Challenge: the goal was for a crowd to find a mock “thief” somewhere in the world within 12 hours. The made-up thief was photographed at 8am in Bratislava, and they (Iyad’s team) found him at 11:15am in Dubai.

 

This represents an unprecedented ability to track people across language barriers in short periods of time. They had an incentive scheme that rewarded people not only for finding the targets but also for propagating the message to their friends: the system paid financial incentives, such as $100 for each winning photo uploaded by a friend you recruited.
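To make that concrete, here is a minimal sketch of a flat referral bonus of this kind. The $100 recruiter bonus is from the talk; the finder’s own reward amount and all names are hypothetical.

```python
# Hypothetical flat referral bonus: the finder is paid, plus a one-hop
# bonus to whoever recruited them (amounts are assumptions, except the
# $100 recruiter bonus mentioned in the talk).
FINDER_REWARD = 500
RECRUITER_BONUS = 100

def payouts(finder: str, recruiter_of: dict[str, str]) -> dict[str, int]:
    """Pay the finder, plus a bonus to the friend who recruited them."""
    rewards = {finder: FINDER_REWARD}
    recruiter = recruiter_of.get(finder)
    if recruiter is not None:
        rewards[recruiter] = RECRUITER_BONUS
    return rewards

# Example: alice recruited bob, and bob uploads the winning photo.
print(payouts("bob", {"bob": "alice"}))  # {'bob': 500, 'alice': 100}
```

The key design point is that the bonus rewards recruitment itself, so spreading the message is individually rational, not just altruistic.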

 

MIT Tech Review wrote that this is a new notion of distance: rather than Milgram’s six hops from anyone, we can now talk about distance temporally. We are only 12 hours away from anyone on earth, provided we can mobilize sufficient people to search for them.

 

We learn from anecdotes and from data. The Washington, DC suspect was found by David Alan Grier, President of the IEEE Computer Society, who also wrote a book on the history of crowdsourcing.

 

Writing in Computer (http://www.computer.org/csdl/mags/co/2013/04/mco2013040116.html), Grier noted: “I could tell by the tweets that most of the crowd was looking for her in the city’s grand public places…” In other words, the crowd was looking in the wrong place. He surmised that a young woman who needed to be in public all day might instead spend her day in a cafe.

 

Another thing they learned from the experience is that people were not simply broadcasting the message; there was convergence towards the cities, i.e., convergent or targeted recruitment within the city of interest.

 

This was the result of micro-decisions made by people who recruited specific friends in specific places.

 

What about spreading the message as far as possible? Many here know about the Red Balloon Challenge, in which DARPA placed red balloons all over the US. Sandy Pentland’s team found the balloons in under 9 hours using a similar incentive mechanism of financial rewards.
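As widely reported, the MIT team’s scheme was recursive: $2,000 to the finder of a balloon, $1,000 to whoever recruited the finder, $500 to their recruiter, and so on, halving up the chain. A minimal sketch (names hypothetical):

```python
# Recursive incentive: rewards halve at each hop up the recruitment chain.
def recursive_rewards(finder: str, recruiter_of: dict[str, str],
                      base: float = 2000.0) -> dict[str, float]:
    rewards = {}
    person, amount = finder, base
    while person is not None:
        rewards[person] = amount
        person = recruiter_of.get(person)  # walk up the chain
        amount /= 2
    return rewards

# carol recruited bob, bob recruited alice, and alice finds a balloon.
print(recursive_rewards("alice", {"alice": "bob", "bob": "carol"}))
# {'alice': 2000.0, 'bob': 1000.0, 'carol': 500.0}
```

Because the payments halve at each hop, the total payout per balloon stays below 2 × base however long the chain gets, which is what makes the offer safe to extend to the entire internet.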

 

Iyad was a visiting scholar at the MIT Media Lab immediately following the victory by Sandy Pentland’s team allowing him to analyze the data behind their success.

 

There were 100K visitors and 4,500 signups to participate in this effort. He shows a recruitment-tree network diagram of the relationships that helped bring people into the system. Interestingly, people were recruiting outside of the US, and those recruited outside the US still played an important role, because they in turn recruited people within the US.

 

There are “super-mobilizers” who recruit far more than the average and play a crucial role. They found that people were recruiting friends farther away than expected. They wrote up their results in Science (http://www.sciencemag.org/content/334/6055/509.abstract). An interesting observation is that this contrasts with the convergent recruitment seen in the other example (the Tag Challenge).

 

These are good stories, but we learn more when things go wrong. What about the limits of social mobilization? Because they couldn’t keep running sponsored challenges under different experimental conditions, they needed to experiment with a model. In another article (http://www.pnas.org/content/110/16/6281.abstract?sid=461ec413-18f7-4089-bbf7-fdd90a37fd74), they published research in which they ran high-resolution simulations using models of the geographic spread of social networks, human mobility, the temporal dynamics of messages, and so on. They ran many simulated searches for balloons, and the probability of success was essentially zero. What they needed was to incorporate passive participants: you have active participants who want to look for the balloon, but there are also people who just see the tweet or pass by the balloon.
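For flavor, here is a toy branching-process illustration of that point (an invented illustration, not the paper’s model; every parameter below is an assumption): recruitment cascades usually die out, active searchers cover almost none of the territory, and only the much larger pool of passive viewers makes success plausible.

```python
import numpy as np

rng = np.random.default_rng(0)

def mobilization_success(mean_recruits=1.2, cover_active=1e-8,
                         passive_per_active=0, cover_passive=1e-9,
                         trials=2000, max_people=1_000_000):
    """Fraction of simulated searches in which someone spots the target."""
    successes = 0
    for _ in range(trials):
        frontier, total = 1, 1
        # Recruitment cascade: each person invites Poisson(mean) friends.
        while frontier > 0 and total < max_people:
            frontier = rng.poisson(mean_recruits * frontier)
            total += frontier
        # Chance that some active searcher, or some passive viewer who
        # merely saw the message, covers the target's location.
        p_found = 1 - ((1 - cover_active) ** total *
                       (1 - cover_passive) ** (total * passive_per_active))
        successes += rng.random() < p_found
    return successes / trials

print(mobilization_success())                           # active searchers only
print(mobilization_success(passive_per_active=10_000))  # plus passive viewers
```

In this toy version, active searchers alone almost never find the target, echoing the paper’s headline result; once passive viewers are counted, success becomes roughly as likely as the cascade taking off at all.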

 

Somewhere between a setting where recruiting is sparse but the target stands out against the background (a desert) and one where recruiting is easy but the setting is complex and low-contrast (Times Square), there is a sweet spot for search and recruitment.

 

Another problem is misinformation. He shows a geographic distribution of submitted information with misinformation in red; there is a lot of it, and in this case there is no basis for telling which information is good and which is bad. The MIT team asked people to submit photos. Others figured this out and started sending false pictures of balloons, even embedding GPS data in the images to throw off the other teams. The team then asked people to send a photo of the balloon with themselves in it. But the DARPA people who took care of the balloons wore yellow jackets, so people started taking photos wearing yellow jackets AND holding a red balloon to throw other teams off. This could be quite serious if the stakes were higher.

 

In the Tag Challenge, there was another source of misinformation: people went on Facebook and misidentified individuals, going as far as finding those people’s personal contact information and harassing their families. Iyad draws the connection to the reddit vigilantism that resulted in the tragic mistaken targeting of Sunil Tripathi and his family.

 

Some strategies to avoid this:

– Corroboration

– Verification incentives – penalize people for misinformation submitted by the people they recruit

 

He feels that we can design incentive systems to help mitigate this problem.
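As a hedged sketch of how those two strategies might be encoded (the thresholds, weights, and names are all invented):

```python
# Corroboration: accept a sighting only when k distinct people report it.
def corroborated(reports: dict[str, str], location: str, k: int = 3) -> bool:
    return sum(loc == location for loc in reports.values()) >= k

# Verification incentive: a confirmed false report docks the submitter's
# score and, at reduced weight, the score of whoever recruited them.
def penalize(submitter, recruiter_of, scores,
             penalty=10.0, recruiter_share=0.5):
    scores[submitter] = scores.get(submitter, 0.0) - penalty
    recruiter = recruiter_of.get(submitter)
    if recruiter is not None:
        scores[recruiter] = scores.get(recruiter, 0.0) - penalty * recruiter_share
    return scores

print(corroborated({"a": "Times Sq", "b": "Times Sq", "c": "desert"}, "Times Sq"))
# False: only two independent reports so far
print(penalize("mallory", {"mallory": "bob"}, {}))
# {'mallory': -10.0, 'bob': -5.0}: bob now has a stake in vetting mallory
```

Because recruiters share the penalty, vetting propagates along the same social links that mobilization does.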

 

Scalable Cooperation

 

Iyad was part of another DARPA challenge, the “Shredder Challenge” (http://archive.darpa.mil/shredderchallenge/), in which DARPA used industrial shredders to progressively shred documents. The shreds were all scanned in, and people were asked to piece the documents back together.

 

They built a system to recruit friends and work together collaboratively to solve the puzzles: 1,500 person-hours, with 3,000+ people working on it. The crowd deciphered three of the documents and was at one time ranked 2nd, but then things started to go wrong.

Iyad claims that the openness of the system was its biggest weakness. His slide says “Sabotage”: there was a crowdsourced attack. A second attack used a VPN to select pieces and pile them on top of each other; the attackers then moved pieces in important positions so people wouldn’t notice. He shows a video of the attacks. The attackers’ efforts slowed the solving of the puzzle and also hurt people’s efficiency: people ceased to recruit others and weren’t able to overcome the errors. But this was just an example at a controlled scale in terms of time and space.

Scaling Up Cooperation

He wants to talk about problems at the scale of the planet, like pollution: a problem caused at the micro level by individual actions. How can we effect change at the micro scale if each person’s behavior doesn’t really affect the system overall?

 

He shows a slide about public goods. Free riders benefit from the system without paying the cost; if everyone free-rides, the commons is destroyed. This is the tragedy of the commons. The traditional solution to this problem is peer punishment: if you’re not nice to me, then I’m going to punish you, bad-mouth you, etc., and this maintains cooperation. But these mechanisms, peer punishment and shaming, do not scale.

The other problem with peer punishment is that as the group grows, some people contribute to the public good but don’t participate in the punishment. This is called second-order free riding.

Theorists argue that this is why we have institutions (Sigmund: http://www.ncbi.nlm.nih.gov/pubmed/20631710). Take a police force: everyone puts in money, and the police punish those who don’t cooperate. You outsource your ability to punish and blame, centralizing power in a body responsible for maintaining order. In the models, this “pool punishment” comes to dominate: it becomes the most stable way to maintain the system, and nothing can displace it.
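To make the terms concrete, here is a toy one-round public-goods payoff calculation (textbook-style payoffs, not Sigmund’s actual model; all numbers are invented):

```python
# One round of a public-goods game with peer and pool punishment.
def round_payoffs(strategies, c=1.0, r=3.0, fine=2.0, fee=0.5, tax=0.3):
    n = len(strategies)
    pot = r * c * sum(s != "free_rider" for s in strategies)  # multiplied pot
    n_peer = strategies.count("peer_punisher")
    n_free = strategies.count("free_rider")
    payoffs = []
    for s in strategies:
        p = pot / n                # everyone shares the pot equally
        if s != "free_rider":
            p -= c                 # cost of contributing
        else:
            p -= fine * n_peer     # fined by each peer punisher
        if s == "peer_punisher":
            p -= fee * n_free      # punishing others is itself costly
        if s == "pool_punisher":
            p -= tax               # pays the institution up front, regardless
        payoffs.append((s, round(p, 2)))
    return payoffs

print(round_payoffs(["contributor", "peer_punisher", "pool_punisher", "free_rider"]))
# [('contributor', 1.25), ('peer_punisher', 0.75),
#  ('pool_punisher', 0.95), ('free_rider', 0.25)]
```

Note the second-order free riding in the output: the plain contributor out-earns both kinds of punisher, so punishment is itself a public good that rational contributors will shirk.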

Coming from Syria, he is suspicious of these institutions because they can become corrupt when they reach a certain size. Cooperation completely falls apart. Counterintuitively, when you weaken the institution, you strengthen cooperation because vigilantes step in and work together on social enforcement.

How do we use this insight to design policies and effective institutions? He quotes Elinor Ostrom, who says that governments unwittingly destroyed social processes by taking over their role.

Paper: Inducing Peer Pressure to Promote Cooperation.

What he is suggesting is that we need to cultivate peer pressure through the creation of social capital: building community around people who do something of social good. This works in theory, but it also seems to work in practice. He worked with ETH Zurich to promote energy conservation in a community, allowing participants to work together as buddies/teams, with success paying out in greater subsidies to participants. Peer pressure worked to produce better outcomes.
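A minimal sketch of that buddy mechanism, assuming (per the setup described) that the subsidy earned by your savings is paid to your buddies rather than to you; the rate and names are invented:

```python
# Buddy incentive: subsidies for energy savings go to a person's buddies,
# giving the buddies a reason to apply peer pressure.
def buddy_rewards(savings: dict[str, float], buddies: dict[str, list[str]],
                  rate: float = 2.0) -> dict[str, float]:
    rewards = {person: 0.0 for person in savings}
    for person, saved in savings.items():
        for buddy in buddies.get(person, []):
            rewards[buddy] += rate * saved  # buddies earn from *your* savings
    return rewards

print(buddy_rewards({"ann": 5.0, "ben": 0.0},
                    {"ann": ["ben"], "ben": ["ann"]}))
# {'ann': 0.0, 'ben': 10.0} -> ben now has a stake in ann's behavior
```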

Scalable Sensemaking

We know that social media gives us more diversity, more serendipity and instant communication. But how does social media make us more reflective about civic matters?

They asked people questions linked to their ability to reflect.

A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?

Intuitively the answer seems simple (the ball costs $0.10), but upon reflection you realize the right answer (the ball costs $0.05).
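Worked through: if the ball costs b, then b + (b + 1.00) = 1.10, so 2b = 0.10 and b = 0.05. The intuitive $0.10 answer would make the bat $1.10 and the total $1.20.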

 

Paper: Analytical reasoning task reveals limits of social learning in networks

 

Their finding was that position in the learning network matters a great deal. The fact that you can challenge yourself doesn’t make you more reflective the next time you answer a question. They call it the “Unreflective Copying Bias,” defined as the “tendency to copy what others do as a result of successful analytic processing, without…”

 

He was involved in the creation of a tool to facilitate large-scale conversations called “ArguBlogging.” In the tool you can say “I agree because…” and state a reason, not just register a simple for or against. He wants to create an argumentative overlay on content on the web. If a system like this succeeds, people could start navigating websites by [argument].

 

Lessons

  1. Leviathans do not always scale

  2. Leviathans are not inevitable

  3. Gaming the system is inevitable – we need to design for it

  4. Harness cooperative instincts

  5. Cater for active participants

  6. Cater for transient participation

  7. Harness expertise and local knowledge

  8. Reflection requires more than exposure [and serendipity]

 

Verily (covered in Foreign Policy: http://ideas.foreignpolicy.com/posts/2013/05/01/can_critical_thinking_be_crowdsourced) is a system they are now building to test these principles by verifying information on social media. It was born out of his own experience following news about the war in Syria, where he struggled to figure out what to believe, because whatever actions he took would be tied to his understanding of the situation. He wants to enable effective action in a time-critical manner, which is a hard problem in this information ecosystem.

 

He is also interested in governing the commons via human-machine ecologies. Algorithms increasingly decide which route a car takes and what temperature to set a building to. How can we engineer our systems to promote cooperation in these mixed human-machine environments? They are exploring the ideas of humans regulating machines and machines regulating humans.

 

QUESTION & ANSWER

 

Catherine: The designs of the systems you are describing are political processes and I think it’s important to interrogate who is building the system, and whether they include the views of a wide group of people and potential users. Are we creating another expert political class of people that designs the participatory systems?

Iyad: First off, most current systems are built by “experts,” and I think systems must be open and accountable on principle. The best we can do is experiment with these systems. We must build them not just the way we build an operating system; we must bake in findings from social-scientific and psychological research that reveal biases. These systems then have to be actually tried out (in all of the examples I showed, the systems performed differently in practice), and this is important for seeing and iterating on the usefulness of the design.

Cesar: On the one hand you have a system of norms and human behavior, and behavior changes partly as a consequence of human institutions. For example, everyone at the Media Lab is fighting, and Nicolas comes and says “Shut up.” But then, as trust evolves, maybe you need a system that evolves too. How do the systems and rules adapt when the punishment is no longer necessary? Your systems are mostly static.

 

Iyad: Larry Lessig says you need to give things a certain lifetime and then open them up to revision. We need to try to build systems that have some measure of flexibility. Look at the Shredder Challenge: once a problem is recognized, you try to block it, placing boundaries in different locations. But one of the really difficult problems is preventing kludginess, where the system becomes too complicated to change. This is like government now; you can’t just go set up an alternative.

 

Henry Lieberman: If somebody doesn’t cooperate and you punish them you hope they cooperate after learning their lesson, but the punishment may cause them to lose trust and defect in the future. You want people to embody the values of cooperation, and perhaps there is an educational component of acculturating people to seeing that cooperation is more effective than defection.

 

Iyad: You need to allow people the flexibility to create alternate worlds for themselves. If people are forced to cooperate, it’s very difficult to maintain cooperation. If people can create their own universes, then cooperation works. If you allow people to choose their partners, cooperation is maintained because people retain some measure of autonomy over their worlds.

 

Christopher Fry: To respond to one minor point, “forced cooperation” is an oxymoron. I appreciate you doing work in this important area. I want to talk about the peer-pressure aspect. Take the Ku Klux Klan: I bet they have a lot of peer pressure to do what I consider to be the wrong thing. Peer pressure can be a bad thing.

 

Iyad: Of course, peer pressure is how armies and gangs organize. You need to allow people to receive support for defecting from this role. If you provide people with social mobility, then you can guard against some of these problems.

 

The system in Syria was highly centralized. The people had lived in serenity for centuries, and it was only when the Ba’ath party came in that it started to fall apart. The corrupt also have more tools at their disposal.