Betraying Expectations in User Generated Censorship
Chris Peterson joins CMS on leave from MIT's Office of Undergraduate Admissions, where he spent three years directing digital strategy and communications. In addition to overseeing all web and new media activities for MITAdmissions, Chris liaised with FIRST Robotics and had a special focus on subaltern, disadvantaged, and first-generation applicants.
Before MIT Chris worked as a research assistant at the Berkman Center for Internet and Society at Harvard Law School and as a Senior Campus Rep for Apple. He currently serves on the Board of Directors of the National Coalition Against Censorship, as a Fellow at the National Center for Technology and Dispute Resolution, and as the sole proprietor of BurgerMap.org. He holds a B.A. in Critical Legal Studies from the University of Massachusetts at Amherst, where he completed his senior thesis on Facebook privacy under Professors Ethan Katsh and Alan Gaitenby. He is interested generally in how people communicate within digitally mediated spaces and occasionally blogs at cpeterson.org.
I've blogged before about my thesis, a study of what I'm calling "user generated censorship." I'm writing its case studies right now. They include, among others:
- The Digg Patriots, a group of conservative Digg users who coordinated through a Yahoo! Group to downvote posts on Digg
- LibertyBot, a script which enrolled users in a botnet that downvoted people who said mean things about Ron Paul on reddit
- NegativeSEO, the practice of aiming thousands of spammy links at competitors in order to get them to drop in Google rankings
- Flagging on Facebook, a practice through which people try to get stuff removed from Facebook by reporting it (sometimes in earnest, sometimes not) as spammy or abusive
Most observers, I think, would agree that each of these practices is a problem: that they “game,” “cheat,” or “manipulate” the systems in which they occur. But this widespread agreement raises a much more interesting set of questions. Why is it a problem? Why is it “gaming” Digg to coordinate votes? Why is it “cheating” reddit to write a bot which follows users around? Why is it “altering” Google search results to try to create many links to a page “unnaturally”? Altered relative to what? What makes a link “unnatural” - or, for that matter, “natural”? Or, in my own framing, why are any of these activities “subversive” or “surprising”? What should have been there which was subverted? If this behavior was “unexpected” to us, what did we expect to see, and why did we expect it?
As I think more about the questions I am asking, I've begun to suspect that if you, like me, found yourself surprised by these case studies, or nodding along with them as variations on the familiar theme of “gaming the system,” it is because we came to the cases expecting or wanting something else: some baseline behavior which ought to have been in operation, some baseline norm which ought to have been observed, but wasn’t, rendering the results of the process suspect, “altered,” “manipulated,” “unnatural,” relative to some unaltered, unmanipulated, natural baseline which would exist but for sin and sinners. But what are these baselines? Where do they come from?
These are not, as they say, purely academic questions. The answers not only explain why we find some behavior problematic: they create the very possibility of the problem itself. The opportunity for subversion arises from the gap between the intended and the actual use of a system. James Grimmelmann once wrote that “if we must imagine intellectual property law, it must also imagine us.” When engineers design social systems, they must likewise imagine possible uses and devise methods to ensure those uses produce the desired results, endlessly iterating upon the real to move it closer to the ideal. When a programmer complains in frustration that his code is broken, the object-oriented observation is that the code isn’t broken at all; it simply isn’t doing what the programmer so desperately wants it to do.
We have become so accustomed to expecting certain results from algorithms that we tend to treat unexpected output as evidence that the algorithm is broken or has been manipulated. Tarleton Gillespie has provocatively asked “can an algorithm be wrong?” to illuminate this seemingly odd assumption. “There is an important tension,” Gillespie wrote, “between what we expect these algorithms to be, and what they in fact are...a [curated] list whose legitimacy is based on the presumption that it has not been curated.” But, again, where do these presumptions, baselines, and expectations come from?
Consider two dominant theories of information production, the language and concepts of which are commonly deployed to explain, defend, and champion social media. One is the “Wisdom of Crowds,” made famous by James Surowiecki, which argues that social media, under certain conditions, constitute a kind of attention market through which collective wisdom can be efficiently aggregated. The second is the “Networked Public Sphere,” advanced principally by Yochai Benkler, which argues that, under certain conditions, social media create the possibility for the emergence of a public sphere normatively preferable to that of the mass media. While these theories are by no means the only ways to think about what social media could or should be, they are perhaps the most influential and familiar.
However, despite their frequent deployment in support of social media, I wonder whether either theory can coherently account for the emergent behavior documented in these case studies, and for the same reason: both are grounded in liberal theory, which imagines users as naive, independent individuals acting from behind a Rawlsian veil. That idealization dissolves when confronted with actual behavior, which is often strategic, allied, and bent toward achieving particular ends of influence rather than serving as inputs into a larger system.
To the extent that these case studies in user generated censorship surprise us with unexpected behaviors, have they in fact betrayed us, or only our expectations?