I’ve blogged before about my thesis, a study of what I’m calling “user generated censorship.” I’m writing its case studies right now. They include, among others:
- The Digg Patriots, a group of conservative Digg users who coordinated through a Yahoo! Group to downvote posts on Digg
- LibertyBot, a script that enrolled users in a botnet which downvoted people who said mean things about Ron Paul on reddit (a toy sketch of this kind of vote manipulation follows this list)
- NegativeSEO, the practice of aiming thousands of spammy links at competitors in order to get them to drop in Google rankings
- Flagging on Facebook, a practice through which people try to get stuff removed from Facebook by reporting it (sometimes in earnest, sometimes not) as spammy or abusive
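To make the mechanics concrete, here is a toy sketch of why these tactics work. The `hot()` function below is a simplified version of the score-plus-time ranking formula reddit open-sourced years ago; the constants are illustrative, not necessarily what any site runs today, and the numbers are invented. The point is only that a modest bloc of coordinated downvotes does far more damage to a post’s placement than its raw size suggests.

```python
# A toy illustration of why coordinated downvoting works so well against
# score-based rankings. hot() is a simplified version of the ranking formula
# reddit open-sourced (log-scaled net score plus a time term); treat the
# constants as illustrative, not as what any site actually runs today.
import math
import time

def hot(ups: int, downs: int, posted_at: float) -> float:
    s = ups - downs
    order = math.log10(max(abs(s), 1))            # diminishing returns on raw votes
    sign = 1 if s > 0 else (-1 if s < 0 else 0)   # only the *sign* of the net score matters here
    age_term = (posted_at - 1134028003) / 45000   # newer posts start out ahead
    return round(sign * order + age_term, 7)

posted = time.time() - 2 * 3600  # a two-hour-old post

# A healthy post: broad approval with some organic disagreement.
organic = hot(ups=100, downs=20, posted_at=posted)

# The same post after roughly 90 coordinated accounts pile on downvotes.
brigaded = hot(ups=100, downs=110, posted_at=posted)

print(f"organic score:  {organic}")
print(f"brigaded score: {brigaded}")
```

Flipping the sign of the net score turns the logarithmic term from a bonus into a penalty: in this toy formula the brigaded post now ranks roughly as though it had been posted a day and a half earlier, behind any fresher post with even one net upvote. The other case studies pull on structurally similar levers, since inbound links and abuse reports are likewise signals that a ranking or moderation system aggregates and acts on.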
When I began talking with people about these case studies, I usually framed my inquiry as an attempt to understand how people subvert social media in surprising ways. One reason I framed it this way is that most people seemed to grasp intuitively what I meant. When I began describing reporting campaigns to a former Facebook engineer, she interrupted me and said, “ah, you’re talking about people abusing the spam button.” Similarly, a former Twitter developer characterized it as a “misuse” of the reporting features. The Terms of Use of Digg, reddit, and Google all warn users against attempting to “manipulate” or “alter” results. Even the actors themselves often use such language: the blogger who broke the story of the Digg Patriots accused them of “gaming” Digg, while the Patriots counter-accused him of the same thing; the TrafficPlanet team who NegativeSEO’d a competitor did so primarily to prove that “the problem was real.”
But this widespread agreement raises a much more interesting set of questions. Why is it a problem? Why is it “gaming” Digg to coordinate votes? Why is it “cheating” reddit to write a bot which follows users around? Why is it “altering” Google search results to try to create many links to a page “unnaturally”? Altered relative to what? What makes a link “unnatural” – or, for that matter, “natural”? Or, in my own framing, why are any of these activities “subversive” or “surprising”? What should have been there that was subverted? If this behavior was “unexpected” to us, what did we expect to see, and why did we expect it?
The more I think about the questions I am asking, the more I suspect that if you, like me, found yourself surprised by these case studies, or nodding along with them as variations on a familiar theme of “gaming the system,” it is because we came to the cases expecting or wanting something else: some baseline behavior which ought to have been in action, some baseline norm which ought to have been observed, but wasn’t, rendering the results of the process suspect, “altered,” “manipulated,” “unnatural,” relative to some unaltered, unmanipulated, natural baseline, which would exist but for sin and sinners. But what are these baselines? Where do they come from?
These are not, as they say, purely academic questions. The answers not only explain why we find some behavior problematic: they create the very possibility of the problem itself. The opportunity for subversion arises from the gap between the intended and the actual use of a system. James Grimmelmann once wrote that “if we must imagine intellectual property law, it must also imagine us.” When engineers design social systems, they must both imagine possible uses and devise methods to make sure those uses produce the desired results, endlessly iterating upon the real to move it closer to the ideal. Consider the programmer who frustratedly remarks that his code is broken: the code isn’t broken at all, it just isn’t doing what he so desperately wants it to do.
We have become so accustomed to expecting certain results from algorithms that we tend to treat unexpected output as a sign that the algorithm is broken or manipulated. Tarleton Gillespie has provocatively asked “can an algorithm be wrong?” to illuminate how odd this assumption really is. “There is an important tension,” Gillespie wrote, “between what we expect these algorithms to be, and what they in fact are … a [curated] list whose legitimacy is based on the presumption that it has not been curated.” But, again, where do these presumptions, baselines, and expectations come from?
Consider two dominant theories of information production whose language and concepts are commonly deployed to explain, defend, and champion social media. One is the “Wisdom of Crowds”, made famous by James Surowiecki, which argues that social media, under certain conditions, constitute a kind of attention market through which collective wisdom can be efficiently aggregated. The second is the “Networked Public Sphere”, advanced principally by Yochai Benkler, which argues that, under certain conditions, social media create the possibility for the emergence of a public sphere normatively preferable to the mass media. While these theories are by no means the only ways to think about what social media could or should be, they are perhaps the most influential and familiar ones.
However, despite their frequent deployment in support of social media, I suspect that neither theory can coherently account for the emergent behavior documented in these case studies, and for the same reason: both rest on a foundation of liberal theory, which imagines users as naive, independent individuals acting from behind a Rawlsian veil. That idealization dissolves when confronted with actual behavior, which is often strategic, allied, and bent on achieving particular ends of influence rather than serving as inputs into a larger system.
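One way to see how much work that independence assumption does is a minimal simulation. The voter counts, preference probability, and bloc size below are invented purely for illustration and are not drawn from any of the case studies: a crowd of independent voters with a modest genuine preference reliably surfaces the preferred item, while a coordinated bloc a fraction of the crowd’s size flips the aggregate.

```python
# A deliberately crude simulation of the independence assumption behind
# "wisdom of the crowds" aggregation. All numbers here are invented purely
# to make the point concrete.
import random

random.seed(42)

def crowd_votes(n_voters: int, p_prefers_a: float) -> dict:
    """Independent voters: each upvotes whichever item they privately prefer."""
    tally = {"A": 0, "B": 0}
    for _ in range(n_voters):
        tally["A" if random.random() < p_prefers_a else "B"] += 1
    return tally

# 1,000 independent users with a modest genuine preference for item A.
tally = crowd_votes(n_voters=1000, p_prefers_a=0.55)
print("independent crowd: ", tally)   # A almost always comes out ahead

# Now add a coordinated bloc of 200 accounts acting as one strategic actor:
# every one of them votes for B, regardless of private preference.
tally["B"] += 200
print("crowd + voting bloc:", tally)  # the aggregate almost certainly flips to B
```

Surowiecki is explicit that independence is one of the conditions under which crowds are wise; the case studies are interesting precisely because they are populated by actors who behave like the bloc, not the crowd, which is why the aggregate stops looking like wisdom at all.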
To the extent that these case studies in user generated censorship surprise us with unexpected behaviors, have they in fact betrayed us, or only our expectations?