During the 2012 election, the roughly 2,000 members of an anti-Ron Paul subreddit discovered that anything they posted, anywhere on reddit, was being rapidly and repeatedly downvoted. They created a diagnostic subreddit and began posting meaningless text to confirm this odd behavior.
Another redditor posted a “Java program for reddit liberty lovers” in a Libertarian subreddit. According to its alleged creator, the program allowed Ron Paul supporters to enter their login information and voluntarily enroll their accounts in a downvoting botnet. The botnet would then automatically follow members of the anti-Ron Paul subreddit and downvote whatever they posted.
Most reddit users found this wrong. Automatically downvoting certain users, without regard for what they posted, is a clear violation of reddiquette, and most users who learned of “LibertyBot” condemned it. Some drew comparisons to the Digg Patriots. One Libertarian user and Ron Paul supporter even offered to buy reddit’s premium gold features for affected users.
A more interesting question is this: did the fact that bots, rather than people, did the downvoting have anything to do with how wrong it was?
At first the answer might seem obviously yes. Bots are not people. They can do things people can’t, like follow other users around 24/7, downvoting them within minutes and all at once. And they can’t read or evaluate a post independently; all they can do is vote as they were programmed to.
But I think it’s more interesting than that. In this case, no accounts appear to have been created for people who didn’t exist; rather, there was a one-to-one mapping of Libertarian users to the botnet. And these users were not, by and large, considering the content of the anti-Ron Paul posts before downvoting them even before they enrolled in the botnet. Instead, as bot-like humans, they were simply downvoting anything these posters posted. The botnet didn’t change what they wanted to do. It extended what they were able to do.
Bruno Latour calls this delegation: work translated from one actor to another. If you want a door opened, you can open it yourself. Or you can hire a person to open it for you, but then you would have to incentivize them with money or discipline them by force. Or you can build a machine that opens the door for you, which you have to discipline with design and maintenance. Either way, the door opener embodies human intention and vision.
Stuart Geiger has written a fascinating history of HagermanBot, a bot which assists Wikipedia editors by appending a small attribution note to unsigned talk-page comments. But even such a small edit was fiercely contested by users who did not want their comments to be signed, or to be signed the way HagermanBot signed them, or who otherwise thought HagermanBot was acting wrongly. HagermanBot followed and enforced many of Wikipedia’s abstract norms, but when it enforced those norms at a breadth and pace beyond what humans could achieve, it created quite a fuss!
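To make the mechanism concrete, here is a minimal sketch of the kind of check such a bot performs, assuming a simplified signature convention. The pattern and function names are my own illustration, not HagermanBot’s actual code, and it ignores the edge cases (custom signatures, multi-paragraph comments) that made the real bot so contentious:

```python
import re
from datetime import datetime, timezone

# A Wikipedia signature produced by "~~~~" ends with a link to the user's
# page and a UTC timestamp. This simplified pattern is an assumption, not
# HagermanBot's actual heuristic.
SIGNATURE_RE = re.compile(
    r"\[\[User(?:\s+talk)?:[^\]|]+(?:\|[^\]]*)?\]\].*\(UTC\)\s*$"
)

def tag_if_unsigned(comment: str, author: str, when: datetime) -> str:
    """Append Wikipedia's {{unsigned}} template if the comment lacks a signature."""
    if SIGNATURE_RE.search(comment.strip()):
        return comment  # already signed, leave it alone
    stamp = when.strftime("%H:%M, %d %B %Y (UTC)")
    return comment + " {{subst:unsigned|" + author + "|" + stamp + "}}"

# The bot applies this check to every new talk-page comment, tirelessly and
# instantly: the "breadth and pace beyond what humans could achieve."
print(tag_if_unsigned("I disagree with this edit.", "ExampleUser",
                      datetime(2007, 1, 15, 12, 30, tzinfo=timezone.utc)))
```

The logic itself is trivial, which is the point: the controversy came not from what the bot did but from the fact that it did it to everyone, everywhere, all the time.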
One critical insight for Geiger is not only that bots amplify and assist human activity, but also that they actively contest consensus norms. These norms “make sense” in a world imagined to be populated only by human actors, with their implicit inefficiencies, and they quickly break down when perfectly enforced. The point, though, is not that the bot was “wrong.” The bot was doing what it was designed to do. It was the human norms which were “wrong,” or rather, they were anthropocentric: they only worked if and when the only possible actors were imperfect humans.
HagermanBot didn’t break the norms of Wikipedia. What HagermanBot did was show that the norms of Wikipedia didn’t actually exist. They only appeared to be a consensus because no one had taken the time or effort to put that consensus sufficiently to the test. Once it was tested, the consensus was shown to have never been a consensus at all.
I think LibertyBot reveals a similar insight about reddit. The “problem” with LibertyBot was not the fact that it was a bot, nor the fact that it perfectly enforced a type of behavior. The “problem” is that, as with the illusory Wikipedia consensus, there is a divergence between redditors who believe in reddiquette and redditors who don’t. When the latter delegated their politics to LibertyBot, they did not break a reddit consensus: they showed that no consensus had ever existed.
That’s a much more interesting finding, because it reveals these spaces to be far more culturally and socially dynamic than they appear at first glance. It’s easy to look at reddit and say “upvotes/downvotes, democracy of content, distributed filtering, got it.” But once you begin digging into actual behavior, actual discussion, actual norms, the apparently smooth landscape quickly becomes rugged, pitted with so many exceptions and counterexamples that one begins to wonder whether the original characterization was accurate at all.