Creating Technology for Social Change

Online Vigilantes, the Wikipedia GamerGate Controversy, Ethics of Bots at AOIR 16

I’m here at the 16th AOIR conference liveblogging a session on ethics. You can see the abstracts and papers here.

To start out, Mathias Klang gives a talk about "online vigilantism": On The Internet Nobody Can See Your Cape: The ethics of online vigilantism. What is online vigilantism? Mathias points to large-scale online responses to the Justine Sacco case, the infamous smiling selfie at Auschwitz, the dentist who shot Cecil the lion, and the woman who put a cat in a bin in Coventry. Most of these events never go to court; they are actions that simply annoy us somehow, says Mathias.

Public writing about these issues sometimes talks about "outrage porn" (Dougherty 2014). Mathias describes it as "The disproportionate social (over)reaction to the mundane actions of a non-celebrity." Mathias differentiates this from other kinds of bad behavior online: when non-celebrities experience this vigilantism, they often face very serious consequences. He differentiates it from trolling: trolls act for the lulz, while vigilantes act out of outrage over something that matters to them. Mathias also differentiates it from hate speech and revenge porn.

Why vigilantes? The word dates to the 19th century, when "vigilance committees" kept rough, informal order on the U.S. frontier, where official authority was imperfect. These patrolling groups focused on stopping bad actors. Mathias tells us about the Montana Vigilante Oath.

In his research, Mathias asks, "what *was* the vigilante before the Internet, and what have they become now?" Before the Internet, vigilantes were seen through a "wild west" lens: people taking justice into their own hands. He describes Brian Garfield's Death Wish, V for Vendetta, and The Dark Knight.

In academic research, criminologists have asked "What is Vigilantism?", offering a framework of elements that characterize vigilantism: planning, citizen involvement, social movements, use of force, norm transgression, and regulation.

Internet vigilantism, in contrast, is centered on information cascades and ad hoc activity. Although hateful comments are sometimes targeted at people, there are also often comments that call for retribution against the person, statements that "people who behave badly should be punished."

What happens at moments of Internet vigilantism? Firstly, there's a cognitive liberation: the barriers of apathy are broken, and people take actions they wouldn't ordinarily take. Secondly, people see the transgressor as fair game; consequently, overreactions against the transgressor are seen as okay. Finally, Mathias argues that vigilantism is about power rather than anonymity: in many cases, people use their primary accounts, under their legal names, to say "you're horrible and you deserve to die." Furthermore, he argues that instead of focusing on shaming, we should focus on the outrage that people experience. When people express outrage, nobody seems interested in the shame of the target or in changing their behaviour; people seem to be focused on their own anger.

Mathias hopes to move forward by collecting more data and interviewing people in further research.

Adieu Wikipedia: Understanding the Ethics of Wikipedia after Gamergate

Next up, Andrew Famiglietti shares early-stage research on the ethics of Wikipedia and how those ethics are informed by certain policies. He argues that the encounter between GamerGate and Wikipedia demonstrates a failure point of Habermas's Ideal Speech Situation.

At stake for Andrew is whether Wikipedia represents a public sphere, even if it's not discursive (he mentions two pieces on rational discourse and algorithms on Wikipedia). These authors suggest that Wikipedia's policies try to limit what Habermas called "strategic or self-interested action" and to support "communicative action."

According to Andrew, two policies do the most work towards these ends. Firstly, Wikipedia's "battleground" policy urges editors not to treat topics they are personally involved in as battlegrounds. Secondly, the verifiability policy requires editors to verify and cite material, deferring judgments of truth to third parties. Andrew argues that it's this reliance on sources that gives Wikipedia a chance to establish some kind of shared agenda or purpose even amid conflict.

The encounter with GamerGate represents a failure of this, says Andrew. In the encounter between Wikipedia and GamerGate, the highest-profile issue was the banning of high-profile editors who contributed to GamerGate and gender-related topics. Andrew argues that this was over-reported as a "feminist purge of Wikipedia"; it was bad, he claims, but maybe not that bad. At the same time, Andrew notes that GamerGate had identified and targeted five editors who were defending the site from GamerGate activity, all of whom were then sanctioned by the site, which he admits is pretty bad.

Looking at the ArbCom decision to sanction these editors, the debate depended primarily on tensions between the neutrality and verifiability policies. While some editors insisted on the importance of basing Wikipedia's articles on media sources, other editors argued that maintaining a neutral point of view required mistrust of those sources. As those disputes escalated, it became possible to invoke the battleground policy to advocate for sanctions.

(Unfortunately, I missed the last bit, where Andrew drew from Levinas to think about the limitations of Habermas for making sense of these things.)

Stuart asks how the Arbitration Committee's role plays into the Habermas/Levinas debate; Andrew responds.

Making Ethical Decisions: Seeing Twitter Bots as (Non) ‘Human Subjects’ when Including Them as Research Participants

Next up, two digital rhetoricians, Estee Beck and Leslie Hutchinson, offer early-stage theoretical work on the ethics of social bots. Estee argues that these bots are persuasive, rhetorical systems, and that ethical frameworks can be integrated into machine processes like social bots.

Another question is: do bots constitute research subjects, and should AOIR update its ethics guidelines to include bots? A stance of "ethical pluralism" offers a way to understand how people in different cultures arrive at values and beliefs while recognizing how those values and beliefs differ. Algorithms may not be fully free agents, but they do have implications for people's lives.

Next up, Leslie tells us the story of the @horse_ebooks Twitter account. What made horse_ebooks unique was that it was never clear whether it was human or code. That never bothered us until we discovered that our perceptions of horse_ebooks were false. She refers to work by Introna that uses Levinas and Derrida and an "ethics of hospitality" to understand the personhood of robots in the Star Trek episode "The Measure of a Man."

How might an ethics of hospitality lead us to change how we treat bots in our research? If we follow the tenets of an ethics of hospitality (suspension of law, letting the other speak, undecidability and impossibility, and justice for all others), we can see more clearly how we might reimagine ethical interactions with bots. She tells us about someone who was visited by the police because of a death threat made by their Twitter bot, an example of the very real consequences that bots' actions can have.

Stuart wonders whether the researchers are thinking about the role of the developer/operator/caretaker and the human/nonhuman coupling involved. Estee responds that from a rhetorical perspective, the bot is seen as a co-creator with a human, and that co-creation needs to be considered.

A participant raises the idea of "distributed responsibility" in law, where one might hold a wider group of people responsible for what a bot does. He also raises the idea of the "moral status" of non-sentient or machine-like objects, where harming a bot might mean harming the memories or histories of people as acquired by the bot.