Creating Technology for Social Change

Bot-Based Collective Blocklists in Twitter: The Counterpublic Moderation of a Privately-Owned Networked Public Space

Here at the 16th conference of the Association of Internet Researchers, I attended a talk by Stuart Geiger, who is doing helpful work to theorize the role of block bots in conversation on the Internet. Over the years, Stuart’s thinking has been deeply influential to my own approach. I’ve written about his work twice before: in my Atlantic article about how people work to fix broken systems that aren’t theirs to repair, and in a liveblog of a great talk he gave on supporting change from outside platforms.

Stuart opens by saying that block bots are systems where anti-harassment activists have developed algorithmic software agents to deal with harassment, relatively independently from Twitter. Block bots involve a different kind of gatekeeping than the ones we typically think about: algorithmic gatekeeping (Tufekci), network gatekeeping (Nahon), or filter bubbles (Pariser). How can we make sense of it?

We often think of software engineers as sovereigns of social platforms, with code as law, an idea developed by Larry Lessig. In Stuart’s work on Wikipedia, we see all sorts of ways that participants on platforms carry out some of this governance work through their own bots and code.

Through this research on Wikipedia, Stuart came across the GamerGate controversy and the associated harassment campaigns that flooded their targets with messages. On Twitter, one response to that harassment is to block individual users, but it’s hard to block the large numbers of accounts involved in a coordinated attack. Block bots help users block large lists of accounts at once.

What are block bots? They are information infrastructures that support community-curated collective blocklists. People who found themselves blocking the same kinds of accounts were duplicating much of the same work. With a block bot, when one account is added to the community blocklist, everyone who subscribes to the list gets that block as well. Whenever someone subscribes, an automated software agent effectively signs in as that user and automatically blocks everyone on the list. These blocklists were created in response to a perceived gap in Twitter’s governance. Yet the first of this wave started not with GamerGate but with earlier harassment campaigns around anti-feminism in the atheism movement, around 2012.
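The subscription model described above can be sketched in a few lines of Python. This is a minimal simulation of the idea, not any real blockbot’s code: all names here are illustrative, and `apply_block` stands in for a call to Twitter’s block endpoint made with the subscriber’s credentials.

```python
class User:
    """A subscriber who has delegated blocking to the bot."""

    def __init__(self, name):
        self.name = name
        self.blocks = set()

    def apply_block(self, account):
        # In a real blockbot this would call the Twitter API using the
        # subscriber's OAuth credentials; here we just record the block.
        self.blocks.add(account)


class Blocklist:
    """A community-curated list of account IDs, kept in sync for subscribers."""

    def __init__(self):
        self.blocked = set()      # account IDs on the shared list
        self.subscribers = []     # users who subscribed to the list

    def subscribe(self, user):
        # On subscription, the agent acts on the user's behalf and
        # applies every existing block at once.
        self.subscribers.append(user)
        for account in self.blocked:
            user.apply_block(account)

    def add(self, account):
        # Adding one account propagates the block to every subscriber.
        self.blocked.add(account)
        for user in self.subscribers:
            user.apply_block(account)


shared = Blocklist()
shared.add("harasser_1")

alice = User("alice")
shared.subscribe(alice)    # alice inherits all existing blocks
shared.add("harasser_2")   # new additions propagate to alice automatically
```

The key design point is the one Stuart highlights: curation work happens once, on the shared list, and the software agent fans the result out to every subscriber.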

Stuart describes a series of systems. The Block Bot was created by people experiencing harassment within the atheism movement. Another system offers a variety of algorithmic blocking as well as the sharing of blocklists. Algorithmic blocklists like ggautoblocker use social network analysis to identify alleged GamerGate participants; ggautoblocker is currently blocking over 10,000 accounts.
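As a rough sketch of the kind of social network analysis such a system might use, consider a heuristic that flags accounts following several accounts from a small seed list of known participants. The seed list and threshold below are made up for illustration; this is not ggautoblocker’s actual code.

```python
# Hypothetical seed accounts and threshold -- illustrative values only.
SEEDS = {"seed_a", "seed_b", "seed_c"}
THRESHOLD = 2

def autoblock_candidates(following):
    """Flag accounts whose followed set overlaps the seed list.

    `following` maps each account ID to the set of accounts it follows.
    An account is a candidate if it follows THRESHOLD or more seeds.
    """
    return {
        account
        for account, follows in following.items()
        if len(follows & SEEDS) >= THRESHOLD
    }
```

A heuristic like this is cheap to run over a follower graph, but it also shows why such lists generate appeals: following two accounts is weak evidence of participation, which is exactly the classification debate Stuart turns to next.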

How are blockbot-based blocklists curated? In some cases, a single person creates a list. Or it might be created by a tight-knit group whose members share common values. It could be a bureaucracy, or natural language software, or any combination of the above. Many systems incorporate multiple approaches: ggautoblocker, for example, started with a purely algorithmic approach, but then gained an appeals board as the system developed further.
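That hybrid arrangement, an algorithmic list tempered by human appeals, can be expressed very simply. This is a hypothetical sketch of the general pattern, not the actual appeals mechanism of any named system:

```python
def effective_blocklist(algorithmic_flags, appeals_granted):
    """Publish the algorithmic list minus accounts cleared on appeal.

    `algorithmic_flags` is the set of accounts the algorithm flagged;
    `appeals_granted` is the set a human appeals board has whitelisted.
    """
    return set(algorithmic_flags) - set(appeals_granted)
```

The set difference is trivial, but it encodes a governance choice: the algorithm proposes, and humans retain the final say over who stays on the published list.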

Stuart concludes by linking block bots to different kinds of theories:

  • Counter-publics: One way to understand these systems is as tools for creating counter-publics. Where counter-publics are typically thought of as new, separate spaces with approval processes for who can participate, these systems instead disapprove people and deny them attention.
  • Classification systems: Another way to see block bots is as classification systems for identifying harassment or abuse. They become sites for collective sensemaking and collective articulation of what harassment is and how it differs from abuse, trolling, and other categories; those categories are debated and adjusted through that sensemaking.
  • Filter bubbles: Discussions of filter bubbles assume that algorithms remove discussion and reflection from public space, and one critique of block bots is that they might create filter bubbles. However, Stuart notes that personalization and recommendation algorithms typically create filter bubbles by sitting quietly in the background. Block bots, on the other hand, attract and require substantial debate over who and what is included and excluded.
  • Delegation: Initially, it might seem that block bots involve users delegating work to bots. But Stuart has come to the view that it’s more a case of companies delegating work to people and software outside their platforms. On one hand, the APIs that make block bots possible sustain a diversity of counter-publics on a shared platform. On the other hand, they let companies delegate, and distance themselves from, the responsibilities and activities taken on by these bots.