reddit Moderators: Let's Test Theories of Moderation Together | MIT Center for Civic Media
Are you a reddit moderator or the creator of software that supports moderation (bots, browser plugins, etc.)? Do you have questions about the effectiveness of your tools or your approach to moderation?
Over the next few months, I'm looking for moderators, tool-builders, and experienced community members to design moderation experiments together, answering questions and settling debates that your subreddits have about moderation. TLDR: Are you involved in debates about the impact of subreddit moderation policies on your communities? Let's talk! If you are interested, send me reddit mail at /u/natematias or join the /r/TheoryOfReddit conversation about this research.
source: reddit blog
In the medium term, I hope to build software that supports moderators in running experiments to answer their own questions about moderation. For now, I'm trying to gauge interest and get a sense of the questions that moderators have. As always, I promise to respect the anonymity of anyone who approaches me privately.
*If* enough people are interested, I'll start the process of defining some early studies together, working in the longer term to design a system that supports you in running your own experiments.
Who Is In A Good Position To Test Moderation Ideas?
While anyone can encourage their subreddit to test out moderation ideas, I'm reaching out to:
- networks of subreddits
- creators of moderation tools
- groups of subreddits who have similar questions
- moderation teams
- individual moderators who want to brainstorm an idea they could present to their moderation team
In the long term, I'm also thinking about the use of experiments to audit the behavior of moderation bots, which could be done independently from moderation teams, but I'm starting out by reaching out directly to moderators.
Why am I reaching out to moderators rather than reddit itself? In my research, I'm trying to understand the work of everyday citizens to make a difference in our communities. Supporting moderators fits in with that larger vision.
What Questions Can Experiments Answer for Moderators?
Moderators already have access to plenty of analytics, and transparency reports like the recent r/science report show how much can be answered simply by looking at what moderators did in the past month. But those reports can't answer questions about the outcomes of new ideas or debated issues in moderation.
Field experiments (A/B tests are one variant) would allow moderators to test the outcomes (or effects) of a particular moderation idea. Here are a few example questions that might be asked:
- What effect does a given policy have on the amount of modmail that moderators receive? For example: does publishing a transparency report:
- reduce the workload of moderators
- reduce reddit mail received by moderators
- increase participation in community-wide decisions
- What effect do moderation actions have on the participation of people whose posts are moderated? Do they change their ways? Go away?
- What effect does a policy have on participation levels in the subreddit? On harassment and other nasty experiences?
- Does adding a policy to your wiki have any effect on subreddit behavior?
- Does linking to a policy in your moderation reason have an effect on how that person behaves next?
- What effect does a particular configuration of AutoModerator have on moderator workload? On subreddit participation?
- In high volume subs, what effect does a policy have on the kinds of posts that appear at the top of the subreddit?
- Does using a schedule to distribute the workload among moderators reduce the stress of moderation?
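As a sketch of how one of the questions above could be randomized, consider the policy-link experiment: does linking to a policy in the removal reason change how the author behaves next? The function and field names below are hypothetical, invented for illustration, not part of any actual moderation bot:

```python
import random

def assign_condition(post_id, seed="policy-link-pilot"):
    """Deterministically randomize each moderated post into a condition,
    so the same post always lands in the same experimental group."""
    rng = random.Random(f"{seed}:{post_id}")
    return "with_link" if rng.random() < 0.5 else "without_link"

def removal_message(post_id):
    """Build the removal reason, adding the policy link for one group only."""
    base = "Your post was removed under rule 3 (no personal attacks)."
    if assign_condition(post_id) == "with_link":
        return base + " Full policy: /r/example/wiki/rules"
    return base

# After the experiment runs, compare an outcome between the two groups,
# e.g. whether the author posted again within 30 days.
```

Seeding on the post ID rather than drawing a fresh random number makes the assignment reproducible: moderators can later reconstruct which group any post belonged to without storing extra state.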
While it's true that I've done a substantial review of academic literature related to moderation, I expect that redditors will have much better questions than I do. That's why I want to collaborate on research that could have a meaningful impact on experience within your subreddits.
I also hope that if moderators are able to collect and share results on the effects of their moderation policies, it might improve the quality of moderation across many platforms, perhaps reducing recurring social problems and even easing the worst parts of moderation work.
What Kinds of Questions Will Experiments Struggle With?
Experiments are only as good as our palette of measurements for the things that really matter. I've written extensively about this question in the case of online communities:
- My article with Stuart Geiger on the need to define civic values for evaluating cooperation systems
- Aaron Halfaker on the numbers that govern Wikipedia, and what can go wrong when you optimize for the wrong numbers
In my past work developing statistical models together with redditors (which was just honored by CHI!), I found that redditors bring tremendous insight to the process of defining research. I've had more than one occasion where redditors corrected problems and pointed out important social limitations in what my models were estimating. I hope to continue that approach by working together with redditors on experiments.
Why Should Moderators, Subreddits, or Subreddit Networks Do Their Own Research?
Other researchers, especially those who do platform-wide studies, have asked me why I think it's important to support moderators to ask their own questions. Why not just design one study that answers a single question for all subreddits, on average, and then convince the company to run that study across the entire site? Here are my reasons for wanting to support moderators directly:
Reason 1: Moderators have different questions. Not all of us can wait for a well-meaning corporate data scientist to ask questions for us. Moderators also sometimes have questions that platform operators might see as uncomfortable or unimportant.
Reason 2: Moderators can work through ethics more directly because they share a social context with their communities. In many subreddits, moderators hold open discussions with their subscribers about policy ideas and are often more accountable to their communities than the average data scientist.
Reason 3: Knowing an average effect can mislead individual communities. Online platforms have a huge variation in cultures. Acting on platform-level knowledge can lead moderators toward actions that have the opposite effect from their goals.
For example, Moira Burke found in a 2008 study that polite posts receive more conversation in some communities, but in others, rude posts get more attention. There is no single answer across all the communities she studied for how participants respond to politeness. Is there similar variation on reddit? I decided to find out, with this question:
Do posts by frequent commenters get more comments, or do newcomer posts get more comments on average?
It's a basic question that any subreddit might ask as it decides how to relate to newcomers. I explored it by collecting all comments and posts from July through September 2015 in a random sample of 1,671 subreddits that had at least one post in that period (thanks, stuck_in_the_matrix!).
On average across reddit, and controlling for other factors, there is a positive, statistically significant relationship between the number of comments a post will receive and the number of previous in-subreddit comments made by the poster. In other words, posts by more active commenters get more comments themselves, on average across reddit.
BUT WAIT: within a specific subreddit, the opposite can be true. To illustrate this, I fit a random slopes model to observe the variation among subreddits and found (like Burke) that different communities on reddit can show opposite results (see the figure below). In some subreddits, more active commenters do get more comments on their own posts, just like reddit overall. In many other communities (to the left of the red line), however, the relationship is reversed.
Browsing subreddits to the right of the red line, where posts by more frequent commenters receive more comments, I found fantasy sports leagues, marketplaces for videogame artifacts, and a community for in-depth discussion of international sport. To the left of the red line, where newcomer posts received more comments (communities not explained by the reddit average), I found subreddits for learning new skills and ideas, as well as subreddits for sharing projects.
TLDR: Whatever might be true across reddit, subreddits have good reason to do their own research, because their community may be very different from the rest of reddit.
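The pattern above, a positive platform-wide relationship that reverses inside particular communities, can be sketched with a toy example and plain least-squares slopes. The numbers and subreddit names below are made up for illustration; they are not drawn from the actual study, which used a random slopes model rather than separate per-group regressions:

```python
def ols_slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# x = poster's previous in-subreddit comments, y = comments the post received
toy = {
    "r/learn_example":  ([0, 1, 2, 3], [6, 5, 4, 3]),     # newcomers do better
    "r/sports_example": ([4, 5, 6, 7], [8, 10, 12, 14]),  # regulars do better
}

pooled_x = [x for xs, _ in toy.values() for x in xs]
pooled_y = [y for _, ys in toy.values() for y in ys]
print(f"pooled: {ols_slope(pooled_x, pooled_y):+.2f}")  # prints "pooled: +1.36"
for name, (xs, ys) in toy.items():
    slope = ols_slope(xs, ys)
    print(f"{name}: {slope:+.2f}")  # opposite signs within communities
```

The pooled slope is positive even though one community's slope is negative: acting on the "reddit average" would point that community in exactly the wrong direction.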
About My Research
My research with reddit moderators fits into my wider work on the role that everyday citizens play in shaping and supporting our collective online experiences. Since last June, I've been reading through the history of reddit moderation, interviewing moderators, and even doing data analysis on moderation work, including the reddit blackout. Here are some of my blog posts:
- Recognizing the Work of reddit Moderators (June 2015)
- What Just Happened on reddit: Understanding the Moderator Blackout (July 2015)
- Factors Predicting Participation in the reddit Blackout (September 2015)
My first academic publication about reddit moderators has just been accepted (and awarded an honorable mention) by CHI, the premier academic venue in human-computer interaction. Here's the preprint version: Going Dark: Social Factors in Collective Action Against Platform Operators in the Reddit Blackout.
Right now, I'm finishing a paper that tries to put a name to the work of volunteer moderation, where your work supports a wider, often corporate system, but you're also held somewhat accountable to community members and to other moderators. When writing about this "civic labor," I focus on moments of transition and tension for moderators: internships, applications, elections, transparency reports, coups, bannings, and the blackout, moments that surface unspoken assumptions about what it means to be a moderator. I'm sharing an early version of this idea in my talk on "civic labor" at the Oxford Internet Institute this March.
Research Ethics And the Promises I Make To Participants
Throughout my work, I have a very strong commitment to respecting the privacy, rights, and well-being of everyone who is part of my research. At MIT, I have full intellectual freedom, and my work on reddit is independent of the company.
One reason I'm writing this post is that I am hoping that reddit moderators and other redditors will work with me to develop an approach to experiment ethics that can support the needs of subreddits to do data analysis while also respecting the rights of subreddit participants.
For an example of how I tend to approach my work, scroll down to the "Ethics" section of the post announcing the beginning of my work with moderators. Since I'm no longer at Microsoft Research, I will be creating a new set of agreements, but my values will stay the same.
The idea of creating a system that lets any subreddit or group of subreddits run their own experiment raises important ethical questions in its own right, similar to those surrounding A/B testing systems. I'm still working through those questions, together with legal scholars and research ethics experts. Brian Keegan and I have published some of our early thoughts in this workshop paper for the OSSM symposium. I am excited about the chance to develop these ideas together with redditors.
I look forward to hearing from you!