
Monitoring, Explaining, and Intervening: Field Experiments and Social Justice

What role can field experiments and other causal research play in efforts toward social justice in social computing? Aren’t experiments tools for reductionistic, top-down paternalism? How could causal inference ever support grassroots approaches to social justice?

These questions have been a central struggle in my effort to choose a dissertation topic. The idea of participatory experimentation motivated me to work on the cornhole experiment. It was also at the back of my mind in my talk on discrimination and other social problems online at the Platform Cooperativism conference (start at the 1:00:30 mark). In this post, I outline my current thinking on these questions. I would love to hear your thoughts in the comments.

“A Peaceable Kingdom with Quakers Bearing Banners” (1829–30) by Edward Hicks. I love Edward Hicks’s paintings, which often show the cracks in ideas of utopia that granted different rights and roles to different people.

Because software systems shape human affairs, they necessarily advance or detract from the realization of human dignity. For example, information systems have enabled oppression under apartheid [7], welfare management systems have undermined the dignity of the poor [15], government systems have removed people of color’s right to vote [43], and advertising systems have carried out systematic discrimination [50], to name a few. Social justice research studies the forces detracting from human dignity and advances the wider realization of people’s experience of that common dignity [41].

The Role of Causal Inference In Social Justice Work

From Betty Friedan’s counts of women’s presence in 1960s magazines [18] and studies of race and gender bias by juries [3] to economic models of discrimination [16], quantitative methods have expanded our awareness and understanding of systematic patterns of social injustice. Yet researchers using these correlational methods struggle when data is limited and when the causes of injustices are unclear.

This post considers two kinds of research that ask such causal questions: field experiments and natural experiments [20]. Both methods ask a counterfactual question of events outside the lab: what would have happened if things had been different? Causal methods approach this question by comparing measured outcomes between groups. Field experiments randomly assign subjects to comparison groups [21,29]. Natural experiments construct post hoc comparison groups from observational data [1].
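To make that counterfactual comparison concrete, here is a minimal sketch in Python, using entirely made-up numbers rather than data from any study cited here, of the basic estimate behind a field experiment: randomly assign people to groups, measure an outcome for everyone, and compare the group means.

```python
import random
import statistics

# Hypothetical illustration (not data from any real study): randomly assign
# participants to a treatment group and a control group, then compare outcomes.
participants = [f"person_{i}" for i in range(200)]
random.shuffle(participants)
treatment, control = participants[:100], participants[100:]

# In a real field experiment these outcomes would be measured behavior;
# here they are simulated placeholder values.
outcomes = {p: random.gauss(0.50, 0.10) for p in control}
outcomes.update({p: random.gauss(0.60, 0.10) for p in treatment})

# Because assignment was random, the difference in group means estimates
# the average causal effect of the treatment.
ate = (statistics.mean(outcomes[p] for p in treatment)
       - statistics.mean(outcomes[p] for p in control))
print(f"Estimated average treatment effect: {ate:.3f}")
```

Natural experiments aim for the same comparison, but instead of randomizing, they rely on finding groups that circumstance happened to separate in an as-if-random way.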

Detecting Injustices

Field experiments are a primary method for investigating systematic discrimination when data is limited or when differences in outcomes might reflect differences in selection. For example, employment discrimination can be hard to study observationally if a particular group is mostly absent from a sector suspected of discrimination. Aside from the lack of data, people in the absent group may simply be choosing other sectors. In those cases, audit studies use experimental methods to estimate differences in job and loan application outcomes for otherwise-identical applicants who differ only by race or gender [42]. These studies can estimate the magnitude of discrimination on average, identifying just how much discrimination affects a person’s chance of getting a loan or job interview.
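As a rough sketch of how an audit study’s estimate works (again using fabricated example records, not data from [42]), the analysis can be as simple as comparing callback rates across matched pairs of applications that differ only in the identity signaled by the name attached:

```python
from statistics import mean

# Hypothetical audit-study records (illustrative only): matched pairs of
# applications that are identical except for a name signaling a different
# race or gender. A callback value of 1 means the employer responded.
pairs = [
    {"name_a_callback": 1, "name_b_callback": 0},
    {"name_a_callback": 1, "name_b_callback": 1},
    {"name_a_callback": 0, "name_b_callback": 0},
    {"name_a_callback": 1, "name_b_callback": 0},
    # ... a real audit study would include hundreds of matched pairs
]

rate_a = mean(p["name_a_callback"] for p in pairs)
rate_b = mean(p["name_b_callback"] for p in pairs)

# Because each pair of applications is otherwise identical, the gap in
# callback rates estimates the average effect of the identity signal itself.
print(f"Callback rate for name A: {rate_a:.2f}")
print(f"Callback rate for name B: {rate_b:.2f}")
print(f"Estimated discrimination gap: {rate_a - rate_b:.2f}")
```

In practice, researchers also report the uncertainty around such a gap and check whether it holds across job types, sectors, and locations.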

In HCI, field experiments address traditional questions of discrimination alongside emerging issues unique to social computing. Audit studies have documented discrimination against dark-skinned users on classified ad platforms and on Airbnb [12,14]. Audit studies of HR systems have shown systematic discrimination against applicants with certain kinds of surnames [6].

Among questions unique to social computing, field experiments on videogames have shown that in some gaming communities, women receive more negative and hateful comments than men [32]. Other studies use experiments to investigate discrimination by machine learning systems [49]. Natural experiments are also used to detect social injustices online, including a work-in-progress study estimating the effect of mass surveillance on civic participation [47].

Explaining Injustices

While audit studies help identify social injustices, they rarely reveal the underlying explanations. The work of advancing social justice requires theories of the causes of social problems so that the causes can be addressed. Social psychologists have long used lab experiments to test theories on the mechanisms behind behaviors like gender discrimination [27], methods that social computing researchers have adopted for field experimentation online [31]. Natural experiments can also test theories on the causes of social injustice; one work-in-progress study uses natural language processing in an observational study of DonorsChoose to estimate the role of gender stereotypes in the effects of platform design on gender discrimination [48].

Evaluating Social Justice Interventions

As a stance with normative goals, social justice research is fundamentally concerned with the outcomes of interventions to advance human dignity and justice. For example, social justice researchers do not stop at defining and understanding a problem like prejudice; they also evaluate the practical outcomes of prejudice reduction efforts [45].

Research on the social justice effects of technology interventions includes studies of the effect of police-worn body cameras on police violence [2,33], the effect of digital, student-designed anti-conflict campaigns on school conflict [46], and the effect of peer pressure on government compliance with citizen appeals [52]. My own in-progress research is testing the effect of a self-tracking system on gender discrimination on social media [34].

Social justice interventions can also backfire or have side effects. For example, in some cases, efforts to defend a community from antisocial behavior can make that behavior worse [9]. In another case, research has linked the labor of reviewing and responding to violent materials online with secondary trauma [17,39]. Future causal research could help estimate the human costs of interventions, helping decision-makers limit or avoid the new problems that social justice efforts can introduce.

Risks from Causal Inference Methods

Researchers who focus on social justice often have well-theorized skepticism towards causal methods, grounded in the limitations of quantitative research and the inequalities of power that often come with experimental research.

Measurement Problems

The difficulty of measuring meaningful outcomes is a fundamental weakness of all quantitative social justice research. For example, studies of discrimination often rely on classifications of race and gender that are theoretically weak, incorporating structural injustices into the research and making those injustices invisible to researchers [7,10,35]. More broadly, social computing and HCI research have a deficit of reliable dependent variables on issues of public interest compared to measures of productivity [38].

Participation and Deliberation

Quantitative researchers have a history of paternalism that ignores people’s voices in favor of surveilling their behavior and forcing interventions into their lives [51]. For that reason, deliberative democracy theorists identify experimentation as a major risk to citizen participation and agency: rhetoric from experimental results can override citizen deliberation, and paternalistic policies can nudge away citizen agency [26]. In contrast with these paternalistic approaches, social justice HCI research has emphasized participatory methods that include participants as co-creators of the goals, design, and evaluation of social justice efforts [4].

Research Ethics

Field experiments are also at the heart of an ongoing debate over the ethics of HCI research [22]. In particular, research on issues of inequality and injustice often involves vulnerable populations and offers differential risks and benefits to participants [28]. While some scholars advocate for an obligation to experiment in cases of public interest [40], the work of maintaining and studying large online platforms raises ethics and accountability challenges for experimenters [8].

Participatory Field Experiments

I believe that some risks of causal inference in social justice work can be addressed by incorporating lessons from participatory and emancipatory action research [25] in the design of experiments.

First, quantitative research on social justice can employ qualitative research and participatory design. Methods of “experimental ethnography” structure qualitative research through an experimental design [44]. My own research on social movements at Microsoft this summer explored “participatory hypothesis testing”: mixed-methods, participatory approaches to sampling, modeling, and interpreting quantitative research on social platforms [37].

Second, participatory methods can be used within the interventions that experimenters evaluate. In one field experiment, students were offered training and support to develop their own digital media campaigns; this intervention reduced conflict reports in schools while also testing theories related to the position of students within their social networks [46].

Third, marginalized groups are already using socio-technical systems to develop situated knowledge [23] on issues of labor rights, street harassment, and online governance [19]. A social justice approach to causal research might also expand access to experimental capacities, supporting marginalized groups to develop their own situated knowledge on causal questions.

Finally, experimental results need not override marginalized voices in policy debates; they simply offer one more piece of evidence for deliberation. As I show participants in the cornhole experiment, even the cleanest experiment leaves plenty of questions open for debate. In fact, research on discrimination in meetings may even expand participation and fairness in deliberation [26,27]. Other experiments could test ways to expand citizen power in the face of paternalism: in one work-in-progress study, researchers conducted bottom-up experiments to optimize government compliance with citizen requests [52].

Conclusion

In this post, I have outlined ways that causal research is used for monitoring social injustices, understanding the causes of those injustices, and evaluating interventions to expand the realization of human dignity. While causal methods do introduce risks of reductionist paternalism, I have tried to sketch out possible directions for “participatory field experiments.” By experimenting from the standpoint of citizens, we may be able to work through some of those risks.

What are your thoughts? I would love to hear your reactions in the comments.

Bibliography