Creating Technology for Social Change

Mapping The Concepts of Content Warnings: Three Themes, Two Causes, & A Possible Path Forward

In summer 2015 I attended a meeting of the Free Expression Network (FEN) in Washington, DC. The FEN is an alliance of a few dozen civil liberties organizations convened by the National Coalition Against Censorship (NCAC), a nonprofit whose mission is to promote freedom of thought, inquiry, and expression and to oppose censorship in all its forms. I’ve served on the Board of Directors at NCAC since 2010, and was asked to address the FEN meeting on the topic of trigger/content warnings* in the context of higher education.

(*These terms are used somewhat interchangeably in this debate; since, as we’ll discuss below, ‘trigger warnings’ are narrower in scope, I’ll primarily use ‘content warnings’ for the rest of this essay)

I’m no special expert on content warnings (and if you know someone who is, please introduce us), but I do have a background in anti-censorship work from my graduate research at Civic and my affiliation with NCAC, and I recruit, admit, and teach undergraduates at MIT, so I have a decent amount of experience thinking about this issue as both activist and educator. My goal at this meeting was to talk about how I think about content warnings in my own work, and, to the best of my ability, to help other members of FEN who don’t work with students every day understand students’ perspectives as well. Some time has passed, but the conversation has continued, so I thought I would take the time to organize and post the notes from my talk.

Some points I should make before I proceed. These viewpoints are my own, and should not be construed to represent (indeed, likely do not represent) those of FEN, NCAC, or MIT. The purpose of this post is not to advance a strong argument, but rather to trace the outlines of the debate as I see it from my own standpoint and situation. I’ve shared drafts of this post with half a dozen people with different perspectives on the debate, and they’ve all liked it overall while finding different things wrong with it, often in contradiction with each other. I’m posting it because I hope that mixed reception is actually a signal that it can provide some value: if not in answering any question, then at least in helping to map the terrain of the debate.

I was asked to speak about content warnings in the context of higher education, but my most direct experience is limited to MIT, the institution where I have (at various points) worked, studied, and taught since 2009. In many ways, MIT is not a typical university (it is well-funded, exceptionally prestigious, and as such very privileged), but you can still see some of the general ambivalence about content warnings reflected in the history of the institution.

For example: Dr. Mary Rowe, a longtime Ombuds, Adjunct Professor, and Special Assistant to the President, played a key role in developing the concept of microaggressions/micro-inequities and other forms of “subtle discrimination” in a series of papers starting as early as 1973. Rowe, along with her colleague Dr. Clarence Williams, also organized and recorded a series of campus conversations regarding stereotypes and microaggressions in the context of race in the 1990s.

At the same time that it has hosted these kinds of ‘sensitive’ conversations, however, MIT is also known for a radically autonomous student culture, which you can see in the campus hacks, in student protests when dormitory artwork was classified as a Title IX violation and painted over, and even in an infamous 1987 student lawsuit when the administration clamped down on a longstanding tradition of screening porn in a campus theater on Registration Day. The lesson I take from this history is that MIT, like many universities, had been wrestling with the apparent conflict between autonomy and safety for decades before the current controversy over content warnings became prominent.

Part of this ambiguity, I believe, is that many different issues have been bundled together into the category of content warnings. In my discussions with students, activists, and educators, I’ve heard three major themes/metaphors/concepts mobilized for or against warnings:

Medical Trauma: this theme organizes content warnings around the metaphor of disease and treatment. Under this model, proponents of warnings tend to mobilize two variants of the argument. The first, which draws on folk knowledge of PTSD and related disorders, is that certain kinds of content might be associated with, and therefore ‘trigger’ emotional responses to, past trauma. The second, which draws on folk knowledge of allergens, is that certain kinds of content might induce negative, ‘allergic’ reactions. Opponents of warnings tend to respond by arguing that both trauma and allergies should be treated with exposure therapy to desensitize any negative response. The debate then becomes about what kind of ‘treatment’ should be ‘administered,’ and under what conditions.

As such, this metaphor moves warnings (and the content they’re warning about) out of the domain of political disagreement into the domain of medical expertise. The effect of this move is to simultaneously depoliticize and professionalize the discourse by making it the kind of claim that can only be debated and settled by scientists (see, e.g., Trigger Warning: This Post May Contain Scientifically Accurate Information on Trigger Warnings). As Jack Halberstam has noted, it also presumes a neoliberal focus on the damaged individual rather than the structures and systems that damage them.

Informed Consent: this theme organizes content warnings as analogous to the content ratings that have been ‘voluntarily’ applied to, e.g., movies and video games (but not books) by professional organizations. Under this model, proponents tend to argue that warnings aren’t a restriction of information, but in fact more speech, in the form of meta-speech that characterizes speech. As the blogger Scott Alexander writes:

I like trigger warnings. I like them because they’re not censorship, they’re the opposite of censorship. Censorship says “Read what we tell you”. The opposite of censorship is “Read whatever you want”. The philosophy of censorship is “We know what is best for you to read”. The philosophy opposite censorship is “You are an adult and can make your own decisions about what to read”. And part of letting people make their own decisions is giving them relevant information and trusting them to know what to do with them. Uninformed choices are worse choices. Trigger warnings are an attempt to provide you with the information to make good free choices of reading material.

Opponents of warnings tend to argue that rating models are censorship by another name because they attempt to enforce a top-down, universal classification of content as “appropriate” or “inappropriate,” which has a chilling effect on what kinds of things can be said or taught by marking certain things as requiring a warning (while others do not). These critics position warnings as moves by the ‘moral authoritarian left’ to deploy tactics long used by the reactionary right during, e.g., the culture wars.

“Don’t Be An Asshole”: this theme places warnings in the ethical sphere. Proponents see warnings as an acknowledgment of actually-existing differences in experience and social power, and see refusals to offer warnings as a move to force the conversation to happen on one’s own terms: a tactic available only to the socially empowered, and one the writer Ta-Nehisi Coates has offered as a working definition of what makes someone an “asshole.”

As the journalist (and friend of Civic) Laurie Penny puts it: “Trigger warnings are fundamentally about empathy. They are a polite plea for more openness, not less; for more truth, not less. They allow taboo topics and the experience of hurt and pain, often by marginalised people, to be spoken of frankly.” This viewpoint, as in Penny’s formulation, often draws on feminist standpoint theory, which positions claims of what is true, and thus what/how it should be taught, within the domain of political struggle as opposed to received fact.

In the university context, this may manifest as an obligation for instructors to consider adjusting their curriculum by offering different examples or alternative assignments, as described by the law professor James Grimmelmann. Some opponents of this view argue that there are things that will make people uncomfortable yet must be taught, even at the risk of being called an asshole; others simply say that yes, they have a right to be an asshole in a free society.

This list of reasons is neither exhaustive nor exclusive, but it does incorporate some of the more common logics used to understand and argue about content warnings. Even if we understand how people argue about content warnings, though, a question remains: why is this controversy emerging now? I think there are two major and interrelated reasons:

Increasing precarity of educators and students alike

The precarity of educators, particularly at the university level, has been well-documented. Adjunct lecturers are typically hired on semesterly contracts and can be easily disposed of should a student complain about the content of their coursework, so their sensitivity to conflict should be obvious. But even tenured faculty, who historically enjoyed almost total security and freedom, have become comparatively less secure within the ranks of increasingly risk-averse, press-conscious, and bureaucratically managed universities, as in the cases of Steven Salaita and Laura Kipnis, whose polemical popular writings had serious professional consequences.

What has been perhaps less well understood is the parallel rise (for the same reasons) in the precarity of students. This is particularly true of graduate students, who face historically poor job prospects (and who, in my personal experience, have been the most aware of and engaged with content warnings): if you have poor chances of ever becoming a colleague, there is little reason to be collegial with professors who are speaking in a way that strikes you as traumatizing, uninformed, or assholeish. It is also true of undergraduates, who may a) fear that their inability to fully engage with troublesome subject matter will affect their grades, student-teacher relationships, and subsequent professional opportunities, and/or b) be, because of their subject position, sensitive to the (perceived) entitlement of the professors who have almost total power over them.

For example: a few years ago, two queer undergraduates at MIT approached me for advice on how to ask a professor to modify his computer-science curriculum after he included jokes about hermaphrodites in his videotaped lectures on object types. They were at once angry about his (in their view) tone-deaf and dismissive attempt at humor and fearful of his power to affect their grades.

Redistribution of power and deprofessionalization of everything

One cause (and result) of this precarity is the redistribution of power over who is qualified to make claims to, and subsequently propagate, knowledge. The trouble for civil libertarians is that, although we aspire to support ‘more speech,’ in many cases we have to functionally ‘pick sides,’ at least in terms of the consequences of the speech we support.

For example, most free speech organizations have historically opposed content rating systems, which are inarguably a form of speech, on the grounds that they will ultimately influence what kinds of movies and video games are made, a standpoint which sides with movie directors and game designers over watchdog groups and moral majoritarianism. Similarly, most of us routinely oppose parents who try to have books removed from libraries or syllabi, but defer to librarians and teachers who decide which books to buy or include in their curricula, because in practice the philosophy of ‘defending academic freedom’ means ‘defending the authority of professional academics.’

It’s instructive to note that many free speech organizations have substantial programs, and in some cases entire organizations, built around supporting ‘student speech,’ yet have (mostly) taken positions against student activists in the content warning case, instead favoring professional organizations like the AAUP or ALA. As Sara Ahmed has written, it is a puzzling contradiction to construct the figure of the contemporary student-activist simultaneously as a coddled, weak-minded millennial and a terrifying, all-powerful demagogue capable of erasing canons and destroying careers.

The broader phenomenon here, I think, is that higher education is suffering not only from a crisis of economic precarity, but from epistemic precarity as well. For years, scientists have watched as more and more people deny climate change or believe that vaccines cause autism, transforming professional consensus into controversies that often operate as proxies for broader cultural conflicts. I don’t think content warnings are the same thing as the climate controversy: one is a disputed practice, and the other a disputed fact. But I do think the evident disruption in higher education regarding who is qualified to know, and what is appropriate to teach, is another symptom of the same underlying condition of skepticism toward ‘the establishment,’ whether that establishment is an economic or academic elite.

What’s to be done?

I concluded my remarks with a statement of the concerns that face us and our work as member organizations. I worry that civil liberties groups, like other institutions of liberalism, are being confronted by an evident disjuncture between the principles and consequences of our favored approaches and a resulting crisis of indeterminacy in what we ought to do. To oversimplify somewhat, we are being asked (perhaps forced) to choose between competing visions of who should decide and influence what ought to be taught in the university classroom. As institutions, we have a loyalty to other professional institutions who have power; as activists, we have a loyalty to other activists who challenge it. In some cases, these loyalties are aligned, but in this case, they are fundamentally in tension.

In my own view, and in my own teaching practice, the way I have tried to resolve this tension is to treat students as capable, thoughtful adults. To me, this means respecting their time and intelligence by articulating what I’m teaching, and why I’m teaching it, as I prepare and present the syllabus. By publicly performing my own considerations, I communicate that I care about and value my students and their time. It also forces me to evaluate, as I review my own instructional materials, whether I’ve truly done the best job of teaching I can do, by seeking out the most compelling, persuasive, and necessary readings and assignments, even (especially) on challenging topics, and by being honest with myself about whether I can do better by revising them.

Instead of offering things I call trigger or content warnings, I’ve taken an approach like that described by Grimmelmann, and tried to establish and follow a set of best practices and standards that treat my students as the capable, resilient, and knowledgeable adults they are; and if they aren’t, to trust that treating them that way will help them become such. I’m doing this because, as far as I’ve been able to tell, it’s both the most ethical and most effective way to work with the students under my instruction. And I’m hoping that, as this conversation continues, the controversy over content warnings turns away from the perceived divisions between student and faculty toward a productive compromise (which, as the French sociologist of science Bruno Latour reminds us, etymologically means “promise together”) that improves the educational experience of both.