Creating Technology for Social Change

The Four Horsemen of the Free Speech Apocalypse: Emerging Conceptual Challenges for Civil Libertarians

Last April, I blogged about a talk on trigger warnings I gave as a representative of the Board of the National Coalition Against Censorship (NCAC), a nonprofit whose mission is to promote freedom of thought, inquiry and expression and oppose censorship in all its forms. Earlier today, at the request of Executive Director Chris Finan, I presented to the rest of the Board some early thoughts about ascendant challenges and emerging threats to those concerned with the freedom of expression. What follows is a lightly edited version of my notes for that talk. Epistemic status: uncertain, but trying to trace lines to see where they might converge. Extremely interested in feedback.

There is, ironically, a broad consensus that we live in a fractured public sphere. At the level of systems design, people worry about filter bubbles, echo chambers, and information cascades. At the level of ordinary politics, people worry about the ability to get opposing sides to agree on common facts, let alone effective policy. At the level of cultural coherence, canons are being challenged and authority redistributed. Whether you blame liberals or conservatives, the alt-right or snowflake millennials, there is a shared understanding that the questions of who can speak to whom about what are more hotly contested today than they have been in some time.

However, there are more profound risks on the horizon for those invested in traditional conceptions of, and defenses for, free expression. The purpose of this blog post is to briefly outline four interrelated challenges for free expression activists that can’t be solved by the old civil libertarian saw of “more speech == better speech.” To be clear, when I say these are challenges, I don’t mean they are necessarily good or bad developments; I just mean they present thorny problems for existing frameworks about free expression. They are:

  • a growing conviction (that I share) that more speech does not necessarily mean better speech;
  • the economics of attention making it harder to be heard;
  • automated content production swamping human expression; and
  • fake content that’s indistinguishable from real content.

Conceptual challenge #1: conversational health and detoxification
The core thesis of this challenge was put nicely by Melissa Tidwell of Reddit, in a New Yorker article regarding the company’s efforts to “detoxify” its community:

Melissa Tidwell, Reddit’s general counsel, told me, “I am so tired of people who repeat the mantra ‘Free speech!’ but then have nothing else to say. Look, free speech is obviously a great ideal to strive toward. Who doesn’t love freedom? Who doesn’t love speech? But then, in practice, every day, gray areas come up….Does free speech mean literally anyone can say anything at any time?” Tidwell continued. “Or is it actually more conducive to the free exchange of ideas if we create a platform where women and people of color can say what they want without thousands of people screaming, ‘Fuck you, light yourself on fire, I know where you live’? If your entire answer to that very difficult question is ‘Free speech,’ then, I’m sorry, that tells me that you’re not really paying attention.”

The framework of health and toxicity has also recently been adopted by Twitter, with CEO Jack Dorsey announcing initiatives to research the “overall health” of Twitter, a notable departure from the previously laissez-faire attitude of a company that used to describe itself as the “free speech wing of the free speech party.”

In the not-so-distant past, social media companies largely tried to avoid policing what their users posted on their platforms, citing safe harbor provisions and/or libertarian philosophies and praising the Arab Spring as the result of their publishing tools. Today, as companies seek to expand and diversify their user base (not to mention their engineering workforce) and confront the legal and economic challenges posed by their most noxious users, many platforms have rapidly shifted their internal value systems toward a more nuanced understanding of speech, one that moves beyond the simple (but common) conceit that more == better.

Conceptual challenge #2: the economics of attention overwhelming the economics of publishing
The core thesis of this challenge, argued persuasively by Zeynep Tufekci in her It’s the (Democracy-Poisoning) Golden Age of Free Speech, is that the relevant scarcity, and therefore the point of vulnerability, for the free expression of ideas is no longer the capacity to speak but the capacity to be heard:

Here’s how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms…The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all.

In a whitepaper titled Is the First Amendment Obsolete?, Tim Wu argues that this change in communications technology requires us to rethink the way we regulate speech, or else risk giving up on Constitutional approaches to improving the public sphere altogether. The paper is especially notable because it was published by the Knight First Amendment Institute itself.

A corollary to this argument observes that, since most publishing is paid for by advertising, i.e. attention/surveillance, platforms are economically incentivized to promote outrageous content. Certainly this is nothing new: yellow journalism and tabloids have turned a profit off this dynamic for decades. However, these processes are now optimized and individualized to a degree of power and precision never before possible. Which brings us to:

Conceptual challenge #3: automated content production
The core thesis of this challenge is that automated content generation, directed by the aforementioned economics of attention and advertising, will produce truly massive volumes of toxic, outrageous expression and swamp human expression with the proximately computational. In a haunting essay entitled Something is wrong on the internet, James Bridle falls down the rabbit hole of weird YouTube videos that, at least in some cases, appear to be computationally generated at massive volume in order to capitalize on the long tail of advertising dollars.

If smart scripts can reverse-engineer popular titles and keywords, and then mash pixels together to produce cut-ups of pop culture references, then Borgesian libraries of content can be manufactured and posted with no (or nearly no) human intervention. Nor is this dynamic limited to YouTube videos: algorithmic content generation and on-demand production mean that you end up with screen-printed t-shirts that read “KEEP CALM AND RAPE A LOT” by virtue of random pairings of nouns and verbs. As James Grimmelmann writes in The Platform is the Message, in “the disturbing demand-driven dynamics of the Internet today…any desire no matter how perverse or inarticulate can be catered to by the invisible hand of an algorithmic media ecosystem that has no conscious idea what it is doing.”
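To see how little machinery such generation requires, here is a minimal sketch of a combinatorial slogan generator in Python. The template and word lists are hypothetical stand-ins, not the actual code behind the t-shirts, which reportedly drew on much larger dictionaries:

```python
import itertools

# Hypothetical word lists; a real generator would pull thousands of
# verbs and nouns from a dictionary, with no human review of the output.
verbs = ["DRINK", "DANCE", "CODE", "SHOUT", "WANDER"]
nouns = ["TEA", "CATS", "PYTHON", "MONDAYS", "CHAOS"]

# One template and two five-word lists already yield 25 designs;
# scale the lists up and you get millions of candidate products,
# each of which can be listed for sale automatically.
for verb, noun in itertools.product(verbs, nouns):
    print(f"KEEP CALM AND {verb} {noun}")
```

A dozen lines like these, pointed at an on-demand printing service, can flood a marketplace with more designs than any human moderator could review, which is exactly the demand-driven dynamic Grimmelmann describes.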

When humans create perverse or disturbing content, we chalk it up to sickness or to creativity, and institutionalize or memorialize accordingly. But when computers do it, at the scale and volume made possible by digital reproduction and incentivized by the economics of advertising, the sheer flood of content may overrun the stream that people can produce, drowning distinctions between good and bad and obviating the idea of a “conversation” altogether, except as it occurs through algorithmic feedback.

Conceptual challenge #4: documentation that is fake but indistinguishable from real
The core thesis of this challenge is that new technologies capable of producing fake content indistinguishable from real content will cause a collapse of trust and/or rebuild it through invasive and surveillant technological means. Of all the challenges, I believe this to be the most profound and deeply dangerous. The unholy trinity of technologies that could totally destroy the concept of documentary truth includes:

  • Tacotron 2, Google’s new text-to-speech system that is virtually indistinguishable from a human voice
  • Digital doppelgangers, through which researchers have been able to generate convincing speaking faces from pictures and audio to make people “say” things they never in fact said
  • DeepFakes, a software package that allows moving faces to be mapped seamlessly onto body doubles

In a recent post for Lawfare, Bobby Chesney and Danielle Citron recognized the grim national security implications of these technologies. Grimmer still are some of the proposed solutions, like the concept of digital signatures embedded in cameras so as to track and verify the creators of videos, which, even if it worked psychologically (and, as the author of the linked article admits, it might not), risks building an even greater surveillance ecosystem, or undermining real (but unsigned) videos from everyday people.
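To make the proposal concrete, here is a minimal sketch of what camera-embedded signing might look like, using Ed25519 keys from the Python cryptography package. The key handling and workflow here are my assumptions for illustration, not any actual proposed standard:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Assumption: each camera holds a private key in secure hardware,
# and the manufacturer publishes the matching public key.
camera_key = Ed25519PrivateKey.generate()
public_key = camera_key.public_key()

video_bytes = b"...raw video data..."  # stand-in for a real capture

# At capture time, the camera signs the video bytes.
signature = camera_key.sign(video_bytes)

# Later, anyone with the public key can check that the footage is
# byte-for-byte unmodified since capture.
try:
    public_key.verify(signature, video_bytes)
    print("video verified")
except InvalidSignature:
    print("video altered or signature invalid")
```

Note that verification succeeds only for the exact bytes the camera signed, and only against a known device key. That is precisely the double edge described above: the same binding that proves provenance also makes every verified video traceable to a specific, trackable device.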

So, these are the four horsemen of the free speech apocalypse. While the current controversies about speech and expression are difficult enough to navigate, to me, these risks seem to approach the existential. People who believe in the value of free expression and free speech must plan to confront these challenges soon or risk having the moral and normative ground melt away beneath their feet.