
Beyond Accessibility: The Influence of E2E in Imagining Internet Censorship

In 1984 three MIT researchers published a paper titled “End-to-End Arguments in System Design.” They advocated a “design principle that helps guide placement of functions among the modules of a distributed computer system.” Certain network solutions, they argued, could “completely and correctly be implemented only with the help of the application standing at the end points of the communication system.” They called this rationale the “end-to-end argument.”

Early network engineers like Paul Baran, tasked at the RAND Corporation with designing a communications system that could “survive” even if elements of it were unpredictably destroyed, had already developed the basic principles of building a decentralized network. Baran and his successors designed protocols, most prominently packet-switching and best-effort routing, which could robustly navigate unreliable networks. Instead of sending each message once through a designated pipe, Internet Protocol breaks a message into packets that may travel several different routes, trusting the computers “at the ends” to piece them back together. And instead of the preordained path through a centrally switched telephone network, Internet Protocol sends each myopic packet “hopping” from router to router, each time asking if it is “closer” to its destination. Meanwhile, the routers act with the earnest goodwill of a small-town traffic cop, gently pointing each packet a bit further along its path. Internet Protocol generally distributes duties across many decentralized, rather than a few centralized, technological agents.
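
To make that division of labor concrete, here is a minimal, purely illustrative Python sketch, not real Internet Protocol code: the topology, node names, and functions are all invented for this post. Each hop knows only its neighbors and nudges a packet one step onward, delivery is best-effort, and only the endpoint reassembles the message.

```python
import random

# Invented topology for illustration: each node knows only its neighbors,
# never the whole path from source to destination.
NEIGHBORS = {
    "mit": ["backbone-a", "backbone-b"],
    "backbone-a": ["mit", "backbone-b", "stanford"],
    "backbone-b": ["mit", "backbone-a", "stanford"],
    "stanford": ["backbone-a", "backbone-b"],
}

def forward(packet, current, destination, hops=0, max_hops=16):
    """Each hop acts like the small-town traffic cop: it points the packet
    one step further and forgets about it. Delivery is best-effort."""
    if current == destination:
        return packet, hops
    if hops >= max_hops:
        return None, hops  # dropped; the network makes no promises
    next_hop = random.choice(NEIGHBORS[current])
    return forward(packet, next_hop, destination, hops + 1, max_hops)

def send_message(message, source, destination, chunk=8):
    # The sending end splits the message into numbered packets; each packet
    # may wander a different route through the network.
    packets = [(i, message[i:i + chunk]) for i in range(0, len(message), chunk)]
    delivered = []
    for seq, payload in packets:
        result, _ = forward((seq, payload), source, destination)
        if result is not None:
            delivered.append(result)
    # Only the receiving end has enough context to put the message back
    # together (and, in real life, to notice gaps and ask for retransmission).
    delivered.sort(key=lambda p: p[0])
    return "".join(payload for _, payload in delivered)

print(send_message("end-to-end arguments in system design", "mit", "stanford"))
```

When a packet is dropped along the way, nothing inside the network notices or cares; it falls to the computers at the ends to detect the gap and resend, which is the intuition the end-to-end paper would later formalize.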

[Figure: A traceroute follows a packet’s journey from MIT to Stanford]

This broad suite of protocols and practices inspired end-to-end’s authors. Yet, despite its ambitious title, their paper was markedly modest in its prescriptions. The authors argued only that error-control functions for application file transfers could be implemented completely and correctly only at the end points. Their article did not address network latency, throughput, or other important considerations. Nor did the authors clearly define “ends” beyond applications in their argument, an important practical limitation since, as Jonathan Zittrain has argued, ends are indeterminate: what is an “end” and what is an “intermediary” on the Internet depends on one’s frame of reference.
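
The paper’s touchstone example was a “careful file transfer.” The sketch below, written for this post rather than taken from the paper, restates that reasoning in Python: even if every link in the middle were perfectly reliable, a file could still be corrupted outside the network, so only a check performed by the applications at the ends can confirm that the transfer actually succeeded. The function names and failure rate are invented for illustration.

```python
import hashlib
import random

def flaky_transfer(data: bytes) -> bytes:
    """Stand-in for everything between the ends: usually faithful,
    occasionally not (a dropped bit in transit, a bad disk, a bad buffer)."""
    if random.random() < 0.05:
        return data[:-1] + bytes([data[-1] ^ 0x01])  # flip one bit
    return data

def careful_file_transfer(data: bytes, max_attempts: int = 5) -> bytes:
    """Only the applications at the ends can confirm the copy is correct,
    no matter how reliable each individual hop claims to be."""
    digest = hashlib.sha256(data).hexdigest()  # computed by the sending end
    for attempt in range(1, max_attempts + 1):
        received = flaky_transfer(data)
        if hashlib.sha256(received).hexdigest() == digest:  # checked by the receiving end
            return received
        print(f"attempt {attempt}: end-to-end check failed, retrying")
    raise RuntimeError("transfer never verified end to end")

careful_file_transfer(b"an example file worth checking carefully")
```

Lower-level reliability measures may still help performance, but in this framing they are optimizations; the guarantee lives at the ends.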

As an argument, however, end-to-end crystallized dull practices into shiny principle. Tarleton Gillespie has traced the spread and influence of end-to-end as an idea across borders, disciplines, and industries. Despite – or perhaps because of – the difficulty in nailing it down to any precise technological arrangement, “e2e” became a model for understanding the Internet. It sanded the rough edges of implementation down to the smooth contours of an ideal: that “intelligence should be located at the edges” of a network. The terms “intelligence” and “edges” were rarely explained by those who invoked the argument. Instead, the rhetorical package replicated like a virus through the digital discourse, as advocates from varying backgrounds and with varying agendas deployed or resisted e2e in arguments over what the Internet ought to be. e2e wrapped up the Internet’s sprawling inconsistencies into an extremely portable model. As an interlocutor remarks in Latour’s Aramis: “Do you know what ‘metaphor’ means? Transportation. Moving. The word metaphoros, my friend, is written on all the moving vans in Greece.”

End-to-end, in other words, became a dominant and widespread configuration of the Internet, a robust technological and social construction complete with operating manuals explaining how it could or should work. The rallying cry of e2e – “keep intelligence at the edges!” – imagined the Internet as smart nodes connected by dumb pipes. The influence of this configuration guided many of the digital debates of the last decade. Network neutrality supporters, for example, fought to keep the pipes “neutral” – that is, “dumb” – so that the edges could remain “smart.” The e2e configuration implied and assumed a means of use: far-flung folks, as intelligent edges, conversing “directly” with each other through open pipes. But this configuration also suggested a means of subversion: if the Internet delivers information between smart nodes through dumb pipes, a potential censor can subvert it by silencing a node or blocking a pipe.

As a result, access to the pipe became a key way to conceptualize censorship. What passed for Internet censorship in the 1990s and 2000s was usually associated with blocks and filters imposed upon individuals “at the edges.” Electronic Frontier Foundation cofounder John Perry Barlow publicly worried about “origins and ends [of packets getting] monitored and placed under legal constraint.” The Berkman Center’s canonical books on web censorship – “Access Denied,” “Access Controlled,” and “Access Contested” – invoked this central metaphor directly in their titles.

Even those methods of suppression which arose organically from users followed this model of making ends inaccessible. Distributed Denial of Service (DDoS) attacks, for example, operate by launching a dizzying number of requests at a server, enough to disable it. By demanding access over and over, they ironically prevent it. DDoS attacks are generated “bottom-up” by individuals, not imposed “top-down” by institutions, but they pursue the same effect, a kind of electronic heckler’s veto rendering a speaker (or at least, her server) inaccessible to her audience. Meanwhile, those battling censorship organized around e2e by creating alternate sites or paths to blocked edges. Projects like Tor tunnel under the walls erected by censors, while sites like Pastebin offer redundant locations where threatened materials can be found should the originals be removed.

The e2e configuration was further reinforced by earlier narratives of censorship and resistance. The ACLU campaigned vigorously against both removing books from libraries and blocking websites within them, appealing to principles of free access. Emphasizing the edges fit intelligibly within the American legal tradition of individual actors: the Supreme Court, in an early Internet case, favorably compared every networked individual to a pamphleteer, framing the pipes as the means of distribution. These social and legal traditions suggested stock heroes (the pamphleteer; the whistleblower; the learner) and stock villains (the autocratic state or corporate censor; the wild and deafening mob). They provided generative frameworks of compliance and resistance drawn from an analog world, which were then reinterpreted and layered back upon the digital.

The end-to-end argument, animated by liberal traditions, helped shape how censorship was understood, practiced, and resisted on the networked web. Most importantly, its configuration suggested accessibility as a central theme of use and consequently of subversion.

Yet perhaps we are moving beyond accessibility? I’ve described in other blog posts how some emergent methods of suppression seem to orient, not around whether an object is formally accessible, but around whether an object is effectively findable. These methods take advantage of the fact that we never really encounter websites “on the Internet,” but rather through certain systems which mediate between the person and the thing: tools which link people to things; pathways through which people travel to find things. The cavernous space of the Internet ends up collapsing to these tiny, two-dimensional conduits through which information actually circulates. Not the physical pipes, but the sociotechnical paths, the Ariadne’s thread which, when followed, connects us and, when severed, disconnects us, such that we remain adjacent but oddly, invisibly unavailable.