This is a liveblog from the “Private Platforms under Public Pressure” roundtable at AoIR16 on October 23, 2015 in Phoenix, AZ. This is not a transcript but a recreation of people’s comments. Any errors are my own.
This roundtable featured scholars J. Nathan Matias, Tarleton Gillespie, Christian Sandvig, Mike Ananny, and Karine Nahon, who work on both critical and constructive approaches to defining the roles and responsibilities of platforms, the governance of those systems by users, corporations, algorithms, and states, and the question of where we stand in our public consciousness of what it means to have a new definition for, or a new socio-technical system called, a platform.
Each panelist reflected on what brought them to the research topic and also on the panel theme: What happens to private platforms when they are put under public pressure? They found much left to explore in the topic: many questions were raised and the need for more research and new approaches was clear.
Finding constructive ways to address platform problems and acknowledge the shared work of governance
J. Nathan Matias
Nathan is involved in putting platforms under pressure and thinking about how those platforms should respond. Earlier this year, Nathan worked on a report on Twitter harassment in collaboration with WAM! (PDF). The report was in part about auditing how well Twitter responded to different types of harassment, while looking into how outside groups like WAM! could perform peer support in such cases. In so doing, they identified the types of harassment Twitter struggled to respond to and looked at potential recommendations.
This summer, Nathan worked at Microsoft Research studying the practices of reddit moderators around the reddit blackout, when hundreds of reddit communities shut their forums down to protest reddit corporate policies and perceived shortcomings. He was looking not only at how moderators were putting pressure on the company but also at how they were subject to pressures from reddit and from other moderators. How did they make their decisions to black out, and what mechanisms, democratic and autocratic, did they employ? Nathan is trying to shine a light on moderators not only as volunteer workers who lead the governance of their communities, but also as people who find themselves caught in the middle of events like the crisis this summer.
Defining and pressuring platforms in ways that recognize their complexity and unique scale
Tarleton Gillespie
Tarleton has looked at how platforms have stepped into the role of addressing speech issues. He sees a tension between understanding the ways platforms serve as mediators and the question of how we should compel them to change, since there are real issues platforms are encountering that demand a normative response. This is only one of many dilemmas facing scholarship on platforms and public pressure.
What is the expectation of platforms to provide a service, and what constitutes a failure to provide service? There is hypocrisy in thinking about provision of service and also fair market practices when users may be treated as laborers. How are users excluded in different ways? And where do we see a kind of mission creep among these platform owners: when users come to a platform with one understanding of what it is doing, and then, when that understanding changes, their data persists?
Where is primary responsibility assigned: is it in needing to intervene when something goes wrong, or in needing to prevent those things from going wrong in the first place? Did the platform cause it to happen? Sometimes this issue of responsibility exacerbates the existing problem, in which platforms are the bottlenecks to addressing important issues.
Harder and more theoretical questions include: Do/should platforms serve a public sphere? Do/should they emphasize entertainment over seriousness? Are/should they be mission critical versus service critical? What happens when a platform makes a change to deal with a dire situation, and that change makes it exploitative of users?
All of this seems to boil down to platform responsibility, a phrase Tarleton is not sure he is comfortable with. If we are trying to understand the role that platforms embody and have a conversation about responsibility, how do we define a platform? That definition changes how we think about them. Is it about mediating exchanges?
Systems that facilitate exchange like Uber and Taskrabbit are similar to Facebook and Twitter in this kind of service role. And we have existing ideas about what it means to be (and to regulate) an artificial exchange system. The other metaphor for platform is as infrastructure, which significantly changes our perception of these issues. Thirdly, platforms are also media: they mediate culture and participation and we have certain expectations and literatures about what that means.
Another key area Tarleton is concerned with is the question of scale. Intermediaries operating at the scale of Facebook and Twitter put a taxing demand on legal and human responses to crises. When platforms think about their responsibilities, they have to think about system-wide responsibilities and solutions. Yet when we as the public or individual users think about problems, we think of them as individual incidents, e.g., they took down my photo or blocked these hashtags. If we believe platforms have to respond to issues at a human scale, maybe we also need to think about our concerns at the platform scale. Tarleton isn’t sure how to deal with that tension—and our own responsibility as researchers.
Lastly, what other values might drive the conversation differently than freedom of speech? What would it mean to focus on freedom of association? Maybe it’s about letting people interact with each other rather than about what they say. How does that change our view of platforms?
What would it look like for the US to actually intervene and regulate platforms?
Christian Sandvig
Christian’s work on internet infrastructures has led him into conversations about interventions that often focus on Europe, since the US has a non-interventionist tradition of regulating internet platforms. So he is curious: what would a US intervention look like?
Christian shares a quote defining infrastructure as something that other things depend on. But what is the threshold in the US to define something as infrastructure? We sometimes use the term utility. In telecommunication systems, we think about infrastructure as critical. These systems also have strategic value as national champions of the US: Facebook and Google enable other high tech opportunities for the country. Additionally, they have military and government roles to play.
In the US, we have a story about the telecoms being special, but to justify an intervention something special must happen. Christian quips that perhaps in the future we’ll have a “National Strategic Emoji Reserve or Ministry of Emoji.” This seems silly, but things now treated as regulated utilities were once novelties or luxuries and only eventually became critical, like the telegraph or the telephone.
The first step in getting from novelty and luxury to infrastructure is that the thing needs to be widespread. We are already there with online platforms. Next, are substitutes unavailable, so that users can’t move away? We don’t have a substitution problem: we have at least token examples of other social networks and search engines. Then there needs to be a public outrage, like when the Titanic changed how we think about radio.
What would be Facebook’s Titanic moment that moves the state to regulate them in a new way? Another way regulation is motivated is when a platform serves a function of the state and supplies the state with important goods. We are seemingly there with the surveillance systems and the provision of geospatial data that Google sells to the government.
It seems we are missing the Titanic moment though. So what would it take to motivate platform intervention? And if we don’t see this framing as effective, then what would be a more useful way to motivate the state to intervene?
Handling the constant evolution of platforms and the search for normative frameworks to support regulation
Mike Ananny
Mike is interested in the dynamics of change and the reasons changes occur or are justified. Platforms are constantly changing, and they change for public-like reasons, but we are not well versed in the dynamics of those changes or what they mean for the public. For instance, we see forces of power that can make changes sometimes but cannot at other times.
Empirically grounding Mike’s thinking are three cases. One is the hostage crisis in Sydney, where Uber’s algorithm increased the costs to individuals fleeing the area. There was widespread outrage, and Uber later apologized for the algorithm’s mistake and refunded consumers. A second case was news sites during the Boston Marathon bombing, which dropped their paywalls to allow more people to get breaking news about the crisis. And third was last week, when Airbnb ran sarcastic ads about how the City of San Francisco should use the tax money the company is now providing. They pulled the ads after a negative response, saying they got the tone wrong.
For Uber, platform response meant changing an algorithm. For the news sites, it was decommodifying an information resource. And for Airbnb, it was reorienting a public relations campaign. Mike argues it’s important to think about all the ways platforms change: not just technical but legal and rhetorical.
Mike finds the work of Debra Satz helpful here. She is an ethicist who coined the term “noxious markets”: sometimes things just should not be for sale. These include lifesaving medicines, coerced sex, body parts, etc. In some cases, it’s because people are so poor or so desperate that they accept any terms of sale. There is also a loss of individual agency in these transactions. Sometimes there are extremely harmful outcomes, both social and economic, that are violent toward individuals and society.
Mike wonders whether we can import a normative framework like noxious markets and see where it breaks down: when is a platform noxious? What sorts of exogenous shocks resonate as important or are ignored by platforms? When are things free? When are users expected to absorb costs? What would it mean to work in non-noxious ways? And how can individuals take on the responsibility of addressing noxious platforms: individual developers taking action, funders pressuring companies? Perhaps this normative framework can help the public and officials find a way to move from being reactionary to being regulatory around what is noxious.
Lastly, what constitutes the ethos of shared consequences (in the sense of Dewey’s idea of a public)? What is a shared consequence versus a private consequence? It’s our job as scholars to read into those platforms and find how they are constructing the notion of publics and responsibilities as they change over time.
Toward a view of platforms as ecosystems
Karine Nahon (respondent)
Karine wants to step outside the box of “platforms,” which she sees as distracting, and also to go beyond the low-level issues of infrastructure. She sees this discussion of platforms focusing too much on individual problematic cases or companies making mistakes under public pressure. Instead, we should see this topic as a much broader and more complex ecosystem with different forms of power working at different levels.
Can we find different levels of power being exercised similarly across different cases? Instead of focusing on how Facebook regulates harassment, let’s think about all the different factors that led to its reaction.
We often talk about the opacity of technology, and we as researchers are the white knights coming in to reveal what is behind the scenes. We too often assume that creating this transparency will solve the problem. But this deals with only one kind of power and one perspective on the problem.
Also, we often focus on big data and its problems of representation or its ethical conundrums. But too often we stop at simply naming these issues rather than dealing with them in transformative ways.
Finally, when we talk about markets, we are rationalizing the stakeholders: asserting that they are rational actors in their exchanges with and through platforms. But many of our exchanges are social and emotional and can’t be rationalized by markets. A better framing of these issues, of the role of platforms, and of how to regulate them should account for all of this.
Alex Leavitt: Talking about scale as Tarleton did, it’s not just about holistic scale and what happens when humans or algorithms respond. We are missing a discussion of the political economy of design within the companies. How do we move forward as researchers into political economy discussions when access to the companies is a problem? We aren’t on the ground to understand how developers and designers are doing the work to create what we experience as the problem.
Camille: In looking at Netflix, the company is involved in manipulating the discourse about concerns for policymakers. These include things like kids’ exposure to commercial content in the media they consume. Netflix argues it’s not commercial manipulation, it’s about genre, like Disney. While children can’t create their own profiles, the company can mine all sorts of data from them if they are in the Kids area and a profile is created there. Netflix frames this as being about consumer choice and value and what people want.
Karine: The issue of choice came up in sociology and political science in the 1970s. Portraying certain decisions by users as choice is really problematic. Platforms do exercise power in those situations. So how do different stakeholders have power in different moments?
Robert Gehl: We choose to work for American liberal corporations. And we think about these same questions when we work for a company. How are those analogous to the issues we see in platform responsibility?
Ted Coopman: Is this really so different from public space, and should we think about how we regulate those public spaces, like the mall, and the responsibility of the owners and operators of those spaces? More and more we are talking about it in terms of who is an employee. Social platforms provide the platform and we provide the content. How does that change the transaction when you are part of it? This prompts workplace regulation, which is a different set of rules and laws. And then we go back to the original formulation of utility. Have we reached a point where this is no longer a viable model: do we need a new model that accounts for employee and citizen and other roles?
Nick Seaver: Karine brought up the idea that transparency is so often seen as the solution. Transparency undergirds many critiques: if only we knew what was going on in this secret situation. When we do this, we think about these platforms as monolithic, yet we find platforms to have split or multiple personalities, doing a range of things that seem contradictory. But these are people at different levels making these decisions, and we need to understand how these individual people end up working on these responses.
Tarleton: I agree that the ethnographic impulse is a good one in this instance: we need to understand these organizations internally. We as scholars should also offer analyses that are not about the “secret sauce,” but rather articulate the nature of the problem in ways that don’t talk about platform problems in monolithic ways. We should represent that there are individuals working on multiple levels and with complex relationships and conversations. That said, we should also think about what common values are driving these decisions.
Audience Member: In the example of 23andMe, which received a cease and desist from the FDA, the company actually received state intervention, which shocked others in Silicon Valley. It seems that bringing health into the conversation changed the imperative to intervene, and allowed countries with different interpretations to regulate 23andMe differently. Can we bring our readings of these issues into such spaces that are more regulated to move this along?
Nathan: I want to address how we acknowledge the variety of other people involved in these cases. Many people have been delegated roles to take action in different cases both formally and informally. And while we may not see some of these cases as Titanic, moderators for instance are constantly facing Titanic moments at the scale of their communities.
Audience Member: I focus on cyberbullying and youth. There is very little talk here in the US about independent evaluations compared to Europe. These don’t have to be intrusive or require state action, but they can have real value.
Jathan: I work on smart cities, and the metaphor of platforms has extended to companies like IBM talking about smart cities as platforms. What do we even mean by platform, and why is this such an attractive model or metaphor to use? It seems like it has a lot to do with power. There is power in being a platform because it depoliticizes the stature of the company: it suggests they aren’t in the business of making active decisions. He is curious whether this metaphor of platform will soon break.
Nik: Picking up on Christian’s idea of what would intervention look like: what would non-intervention look like? You (Christian) know this because of how you have written about the Facebook Contagion study.
Sarah: I want to talk more about Nick Seaver’s point about the multiple personalities seen in platforms. How do the norms of different communities come into tension with the platform? Blocking Twitter in Turkey had a positive outcome for Twitter when there was a user surge thereafter. When do companies make decisions to serve a user base versus going in a different direction? There are changing relationships between users and platforms, and those shift with users’ global position relative to the platform.
Fenwick McKelvey: Three potential directions to take this work: 1) push forward the relationship between platform and virtual world, especially looking at case studies like League of Legends: when is something a gaming space and when is it a platform; 2) there are also connections between platforms and companies that change at the routing level, such as in the case of Netflix usage data; and 3) how are these issues territorialized and materialized in platforms like Uber?
Tatevik: How much change is actually made? It doesn’t look like things in privacy have changed, despite multiple outrages.
Audience Member: One of the things we haven’t quite addressed is that there has been a shift from the liberal framing of politics to protect rights to seeing corporations as protecting those rights. If Facebook is a corporation with preferences that collects users instead of the users with preferences going to corporations, how does that change our understanding of these issues?
Christian: We are talking a lot about platforms resisting regulation and users pushing for it. Historically, corporations have wanted intervention because it can protect them from competition and guarantee their right to property. This might be helpful because it doesn’t frame the problem as confrontational.
Tarleton: I feel like the language of platform change is often used to target platforms because it is an easy shorthand. How do we find language that appreciates how these relationships go beyond just platforms? Of course, being more sophisticated about how we talk about this ecosystem might mean we lose our ability to use language that can hold organizations accountable.
Nathan: Companies often engage with people who are trying to shift how they work. (He’s seen it in his work on Twitter and on reddit as well.) They have informal ties to different communities, and this shapes design decisions and can also cause some of the problems we saw over the summer.
Karine: I would like to see a macro theory that incorporates the structure of platforms and networks and the dynamics of content flows and relationships, through which we can talk about the actors and how they exercise power (something she has started writing about with regard to social media). We are missing the understanding of the ecosystem as an ecosystem.