Creating Technology for Social Change

Understanding, Responding to, and Resisting Algorithmic Management: Min Kyung Lee on Uber, Lyft, and Human-Centered Algorithm Design

What are the impacts of algorithms on workers, and how do workers respond to being managed by algorithms?

Today at the Cooperation Working Group at Berkman, we welcomed Min Kyung Lee, a research scientist at CMU’s Center for Machine Learning and Health. She shared her work to identify biases in machine learning and her CHI 2015 paper on The Impact of Algorithmic and Data-Driven Management on Human Workers, focusing on Uber and Lyft drivers.

 

Imagine your typical morning routine, Min asks. Your alarm clock goes off, your coffee machine turns on, and eventually you turn the alarm off. While you wait for the coffee to brew, you might browse online news. When the coffee is ready, you pick up your cup and head to work. How many algorithms have you interacted with? The coffee machine has an algorithm that knows when to turn on; the key remote has an algorithm that knows how to unlock the car; your news might be personalized; and of course your alarm clock has an algorithm too. Your workplace might even set your start time with an algorithm.

 

These algorithms quietly govern our everyday lives, even as governments themselves take up algorithms: Min mentions predictive policing, immigration systems, and smart-city resource management. Because algorithms have such an important place in our lives, it’s important to understand them from a human-centered perspective.

 

An increasing body of literature is documenting problems of bias in algorithms, especially in search and advertising. Min’s research asks how we can integrate human decision-making into algorithm design, taking a more human-centered approach. Today, she shares a paper she recently published about algorithmic management at Uber and Lyft.

 

In platform economies, algorithms often manage the work of the people who provide services. For example, Uber and Lyft use algorithms to match customers with drivers and to dynamically change fares based on demand. Both services also use quantified metrics collected through the app to evaluate drivers, including the share of ride requests a driver accepts and the driver’s rating. If a driver’s metrics fall below a particular threshold, the driver receives a warning from the company and might be suspended. Drivers who do well on these metrics might be offered a special role, such as mentoring other drivers. These workers rarely interact with company staff; the companies often have only a few people managing hundreds of drivers.
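As a rough illustration of the kind of thresholding described above (not the companies’ actual logic), here is a minimal sketch; the metric names and cutoff values are hypothetical.

```python
# Hypothetical sketch of the threshold-based evaluation described above.
# Metric names and cutoff values are illustrative, not Uber's or Lyft's actual rules.

def evaluate_driver(acceptance_rate: float, avg_rating: float) -> str:
    """Map a driver's quantified metrics to a management action."""
    acceptance_threshold = 0.80  # assumed minimum share of ride requests accepted
    rating_threshold = 4.6       # assumed minimum average passenger rating
    mentor_rating = 4.9          # assumed rating above which a special role is offered

    if acceptance_rate < acceptance_threshold or avg_rating < rating_threshold:
        return "warning_or_suspension"  # metrics fell below the threshold
    if avg_rating >= mentor_rating:
        return "offer_mentorship_role"  # high performers may be offered a special role
    return "no_action"

print(evaluate_driver(acceptance_rate=0.75, avg_rating=4.8))  # -> warning_or_suspension
```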


How do workers feel about being managed by an algorithm? To answer this question, Min interviewed 21 drivers and triangulated the findings by interviewing 12 passengers and analyzing 128 posts to Uber-related forums. First, drivers found it difficult to say no to requests, even when they were asked to drive to locations where they didn’t feel safe. Drivers often created workarounds for these algorithms, for example turning off “driver mode” in bad neighborhoods to avoid jobs they didn’t want, or staying in residential areas to get trips from home to bars rather than from bars to home. At other times, they looked at the map and moved their cars away from other drivers. In one case, Lyft drivers discovered that the algorithm was more likely to give them distant requests the longer they were logged in, so they frequently logged off to avoid those requests. And since the algorithm doesn’t explain its reasons, drivers often declined requests when they saw that other drivers were closer, even though there might have been a legitimate reason they had been matched.

 

Designers of algorithms often assume that people are rational actors who will behave in a way that is optimally efficient. But Min found that drivers often diverged from those expectations.

 

Evaluation and rating systems often treat all rejections the same way. For example, women drivers often declined requests from men with no profile photo, and were penalized for it. Drivers also felt that passengers might blame them for factors outside their control (surge pricing, running late, etc.). As a result, once drivers got above the threshold for being dropped, they tended to develop a detached, “hakuna matata” attitude toward ratings: they described getting over their worries and no longer trying to improve their scores. Even so, drivers often shared tips online for how to improve their performance ratings.

 

What are the implications of these findings?

 

Min finds that the trend toward algorithmic management means the quantification of input and output behaviors and a “scriptization” of management roles. Min argues that quantification should be done carefully, reflecting multiple stakeholders’ points of view, and should be combined with qualitative feedback or at least multiple metrics. Next, good managers consider important nuances and make exceptions, but algorithms aren’t very good at that. Finally, algorithms and their interfaces should be designed to account for workers’ diverse motivations in order to be successful.

 

How can we make the development of algorithmic technologies more human-centered? Min describes “Socius,” a smart-city technology for supporting the homeless population. Every year, 3.5 million people experience homelessness. Min argues that support for the homeless is inefficient because support organizations can’t coordinate in real time. The Socius project (with Berkeley & UCLA) uses sensors to detect the locations of homeless people so that mobile food pantries can be deployed to support them and grocery stores can effectively supply them.
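To make the coordination idea concrete, here is a minimal sketch of dispatching each mobile pantry to the nearest sensed location; the coordinates, data structures, and greedy nearest-neighbor rule are assumptions for illustration, not the actual Socius implementation.

```python
# Minimal sketch: send each mobile food pantry to the nearest sensed location.
# Coordinates and the greedy nearest-neighbor rule are illustrative assumptions,
# not the actual Socius system.
from math import dist

sensed_locations = [(34.05, -118.24), (34.10, -118.30)]  # hypothetical sensor readings
pantries = {"pantry_a": (34.06, -118.25), "pantry_b": (34.00, -118.20)}

# Assign each pantry to the closest detected location (straight-line distance).
assignments = {
    name: min(sensed_locations, key=lambda loc: dist(loc, position))
    for name, position in pantries.items()
}
print(assignments)
```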

 

Min’s third project focuses on empowering people to make better use of algorithms through questions and visualizations. Many people have access to sensors like Fitbit, but how can they use that data to make meaningful decisions for behavior change? Counselors and personal trainers often ask open-ended “why” questions to understand a client’s motivations. Min built a system to compare two approaches: some users are asked about their motivations, while others are not. Users often gave much more detail when asked their reasons a second time. After two weeks, Min found that people who had been asked to reflect on their reasons walked on average 20 minutes more per day than people who had not. More recently, she’s been doing related work on systems that doctors can use with patients to discuss genetic issues in their treatment.

 

Questions:

An attendee asks how they chose the drivers. Min recruited through social media posts, paper flyers, and posts to online boards, and interviewed all 21 people who signed up. Another attendee asks if they could do interviews in the car. Min tried it, but no one followed up.

 

Someone asks if anyone has compared the reasons for rejecting human management versus the reasons for rejecting algorithmic management. Min wasn’t aware of any work that directly compares human and algorithmic management. Anecdotally, many taxi drivers who switched to Lyft felt that human managers and operators tended to favor some drivers over others, intentionally or unintentionally.

 

When people talk about the algorithm, how do they talk about it? As the company, or as a system? Min was very curious about this. People tend to talk about the engineers more than the algorithms, but when there’s a technical glitch, the technology gets the blame.

 

How do people on Uber or Lyft find the online communities of other workers, someone asks. Both Uber and Lyft have official forums that the companies moderate in a very limited way, says Min. Some people also use tools like Trello or voice chat to communicate with other drivers. Someone asks whether drivers ever talk about labor organizing. Min notes that when they want to discuss more political topics, they move the conversation to independent forums like Facebook groups.

 

How would we design an algorithm to be more human-centered, asks a participant. The big challenge, says Min, is that some people argue for transparency, but there are obvious reasons why it would be hard to make these systems open. There is a real concern that people would game the systems, but on the other hand, a lack of transparency can generate mistrust. Min talks about the idea of setting goals that are informed by the different stakeholders involved; that’s the development practice she’s most interested in focusing on.

 

I asked Min whether Uber and Lyft drivers are aware of all the data that the companies collect. Drivers often assume that the company has information on absolutely everything on their phone, except when it’s turned off. Min doesn’t actually know what data the companies collect. She did have a chance to talk to the people who create the surge pricing algorithms. The company does a wide range of things to influence drivers, including showing drivers different surge pricing screens to try to influence them to move to certain places or take particular trips.