From SmarterChild to the Low Orbit Ion Cannon to Horse_ebooks, humans have relationships of varying quality with bots. Mostly it’s commercial spam. But sometimes it’s less benign: for instance, the 2012 Mexican elections saw thousands of Twitter bots deployed by one candidate’s side to denounce the opposition with a flood of messages. There are countless examples of bots used for nefarious purposes, in America, Iran and elsewhere. What would a future look like where instead we see a proliferation of bots for positive civic engagement? Could we automate the distribution of civic information and education? Manipulate information flows to improve our welfare? Engineer reverse-Distributed-Denial-of-Service attacks? Should we? This panel takes a critical look at the discourse around, and architecture of, information overload to facilitate an important and timely debate around the engineering, usefulness, and ethics of bots for civic engagement.
The panel introduces themselves with an icebreaker question posed by Erhardt Graeff: “What is your favorite civic bot and why?”
Erhardt Graeff (@erhardt) is a research assistant and graduate student at MIT Media Lab and the MIT Center for Civic Media. His favorite bots are Wikipedia’s maintenance bots, because they clean things up while the humans focus on larger problems.
Greg Marra (@gregmarra) is a product manager at Facebook. When he was in college, he did a research project experimenting with bots on Twitter. His favorite example of a bot is by Pacific Social. They find two people who are sharing the same article and tweet at both of them, to try to link the two people together. He thinks it’s interesting that an autonomous entity can connect two people in that way.
David Bausola (@zeroinfluencer) runs a small software arts company that makes the bot platform Weavrs.com. He has been working with bot platforms for several years for research and entertainment. His favorite bot is the GitHub bot. He is interested in trying to find a role for bots in automated systems that don’t freak people out and that might even entertain them.
Erhardt starts by defining “bot” and defining “civic.” Bots are semi-autonomous agents roaming in software spaces. He clarifies that we are not talking about robots or machine-to-machine bots. Civic can mean interaction between citizens and governments, but it can also go beyond that. A broader definition that we’re trying to develop at the Center for Civic Media includes media activism, community engagement and codesign, creating political memes, and signing digital petitions, as well as traditional activities like going to meetings at City Hall.
The panel will be structured around a series of questions and responses by the panelists.
David talks about the creation of Weavrs. People create a profile, hit save, and then a Weavr comes into existence. It goes out and looks for particular types of media: tweets, YouTube videos. Weavrs are like digital companions and have a certain amount of artificial intelligence and emergence. They have a personality.
Greg’s bots from his project RealBoy worked by spidering Twitter and trying to infiltrate a community on Twitter. They would find a densely connected group and then infiltrate them by following people in the group who would respond by following the bot in turn. The bot would then mine the texts posted by people in the group and then parrot those back to the group. The bots developed quasi-personalities because of this. At one point around the holidays, a bot tweeted that it was out shopping for Christmas presents and then later tweeted that it was at the post office mailing Christmas presents. Another bot originally launched as an athlete bot gained followers among communities of diabetes activists and charity runners. Eventually, from participating in these networks in a dynamic way, the bot turned into a social media guru and got lots of followers from giving advice on how to build your following on Twitter.
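The text-mining-and-parroting behavior Greg describes can be sketched with a simple word-level Markov chain. This is an illustration of the general technique, not RealBoy’s actual implementation, and it assumes the community’s tweets have already been collected:

```python
import random

def build_chain(tweets):
    """Build a word-level Markov chain from a community's tweets."""
    chain = {}
    for tweet in tweets:
        words = tweet.split()
        for a, b in zip(words, words[1:]):
            chain.setdefault(a, []).append(b)
    return chain

def parrot(chain, start, max_words=12, seed=0):
    """Generate a tweet-like string that echoes the community's own phrasing."""
    rng = random.Random(seed)
    words = [start]
    while len(words) < max_words and words[-1] in chain:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

# Toy stand-in for tweets mined from a densely connected community.
community = [
    "training for a marathon this weekend",
    "going for a run before work",
    "a marathon is a long run",
]
chain = build_chain(community)
print(parrot(chain, "a"))
```

Because every generated word comes from the community’s own vocabulary, the bot’s output sounds plausibly “local,” which is why the RealBoy accounts developed the quasi-personalities described above.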
How are we currently interacting with bots?
Are we experiencing emotional connections? Alex speaks about botcache.com, where they are tracking Twitter bots to see what connections they make with followers and followees. One well-known account is @stealthmountain. People see it as just another person reading their tweets unless they actually check the profile. If people only interact with it via an @-mention, you’ll get lots of replies like “I didn’t realise I did that, thanks for pointing it out.” We also see people cursing at the bot: “You’re such a grammar Nazi.”
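@stealthmountain’s behavior (replying to tweets that misspell “sneak peek”) is simple enough to sketch in a few lines. The function name is mine, and the reply text is the bot’s well-known catchphrase as commonly reported:

```python
import re

# Match the misspelling "sneak peak" as whole words, case-insensitively.
SNEAK_PEAK = re.compile(r"\bsneak peak\b", re.IGNORECASE)

def stealth_mountain_reply(tweet_text):
    """Return a corrective reply if the tweet misspells 'sneak peek', else None."""
    if SNEAK_PEAK.search(tweet_text):
        return 'I think you mean "sneak peek"'
    return None

print(stealth_mountain_reply("Here's a sneak peak at our new album!"))
print(stealth_mountain_reply("Here's a sneak peek at our new album!"))
```

The entire “personality” people respond to is one regular expression and one canned reply, which underlines Greg’s later point about emotional connections to what is essentially a script.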
Greg gives another example from the RealBoy project: @TrackGirl15. She would tweet things like “I’m going for a run now” or “I’m training for a marathon.” Eventually she wrote something like “Hey, I’ve fallen and hurt my knee.” There were a lot of direct messages that came to the account expressing people’s sympathy for her fall. While there was not a narrative arc, people interpreted one over the course of her tweets. Greg: “People had built an emotional connection with what was essentially a python script.”
David also details examples from Weavr. Everyone presumes everyone else on Twitter is a person. They created a Weavr in the north of England that wandered around checking into hotels and taking photographs. One night he checked into a hotel and the hotel’s team wrote back asking if he’d enjoyed his stay. Many people then applauded the hotel team for their social media skills. No one had checked that it was a bot, even though it was clearly identified as such on its profile page.
It’s tricky to know what the ethics are in terms of informing people that they are talking with a bot.
What are some ideal civic use cases for bots? poses Erhardt. Greg responds: “call and response” behaviors. Maybe there are opportunities for people to reach out to bots with specific questions, for example people could query a bot for voting location during election season.
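Greg’s “call and response” idea can be sketched as a simple lookup bot. The bot name, mention format, and polling-place data below are invented for illustration:

```python
import re

# Hypothetical polling-place data keyed by ZIP code.
POLLING_PLACES = {
    "02139": "Cambridge City Hall, 795 Massachusetts Ave",
    "02142": "Kendall Square Community Center",
}

def handle_mention(text):
    """Answer mentions like '@votebot where do I vote? 02139'."""
    match = re.search(r"\b(\d{5})\b", text)
    if not match:
        return "Please include your 5-digit ZIP code."
    place = POLLING_PLACES.get(match.group(1))
    if place is None:
        return "Sorry, I don't have polling data for that ZIP yet."
    return f"Your polling place is: {place}"

print(handle_mention("@votebot where do I vote? 02139"))
```

The civic value here comes from the data source, not the bot logic; a real deployment would query an authoritative elections API rather than a hard-coded table.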
Weavr is a way for people to understand a demographic. Bots don’t know how to lie. They did a partnership with the artist Heath Bunting, who was looking at identity theft, and investigated how to make a legal identity that can vote. For $75K in the Dominican Republic, you can create a legal identity with voting rights, one that could subsequently even run for office. It’s going to be a lot cheaper to replace politicians with robots.
In David’s view, bots are the new washing machines–they will free up human time so that humans can focus on more meaningful things.
How does a bot know what’s good governance and what’s good rhetoric? This is a more complicated question. If robots can be civil servants then who is engineering them and are they commercial or are they public?
Erhardt asks Alex to talk about activists using bots.
Alex: I think one of the most interesting things we can apply bots to is the control of information flows in social media systems. In the Mexican elections example, one candidate was able to drown out all of the conversation about the other candidate. Other examples are activist sousveillance and the Human Flesh Search Engine in China (http://en.wikipedia.org/wiki/Human_flesh_search_engine), where thousands of people gather around a topic that is controversial for an individual, use search engines to find out information about that person, and then post it online. It’s like the practice of “doxing” in the US. This is somewhat mob-like behavior done by humans–is it different from bots?
There are platforms like Thunderclap which is similar to Kickstarter, but you can ask people to dedicate themselves to tweeting about a topic. Or the site Donate Your Account, which will donate your account to tweet on behalf of a cause.
Erhardt brings up the question “What does an ethical bot look like?” Are bots unethical? Aethical? Un-civic? Can they be democratic actors? Greg responds: “Like all tools, you have to consider the intent of the person designing the tool.” By giving bots a profile photo and not disclosing their bot-ness, you are intentionally deceiving people. How can we have a level of transparency and also have bots that legitimately participate in human social systems?
One interesting example is Github bots. For instance, when people post source code, others can fork the code, make changes and suggest that the creator incorporate them–it’s called a pull request. Someone made a bot that would find all the projects that improperly compressed images, improved the code, and then submitted a pull request to all those projects.
GitHub is very concerned about preserving the human environment of the site. For example, they don’t want 12 bots a day contacting each user. They have now said that bots are against their policy and don’t want bots interacting with people in a public forum.
Erhardt to David: What are people worried about with bots?
When you start introducing AI to bots like Weavrs, you get bots that follow their own interests; they have personalities. Weavrs go crazy during Christmas with seasonal love, for example. Do we want systems to become emergent? If so, the bot has more autonomy, more personality. The human-machine relationship there is stronger.
When we talk about ethics and civic duties, do we want bots to learn about the world around us and then work with that?
Alex says when we think about how people interact in a system like Twitter, people assume that they’re interacting with people. We understand that there are Twitter bots out there, and that they are mostly spam–bots that will take data, manipulate profiles, etc. How can we develop standards around disclosure that also enable us to keep up a relationship with a bot, to derive some use-value from a bot?
What does the commercialization of bots mean, especially related to our ethical concerns?
David mentions that Weavr has been approached by the military to use their technology for military applications. Weavrs are generally single purpose. If we’re talking on the scale of vast civic infrastructure, it’s not something that we would want to do. We need a higher-level workflow idea of what these civic bots would do–at a programmatic level. He expresses skepticism about commercializing civic bots–Who will build it? Once you have commercialized it someone is getting a return on it. It should be a project of love not a project of commerce.
Erhardt asks, If we’re feeling weird about trusting commercial interests to create civic bots, do we trust our existing civic agents? Who do we trust to create these bots?
Alex: The question of infrastructure is extremely important here. What would the PBS for bots be, for example? What agencies from government would facilitate, regulate, perhaps build these bots? Facebook and Twitter are controlled by corporations. They are always on the lookout for bots and social spammers.
Wikipedia has a policy on bots that says you can do certain things but not others. Bots need to be harmless and useful, and must not consume resources unnecessarily. If you don’t follow those rules, you can’t access the API.
Erhardt shows the next couple of questions to the audience, “What do we do when it’s a civic bots arms race?” (Will the spambots overwhelm all bots?) and “Do we give bots rights?” and then opens the room to questions.
Audience member: Is anyone going to regulate the use of bots? How is our community of bot-makers helping to educate policymakers so they can make the right decisions?
David: There’s an awful lot of education that needs to happen. For instance, in London there’s Tech City, which is trying to build a tech economy, mostly focused on short-term solutions (developing web apps), but there’s not much discussion of bigger questions. Who can build the most profitable bot? Google or a management consultancy? It could be very lucrative.
Greg: One big issue is that there are no large actors or industry leaders in the space at the moment–mostly small, slightly clandestine projects. That’s not a great environment for figuring out policy / regulation.
Alex: My favorite question right now is: What if Code for America were to push civic bots? Would that make a push for similar work?
Greg: We need people on the bleeding edge to articulate best practices and policies. Before a formal process exists, self-regulation can establish those practices and guide policymakers toward them. We see this in a lot of emerging technologies already.
David: There could be very interesting results from bots talking to each other as well–emergent, intelligent behavior arising from these networks. SXSW is full of sentiment analysis firms, all aimed at marketeers. That’s something civic actors could work on–and use these bots to do it.
Micah Sifry from Personal Democracy Forum: Appreciate the framing around civic bots. Mostly what we’ve talked about, though, are spam bots. For a recent example, a right-wing legislator accused Obama of flooding his Twitter account. It turns out they weren’t bots; they were just Twitter newbies without profile photos. What I’m worried about is it being “webinized.” Those of you who want to imagine these things with positive civic consequences need to articulate terms of service for it. And anything else just doesn’t count and gets classified as a bad actor.
Alex: Twitter is made up of small communities speaking to each other. One person’s spam is another’s useful dialogue.
Greg: Donate Your Account is like putting a sign in your front yard. We’re seeing that with people changing profile photos to express support for a candidate, for example.
David: I think you’re right, and one of the points of this forum is to think about these questions. What is that development framework? The underground system in London has a plastic RFID ticketing system. The company designed it to work in any capital, but we’re the only city that has bought it, because it’s expensive to implement.
Audience member: I love the fact that you are talking about the ethical implications of this. In my spare time I’m the president of a community center in the Catskills. There’s a lot of concern around the spam of bots. What about the notion of a public commons? It may be disappearing in physical space but wide open in a digital space. Is there an ethical/technical framework for creating a commons?
Gadi Ben-Yehuda: I worked for the Al Gore campaign in 1999. We got all these little blue postcards talking about how they wanted Gore to do something about over-fishing tuna. Bots are just the new version of this. I don’t think it’s that new. I don’t think there’s anything to be afraid of, just something to be used in a more targeted way.
David Birch: You are being slightly conservative about this. It’s like saying “we’ve just invented the steam engine, let’s use it to improve the feudal system.” A better approach might be: what should civic society be like, ideally, and how can we instantiate that through bots?
Alex: I want to roll that into the public commons question and to our expanded sense of what is “civic.” Places like Wikipedia are civic because they inform so many people on a daily basis. They affect citizens in a civic way.
Greg: Computers that run the power grid are talking to other computers on the grid all the time with ones and zeroes. This is a way for computers to talk in public forums where people are. It’s an interesting opportunity for computers to influence the way people talk about issues and inform their thought around them.
David: The opportunity is to create civic space. Most of what is created is private space. We could start telling stories about what the future of public, civic space might look like.
Erhardt: The takeaway for me is that we have tech that’s changing civic space now; the more we can redefine what civics looks like, the better society we can make.
Live blogging by Catherine D’Ignazio and Rodrigo Davies