How do youth allies promote young people’s critical thinking on privacy in informal learning contexts in the Americas? This blog post is part of a series showcasing the work of different organizations at the intersection of youth development, digital rights, and online safety.
What: A social media helpline for schools
Mission/vision: To help U.S. schools and districts reduce cyberbullying and grow student safety and positive school climates
Where: Based in Salt Lake City and serving schools nationwide
Since: 2015 (based on research since 2000)
Years of operation (as of February 2018): 2.5
Works in the field of: youth online safety
Highlight quote from the interview: “We are trying to humanize the process, get educators to see that it’s not really about technology. Every incident involving young people is unique – as individual as the people involved. We ask administrators who call what’s going on, what platform, if local media have reported on it, if they’ve reported it to the app or internet company, or if they need help with that. […] In order to do any of this, often they need help from students because they don’t know how to use the app. In most cases students have brought them the issue because they don’t like the negativity either. So we encourage them to work with their students and develop their digital leadership.”
Interview with Anne Collier, iCanHelpline.org and the Net Safety Collaborative
I want to start this post by disclosing that Anne’s views, expressed through her writing and our conversations, have helped me articulate a vision of youth safety that supports, rather than undermines, youth agency – and I am honored to open this blog post series with her most recent initiative. I am also grateful for all her feedback on the first draft of this post.
Anne is a youth advocate; the author of one of the most robust, nuanced, and informative sources on youth and media in the United States (and anywhere, in my Latin American opinion); and a true connoisseur of the evolution of advocacy and research in the field of youth and technology. Through twenty years of work on youth and online safety, Anne looked for practical approaches that served youth, and it became clear to her that the missing piece in the U.S. was an Internet helpline. In 2016, she founded iCanHelpline – the topic of this interview.
iCanHelpline is based in the United States and run by The Net Safety Collaborative. It is the place “where schools and districts can call or email to get help in resolving problems that surface in social media – problems that threaten students’ safety such as cyberbullying, impersonation, harassment and sexting.” This means that the helpline does not work directly with youth, or even with parents: it works with school staff.
For the purposes of this series of interviews, I decided not to focus exclusively on allies that work directly with youth; working with other audiences can be key in youth allyship, and iCanHelpline’s strategic decision to work with schools supports this rationale. “My research through the years has shown me that most messaging has been about kids with engaged parents, and a significant proportion of our kids don’t have that support but need it. Those at-risk kids may or may not have engaged parents, but virtually all of them are in school. It seemed logical to start there.”
Why a helpline? Anne recalls that, in 2006, when kids were adopting MySpace and other platforms, a lot of initiatives were undertaken to fill in the knowledge gaps for parents, policymakers and media, from taskforces to Anne’s own writing on Net Family News. To her, “now, at the beginning of this decade, it felt more and more like we could keep writing and trying to guide parents, but it wasn’t going to get us anywhere”. Anne looked at practical approaches that served youth in English-speaking countries, and found early examples of helplines in the United Kingdom, Australia and New Zealand.
In addition to these countries, the European Commission had funded helplines in many EU countries; however, the United States did not have what’s called an “Internet helpline” yet. Anne worked with other colleagues from the world of net safety to set up a pilot, modeled after the UK’s internet helpline, and explore different sustainability models.
This is a blog series about youth and privacy, but Anne considers her work to be in the field of net safety. In a changing landscape of framing concepts, from the media literacy of the last decade to the digital citizenship of today, Anne believes that being deliberate about how we frame youth safety is key to recognizing youth agency in the process:
“In the first 15-or-so years of Internet safety, all the messaging, from politicians to civil society organizations, was along the lines of ‘be careful with what you post as it can come back to haunt you.’ It was all about consequences to oneself and online in isolation – keeping yourself safe, your personal information private, your ‘digital reputation’ positive. Youth were represented almost entirely as potential victims. There was no focus on you as a stakeholder in a community – online, offline or both – or as a participant in keeping things safe for yourself, your peers and your communities. Agency wasn’t even a component of ‘digital citizenship,’ which, at least in the U.S., has typically been about good digital behavior for the purpose of ‘classroom management.’
“Thankfully, there’s increased discussion about the importance of resilience in sustained wellbeing, online and offline. For too long in the public discussion about youth online safety, we neglected the development of resilience and other internal safeguards such as media literacy and the skills of social-emotional learning (or “social literacy”) in favor of surveillance and control: parental control tools, rules, policies and laws. These have their place, but there was an inherent imbalance. They’re all external to the child and send the message that only outside forces, never their own and their peers’ resources, are what keep them safe.
“Infrastructure certainly plays a safety role too,” she said, referring to one of “seven properties of safety in a digital age” she proposed in 2012, “providing users with the tools and know-how to counter social cruelty, report abuse and take responsibility for their own and each other’s wellbeing and that of their communities.” Through her interaction with major internet companies, she’s seen progress there, but “still not enough support on any adults’ part for young users’ agency. All the stakeholders – parents, educators, companies, policymakers and, through the conditioning of 20 years of Internet safety messaging, even youth – default too much to thinking that adult-to-child instruction about, and adult monitoring and control of, the online part of their lives are how that part stays positive and good.”
Bringing fresh perspective and focus to youth agency is recent work on researchers’ part aimed at adding the “digital” piece to the UN Convention on the Rights of the Child, Anne said. She pointed to a special issue of the journal New Media & Society, “Children and young people’s rights in the digital age: An emerging agenda.” The editors, Profs. Sonia Livingstone in the UK and Amanda Third in Australia, “highlight a crucial policy imbalance worldwide where, they write, ‘Efforts to protect [youth] unthinkingly curtail their participation rights in ways they themselves are unable to contest.’ The imbalance they’re referring to,” Anne continues, “is an over-focus on their rights of protection, which jeopardizes their agency.”
So “because social media use is as individual as anyone’s social life,” she says, “maybe the most practical way to educate adults about the pluses and minuses of teens’ social media use is on a case-by-case basis, through a helpline that meets caregivers’ need to resolve problems when they arise.” By taking calls from school personnel trying to address cyberbullying and other social cruelty online, Anne and her helpline collaborators “are trying to steer schools away from defaulting to law enforcement and see if they can work on a solution with the students who want the problem resolved.” Anne says nearly two-thirds of the cases they’ve been called about came to administrators from students, who don’t like drama and social cruelty any more than adults do. By working with students, she said, administrators see that students are “part of the solution much more than they’re part of the problem,” honoring their agency and potential for digital leadership. Internet helpline work, she said, is “much more about adolescent development than crime and punishment, and we want to see more and more schools focus on restorative rather than punitive approaches to online problems as well as offline ones.”
School personnel call the helpline most often to try to get harassing content taken down. “Which is fine. Let’s meet that need, and in the process send the message that even problems in and with tech are actually more about humanity than technology. That’s not always easy to hear, but our process is simple. We find out what’s going on, what platform’s involved – sometimes it’s more than one – if local media have reported on it, if schools have reported it to the internet companies and if they need help with that. […] And in order to do that, many times they need help from students because they don’t know how to use the app.”
Once schools have reported the problematic content, iCanHelpline leverages its relationships with different internet companies to help expedite the process. Anne says that companies are “typically very responsive” in getting harmful content deleted.
This is all part of the complaint escalation model, which has complications: It means that only users who understand the abuse reporting tools or have access to the correct intermediaries are likely to see a prompt response to their reports, largely excluding users outside the global north (Athar, 2015). It also means that intermediaries like iCanHelpline and other helplines can’t always meet callers’ expectations because they can’t themselves act on Internet companies’ Terms of Service or community guidelines; only the companies themselves can. And these intermediaries bear the responsibility placed on them by the public without being able to guarantee a satisfactory outcome and without remuneration from the companies for making their users’ experiences safer.
But the fact is that, as of today, companies still struggle to respond to requests in a timely manner in the face of masses of user-generated reports, and the role of intermediaries like iCanHelpline is essential both in helping companies address time-sensitive issues more promptly and in helping users understand the reporting options available to them. “Schools and other institutions responsible for user safety don’t understand social media companies and systems, and a lot of the reports companies receive are not actionable because of a lack of context for what’s being reported,” Anne said. “Most of the reports are what the companies call ‘false positives,’ coming in with inaccurate or inadequate information.”
In other countries, many of these helplines are largely government-funded, but Anne says she’s not convinced this is the right funding model in the United States – at least not now, in a politically charged environment.
“I think, ideally, support comes from a consortium of companies whose users are receiving help from these intermediaries around the world. But it’s complicated,” she adds. “Societies don’t yet understand that there’s this new intermediary layer that helps both users on the ground and the services in the cloud. Users get help and perspective, companies’ moderation teams get pre-screened context – and this is on top of the traditional help layer, with vertical-interest help services for suicide prevention, mental healthcare, support for domestic violence victims and all the other established services for offline issues.
“The new middle layer is unprecedented, so the business model has not yet been figured out, and meanwhile these companies, some of which have users in every country on the planet, are getting requests for funding from an unimaginable number of NGOs. They need some solid analysis of the safety ecosystem – the education parts for prevention and the helplines, hotlines and law enforcement agencies for intervention. I’d like to see an international gathering of representatives from both the prevention and intervention sides.” For now, iCanHelpline is operating under the subscription model.
With more funds, “we’d do a lot more marketing: it’s all about uptake and letting schools know that this is available to them. But that’s complicated too, because many schools still think of cyberbullying and other problematic digital content as ‘off-campus speech’ that isn’t their responsibility to address,” Collier said. “Another challenge, I think, is that schools are conditioned to believe that nothing can be done about harmful digital content. The social media companies are really stepping up now, but they weren’t as responsive in social media’s first decade, so schools, parents and users in general came to think they were on their own. Not only is social media, not to mention a service like this, hard to wrap their brains around, but they’ve developed stopgap measures that we don’t think really work for social cruelty online – like calling in the tech coordinator, the school resource officer, or the district Title IX lawyer. So little of what happens in social media is actually about tech, and it’s certainly not something you call the cops about. But people don’t know that.”
In an independent evaluation of iCanHelpline’s service, 93% of the educators who called were “extremely satisfied” or “very satisfied.” I personally like to think that these are all users who will never again assume that nothing can be done to improve youth’s experiences online.
“The public discussion and the news media, with their tremendous negativity toward young people’s use of social media, even when contradicted by research and the new sociology of childhood, do frame youth as victims. That framing is rooted in old-school consumer media culture – the previous media era, not this one. Not only does it not apply to young media users today, it disempowers them. That should not be,” Anne said. A content takedown request becomes a pretext to promote understanding among educators and to encourage them to collaborate with their students to resolve problems together – and, ultimately, to enable youth to improve their experiences online, rather than leaving them alone with their devices or taking their social tools away from them. For Anne, this form of practical work serves all stakeholders, including the youngest ones.