Part I: Background

    What Is Self-Harm Monitoring Technology and How Do Schools Use It?

    Schools often adopt self-harm monitoring technology with the best intentions: to help keep students safe and improve their well-being. However, if implemented without due consideration of the significant privacy and equity risks posed to students, these programs may harm the very students who most need support or protection, while failing to fulfill their intended purpose of preventing self-harm.

    Before adopting self-harm monitoring technology, schools and districts should understand the risks it can pose to students’ privacy and safety, take thoughtful steps to mitigate those risks, and carefully weigh those risks against any benefits.6 After weighing the equities, some schools choose not to adopt this technology. Schools that do choose to adopt it must identify and implement strong privacy and equity practices.

    Why do schools use monitoring technology?

    Schools generally use monitoring software with two goals in mind: complying with the law and keeping students safe.

    Legal Compliance

    Most schools adopted monitoring software long before self-harm monitoring software was available, in order to comply with the Children’s Internet Protection Act (CIPA). When CIPA was enacted more than 20 years ago, the role of software was to block access to obscene or harmful content, and monitoring typically took place in a computer lab, where teachers and school staff could view, in person, the content students were accessing on their school computers. Today, schools across the country provide students with various options to learn through technology, including requiring students to bring their own devices to school7 or providing them with school-issued laptops, tablets,8 or mobile hotspots,9 dramatically increasing the breadth and invasiveness of monitoring that can occur. Districts have interpreted the type and extent of monitoring the law requires unevenly, ranging from fairly minimal approaches to far more extensive ones.10 The Federal Communications Commission (FCC) has yet to publish guidance on CIPA and monitoring. In addition to the lack of guidance on CIPA’s practical application, there is also no guidance on how CIPA interacts with the Family Educational Rights and Privacy Act (FERPA), a federal education privacy law that grants parents and students specific rights to student education records (see the discussion of FERPA’s application to student monitoring elsewhere in this report).

    Remote learning during the COVID-19 pandemic (see Box 3 discussing the effect of the pandemic) has only increased students’ use of and reliance on school-mediated technology, especially take-home internet hotspot devices issued by schools to help close the digital divide. In response to the pandemic, in August 2021, the FCC announced more than $5 billion in funding requests from schools and libraries for 9.1 million connected devices and 5.4 million broadband connections through the Commission’s Emergency Connectivity Fund.11 Without clarity on CIPA’s requirements, schools may unintentionally over-surveil and over-collect sensitive, personal information about students or their families in an attempt to comply with the law. For example, monitoring on school-issued hotspot devices brought home by students may not be limited solely to school hours, and may capture the internet activities of not just the student but also other members of the household.

    In addition to CIPA, schools may be subject to state-level filtering and cyberbullying laws that require them to implement filtering and monitoring technology to ensure that students access the internet safely for school purposes.

    Keeping Students Safe

    In addition to legal obligations, schools want to ensure the wellbeing of their students. The internet has enabled access to inappropriate content, bullying in cyberspace as well as in school hallways, and the sharing of inappropriate images. The rapid adoption by schools of communication and collaboration tools from Google and Microsoft, driven recently in part by remote learning needs,12 has also generated large volumes of student communications and digital content. Because of these factors, monitoring technologies appeal to many schools, families, and other education stakeholders who seek to know what students are doing online,13 including identifying when students are facing mental health concerns and particularly when students are looking up information about self-harm or suicide.

    Many policymakers and educators hope that these monitoring systems can help schools identify students at risk of self-harm or suicide so that schools can direct them to help and resources they might not otherwise receive. For example, in early 2021, a Florida legislator sought funding for schools across the state to adopt the monitoring provider Gaggle to “protect Florida youth from suicide and self-harm.”14 Similarly, the North Carolina Coronavirus Relief Act 3.0 made $1 million “available to public school units to purchase one or more Gaggle safety management products to enhance student safety while providing remote instruction.”15 Local education leaders also see a need to adopt self-harm monitoring systems. In 2019, for example, a school system in Wilson County, Tennessee, expanded its monitoring system, designed initially to detect violent threats to school safety, to also scan student-created content on school devices, such as emails and online posts, for signs of self-harm. A counselor in the school district described the monitoring system as generating red flags in response to keywords, including “self-harm,” “suicide,” and “overdosing,” or phrases such as “I just want to cut myself.”16 Importantly, the district noted that the program was incorporated into a larger process—when such online activity is identified as a potential source of harm, counselors can then perform a risk assessment, involve parents, and offer mental health resources. In the first two months of 2019, Wilson County schools told News4 Nashville that they had identified 11 cases requiring intervention using their expanded monitoring system, although these cases were not limited to suicide-related comments and also included language related to drug use and sharing inappropriate photos.

    Self-harm monitoring companies and the media have shared similar accounts and experiences from other school districts as well. In Caddo Parish Public Schools in Louisiana, the district’s instructional technologist reported that the self-harm monitoring system Lightspeed Alert helped identify a student contemplating suicide during the pandemic.17 In Las Vegas, a 12-year-old student was flagged by his school after he used his school-issued iPad to search for “how to make a noose.”18 Neosho School District in Missouri told NPR that it identifies a struggling student at least once per semester, enabling early intervention.19 These anecdotes illustrate just a few of the compelling reasons why schools and districts may want to adopt self-harm monitoring technologies.

    Charged with the care of children, schools have clear incentives to look for straightforward indicators of self-harm risks; they would certainly want to catch students messaging classmates with a plain intention to harm themselves, or students querying a search engine for ways to die by suicide. But those circumstances—in which there is a clear, imminent danger of a student about to harm themselves—are fortunately rare, and scanning for self-harm using monitoring systems often seeks to identify situations that are much more ambiguous.

    What do schools monitor?

    Monitoring technologies generally work by scanning and flagging (marking for action by the system based on certain criteria) students’ online activities and content on school-issued devices, school networks, and certain school services (e.g. Google Workspace or Microsoft Office 365) for indications that a student may be at risk of harming themselves.

    As discussed above, CIPA specifically requires schools that receive federal E-Rate funding to filter and monitor20 students’ online activity to prevent them from accessing inappropriate content, such as graphic, violent, or sexually explicit material.21 When schools adopt self-harm monitoring software, it is often added on top of the more general monitoring already occurring in the district.

    Each type of monitoring software is different and may offer different features. Generally, monitoring software either scans all web traffic—the information received and sent in a web browser (such as Chrome, Firefox, Safari, or Edge)—or monitors the content of specific applications owned by the school, such as email (such as Outlook or Gmail), file storage (such as Microsoft OneDrive or Google Drive), and school-managed chat applications (such as Google Chat or Microsoft Teams). Several monitoring software companies also offer classroom management software, which allows a teacher to monitor the screens and web browsing of their students during a class session and to focus the class’s attention by preventing web browsing, pushing a web page to all students, or directing students to a specific web page. Unlike general internet filtering software (which may filter or monitor students’ personal devices that connect to a school network or a school-provided wireless hotspot), self-harm monitoring software is typically installed only on school-provided devices. However, when certain school-managed services (e.g. Google Workspace or Microsoft Office 365) are monitored, monitoring can occur on both school-provided and personal devices, since the monitoring software scans all content created in those accounts.
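
    The distinction above matters for how far monitoring can reach. The following sketch is a rough illustration only (hypothetical names and logic, not any vendor’s implementation) of how device-level web-traffic monitoring is typically tied to school-issued devices, while account-level monitoring of school-managed services follows the account onto any device:

        from dataclasses import dataclass

        @dataclass
        class Activity:
            """One piece of student activity being evaluated for monitoring scope."""
            on_school_issued_device: bool  # e.g. a district-managed laptop or tablet
            uses_school_account: bool      # e.g. a school Google Workspace or Microsoft 365 account

        def monitoring_scopes(activity):
            """Return which kinds of monitoring could plausibly apply to this activity."""
            scopes = []
            if activity.on_school_issued_device:
                # Web-traffic scanning is generally installed on the device itself.
                scopes.append("web-traffic monitoring")
            if activity.uses_school_account:
                # Scanning of school-managed services covers account content
                # regardless of which device the student is using.
                scopes.append("account-content monitoring")
            return scopes

        # A student writing in a school account from a personal laptop is still
        # within reach of account-content monitoring.
        print(monitoring_scopes(Activity(on_school_issued_device=False, uses_school_account=True)))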

    How does monitoring occur?

    When monitoring software scans web traffic or specific applications, it may either 1) scan the content and keep only the content that is “flagged” as inappropriate or otherwise problematic,22 or 2) keep all of the content that is scanned so school officials can retrospectively see the websites that specific students were visiting and some of their activities online.
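
    The privacy stakes of these two approaches differ substantially. The sketch below (assumed logic, not any particular product’s behavior) illustrates the difference: under the first approach only flagged items are retained, while under the second every scanned event is logged for later review:

        def retain(event, is_flagged, keep_everything, activity_log):
            """Store a scanned event according to the configured retention approach."""
            if keep_everything:
                # Approach 2: log every scanned event, enabling retrospective review
                # of a student's browsing history and online activity.
                activity_log.append(event)
            elif is_flagged:
                # Approach 1: keep only content the scanner flags; everything else
                # is discarded after scanning.
                activity_log.append(event)

        log = []
        retain({"url": "example.org/homework-help"}, is_flagged=False, keep_everything=False, activity_log=log)
        retain({"url": "example.org/flagged-search"}, is_flagged=True, keep_everything=False, activity_log=log)
        print(len(log))  # 1 under the first approach; it would be 2 if keep_everything were True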

    This process of reviewing, “flagging,” and alerting will be familiar to anyone who has ever received an alert from their credit card company about a suspicious transaction. While some schools deploy technology that simply emails an administrator when a student visits an inappropriate website or uses a flagged term in an email or search,23 other schools use more intensive monitoring that creates a log of each student’s search and web browsing activity.24

    Monitoring services overwhelmingly employ algorithms that scan for and detect keywords or phrases across different platforms.25 These algorithms can be based on simple natural language processing of keywords or may attempt to use other types of artificial intelligence26 to examine the context of the content and improve the reliability of the “flagging” process.27 Some monitoring services go beyond algorithms and employ a second step in their flagging process, in which flagged content is reviewed by the monitoring company’s internal personnel to check for false positives or to review additional context to better understand the flagged content.28
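
    To make the flagging step concrete, the sketch below (a hypothetical keyword list and function names, not any vendor’s actual algorithm) shows keyword matching in its simplest form, followed by the optional second stage in which matches are queued for human review rather than alerting the school automatically:

        # Hypothetical keyword list; vendors maintain their own proprietary term sets
        # and may apply more sophisticated natural language processing or other AI.
        SELF_HARM_KEYWORDS = ["self-harm", "suicide", "overdosing", "want to cut myself"]

        def flag_content(text):
            """Return any monitored keywords found in student-created content."""
            lowered = text.lower()
            return [kw for kw in SELF_HARM_KEYWORDS if kw in lowered]

        def queue_for_human_review(text, matches, review_queue):
            """Second stage used by some vendors: keyword hits go to internal reviewers
            who check context and screen out false positives (for example, a book
            report that merely mentions suicide) before the school is alerted."""
            if matches:
                review_queue.append({"excerpt": text[:200], "keywords": matches})

        queue = []
        doc = "Essay draft: the novel's treatment of suicide and grief"
        queue_for_human_review(doc, flag_content(doc), queue)
        print(queue)  # a human reviewer would likely dismiss this as a false positive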

    The alerting process varies between services and in different situations. For many monitoring services, different content can trigger different alerts or responses. For example, terms or activity that monitoring services have grouped into lower-level or less serious inappropriate content may simply be blocked.29 If more serious inappropriate content is flagged or detected, students may receive warnings by email for violations, and school administrators may be copied in instances of multiple warnings.30 When content indicating a possible threat to a student’s personal safety or the safety of other students is detected, it could result in direct personal notification to the school or, in extreme cases, to law enforcement or emergency services.31 These more extensive monitoring services can allow school officials to see what each student has been doing online (and, with some software, can automatically send that information to parents).32
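
    The tiered responses described above amount to a routing decision keyed to severity. The sketch below uses assumed tiers and recipients purely for illustration; actual escalation paths vary by vendor and by district policy:

        def route_alert(severity, prior_warnings=0):
            """Map a flagged event's severity to the kinds of responses described above."""
            if severity == "low":
                # Less serious inappropriate content may simply be blocked.
                return ["block content"]
            if severity == "moderate":
                # Students may receive an email warning; administrators may be
                # copied after repeated violations.
                actions = ["email warning to student"]
                if prior_warnings >= 1:
                    actions.append("copy school administrator")
                return actions
            # Content suggesting a threat to the student's own safety or to others
            # can trigger direct notification of school officials and, in extreme
            # cases, law enforcement or emergency services.
            return ["notify school officials", "escalate to emergency services if imminent"]

        print(route_alert("moderate", prior_warnings=2))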

    How common is student monitoring in schools?

    Monitoring technology has become prevalent in schools throughout the country. E-Rate funding is provided to approximately 95 percent of schools,33 so most schools have some web filtering and monitoring system that blocks access to content that is obscene or harmful to minors in order to comply with CIPA. In recent nationwide research with teachers whose schools use monitoring systems, 52 percent reported that their school’s monitoring included flagging keyword searches, such as searches for information on self-harm.34 For example, more than 15,000 schools use the monitoring service Securly, and 10,000 schools use the service GoGuardian.35 However, these services advertise subscriber numbers only for their general CIPA monitoring or classroom management services, not for their specific self-harm monitoring offerings. As a result, it is unclear what percentage of their subscribing schools have chosen to use self-harm detection in addition to their existing monitoring services. In contrast, Gaggle, which is used in over 1,500 school districts,36 does not provide CIPA content filtering directly and focuses exclusively on self-harm, violence, and objectionable content monitoring.

    How is self-harm monitoring different from monitoring generally?

    Self-harm monitoring systems represent a significant departure from the way schools have used monitoring systems for content filtering and CIPA compliance over the past 20 years. By seeking to draw conclusions about students’ mental health status based on their online activities, and by initiating actions involving school officials and other third parties based on these inferences, self-harm monitoring systems introduce greater privacy risks and unintended consequences for students.

    Several online monitoring companies that market to schools have expanded their services to offer monitoring technology that specifically seeks to identify students at risk of self-harm or suicide. These services employ the same general flagging and alerting process described above, but with a specific focus on content that might implicate suicide or other forms of self-harm. A range of content may be flagged, and the appropriate response or alert may depend on the severity of the content, such as whether intentions of self-harm appear with evidence of an imminent plan.37 For example, the monitoring company Lightspeed has a product called “Alert,” which employs “safety specialists”38 who escalate immediately “to district safety personnel and/or law enforcement, enabling early intervention”39 if a student’s plan to harm themselves is deemed imminent. The monitoring company GoGuardian offers the alert service “Beacon,” which scans browser traffic to and from “search engines, social media, emails, chats, apps, and more” for “concerning activity surrounding self-harm and suicide.”40 Managed Methods, a student online monitoring service, offers a “Student Self-Harm Detection” tool that is described as detecting “self-harm content in school Google Workspace and Microsoft 365 apps.”41 Securly Auditor and Gaggle similarly monitor content in school Google Workspace and Microsoft 365 apps. While not the primary focus of this report, a relatively small number of schools have also used dedicated tools that scan students’ social media posts for indicators of self-harm or other threats.42

    Concerns and Challenges Associated with Monitoring Technologies: Important Considerations for School Districts

    Schools often adopt self-harm monitoring technology with the best intentions: to help keep students safe. However, if implemented without due consideration of the significant privacy and equity risks posed to students, these programs can harm the very students who most need support or protection, while failing to fulfill their intended purpose of preventing self-harm.

    While monitoring companies claim to have flagged thousands of instances of self-harm content, there is no information available about how many of the students who were identified in these examples were found to be truly at risk of self-harm as diagnosed by a mental health professional, how many students in these districts were at risk but not picked up by the system, or what the context and size of the student population were in these publicized cases. No independent research or evidence43 has established that these monitoring systems can accurately identify students experiencing suicidal ideation, considering self-harm, or experiencing mental health crises.44 Self-harm monitoring technologies remain unproven as a prevention strategy and have not been substantiated by mental health professionals and clinicians as an effective tool for addressing mental health crises.

    It is difficult to draw conclusions about the effectiveness and benefits of self-harm monitoring systems based solely on a few anecdotal examples shared by school districts and monitoring companies, especially when there are countervailing anecdotes of false flags and invasions of privacy. For example, The 74 reported in 2021 that the monitoring software Gaggle, used in Minneapolis Public Schools, “flagged the keywords ‘feel depressed’ in a document titled ‘SEL Journal,’ a reference to social-emotional learning” taught as part of the school curriculum. In another instance, it “flagged the term ‘suicidal’ in a student’s document titled ‘mental health problems workbook.’”45 Gaggle’s CEO shared that a student “wrote in a digital journal that she suffered with self esteem issues and guilt after getting raped,” which allowed school officials to “‘get this girl help for things that she couldn’t have dealt with on her own.’” The Guardian reported in 2019 that school officials had received “red flags when students tell each other sarcastically to ‘kill yourself’, talk about the band Suicide Boys, or have to write a school assignment on the classic American novel To Kill a Mockingbird.”46 Education Week reported that in Evergreen Public Schools in Washington State, at least a dozen students were flagged by monitoring software when they “stored or sent files containing the word ‘gay.’”47 These incidents demonstrate how monitoring systems can both flag innocuous, extraneous content and create significant privacy incursions into sensitive student information. These privacy incursions and the related legal concerns for districts running monitoring software (described elsewhere in this report) can be exacerbated when the majority of flagged content occurs while students are at home, outside of normal school hours.48

    At the same time, deploying self-harm monitoring technology raises important privacy and equity considerations that education leaders must take into account. Schools and districts that consider or use self-harm monitoring technology must therefore weigh its harmful implications against the uncertainty of its benefits and effectiveness. The section below outlines these specific privacy and equity considerations.
