Part II: Privacy and Equity Concerns

    Privacy and Equity Concerns Raised by Self-Harm Monitoring Technology

    Before adopting self-harm monitoring technology, schools and districts should understand the risks it can pose to students’ privacy and safety and carefully weigh those risks against any benefits.

    Schools have widely and rapidly adopted self-harm monitoring technologies, despite the fact that they are relatively new and unstudied.49 Over the past two years, adoption increased as concerns grew about students struggling with mental health during the COVID-19 pandemic.50 These facts raise important questions about the privacy risks and implications of monitoring that schools must carefully consider prior to implementation and revisit regularly. Such privacy risks may lead to disproportionate harms to students who are identified by self-harm monitoring, with especially inequitable consequences for systemically neglected groups of students. Suicide and self-harm disproportionately affect these vulnerable student populations, such as students who are minoritized in terms of race/ethnicity, sexual orientation, gender identity, or disability status, or who are experiencing homelessness.51 Moreover, students who are identified as “at-risk” may feel they have a target on their backs, with their personal struggles given limited privacy in school.

    When schools use monitoring software, students deserve clear policies around what data is collected, who has access to it, how it will be used, and after what period it will be destroyed. Students deserve assurance that their data will not be misused and that data collection and storage will be privacy-protective. Students deserve to have their schools held accountable, with clear consequences for those who put student privacy at risk by violating data sharing protocols. And students, educators, and families all deserve transparency.

    The following privacy, equity, and implementation considerations guide the analysis in the next section. School leaders should ask themselves these key questions as they consider implementing a self-harm monitoring system:

    1. How will the school district create a school-wide mental health support program that is equitable and inclusive, and how does the technology fit into that program?
    2. Does the school district employ staff (e.g. school psychologists, school counselors, and school social workers) with expertise to address mental health concerns that may be detected?
    3. What kinds of information do monitoring systems identify and flag, is the system collecting more information than the purpose requires, and how long will the data be retained?
    4. What harms, such as stigma or discrimination, may stem from collecting and/or sharing students’ information or flagged status?
    5. Who has access to the information identified or flagged, and do they have a legitimate health or educational purpose for accessing it?
    6. How is student information shared with third parties, if at all, and are such disclosures permitted by law?
    7. How does the school district plan to provide transparent communication with families and students about monitoring policies, and how have they ensured that monitoring plans meet community needs?

    1. How will the school district create a school-wide mental health support program that is equitable and inclusive, and how does the technology fit into that program?

    Merely adopting monitoring systems cannot serve as a substitute for robust mental health supports provided in school or a comprehensive self-harm prevention strategy rooted in well-developed medical evidence. Before adopting monitoring systems, schools must have robust mental health response plans in place to effectively support any students who may be identified.

    Schools and districts should carefully consider and discuss the extent to which self-harm monitoring is necessary and beneficial to the goals of their mental health support program and, if they choose to adopt it, craft evidence-based policies to manage the privacy and equity risks. These goals need to be clearly stated and specific in their scope. Goals such as “improving student mental health” or “saving lives” are too general because the connection between the tool and the steps required to achieve the goal is not evident. Specific goals define the problems to be solved and provide benchmarks to measure how successfully the chosen tool addresses the problem. Schools should have a clear explanation for why self-harm monitoring is necessary, as opposed to, for example, establishing deeper systems of school-based mental healthcare and providing more robust preventative care resources to students. If the benefit of adopting self-harm monitoring technology will not outweigh the privacy and equity risks, and if there are other ways to fulfill the goals of the mental health support program, schools and districts should reconsider monitoring altogether.

    If monitoring technology is adopted, it must be implemented as just one component of the broader mental health response plan. Identifying students alone does not support them or give them access to help. Monitoring companies agree that effective self-harm monitoring cannot rely solely on software and must be part of a comprehensive mental health approach by schools.52 Absent other support, simply identifying students who may be at risk of self-harm—if the system does so correctly—will, at best, accomplish nothing. At worst, it can violate a student’s privacy or lead to a misinformed or otherwise inappropriate response.

    2. Does the school district employ staff (e.g. school psychologists, school counselors, and school social workers) with mental health expertise to address concerns that may be detected through self-harm monitoring systems?

    It is imperative that schools employ professionals (e.g. school psychologists, school counselors, and school social workers) with the expertise necessary to identify and address mental health concerns such as depression and anxiety. Unlike these professionals, teachers and school administrators are typically not licensed to identify and address mental health concerns and crises.

    In the absence of staff with this specialized knowledge and training, mental health misconceptions can drive and negatively influence even the best-intentioned efforts to help students. The American Civil Liberties Union found in 2019 that millions of students nationwide attend schools with no counselors, no school nurses, no school psychologists, and no school social workers.53 This lack of personnel and in-school support means that flagging students via monitoring does not necessarily lead to help, because there may be no resources available in the school for students to receive. Likewise, simply informing a student’s parents that their child has been flagged by a school monitoring system as at-risk for self-harm will not necessarily result in the student receiving appropriate mental health supports—many parents may be left unsure what to do with this information, especially in the absence of in-school or community-based resources and services that they can access.

    3. What kinds of information do monitoring systems identify and flag, is the system collecting more information than the purpose requires, and how long will the data be retained?

    Identifying content that indicates a student’s intent to self-harm is more challenging than it may seem. The data and activities that each monitoring system flags vary. A system may flag a student’s activity when their content matches specific words or phrases, based on an algorithm or a machine learning model.54 Because these approaches match patterns rather than meaning, monitoring systems often fail to capture context or correctly interpret the colloquial language that many students use. Peer-reviewed empirical research has repeatedly shown that context is extraordinarily difficult for most computer programs to interpret accurately,55 such that monitoring systems end up simply searching for certain words and flagging them without the capacity to determine what they mean or how they are being used. A computer program is therefore prone to interpret many innocuous phrases as dangerous language and raise alerts, flagging content unrelated to any mental health condition or any intent to self-harm.56

    For example, the search history of a student conducting research on the poet Sylvia Plath or grunge-rock legend Kurt Cobain—both of whom died by suicide—might look remarkably similar to the searches of a student suffering from depression. Similarly, students who share innocuous posts using slang about a “photobomb” or how their parents are “killing them” may be mistakenly flagged for using terms associated with violence.57 This is an inherent shortcoming of using monitoring technology as a self-harm reduction strategy; it can penalize students for conducting research or expressing and exploring their feelings in developmentally normal ways. Published research studies58 on the subject suggest that monitoring and flagging student content in this way can have a chilling effect59 on students’ healthy and natural exploration while making students hesitant to seek help when they need it.60
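
    To make this failure mode concrete, the following is a minimal sketch of the kind of context-blind keyword matching described above. The keyword list and matching rule are hypothetical assumptions for illustration, not any vendor’s actual implementation; real systems may use larger lists or machine learning models but face the same context problem.

```python
# Minimal illustrative sketch of context-blind keyword flagging.
# The keyword list and matching rule are hypothetical, not any
# monitoring vendor's actual implementation.
SELF_HARM_KEYWORDS = {"suicide", "kill", "die", "overdose"}

def flag_message(text: str) -> list[str]:
    """Return matched keywords, with no awareness of context or intent."""
    words = {w.strip(".,!?'\"").lower() for w in text.split()}
    return sorted(words & SELF_HARM_KEYWORDS)

# A research query and everyday slang both trigger alerts:
print(flag_message("Essay topic: why did Kurt Cobain die by suicide?"))
# -> ['die', 'suicide']
print(flag_message("My parents are going to kill me if I miss curfew!"))
# -> ['kill']
```

    Both example messages are flagged even though neither reflects any intent to self-harm.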

    While some monitoring systems may include a broad range of default categories and indicators out of a well-intentioned belief that it is best to capture any and all alarming student content possible, flagging overbroad keywords can reduce a monitoring program’s potential effectiveness. Even in systems with a narrower self-harm focus, school officials may be able to adjust alert settings to also monitor categories such as profanity. Including such overbroad indicators increases the administrative burden on school officials and provides little benefit, inundating them with vast amounts of normal student content that require extensive staff time and effort to review. This makes it harder for school staff to notice and identify flagged content actually related to risk of self-harm, and diverts time and resources from providing useful follow-up for any true risks and student needs.

    Moreover, some monitoring systems flag data and activities by default that are unrelated to self-harm but that the school district or monitoring company may consider otherwise inappropriate or concerning. For example, BuzzFeed News reported in 2019 that one monitoring company included “LGBTQ-related words like ‘gay,’ ‘lesbian,’ and ‘queer’” as keywords that sent alerts to school officials (under the category of keywords that were monitored “in the context of possible bullying and harassment”).61 Other monitoring companies have filtered and blocked access to websites related to health resources for LGBTQ teens, news outlets that cover LGBTQ issues, anti-discrimination advocacy pages, and professional associations for LGBTQ individuals as part of their general monitoring regimen.62 This disproportionate flagging and blocking exposes LGBTQ students to the privacy harms associated with monitoring and can directly endanger their safety by disclosing their sexual orientation or gender identity to school officials, families, or third parties. For more on the unique harms LGBTQ students may face as a result of monitoring technology in schools, and the legal implications of disparate flagging, see Legal Implications and Boxes 1 and 2.

    Finally, a key factor in limiting unnecessary over-collection of student information is schools’ and monitoring providers’ data retention and deletion practices. Data collection and retention will vary by monitoring system and, in some cases, by type of data collected (e.g. students’ web browsing history, email messages, drive files, etc.). One system, for example, monitors student emails by sending a copy of each email to the monitoring system and analyzing it for indicators of self-harm or other content that the system flags. If content is flagged, the student’s email is saved and the monitoring system sends an alert to school administrators. If nothing is flagged, the copy of the email message is discarded.63
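
    As a sketch of how such a copy-analyze-discard flow might work (the indicator list, function names, and alerting step below are assumptions for illustration, not the vendor’s actual design):

```python
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    body: str

# Hypothetical indicator list; a real system would use a much larger
# keyword set or a trained model.
INDICATORS = ("hurt myself", "end my life")

def alert_officials(email: Email, flags: list[str]) -> None:
    """Stand-in for the alerting step: the copy is retained and staff notified."""
    print(f"Retained copy from {email.sender}; alert sent for: {flags}")

def process_copy(email: Email) -> None:
    """Analyze a copy of the message; keep it only if something is flagged."""
    flags = [kw for kw in INDICATORS if kw in email.body.lower()]
    if flags:
        alert_officials(email, flags)
    # If nothing is flagged, the copy simply goes out of scope and is
    # discarded, mirroring the flow described above.

process_copy(Email("student@example.org", "this project will end my life, lol"))
```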

    School leaders and monitoring companies should specify the period of time for which student information is retained in the system. School district leaders should consult with their state archives or records officer to determine the retention schedule for any data collected on students. Before publishing a retention schedule, school districts should determine whether any information collected through monitoring would be considered sensitive. Sensitive information, meaning any information whose disclosure could adversely impact a student’s educational or employment prospects or jeopardize a student’s privacy or well-being, should be deleted as soon as legally allowable.

    School districts should publish a retention schedule as part of their transparent communication about policies around the use of monitoring technology and should include information on how data will be destroyed once it is no longer needed. For example, in 2019, Montgomery County Public Schools in Maryland became the first district in the country to publicize a policy of annually deleting student information from certain systems, such as internet search histories accumulated through digital vendors, including the district’s internet content filtering and classroom management provider.64 This plan appropriately limits data retention and can serve as a model of effective student data retention policies for other school districts.
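
    As a simple sketch of what a published retention schedule could look like in machine-readable form (the categories and periods below are illustrative assumptions, not recommendations or any district’s actual policy):

```python
from datetime import date, timedelta

# Hypothetical retention schedule; categories and periods are
# illustrative, not recommendations or any district's actual policy.
RETENTION_SCHEDULE = {
    "web_search_history": timedelta(days=365),  # deleted annually
    "flagged_alerts": timedelta(days=180),
    "unflagged_copies": timedelta(days=0),      # discarded immediately
}

def is_expired(category: str, collected_on: date, today: date) -> bool:
    """True once a record has outlived its published retention period."""
    return today - collected_on > RETENTION_SCHEDULE[category]

# A search-history record collected over a year ago is due for deletion:
print(is_expired("web_search_history", date(2023, 1, 1), date(2024, 6, 1)))  # True
```
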
    Increased data collection and sharing without clear justification frequently overwhelms administrators with information, undermines effective learning environments, casts suspicion on already marginalized students, tends to punish or criminalize students’ medical struggles or disabilities, increases inequities, and can fail to promptly identify individuals who may be at true risk of self-harm. To mitigate these drawbacks, schools should develop clear guidelines about the kinds of material that systems should flag, tailor systems narrowly to respond to actual risks, and think critically about how they address identified concerns.
