Looking Beyond Commercialization Concerns in Student Privacy

May 25, 2022

Cross-posted from Public Interest Privacy Consulting LLC Blog

I've spent the last couple of years thinking about how to better protect student privacy by carefully tailoring bills to avoid unintended consequences, improving existing laws, and moving beyond a narrow conception of student privacy to holistic, ethical, equitable, and successfully implemented data protection.* This week's newsletter focuses on a new report from Human Rights Watch, "How Dare They Peep into My Private Life? Children's Rights Violations by Governments that Endorsed Online Learning During the Covid-19 Pandemic."

The main (and far too often, only) focus of student privacy discussions in the last decade has been the potential exploitation of student data by companies. The HRW report, while interesting, retreads these concerns about the commercialization of student data and is also missing important data; for example, the researchers omitted some of the most common categories of edtech used in schools.**

When student privacy reports focus again and again on advertising technology as the major threat to student privacy, stakeholders overlook other insidious privacy harms. I wish more advocates considered a broader range of student privacy risks. Over the past eight years, I've developed the list below of major student privacy risks (thank you to the many, many people who helped me create and refine it, especially colleagues at NASBE, FPF, and DQC):

Risk Category - Question a Parent Might Ask About this Risk

- Safety: Is a stranger or someone dangerous able to communicate with my child or learn where my child lives?
- Over-Collection & Over-Surveillance: How much information is being collected about my child?
- The Permanent Record: Will my child's mistakes be recorded forever?
- Loss of Opportunity: What information will be used to determine which opportunities my child doesn't have access to?
- Equity Concerns: What if the information is inaccurate or biased? What if it is used in an inequitable way? What if my child and I can't or don't have access to the information or technology?
- Social Harm: Is my child being cyberbullied or stigmatized?
- Age-inappropriate Content: Is my child accessing content that isn't appropriate?
- Commercialization: Are companies selling my child's data or targeting advertising to them?

The Organisation for Economic Co-operation and Development (OECD) also has a list of risks for children in the digital environment that should be considered by advocates and policymakers when regulating child and student privacy:

Table: Risks for Children in the Digital Environment. The OECD identifies four risk categories:

- Content risks: hateful, harmful, and illegal content, and disinformation.
- Conduct risks: hateful, harmful, or illegal behaviour, and user-generated problematic behaviour.
- Contact risks: hateful, harmful, illegal, or other problematic encounters.
- Consumer risks: marketing risks, commercial profiling risks, financial risks, and security risks.

Cutting across all four categories are privacy risks (interpersonal, institutional & commercial), advanced technology risks (e.g. AI, IoT, predictive analytics, biometrics), and risks to health and wellbeing. These cross-cutting risks are considered highly problematic, as they may significantly affect children's lives in multiple ways.

Some advocates have done phenomenal work to try to address these other risks, such as the ACLU of Massachusetts's 2015 report on student monitoring, the P.A.S.C.O. (People Against the Surveillance of Children and Overpolicing) Coalition's advocacy to end the sharing of sensitive student information with police, and the NYCLU's and Fight for the Future's campaigns to ban facial recognition in schools, among many others.

But too many reports keep singing the same tune: 'There is commercialization (or the potential for it) in schools. This is bad. We should stop it, maybe through laws.' I don't disagree - but let's look at the laws we already have, what is and is not working, and also examine and consider creating new privacy guardrails that encompass the wider range of student privacy risks that could limit students' futures.

To be perfectly clear: the commercialization of student and child data is an important issue, and has long been a focus of those in the student and children's privacy space. Even if you don't care about seeing ads across multiple websites for shoes you already bought, or don't mind your kids begging you for the newest toy they saw in a commercial, there are privacy risks that go far beyond trying to sell products. This data is often collected in opaque ways, sold without our knowledge, and used to manipulate us into a purchase or other action. It creates creepy dossiers about a lot of our day-to-day actions - and guesses about what we'll do next. It can lead to concrete future harms, like not being able to get a job, an apartment, or a loan.

But commercialization is already heavily regulated in the student privacy space: targeted advertising to students and children is illegal in many countries, even those without a specific student privacy law. And student privacy is about more than regulating or restricting how companies process student information.

Many of the most egregious privacy harms I've seen have little to do with commercialization, and some of the most impactful student privacy advancements have come from investigative journalism and scholarship exploring these less-discussed risks and from organizations and agencies providing the resources that schools and companies need.

The most compelling part of the HRW report is a momentary mention of harms beyond commercialization:

Abby Rufer, an algebra teacher in Texas, US, said that her school district initially did not implement protections for students' privacy. "Teachers were using [an online platform] which has no privacy protection. I was worried because, especially for our kids, this is not safe for them. Sixty to seventy percent of our kids had one primary family member that had been deported or was currently in ICE [US Immigration and Customs Enforcement] holding. So, this is unacceptable, and it is a dangerous situation to put these kids in."

HRW's report raises an important question: if targeted advertising and the creation of advertising profiles are already illegal, why does this report show that trackers still appear in some edtech apps and websites? I wish the report had examined this question. My guess is that there are serious enforcement gaps even when many of these laws exist, and it would have been valuable for this report to discuss ways to improve enforcement.

I was also a bit frustrated by the report's limited discussion of existing student and child privacy laws. Few of the highlighted trackers were used in U.S. edtech, possibly because the U.S. has had a baseline student privacy law since 1974 and has passed well over 130 student privacy laws in the past eight years, nearly all of which ban behavioral advertising. The report also showed few trackers in use in the EU, perhaps because of the limits the GDPR and the new Digital Services Act place on processing children's data for commercial purposes. I would have liked the report to further research and analyze why different jurisdictions were more or less likely to recommend apps without sufficient privacy vetting, and I hope this is examined in future HRW reports.

This report could have used its global perspective and the breadth of the research data collected to add substantial new information and value to the student privacy space. I was fascinated that many countries that created edtech apps during COVID included trackers, some of which appear to be used for behavioral advertising or the creation of ad profiles, and would have enjoyed a standalone report examining how and why that happened. Ultimately, I am frustrated that this report will not fulfill its potential to advance student privacy.

The likely result of the report will be a series of articles about how student privacy is in crisis and there are not enough laws and companies are bad. That may draw attention to this issue - not necessarily a bad thing, as any public attention to student privacy tends to at least temporarily increase compliance - but it also means that other really important student privacy issues could fall by the wayside.

*I am always down to present these thoughts if you have lots of time and want to go way down this nerdy rabbit hole.

**As the report's methodology section explains, HRW used an algorithm to randomly select up to ten products from each country to evaluate instead of manually selecting the most widely used edtech products that each government recommended (see the first paragraph on PDF p18 about the use of the Mersenne Twister pseudorandom number generator). The report also did not examine any products that required "verification of the child's identity as a student to set up a mandatory account to access the service" - a substantial market share of edtech - and only examined "a limited number of EdTech products" that "require a user account to access learning content," even though such products are commonly used in schools.
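For readers curious what that sampling choice looks like in practice, here is a minimal sketch of the kind of random selection the methodology describes. This is my own illustration, not HRW's code; the function name and data structure are invented for the example. Python's standard random module happens to be built on the same Mersenne Twister generator the report cites.

```python
# Illustrative sketch only - not HRW's actual selection code.
# It mimics the described approach: randomly pick up to ten
# government-recommended products per country for evaluation.
import random


def sample_products(recommended, per_country=10, seed=None):
    """Return up to `per_country` randomly chosen products for each country.

    `recommended` maps a country name to the list of edtech products its
    government recommended. random.Random is a Mersenne Twister PRNG;
    passing a fixed seed makes the draw reproducible.
    """
    rng = random.Random(seed)
    selected = {}
    for country, products in recommended.items():
        if len(products) <= per_country:
            # Fewer than ten recommendations: evaluate all of them.
            selected[country] = list(products)
        else:
            selected[country] = rng.sample(products, per_country)
    return selected


# Hypothetical usage with made-up data.
picks = sample_products(
    {
        "Country A": [f"App {i}" for i in range(25)],
        "Country B": ["App X", "App Y", "App Z"],
    },
    seed=2021,
)
print({country: len(apps) for country, apps in picks.items()})
# -> {'Country A': 10, 'Country B': 3}
```

Random sampling like this keeps the selection unbiased, but - as noted above - it can also skip the most widely used products a government recommends, which is part of why the report misses major categories of edtech.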