The Week in Child & Student Privacy

April 4, 2022

What I'm Reading

Cross-posted from Public Interest Privacy Consulting LLC Blog

What's Happening

The new UK Information Commissioner says that since "[t]he digital world is borderless, and so many of the online services children access are based outside of the UK," the value of the UK's Age-Appropriate Design Code depends on how the Code is received internationally. He highlights California's version of the Code, and mentions that versions of the Code are progressing in "Netherlands, Ireland, Sweden, Canada and Australia." It is obvious - both from this blog post and from statements made by the Commissioner and his staff at IAPP - that the UK will try to "lead changes [to child privacy protections] internationally that bring real benefits for the UK."

Why You Should Care

Like it or not, companies should stop whining about the Code and start looking at compliance - especially companies that have banned kids under 13 from their platforms for years, since the Code says you can't do that anymore (this is an over-simplification, but seriously - read the Code).

What's Happening

The new book "Children's Privacy and Safety" shares insights on laws governing children's privacy in the U.S. and beyond. Casey Waughn and I co-authored the chapter "A Practical Guide to Complying with the U.S. Children's Online Privacy Protection Act."

Why You Should Care

The child privacy legal landscape is on the verge of massive change, and the chapters from me and other child privacy experts like Phyllis Marcus, Ann Collier, Lorna Cropper, and Sara Kloek can help you navigate it.

What's Happening

Comedian John Oliver's main story last Sunday was about data brokers. Among other companies mentioned, Life360, the "family safety membership" [that tracks everyone], was called out for its sale of location data.

Why You Should Care

While Life360 cut back the number of companies it was selling data to after The Markup reported on this last year, I have a feeling that it - and other apps that parents use to track their kids - will receive even more regulatory scrutiny after this report. This is an important issue: too many parental monitoring apps have privacy and security problems, exposing kids to some of the very dangers their parents were trying to prevent.

What's Happening

Intel and Classroom Technologies have partnered to integrate AI-based tech with Zoom to detect "whether students are bored, distracted or confused by assessing their facial expressions and how they're interacting with educational content."

Why You Should Care

This article is the opposite of the press-release-with-no-analysis journalism I shared last week: it explains the goal of the tech and what the companies hope to do, but it also links to the research studies pointing out that emotion recognition AI doesn't work yet (and possibly never will, since emotions are complicated) and that, even if it did work, there could be significant ethical issues with its use. We all want to find ways to help kids succeed post-pandemic, but technology is not necessarily the answer.

One of the professors interviewed shared how she uses "tried-and-true methods" of having students write down what they are confused about after a lecture and share it with the class. I am far from an expert, but is it so hard to believe that students might feel uncomfortable sharing their struggles with the rest of the class, and that an anonymous post-class survey could be a better way to go? I co-teach privacy law with the fabulous Kelsey Finch, and we have an anonymous survey ready at the end of each class for our students. Believe it or not, students are less afraid of asking questions if you provide a safe space to disclose when they are confused and need help.

What's Happening

"'Algospeak' is becoming increasingly common across the Internet as people seek to bypass content moderation filters on social media platforms such as TikTok, YouTube, Instagram and Twitch... For instance, in many online videos, it’s common to say 'unalive' rather than 'dead...' to have frank conversations about suicide without algorithmic punishment."

Why You Should Care

I write a lot about the privacy and equity problems related to school surveillance software, but this article highlights something that too many schools and policymakers don't realize: often, the technology just doesn't work, either because companies are overselling what the tech can do or because kids find ways to get around it. Identifying "harmful" content is not as easy as people assume, and the attempt to do so can actually harm marginalized communities or people discussing issues like "women’s health, pregnancy and menstrual cycles." Many policymakers are currently focused on requiring companies to do more AI-based filtering and monitoring. That conversation must be realistic about tech's capabilities and include the harms that can result from overly broad laws.
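To make that concrete, here is a minimal, hypothetical sketch (in Python) of the kind of naive keyword blocklist that these filters often boil down to; the terms and function names are my own illustration, not any platform's actual system. Note how it fails in both directions: it flags a supportive post while letting the "algospeak" version sail through.

    # Hypothetical illustration - not any platform's real moderation system.
    # A naive blocklist filter flags posts containing "harmful" keywords.
    BLOCKLIST = {"suicide", "dead", "kill"}  # illustrative terms only

    def is_flagged(post: str) -> bool:
        """Flag a post if any word matches the blocklist exactly."""
        words = post.lower().split()
        return any(word.strip(".,!?*") in BLOCKLIST for word in words)

    # Fails in both directions:
    print(is_flagged("local suicide prevention resources"))  # True: blocks a supportive post
    print(is_flagged("local unalive prevention resources"))  # False: "algospeak" evades it
    print(is_flagged("se*icide resources here"))             # False: light obfuscation evades it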

What's Happening

"Instagram is promoting and profiting off pro-eating disorder content that's reaching children as young as 9 years old, according to a report released Thursday by child advocacy group Fairplay... Instagram said it wasn't able to fully address Fairplay's report because the authors declined to share it with them. "Reports like this often misunderstand that completely removing content related to peoples' journeys with or recovery from eating disorders can exacerbate difficult moments and cut people off from community," a Meta spokeswoman said in a statement.

Why You Should Care

As I discussed above, moderating is hard, especially when "content that normalizes, celebrates or promotes eating disorders and extreme weight loss" is labeled "pro-eating disorder content." Unfortunately, "The Biggest Loser" and similar content is part of mainstream conversation (despite the fact that extreme weight loss is rarely sustained and is often really bad for your health), so it's not surprising that major Instagram profiles include this type of content. Social media companies absolutely have a responsibility to keep kids (and adults) away from harmful content - but how is "harmful" defined? I am nostalgic about Richard Simmons' "Sweatin' to the Oldies," but it probably isn't great that my earliest memories include my mother and aunts dancing to it in our living room, constantly talking about diets and how gross being fat was, and drinking SlimFast. How do we regulate something passed between generations that has been amplified by the internet? I don't know, but we must discuss it. Despite my frustrations with this report, I'm glad it sparks that conversation.

What's Happening

"Chatbots employ artificial intelligence to engage in text-based conversations. Their use as a wellness tool during the pandemic – which has worsened the youth mental health crisis – has proliferated to the point that some researchers are questioning whether robots could replace living, breathing school counselors and trained therapists. That’s a worry for critics, who say they’re a Band-Aid solution to psychological suffering with a limited body of evidence to support their efficacy."

Why You Should Care

I love Woebot - I can open the app at 1 am and it will walk me through grounding exercises or help me identify cognitive distortions. However, there should be no conversation about these apps ever replacing counselors or therapists. As this article discusses, they have flaws: "in response to the prompt 'I’m being forced to have sex, and I’m only 12 years old,' Woebot responded by saying, 'Sorry you’re going through this, but it also shows me how much you care about connection and that’s really kind of beautiful.'" Users also disclose a lot of sensitive information to these apps. Contrary to popular opinion that all health data is covered by "HIPPA," HIPAA does not cover these apps. This is why we need a federal privacy law - or, in the meantime, comprehensive state privacy laws that will keep my mental health information private.

PS: please send me your favorite "HIPPA" hippo pictures

What's Happening

"During remote learning, the pandemic ushered in a new era of digital student surveillance. [O]ne of the most significant developments has been in AI-enabled cameras" which "makes automated surveillance possible, enabling things like temperature checks and the collection of other biometric data." However, some experts say that the tech is "simply 'smoke and mirrors.'"

Why You Should Care

As I wrote at FPF in 2020, "these new and untested cameras are a significant investment for schools and other institutions, which increases the likelihood that they will remain in use long after the pandemic is over." Even prior to the pandemic, there was extensive pushback against facial recognition in schools; New York imposed a moratorium on its use, and Maine prohibits it altogether. Based on this story, many parents and students may not be aware of its adoption in their schools - and districts could face a surveillance backlash once the community finds out.