Feeding the Machine?: What the Updated COPPA Rule Says About Using Children’s Data to Train Algorithms

December 4, 2025

Jessica Arciniega, Morgan Sexton, and Amelia Vance

CC BY-NC 4.0

When your 8-year-old plays their favorite mobile game or uses an educational app to practice math, they're not just learning or having fun: they're also generating valuable data. Every click, swipe, and answer creates information about how children think, learn, and behave online. Until recently, many companies were operating under the belief that they could use this data to train artificial intelligence (AI) systems with few restrictions.

The Federal Trade Commission (FTC) recently updated the COPPA Rule (regulations implementing America's main child privacy law) to address this growing concern about how companies use children’s data. However, significant gaps remain that leave children vulnerable to having their personal information used in ways parents never intended.

What COPPA Actually Protects (And What It Doesn't)

Many parents don't realize how limited the protections in the Children's Online Privacy Protection Act (COPPA) actually are. COPPA only covers companies collecting personal information from children. For example, if a child tells a company that they like soccer, and their parent tells the same company that the child likes spaghetti, COPPA will only protect the information that the child likes soccer. Additionally, COPPA only protects the information of children under 13, meaning COPPA does not provide any special privacy protections for your 14-year-old. Even for younger children, COPPA's protections can be easily waived if a company obtains parental consent, which can be as simple as clicking a quick box that busy parents may not fully read or understand.

COPPA’s shortcomings have only grown as technology has advanced. Data collected from children is no longer just stored in a database. Instead, it's fed into machine learning algorithms that power everything from personalization to targeted advertising systems. Parents may now be asked to consent not only to a company’s collection of their child’s data, but also for the company to use that collected data to train new AI algorithms or target advertisements. These evolving uses of children’s data underscore the need to revise existing protections.


The Updated COPPA Rule: Significant Changes

The FTC updated the COPPA Rule in April 2025, in part to address concerns about how children's data may be used to train AI systems. The changes represent real progress, but they also highlight how complex these issues have become. While we aren't discussing all of the changes in the final rule, we have selected a few significant changes with real-world implications that may surprise people.

For starters, the updated rule makes clear that companies cannot retain children's personal information indefinitely. This is true even if companies claim that such retention is necessary to improve their algorithms. As Commissioner Bedoya explained in a concurring statement: “claims from businesses that data must be indefinitely retained to improve algorithms do not override legal bans on indefinite retention of data.”

Additionally, the final rule requires operators to be more transparent about their data retention practices. Specifically, operators are required to specify in a data retention policy the “purposes for which children's personal information is collected, the business need for retaining such information, and a timeframe for deletion of such information.” 

The updated COPPA Rule also requires separate parental consent if companies want to share children's information with third parties for non-integral* purposes, including for AI training. This closes a loophole where companies could argue that handing over children's data to train someone else's AI was just part of their normal operations. This revision is clearly meant to give parents more control over how companies use their children's data by prohibiting companies from asking for "catch-all" parental consent that covers both collecting data and sharing it for purposes the parent may not otherwise agree to.

*Integral disclosures are those that are necessary to provide the product or service the consumer is asking for (such as disclosing data to security providers or website hosting services). In the Federal Register notice for the final rule, the FTC explicitly said that “Disclosures of a child’s personal information to third parties…to train or otherwise develop artificial intelligence technologies, are not integral to the website or online service and would require consent.”

This addresses a troubling practice where companies would collect data from children for one purpose (like providing a mobile game) and then keep that data indefinitely. For example, Weight Watchers was accused of retaining children's personal information indefinitely and only deleting it when a parent requested deletion. The updated rule codifies and strengthens the principles underlying FTC settlements like this one.

The Remaining Gray Areas

While these changes in the final rule help, they don't resolve all existing concerns about using children's data to train algorithms. 

Algorithms Developed Internally

The most significant gap remains around whether companies may use children’s data internally to train their own AI systems rather than sharing it with others. As referenced above, the final rule states that: 

“Disclosures of a child’s personal information to third parties for monetary or other consideration, for advertising purposes, or to train or otherwise develop artificial intelligence technologies, are not integral to the website or online service and would require consent pursuant to the proposed amendments to § 312.5(a)(2).” **

This language requires secondary consent for disclosing children's data to third parties for training algorithms. It does not, however, say anything about whether companies need additional consent to use children's data to improve their own AI tools. It appears that a company would not be in violation of this provision if it uses children's data to train algorithms, so long as the algorithm is developed internally and the data is not disclosed to any third parties.

COPPA permits companies to use children's data to improve the product that a child is interacting with. For example, the FTC has said companies may rely on the internal operations exception to fix bugs, add new features, or develop a new version of the service. But there are limitations, particularly with respect to AI technologies. For example, Amazon was accused of using children's voice recordings to improve its AI assistant beyond what was necessary for the original service. The updated COPPA Rule's terms regarding non-integral disclosures open the door to questions about whether operators can use data collected from one application to improve an algorithm underlying a suite of products or a completely different product.

Re-Identification Risks

It is extremely hard, if not impossible, to completely remove or sufficiently de-identify children's information from an AI tool that has already been trained on that data. This creates a practical problem: if companies that have used children's data to train an AI model can't remove that data from their AI systems, how can they comply with COPPA's rule against indefinite retention of children's data? It's unclear, but as we discussed above, those retention practices must be detailed in writing.

Audio Files

The updated COPPA Rule added a new exception to the verifiable parental consent requirement for when an operator “collects an audio file containing a child's voice, and no other personal information, for use in responding to a child's specific request, and where the operator does not use such information for any other purpose, does not disclose it, and deletes it immediately after responding to the child's request.” While this is an important update that allows operators of home assistants or similar technologies to collect limited information about children, the Commission’s comments in the proposed rule provide additional context. 


Specifically, the Commission stated that the exception should not allow operators to “retain the audio files or to use them for other purposes such as product improvement and internal operations, even if the operator has taken steps to de-identify the data… the privacy risk created by such data potentially falling into the wrong hands and being re-identified exceeds the benefit of allowing broader use” (emphasis added). Properly de-identified data is not considered PII, and is therefore not covered by COPPA. However, these comments indicate that the Commission views audio files as so sensitive that they cannot be sufficiently de-identified. 

While the comments are specific to using the files under an exception to VPC, they could indicate a broader position that the FTC views particularly sensitive data such as voice data as impossible to de-identify. If this position is correct, companies may not be permitted to use voice data (even data collected with parental consent) for purposes otherwise prohibited under COPPA.

What This Means for Parents

These regulatory changes reflect a growing recognition that children's digital privacy needs stronger protection in an AI-driven world. But parents shouldn't assume their children are fully protected.

COPPA's fundamental limitations remain: COPPA only covers information collected from children under 13, leaving teenagers in a privacy protection gap during some of their most formative digital years. Additionally, COPPA's notice-and-consent system means parents bear the burden of understanding the complex technical practices of a variety of platforms and making informed decisions about their children's data for each one.

Parents should pay close attention to whether they are being asked to consent to companies using their children’s data to train AI. Ask specific questions: Will my child's data be used to train AI? What happens to that data if we stop using the service? Can the data be completely removed from any AI systems it has been used to train?

Looking Ahead

The FTC's updated COPPA Rule represents an important step forward, but it also demonstrates how current privacy laws struggle to keep pace with rapidly evolving technology. As AI becomes even more central to children's online experiences, we need clearer rules that don't require parents to become privacy experts to protect their children.

The ultimate goal should be ensuring that when children engage with technology designed to help them learn and grow, their personal information isn't being harvested to benefit companies in ways that have nothing to do with that child's wellbeing. Our children deserve digital experiences that prioritize their development and privacy, not corporate data collection strategies.


End Note: 

**While this provision refers solely to third parties, the new COPPA Rule similarly restricts other uses of children's data, such as advertising, without parental consent.