California's Age-Appropriate Design Code: What You Need to Know

September 15, 2022


Cross-posted from LinkedIn

Today marks a major development for child privacy in the US as California’s Governor signed the California Age-Appropriate Design Code (ADCA). The ADCA mandates that businesses, as defined under the California Consumer Privacy Act/California Privacy Rights Act (CCPA/CPRA), implement increased privacy protections by design for online services likely to be accessed by children. As an advocate for children’s well-being and safety, I’m glad to see lawmakers recognizing the unique needs of children of different ages and prioritizing the safety of all children online. But I’m also concerned about the practicality and implementation difficulties of the ADCA and how this law will fit into the US legal framework.

The ADCA Is Based on Good Ideas

Heavily inspired by the UK’s Age-Appropriate Design Code, the ADCA is founded on the admirable goal of protecting children online. In one of the first meetings of the Organisation for Economic Co-operation and Development’s (OECD) expert working group to revise its recommendation on the protection of children online, Baroness Kidron presented on the UK’s AADC. I’ll never forget one of the first things she said in that meeting (paraphrased here): “What if kids didn’t have to lie on the internet to be protected but also get access to services?” The pitch was that kids could choose to opt in to supplying their age and, in return, receive a different, privacy-protective experience online. This is a really attractive idea that the ADCA is trying to realize for children in California.

The UK’s Age-Appropriate Design Code includes various privacy safeguards by design and default to make the internet a safer place for children. It protects the privacy of children’s location data by requiring that geolocation data collection be turned off by default in most circumstances and that geolocation settings revert to off after a child has turned them on. It emphasizes just-in-time notifications to ensure that children are aware of what is happening while they are using the internet. It also prohibits the use of dark patterns to the detriment of children; limits profiling of children by default; requires businesses to think carefully about, and document their decision-making on, the privacy impacts of services that are likely to be accessed by children; and calls for privacy policies to be written in terms that children can understand. I would like to see all of these values represented in US privacy law.

Practicality and Implementation Concerns

That said, there are major questions about how the ADCA will be implemented and potential unintended consequences that must be addressed. These include the potential to significantly change the internet, the shortcomings of age-estimation methods, whether the ADCA can work in the US legal context, and timing.

Potential to drastically change the internet

The ADCA requires covered businesses to either “estimate the age of child users with a reasonable level of certainty appropriate to the risks” or to “apply the privacy and data protections afforded to children to all consumers.” This requirement alone has the potential to transform how everyone, including adults, experiences the internet. California legislation generally ends up changing practices across the entire US because of the state’s market power. The bottom line is that if businesses can’t find a way to accurately identify children, then the ADCA will effectively require them to treat everyone accessing their websites as children. This would mean we would all have to deal with content restrictions, limitations on personalization, and limited functionality.

These changes may materialize in various unintended ways. Age and identity verification methods can require businesses to collect additional information they otherwise wouldn’t ask for, which in turn creates even more privacy and security risks on the internet. The Register warns that age verification requirements could mean having to enter your age on every website you want to access and may end anonymity on the internet. General audience websites might even resort to adding more adult content, such as permitting or promoting profanity, as a way to argue that children are not likely to access their service.

Age-estimation methods

Given how dramatically the internet might change under the ADCA if businesses can’t accurately determine which users are children, the “obvious” solution would be to accurately assess people’s ages online. Yet every method of estimating age on the internet has flaws that undermine the ability to reach reasonable certainty. We all know that children lie when asked to self-report their age in order to access content online. Many kids don’t have ID cards, and the thought of requiring people to disclose their Social Security numbers to be allowed to access the internet, as suggested in this article, is alarming. Online age verification can result in increased privacy risks, inaccurate age estimates, and needlessly blocking people from important content.

Scanning a user’s face and then using AI to conduct facial age analysis is an emerging approach currently being tested by Instagram. We don’t yet know whether this technology can achieve a reasonable level of certainty in estimating everyone’s ages online, and it’s worth noting that the accuracy of established facial recognition algorithms has been found to vary across demographic groups, such as sex and race. Accurate estimation is critical for all users under the ADCA because being off by even one year can produce categorization errors, such as a 17-year-old inappropriately gaining access to content restricted to those who are 18 or older. Although AI can seem like an attractive way to solve the age-estimation problem, we need to remember that it is not magic, and its mistakes can actually worsen the problems it is meant to solve.

Is the ADCA even possible in the US?

As mentioned, the ADCA is based on the UK’s Age-Appropriate Design Code, which went into effect in September 2020 with a 12-month transition period. The UK’s Age-Appropriate Design Code is structured to closely align with child data protection requirements under the General Data Protection Regulation (GDPR), the EU’s comprehensive data privacy law. While this framework seems to work in Europe, the US does not have a comprehensive privacy law like the GDPR to build on, so American courts will engage in a vastly different legal analysis.

The ADCA may be unconstitutional under the Free Speech Clause of the First Amendment. As explained in this Congressional Research Service report, courts have struck down, on First Amendment grounds, multiple prior attempts by Congress to protect children by restricting the content they are exposed to. The ADCA, a content-based regulation of “harmful” and “detrimental” materials for children online, raises some of the same First Amendment issues as those prior cases. Because the legislation is content-based, and because “harmful” and “detrimental” materials are not historically unprotected categories of speech, the ADCA will be subject to strict scrutiny. It is unlikely to survive that scrutiny because the terms “harmful” and “detrimental” invite vagueness and overbreadth challenges. Who gets to decide what content is harmful or detrimental? Whose interests will be considered when making that determination? These questions are especially important because content that could be viewed as harmful for one child may affirm the identity of another and could be considered in that child’s best interests. I am also not convinced that this law can satisfy the narrow-tailoring requirement of strict scrutiny, since there is no way for parents to bypass the default age restrictions for their children and there are no clear standards for measuring harm.

Timing

The California legislature has left unanswered major questions about how the ADCA will be implemented and, instead, seeks to establish the California Children’s Data Protection Working Group to clarify the law’s requirements in a series of reports after the law is enacted. Decisions left to the working group include key issues such as “identifying online services, products, or features likely to be accessed by children” and “evaluating and prioritizing the best interests of children.” The first working group report is not due until January 1, 2024, which means we might not know the true scope of this legislation for more than a year. The working group’s report could be the first time a business learns that California considers its online service likely to be accessed by children. If the working group submits its first report on January 1, 2024, businesses that previously didn’t know they needed to comply with the ADCA would have only six months to complete a data protection impact assessment before the law goes into effect on July 1, 2024.

Whether the ADCA can overcome these implementation challenges, and whether it is even workable in the US, remains to be seen. Either way, it could change how we all interact with the internet and is an important law to watch.

This blog was written by Amelia Vance and Morgan Leftwich.
