What New Amendment to the Kids Online Safety Act May Mean for Integrated Data Systems

September 2023

Katherine Kalpos, Morgan Sexton, and Amelia Vance

CC BY-NC 4.0

A new amendment to the Kids Online Safety Act (KOSA) isn’t just about kids. 

On July 27th, the Senate Committee on Commerce, Science, and Transportation passed KOSA, as amended, out of committee. While KOSA is designed to protect kids online, one of its new amendments – the Filter Bubble Transparency Act (aka Thune 2), hereafter referred to as “the amendment” – regulates platforms providing content to users of all ages. And notably for governmental integrated data system (IDS) stakeholders, the amendment’s requirements apply to both public and private actors.

Let’s dig into what the amendment entails. 

The amendment is focused on algorithmic transparency and establishes new rules for public-facing websites that use “opaque algorithms” to order content. Under the amendment, “covered internet platforms” utilizing opaque algorithms in automated content selection or ranking decisions would need to: (1) inform users that opaque algorithms are being used, (2) provide an alternative option that only uses transparent algorithms, and (3) let users easily switch between the two options.

What is an opaque algorithm?

According to the amendment, an algorithm is considered opaque if its automated content selection or ranking process uses personal data that the user did not provide “for express purpose of interaction with the platform.” “Transparent” algorithms, on the other hand, either do not use user-specific data in that process or use only data the user provided for the express purpose of interacting with the platform.

Opaque algorithms might filter content based on information acquired from other sources or on inferences about the user. Transparent algorithms, by contrast, may filter results based on the specific search terms a user enters or may display results chronologically.
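To make the distinction concrete, here is a minimal, hypothetical sketch in Python. The data fields, function names, and scoring logic are invented for illustration and are not drawn from the bill; the point is simply that the transparent ranking relies only on what the user typed (with chronology as a tiebreaker), while the opaque ranking folds in an interest profile the user never supplied to the platform.

```python
from datetime import datetime

# Hypothetical content items for a benefits information site.
ARTICLES = [
    {"title": "SNAP food assistance basics", "published": datetime(2023, 6, 1)},
    {"title": "Childcare subsidy overview", "published": datetime(2023, 7, 15)},
]

def transparent_rank(articles, search_terms):
    """Rank using only data the user supplied for this interaction:
    the search terms they typed, with recency as a tiebreaker."""
    def score(article):
        matches = sum(term.lower() in article["title"].lower()
                      for term in search_terms)
        return (matches, article["published"])
    return sorted(articles, key=score, reverse=True)

def opaque_rank(articles, inferred_interests):
    """Rank using data the user did NOT provide to the platform --
    e.g., interests inferred from other sources or acquired from brokers."""
    def score(article):
        return max((weight for topic, weight in inferred_interests.items()
                    if topic in article["title"].lower()), default=0.0)
    return sorted(articles, key=score, reverse=True)

# Transparent: driven entirely by the user's own query.
print(transparent_rank(ARTICLES, ["childcare"]))
# Opaque: driven by an inferred profile the user never supplied here.
print(opaque_rank(ARTICLES, {"food": 0.9, "childcare": 0.2}))
```

Under the amendment, only something like the second function would trigger the notice, alternative-version, and switching requirements.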

Potential application to IDSs.

The amendment is intended to apply to social media platforms (like TikTok) and large-scale internet platforms (like Google) that use algorithms to curate user content. However, the broad definition of covered internet platforms (see Sec. 2(a)(5)) could be interpreted to encompass most websites – including public-facing government websites that rely on IDS information.*

This means that websites open to the general public, including state-operated data dashboards and benefits websites that rely on individual-level IDS data to provide personalized recommendations to users, may be subject to the amendment's requirements.

*Websites that rely on IDS data to personalize content can be excluded from the definition of covered internet platform if they are “wholly owned, controlled, and operated by a person” who meets certain criteria or “operated for the sole purpose of conducting research that is not made for profit either directly or indirectly.”

How this might play out in the IDS context. 

Consider this hypothetical: A public-facing, informational website about state-level government benefits (similar to the federal government’s Benefits.gov) wants to personalize content for users to present information about benefits they are most likely to be eligible for at the top of their screen. To accomplish this, the website uses individual-level data within a multi-agency IDS to pre-screen a user’s eligibility for various benefits and prioritize content based on which benefits are most likely to be relevant to that individual. Under the amendment, this would likely qualify as a covered internet platform using an opaque algorithm to select or rank what content a user sees. As such, the website would need to provide specific notices to users and offer an alternative version that uses transparent algorithms.
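As a rough sketch of those mechanics (the programs, field names, and eligibility thresholds below are invented for illustration; they are not drawn from any actual state system or from the bill), the pre-screening and ranking step might look something like this:

```python
# Hypothetical: pre-screen likely benefit eligibility from an individual-level
# IDS record, then float content about those benefits to the top of the page.
# Because the IDS record was not supplied by the user "for express purpose of
# interaction with the platform," this ranking would likely count as opaque.

BENEFIT_RULES = {
    "SNAP": lambda rec: rec["household_income"] < 32_000,
    "Medicaid": lambda rec: rec["household_income"] < 40_000,
    "Childcare Assistance": lambda rec: rec["num_children"] > 0,
}

def prescreen(ids_record):
    """Return the set of benefits this user is likely eligible for."""
    return {name for name, rule in BENEFIT_RULES.items() if rule(ids_record)}

def rank_benefit_pages(pages, ids_record):
    """Order content so likely-eligible benefits appear first."""
    likely = prescreen(ids_record)
    return sorted(pages, key=lambda p: p["benefit"] in likely, reverse=True)

pages = [{"benefit": "Childcare Assistance"}, {"benefit": "SNAP"},
         {"benefit": "Medicaid"}]
record = {"household_income": 30_000, "num_children": 0}  # from the multi-agency IDS
print(rank_benefit_pages(pages, record))  # SNAP and Medicaid content ranked first
```

A transparent version of the same site would skip the IDS lookup entirely and order the pages chronologically or by the user’s own search terms – which is the alternative the amendment would require the state to build and maintain.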

This introduces a new compliance obligation for public-facing websites that rely on IDS data to offer personalization – they would have to develop and maintain an alternate version of the website that uses only transparent algorithms. Aside from the administrative burden, this requirement could hamper the state’s ability to provide the most relevant benefits information to users who opt out of opaque algorithms, making it harder for them to access information about the specific benefits they qualify for.

The language of KOSA will likely continue to change, but as currently drafted, the amendment leaves us with these questions:

Can users “affirmatively supply” IDS data to websites that rely on it to personalize information if the user expresses a desire to receive particular information?

If yes, then the IDS data is being provided “for express purpose of interaction with the platform,” allowing benefits dashboards to be personalized based on IDS data using transparent algorithms.

How may limiting public-facing benefits websites to only transparent algorithms affect government agencies’ ability to provide information to users based on likely eligibility for benefits? 

Would it be more difficult for government agencies to detect fraud in benefits applications if users were able to easily switch to an alternative transparent algorithm-based platform? 

Will the definition of “covered internet platform” be narrowed to better align with Congressional intent? 

While it is challenging to craft a definition that encompasses the complexities of social media without being too narrow or too broad (see discussion in this blog), definitions in bills like Utah H.B. 311 may serve as a better starting point for finding the balance between regulating social media and regulating technology as a whole.

Is this the right approach to addressing concerns about algorithmic transparency? 

Would providing users with more insight into the factors that influence algorithmic decision-making (as in the explainable AI, or XAI, approach discussed here) be an effective solution?