November 6, 2023
Morgan Sexton, Katherine Kalpos, and Amelia Vance
There has been lots of hype recently around artificial intelligence (AI), including its magical promises and potential risks. While it may be tempting to get swept up in the potential around AI and incorporate it into Integrated Data Systems (IDSs) right away, it is crucial to understand that AI is only as good as the information fed into it and the soundness of the algorithms that it relies on. Additionally, the implementation of AI into IDSs can create significant ethical and legal challenges. In this emerging issues update, we explore (1) what AI is, (2) how AI is being used by government agencies, (3) what incorporating AI into IDSs may look like in practice, and (4) practical considerations for IDS stakeholders who are considering incorporating AI into their IDSs.
What are algorithms and how do they relate to AI?
Algorithms are sets of instructions that detail what steps to take to arrive at a particular outcome (we recommend this brief video for a basic introduction to what algorithms are, and this video for a slightly more technical look at how they actually work). You can think of algorithms as recipes that computers carry out. For example, consider a basic recipe for making the perfect peanut butter and jelly sandwich. It may look something like this:
- Get two slices of bread, peanut butter, and strawberry jelly.
- Put peanut butter on one slice of bread.
- Put jelly on the other slice of bread.
- Stick the slices of bread together.
- Enjoy!
Pretty simple, right?
But what if a computer does not know to use a knife for spreading the peanut butter and jelly? Would the computer understand that the peanut butter and jelly are what “stick” the slices of bread together, or might it think that an adhesive, like glue, is necessary? Is it inherently obvious that “enjoy” actually means to eat the sandwich? Is it optimal to use crunchy or smooth peanut butter? Does everyone like strawberry jelly, or do some think that grape jelly makes a more perfect sandwich? While the recipe may seem simple to us, the computer might struggle without clear instructions. Algorithms get more complicated as more variables and uncertainties are introduced, which is why it has been said that “Algorithms can make systems smarter, but without adding a little common sense into the equation they can still produce some pretty bizarre results.”
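To make this concrete, here is a minimal, purely illustrative Python sketch of the sandwich recipe written as explicit instructions (the function name and steps are our own invention, not anyone's production code). Notice that every ambiguity above, such as which utensil to use, which jelly to buy, and what "enjoy" means, has to be resolved by whoever writes the instructions:

```python
def make_pb_and_j(jelly_flavor="strawberry", peanut_butter="smooth"):
    """Return the recipe as a list of explicit, unambiguous steps.

    Every default below (strawberry jelly, smooth peanut butter, using a
    knife to spread) is a choice the recipe's author made on the
    computer's behalf.
    """
    steps = [
        "Get two slices of bread, peanut butter, and jelly.",
        "Get a butter knife.",  # implicit in the human recipe, explicit here
        f"Use the knife to spread {peanut_butter} peanut butter on slice one.",
        f"Use the knife to spread {jelly_flavor} jelly on slice two.",
        "Press the two slices together; the spreads hold them in place.",
        "Eat the sandwich.",  # "enjoy" translated into a concrete action
    ]
    return steps


if __name__ == "__main__":
    for step in make_pb_and_j():
        print(step)
```

Even in this tiny example, the defaults (smooth peanut butter, strawberry jelly) are the author's preferences baked into the code, which is exactly the kind of embedded judgment discussed next.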
As Cathy O’Neil, author of Weapons of Math Destruction, explains, “[algorithmic] models are opinions embedded in mathematics.” Algorithms reflect the values and biases of their human creators who decide what factors matter and how to define them. There is no single, correct recipe for the perfect peanut butter and jelly sandwich. Instead, different people will come up with different instructions to achieve the same goal, and those instructions will ultimately reflect the biases of the human who wrote the recipe. From the exclusion of the knife as an important detail, to the assumption that peanut butter and jelly are obviously sticky enough to hold the bread together, and even the belief that PB&Js are enjoyable, every aspect of the recipe is a reflection of specific human judgment and assumptions.
AI is influenced by the biases embedded in its algorithms, as well as the biases in the underlying data used to train it. This is known as the “garbage in, garbage out” problem: for example, facial recognition software trained primarily on photographs of white men has performed less accurately on women and people of color.
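The “garbage in, garbage out” dynamic can be shown with a small, entirely synthetic sketch. In the hypothetical Python example below (fabricated data, not any real facial recognition system), a simple classifier is trained on data dominated by one group and then evaluated on each group separately; the underrepresented group typically sees noticeably lower accuracy:

```python
# Synthetic illustration of "garbage in, garbage out": a model trained on data
# dominated by one group tends to be less accurate for the underrepresented
# group. The data here is fabricated purely for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic feature vectors and labels for one group.

    Both groups follow the same kind of rule, but centered differently,
    so a single model fit mostly to group A will misjudge group B.
    """
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(scale=1.0, size=n) > shift * 5).astype(int)
    return X, y

# Training data: group A dominates, group B is barely represented.
Xa_train, ya_train = make_group(2000, shift=0.0)
Xb_train, yb_train = make_group(50, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa_train, Xb_train]),
    np.concatenate([ya_train, yb_train]),
)

# Evaluate on held-out data for each group separately.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=2.0)
print("Accuracy for well-represented group A:", round(model.score(Xa_test, ya_test), 2))
print("Accuracy for underrepresented group B:", round(model.score(Xb_test, yb_test), 2))
```

The model is not "malicious"; it simply never saw enough of group B to learn a rule that works for them, which is the core of the training-data bias problem.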
How are government agencies using AI?
Multiple government agencies are currently turning to AI to help accomplish a wide variety of tasks. Orly Lobel, Warren Distinguished Professor of Law at the University of San Diego, writes:
Agencies are already increasingly using automated tools to make decisions about enforcement, caseload management, benefits, and the application of rules. Agencies as varied as the IRS, BLS, SSA, FDA, EPA, SEC, FCC, and USPTO employ automated systems for governmental tasks once done by humans. For example, the Department of Veterans Affairs uses AI in administering veteran benefits while the Department of Education uses an automated chatbot to help navigate student loan applications. The Department of Health and Human Services has sponsored the creation of an AI-based tool to detect illegal opioid sellers. The Food and Drug Administration has similarly begun using AI in criminal investigations and to mine through online reports on unsafe food.
While there are many ways that governmental entities are adopting and using AI (see this inventory, parts 1 and 2 of this report, and pages 12-14 of this report for more), our focus is on a particular use case: how governmental agencies are incorporating AI into their IDSs and systems utilizing the data housed within an IDS.
For example, imagine a hypothetical state-run, public-facing, informational website about state-level government benefits (similar to the federal government’s Benefits.gov). The state wants to personalize the website’s content to improve the user experience by presenting information about the benefits a user is most likely to be eligible for at the top of the screen. To accomplish this, the state wants to use individual-level data within a multi-agency IDS to pre-screen a user’s eligibility for various benefits and prioritize content based on which benefits are most likely to be relevant to that individual. The state would most likely implement AI to make this pre-screening accurate and efficient at scale.
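To illustrate what this might look like under the hood, here is a hypothetical Python sketch. The benefit names, likelihood values, and function are invented for this example and are not any real state's system; in practice the likelihoods would come from a model run against individual-level IDS records, and the website would simply order its content by them:

```python
# Hypothetical sketch of eligibility-based content ranking for a benefits
# website. Benefit names and likelihood values are invented; in practice the
# likelihoods would come from a model run against individual-level IDS records.
from dataclasses import dataclass

@dataclass
class BenefitEstimate:
    name: str
    eligibility_likelihood: float  # model's estimate that the user qualifies

def rank_benefits_for_user(estimates: list[BenefitEstimate]) -> list[str]:
    """Order benefit pages so the most likely relevant ones appear first."""
    ranked = sorted(estimates, key=lambda e: e.eligibility_likelihood, reverse=True)
    return [e.name for e in ranked]

user_estimates = [
    BenefitEstimate("Unemployment Compensation", 0.15),
    BenefitEstimate("SNAP", 0.82),
    BenefitEstimate("Child Care Assistance", 0.64),
]
print(rank_benefits_for_user(user_estimates))
# -> ['SNAP', 'Child Care Assistance', 'Unemployment Compensation']
```

The ranking step itself is trivial; the consequential part is the model that produces the likelihoods and the IDS data it is trained on, which is where the bias and accuracy concerns discussed below come in.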
There are many ways that incorporating AI into IDSs, and into systems that use the data housed within an IDS, can benefit government agencies. Lobel explains that “[a]lgorithms reveal patterns often unseen by humans” and that “[a]utomation and digitization can also alleviate the burdens of administrative paperwork, which a 2021 Presidential Executive Order describes as a burden that exceeds nine billion hours annually with regard to federal agencies.” Utilizing AI can empower agencies to help more people, more efficiently, without having to increase staffing. AI can also quickly handle repetitive, procedural, and standardized tasks that take humans a long time to complete (e.g., filling out basic questionnaires when applying for unemployment compensation), freeing up time for people to devote to tasks that require more thoughtful input (e.g., actual human conversations).
That being said, there is significant disagreement about AI’s benefits, especially in terms of its impact on equity. On one hand, Lobel argues that “the upsides of AI are immense. Automated decision-making is often fairer, more efficient, less expensive, and more consistent than human decision-making,” a view that, according to experts like former FTC Commissioner Noah Phillips, should be more closely examined. On the other hand, Commissioner Phillips also supported the conclusion of a 2022 FTC report, which stated that government and companies should “exercise great caution in either mandating the use of, or over-relying on, [AI] tools” and that “humans should continue to be involved.” In a January 2023 brief, the Center for Democracy and Technology stated that an “important lesson for the government is that AI is not necessarily objective or fair compared to alternatives. One reason for this is that many uses of AI involve data, but data is inherently biased. This is especially true for government agencies that want to change historical trends in data like student achievement gaps or unhoused rates.”
A report from Upturn and Omidyar Network finds that “[t]oday’s automated decisions are socio-technical in nature: They emerge from a mix of human judgment, conventional software, and statistical models. The non-technical properties of these systems — for example, their purpose and constraining policies — are just as important, and often more important, than their technical particulars.” This is especially true in cases where AI is used to generate predictive risk scores to aid human decision-making in governmental agencies (as seen in the screening tool discussed in depth in the next section).
On a more theoretical level, Virginia Eubanks, author of Automating Inequality, explained how governmental agencies are using AI to ration access to benefits:
“Often [government automated decision-making] tools are created in [a] context where the assumption is that there are an inevitable shortage of resources, that there will never be enough resources, and that what we need these systems for is to make more neutral objective decisions about who gets access to their basic human rights. So I think of that as like these tools are sort of addicted to doing digital triage often, but triage is a really bad metaphor for what these tools are supposed to do. Triage is really only useful when there are more resources coming in the future. And if we’re using these tools to make decisions that acknowledge and rationalize and justify some people getting access to their human rights while others do not, then what we are doing is actually digital rationing, not digital triage.” (starting at 42:57)
It’s likely that governmental entities will continue to explore how, and under what circumstances, AI can be incorporated into their technologies (including multi-agency IDSs). As noted in an article from the Brookings Institution, “Government organizations more rapidly introduce new [information and communication technology]; more importantly, they develop standards and rules for its use.” There has been a lot of action at the federal level focused on understanding the emergence of AI and exploring how it could be regulated (see the Appendix for a high-level overview of some recent activity). It’s clear that AI is a top priority right now for federal and state regulators, likely including those responsible for overseeing the operations of state-level IDSs.
A deeper look at AI in IDS
Allegheny County, Pennsylvania provides an illustrative case study on the careful use of AI within a multi-sector IDS to make extremely important decisions. The Allegheny County Department of Human Services (DHS) started using the Allegheny Family Screening Tool (AFST) in 2016. The AFST is “a predictive risk modeling tool that rapidly integrates and analyzes hundreds of data elements for each person involved in an allegation of child maltreatment,” resulting in a “‘Family Screening Score’ that predicts the long-term likelihood of future involvement in child welfare”–essentially a prediction of whether a child “will need to be removed from the home in the future.”
The tool is rarely used on its own: a mandatory investigation of a family is required only when a Family Screening Score is “at the highest levels,” and the score is “not used to make investigative or other child welfare decisions and is not shared beyond call screening.” On its webpage, DHS describes the process before the tool was implemented in 2016: “any allegation of maltreatment (a referral) received at the child abuse call center required staff receiving the referrals – and their supervisors – to manually access a myriad of data and information to help decide whether or not to investigate the allegation (‘screen in’ and investigate or ‘screen out’ and offer relevant community resources).” This manual process meant that “27% of highest risk cases were being screened out and 48% of the lowest risk cases were being screened in.”
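The workflow the county describes can be sketched, in heavily simplified form, as a score-plus-human-judgment rule. The Python below is not the AFST's actual logic; the 1-to-20 scale, the threshold, and the function are our own illustrative assumptions about how a "mandatory screen-in at the highest levels" rule might work:

```python
# Simplified, hypothetical sketch of a call-screening workflow in which a
# predictive risk score informs, but does not replace, a human decision.
# The scale, threshold, and categories are invented for illustration only.
MANDATORY_SCREEN_IN_THRESHOLD = 18  # hypothetical "highest levels" cutoff on a 1-20 scale

def screening_recommendation(risk_score: int, screener_judgment: str) -> str:
    """Combine a model's risk score with the human screener's decision."""
    if risk_score >= MANDATORY_SCREEN_IN_THRESHOLD:
        # At the highest score levels, an investigation is required regardless
        # of the screener's initial judgment.
        return "screen in (mandatory investigation)"
    # Otherwise the score only informs the human decision; it does not make it.
    return screener_judgment

print(screening_recommendation(risk_score=19, screener_judgment="screen out"))
print(screening_recommendation(risk_score=7, screener_judgment="screen out"))
```

Even in this toy version, the choice of threshold, and of which data feed the score in the first place, carries the kind of embedded human judgment that critics of the AFST focus on.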
Even critics of the system, such as algorithmic discrimination expert Virginia Eubanks, have found that the AFST was developed through an intentional, considered, and ethical process:
“In many ways, the AFST is the best-case scenario for predictive risk modeling in child welfare. The design of the tool was open, participatory, and transparent. Elsewhere, child welfare prediction systems have been designed and implemented by private companies with very little input or discussion from the public. Implementation in Allegheny County has been thoughtful and slow. The goals of the AFST are intentionally limited and modest. The tool is meant to support human decision-making, not replace it.” (Automating Inequality, p. 171)
But despite its well-intentioned design and implementation, critics like Eubanks have argued that the AFST still perpetuates inequalities, especially in the form of “poverty profiling”:
“Like racial profiling, poverty profiling targets individuals for extra scrutiny based not on their behavior but rather on a personal characteristic: living in poverty. Because the model confuses parenting while poor with poor parenting, the AFST views parents who reach out to public programs as risks to their children.” (Automating Inequality, p. 158)
Following this criticism, Allegheny County released a statement claiming that “[Automating Inequality] has numerous inaccuracies and several key points require correction,” noting that, in fact, a family’s receipt of “public benefits (e.g. SNAP and TANF)” actually lowers that family’s AFST score, and that, “[n]ationally, the disproportionate involvement in child welfare of those impacted by poverty is well documented.” Eubanks explained that part of her critique was focused on the fact that Allegheny County’s system included data only about “families using public services not on those that access private resources for parenting support. Because this will result in higher risk scores and more scrutiny of poor and working-class families, I believe the system is unjust and discriminatory.”
Questions remain regarding the AFST’s variable weights, error rate, and potential limitations on human discretion in the screening process, but the more important question is at a higher level: should AI ever be used as part of a determination about which families should be investigated? An article from the New York Times, “Can an Algorithm Tell When Kids Are in Danger?,” features viewpoints from both sides. While pediatrician Rachel Berger claimed that predictive analytics can bring objectivity to extremely subjective child protective decisions, former commissioner of NYC’s Administration for Children’s Services Gladys Carrión expressed concerns about infringing on individuals’ civil liberties “under the guise that we are going to help them.”
AI, Discrimination, and Existing Laws
Consideration of AI integrations into IDSs needs to include not only ethical review, but also a review of existing legal requirements. Governmental entities are subject to multiple anti-discrimination requirements, including the Equal Protection Clause of the U.S. Constitution, Title VII of the Civil Rights Act of 1964, Title IX of the Education Amendments of 1972, and the Americans with Disabilities Act, among many others. These rules against discrimination do not go away in the context of new technologies. In the press release announcing the FTC, DOJ, CFPB, and EEOC’s Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems, FTC Chair Lina Khan said: “claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books.” IDS stakeholders should watch closely for more federal and state agency guidance as AI is increasingly integrated into the technologies they are already using or considering adopting.
Despite the potential legal peril that automated systems can pose, humans, of course, have their own significant implicit and explicit biases that can lead to discrimination. Some scholars have advocated for evaluating the effects of AI decisions as compared to human judgment. Lobel gives the example that “expecting autonomous vehicles to drive with zero crashes is less useful (and indeed a riskier path) than comparing human driving with self-driving vehicles to determine the relative benefit. We need to critically consider the limits and risks from both human and algorithmic decision-making.” Emerging technologies like AI have shown real promise for improving and increasing accessibility under anti-discrimination laws (check out some examples of how here and here). Rather than expecting perfection from AI incorporated into IDSs, agencies should weigh the relative benefits of using AI against using only non-technical alternatives when evaluating the effectiveness of these systems.
It is clear that regulators are evaluating legal obligations (like those prohibiting the government from discriminating against individuals on various bases) in the context of emerging technologies (such as implementations of AI in IDSs and systems that rely on data housed within IDSs). So how should IDS stakeholders wanting to implement AI in their IDSs proceed?
The Path Forward
Seeta Peña Gangadharan, an Associate Professor at the London School of Economics and Political Science, explained: “Agencies are going to turn to a tool that promises efficiencies and cost savings like AI. Right? They are also sold as tools that will eliminate human bias or human error.” It is clear why government agencies are turning to AI; what is less clear is the process that should be in place to ensure they incorporate it in the right way.
Agencies that are considering implementing AI in IDSs should start by considering why they want to incorporate AI in the first place. As Esther Dyson, founder of Wellville, advised: “Don't leave hold of your common sense. Think about what you're doing and how the technology can enhance it. Don't think about technology first.” (source). While identifying the underlying motivations and potential benefits of incorporating AI into IDSs is a good starting point, the analysis cannot stop there. Rather, agencies must also consider the potential consequences and negative impacts (including novel harms) that AI integration may have on individuals and society as a whole.
As Jennifer Pahlka, former deputy chief technology officer in the Obama Administration, noted in her discussion at BenCon 2023: software, design, and government are all made by and for people (starting at 25:31). Government agencies should seek to center people in the underlying software decisions and design process for any AI incorporated in their IDSs to ensure that these systems provide an overall benefit to the populations they are intended to serve.
IDS stakeholders incorporating AI in IDSs must carefully think through many complex considerations to ensure their use of AI is ethical, legal, and technically sound and achieves the intended goal(s).
We recommend these great resources to help you start thinking through this analysis:
- Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, AI Now Institute
- Algorithmic Equity Toolkit, ACLU of Washington
- AI Guide for Government: A living and evolving guide to the application of Artificial Intelligence for the U.S. federal government, General Services Administration, Centers of Excellence
- Purpose, Process, and Monitoring: A New Framework for Auditing Algorithmic Bias in Housing & Lending, National Fair Housing Alliance
- Automated Decision-Making Systems and Discrimination: Understanding causes, recognizing cases, supporting those affected, AlgorithmWatch
- Artificial Intelligence and Algorithmic Fairness Initiative, Equal Employment Opportunity Commission
- Artificial Intelligence Risk Management Framework (AI RMF 1.0), Department of Commerce, National Institute of Standards and Technology
- AISP Working Paper: Addressing Racial and Ethnic Inequities in Human Service Provision, Actionable Intelligence for Social Policy
- The Privacy Expert's Guide to Artificial Intelligence and Machine Learning, Future of Privacy Forum
- Guidance on AI and data protection, United Kingdom Information Commissioner’s Office
How DISC can help
The Data Integration Support Center (DISC) at WestEd, partnering with PIPC, can support public agencies’ ongoing efforts to evaluate privacy concerns regarding the use of AI in IDSs. DISC offers technical assistance to public agencies free of cost. For more information or technical assistance, reach out to us through our website.
APPENDIX: Policymaker Actions Related to AI
The White House
- The White House Office of Science and Technology Policy (OSTP) released a “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People”
- “[T]he Biden-Harris Administration has secured voluntary commitments from [Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI] to help move toward safe, secure, and transparent development of AI technology.” (fact sheet).
- OSTP released the “National Artificial Intelligence Research and Development Strategic Plan: 2023 Update”
- OSTP released a “Request for Information [on] National Priorities for Artificial Intelligence”
Federal Agencies
- The Justice Department (DOJ), Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), and Equal Employment Opportunity Commission (EEOC) released a “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems”.
- The FTC sent a 20-page letter to OpenAI (the creator of ChatGPT) asking for records regarding consumer protection (more details here)
- The Department of Education, Office of Educational Technology released “Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations”.
Congressional Hearings
- “Oversight of A.I.: Rules for Artificial Intelligence” - Subcommittee on Privacy, Technology, and the Law, Senate Committee on the Judiciary (May 16, 2023)
- “Artificial Intelligence and Human Rights” - Subcommittee on Human Rights and the Law, Senate Committee on the Judiciary (June 13, 2023)
- “Artificial Intelligence: Advancing Innovation Towards the National Interest” - House Committee on Science, Space, and Technology (June 22, 2023)
- “Oversight of A.I.: Principles for Regulation” - Subcommittee on Privacy, Technology, and the Law, Senate Committee on the Judiciary (July 25, 2023)
Federal Bills
- Transparent Automated Governance Act (TAG Act)
- ASSESS AI Act
- No Section 230 Immunity for AI Act
- Kids Online Safety Act (KOSA) – as amended by the Filter Bubble Transparency Act
State Bills
The sheer amount of activity in this area demonstrates the importance of AI to federal regulators and policymakers. But interest in AI doesn’t stop at the federal level – state-level government officials are also taking notice, as demonstrated by bills like “An Act drafted with the help of ChatGPT to regulate generative artificial intelligence models like ChatGPT” (MA) and “An Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy” (CT) that were introduced this year.
Disclaimer
This report provides general information, not legal advice, and following the recommendations or tips within does not guarantee compliance with any particular law.