Child & Student Privacy Issues: AI
A collection of resources for unlocking the benefits of AI in education in an ethical, legal, and effective way without compromising the privacy of children and students.
Resources Authored by PIPC Staff
(Future of Privacy Forum) School safety tools and initiatives must be evidence-based, have clear policies, and be narrowly tailored to specific safety objectives. Reactive, non-evidence-based protocols can have unintended consequences and increase the workload on already strained administrators. Increased surveillance can corrode learning environments and stifle creativity. Careful consideration of privacy and equity guardrails is needed to ensure vulnerable communities are not disproportionately impacted by school safety strategies.
(Future of Privacy Forum) Schools are using self-harm monitoring systems to identify and assist students at risk of mental health crises. Despite their increased use, self-harm monitoring systems are not proven to effectively identify students in need of assistance. Potential harms include misidentification, unfair treatment due to stigma or bias, privacy violations, inappropriate contact with law enforcement or social services, a chilling effect on students seeking help, and exacerbation of existing inequities. Before acquiring self-harm monitoring technologies, schools must have sufficient mental health resources to support identified students, robust response plans that go beyond identification, and well-developed policies governing student data use and storage.
(Northwest Commission on Colleges and Universities) FERPA was originally enacted to protect student records and ensure parents and eligible students can access them. Technological advancements spurred Congress to further reinforce FERPA with more safeguards around computerized records. FERPA does not explicitly cover automated decision-making algorithms, machine learning models, and data analytics techniques; however, privacy protections within the act - such as rights of access and the ability to challenge inaccurate information or have information deleted - still apply to AI.
(Northwest Commission on Colleges and Universities) FERPA prohibits educational institutions from disclosing personally identifiable information (PII) in education records without consent unless an exception applies. PII includes direct identifiers (e.g., name, Social Security number) as well as indirect identifiers (e.g., age, gender). With AI and other technologies becoming increasingly ubiquitous in educational institutions, the amount of PII in education records has also increased. De-identified data does not count as PII under FERPA; however, true de-identification is much trickier than it sounds, because indirect identifiers can often be combined to re-identify individuals.
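The re-identification risk from indirect identifiers can be sketched with a minimal, entirely hypothetical example: a dataset stripped of names can still be linked to a public roster when a combination of quasi-identifiers (here age, gender, and ZIP code) is unique to one person. All names and values below are invented for illustration.

```python
# Hypothetical illustration: removing direct identifiers (names, SSNs)
# does not guarantee de-identification if indirect identifiers remain
# linkable to outside data sources.

# "De-identified" student records: names stripped, quasi-identifiers kept.
deidentified_records = [
    {"age": 17, "gender": "F", "zip": "98101", "gpa": 3.9},
    {"age": 18, "gender": "M", "zip": "98102", "gpa": 2.4},
]

# Publicly available auxiliary data (e.g., a club roster) that includes names.
public_roster = [
    {"name": "Alice Park", "age": 17, "gender": "F", "zip": "98101"},
    {"name": "Bob Lee", "age": 18, "gender": "M", "zip": "98102"},
]

def reidentify(records, roster):
    """Link 'anonymous' records back to names via quasi-identifiers."""
    matches = []
    for rec in records:
        candidates = [
            p["name"] for p in roster
            if (p["age"], p["gender"], p["zip"])
            == (rec["age"], rec["gender"], rec["zip"])
        ]
        if len(candidates) == 1:  # unique combination => re-identified
            matches.append((candidates[0], rec["gpa"]))
    return matches

print(reidentify(deidentified_records, public_roster))
```

Because each age/gender/ZIP combination is unique here, every "de-identified" record is linked back to a name, which is why FERPA guidance treats indirect identifiers with the same caution as direct ones.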
(Northwest Commission on Colleges and Universities) Higher education institutions are currently using AI to classify, target, track, and anticipate student behavior. However, the many different ways AI is used within higher education, and the impact of that use, are not clearly understood. This infographic illustrates the variety of ways higher education institutions are currently using AI.
Resources on AI and Child & Student Privacy
Examines the opportunities, benefits, and risks of using AI in education. The report recognizes that AI can benefit educators and students while also identifying potential risks associated with AI, including algorithmic bias, and emphasizing the importance of trust, safety, and proper guardrails to protect stakeholders. Recommendations include continuing partnerships to keep humans in the loop, designing AI using modern learning principles, prioritizing trust, involving educators, focusing R&D on addressing context and enhancing trust and safety, and developing education-specific guidelines and guardrails.
Discusses the opportunities and challenges of using generative AI (like ChatGPT) in education. Reasonable steps should be taken to prevent negative outcomes from the use of generative AI in education, such as by protecting personal data, strengthening cybersecurity, and preventing exposure to harmful content produced by the technology.
(Taylor & Francis Online) The prevalence of large language models (LLMs), one type of AI, has prompted many discussions about cheating, the inadequacy of essays as an assessment method, educational ethics, and datafication in education. LLMs have been associated with producing content that promotes racism and contested ideologies, as well as with generating false or potentially dangerous information. The discussion of LLMs in education should be grounded in the context of ongoing critical scholarship.
(The Intercept) Many companies have exaggerated the capabilities of their AI gun detection technologies. Despite limited research on the accuracy and effectiveness of AI gun detection technologies, and on whether they actually prevent shootings, over 65 school districts have spent more than $45 million on these technologies since 2018. The FTC has warned companies not to exaggerate the effectiveness of their AI products, and multiple law firms have started investigations.
Resources on AI
Hundreds of companies are providing surveillance and decision-making technologies for government use. The Algorithmic Equity Toolkit (AEKit) is designed to help identify, analyze, and assess the impacts of such technologies. The AEKit has four components: a yes/no flowchart to identify automated decision systems and surveillance tools, a map defining key terms, open-ended prompts to explore potential impacts, and questions to ask government employees and vendors.
AI is being used in a wide variety of industries, and it presents new ethical and practical challenges, as well as privacy concerns related to sensory inputs, big data, and multi-layered processing models. It is critical that privacy considerations be addressed early in the design process, with ongoing monitoring and review for existing AI systems. Data governance policies must ensure that individuals have discretion, awareness, and control over their own information.
AI has the potential to improve and speed up benefits delivery, but it can also harm beneficiaries, perpetuate biases, and lead to transparency issues. Governments at all levels must address the ethical and equitable use of AI and automation in benefits delivery. This page provides resources to help practitioners better understand and evaluate AI tools.
(Princeton University) Many claims around AI are inaccurate "snake oil." AI applications commonly fall into three categories: perception problems, automated judgment tasks, and predicting social outcomes. In the last of these, AI has failed to outperform simpler, more transparent manual scoring methods.