Child & Student Privacy Issues: AI

A collection of resources for unlocking the benefits of AI in education in an ethical, legal, and effective way without compromising the privacy of children and students.

AI-Related Resources Authored by PIPC Staff

Higher education institutions currently use AI to classify, target, track, and anticipate student behavior, yet the many ways AI is used in higher education, and the impact of that use, are not well understood. This infographic illustrates how higher education institutions are currently using AI.

This article identifies practical educational uses of tools and systems associated with AI and big data, offers clear principles for using AI responsibly, and makes recommendations for how instructors can effectively teach students about the ethical and privacy risks of these technologies.

This report explores the complex landscape of self-harm monitoring technology in schools, discusses the technology's privacy and equity concerns, and offers resources to help schools and districts protect students' privacy when monitoring for self-harm.

Additional Resources

Hundreds of companies provide surveillance and decision-making technologies for government use. The Algorithmic Equity Toolkit (AEKit) is designed to help identify, analyze, and assess the impacts of such technologies. The AEKit has four components: a yes/no chart to identify automated decision systems and surveillance tools, a map defining key terms, open-ended prompts for exploring potential impacts, and questions to ask government employees and vendors.
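To make the identification step concrete, here is a minimal Python sketch of a yes/no screening flow in the spirit of the AEKit's first component. The screening questions and category labels are illustrative assumptions, not the toolkit's actual wording.

```python
# Illustrative yes/no screening flow, loosely modeled on the AEKit's
# identification chart. The questions and labels below are hypothetical
# stand-ins, not the toolkit's actual text.

def screen_technology(collects_data_about_people: bool,
                      informs_or_makes_decisions: bool) -> str:
    """Classify a government technology for further equity review."""
    if collects_data_about_people and informs_or_makes_decisions:
        return "automated decision system with surveillance capability"
    if informs_or_makes_decisions:
        return "automated decision system"
    if collects_data_about_people:
        return "surveillance tool"
    return "likely out of scope for this toolkit"

# Example: a tool that both collects data about people and informs decisions.
print(screen_technology(collects_data_about_people=True,
                        informs_or_makes_decisions=True))
```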

AI is being used across a wide variety of industries, and it presents new ethical and practical challenges as well as privacy concerns related to sensory inputs, big data, and multi-layered processing models. Privacy considerations must be addressed early in the design process, with ongoing monitoring and review of existing AI systems. Data governance policies must ensure that individuals have discretion, awareness, and control over their own information.
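As one way to picture what "discretion, awareness, and control" can mean in practice, the sketch below gates access to a student record on recorded consent and logs every access attempt so the individual can later review it. The record layout and function names are assumptions for illustration, not a reference implementation.

```python
# Minimal sketch of a consent-aware data access gate. The record layout
# and names are illustrative assumptions, not a standard API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class StudentRecord:
    student_id: str
    data: dict
    consented_purposes: set                          # purposes the student approved
    access_log: list = field(default_factory=list)   # reviewable by the student

def access_record(record: StudentRecord, requester: str, purpose: str):
    """Return data only for consented purposes; log every attempt."""
    allowed = purpose in record.consented_purposes
    record.access_log.append({
        "requester": requester,
        "purpose": purpose,
        "allowed": allowed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if not allowed:
        raise PermissionError(f"No consent on file for purpose: {purpose}")
    return record.data

record = StudentRecord("s-123", {"gpa": 3.6}, {"academic-advising"})
access_record(record, "advisor", "academic-advising")   # permitted and logged
# access_record(record, "vendor", "ad-targeting")       # raises PermissionError
```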

AI has the potential to improve and speed up benefits delivery, but it can also harm beneficiaries, perpetuate biases, and lead to transparency issues. Governments at all levels must address the ethical and equitable use of AI and automation in benefits delivery. This page provides resources to help practitioners better understand and evaluate AI tools.

(Princeton University) Many claims about AI are inaccurate "snake oil". AI applications commonly fall into three categories: perception problems, automated judgment tasks, and predicting social outcomes. In the last category, AI has not outperformed simpler, more transparent manual scoring methods.
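To illustrate what a transparent, manual scoring method looks like in contrast to an opaque predictive model, here is a toy point-based score whose every factor and weight is visible and therefore auditable. The factors and weights are invented for illustration; they are not drawn from the Princeton work.

```python
# Toy transparent scoring rule: every factor and weight is explicit,
# so the score can be inspected, audited, and contested. The factors
# and weights here are invented for illustration only.

WEIGHTS = {
    "missed_appointments": 2,   # each missed appointment adds 2 points
    "prior_incidents": 3,       # each prior incident adds 3 points
}

def manual_score(missed_appointments: int, prior_incidents: int) -> int:
    return (WEIGHTS["missed_appointments"] * missed_appointments
            + WEIGHTS["prior_incidents"] * prior_incidents)

print(manual_score(missed_appointments=1, prior_incidents=2))  # -> 8
```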