A Multidimensional Ethical Framework for Evaluating AI Surveillance in Schools: A Systematic Review


Abstract

Background/Objective: Facial recognition, engagement analytics, and automated proctoring have expanded surveillance in educational settings, but the growth of artificial intelligence (AI) surveillance technologies raises serious ethical issues that current legal frameworks cannot address. This systematic review examines the ethical consequences of AI surveillance in schools and proposes a multidimensional ethical framework for assessing these technologies.
Methods: A PRISMA-based systematic literature review was carried out. Electronic database searches covering January 2015 to December 2024 were conducted in PubMed, ERIC, Web of Science, and IEEE Xplore. Studies investigating the use of AI surveillance in education, relevant ethical theories, and their implications were considered. Data extraction focused on ethical issues, stakeholder attitudes, and regulatory factors.
Results: Analysis of 47 peer-reviewed articles identified five fundamental ethical principles central to assessing AI surveillance: proportionality, transparency, justice and fairness, data minimization, and student autonomy. Research shows that 67% of surveyed students report discomfort with facial recognition, and that algorithms identify members of marginalized communities with disproportionate error rates: error rates for darker-skinned individuals have been measured at up to 34.7 percent, versus under 1 percent for lighter-skinned individuals.
Conclusions: Contemporary AI surveillance practices in schools often fail to satisfy established ethical principles. The proposed framework offers educators, administrators, and policymakers practical guidance for evaluation while balancing innovation, students' rights, and educational integrity.

Keywords: Artificial Intelligence, Facial Recognition, AI Surveillance, Educational Ethics, Algorithmic Bias, Student Privacy

Introduction

Background and Context

The world is witnessing rapid adoption of artificial intelligence (AI) technologies in educational institutions, fundamentally changing administrative operations and security practices1,2. The global education technology market has expanded significantly, and AI-based surveillance systems have become a common feature in schools across North America, Europe, and the Asia-Pacific region3. Prominent applications include facial recognition for attendance and security, automated proctoring tools that check examination integrity, and affective computing applications that estimate student engagement4.

Although these technologies promise improved safety and operational efficiency, empirical data indicate serious ethical issues that remain largely unaddressed5. A survey of 1,500 students in 12 U.S. high schools found that facial recognition technology made 67% of respondents uncomfortable, citing privacy concerns and the feeling that school officials treated them as suspects rather than students6. Similarly, a United Kingdom study found that 72 percent of parents were unaware that their children's schools used AI-based monitoring systems7.

Problem Statement and Research Question

Present-day regulatory measures such as the Family Educational Rights and Privacy Act (FERPA) of the United States8 and the General Data Protection Regulation (GDPR) of the European Union9 predate the spread of biometric surveillance and algorithmic decision-making in education. These frameworks fail to address the distinctive ethical dilemmas of AI surveillance, including algorithmic bias, psychological effects on student development, and the erosion of trust in educational relationships10,11.

This systematic review addresses the following research question: What ethical principles should guide the assessment and implementation of AI surveillance technologies in educational institutions, and how can these principles be translated into a feasible framework for schools to use?

Significance and Objectives

This review contributes to the field by synthesizing the available literature on AI ethics and educational ethics into an integrated model tailored to school environments. Its goals are threefold: (1) to identify systematic evidence in the peer-reviewed literature of ethical issues related to AI surveillance in schools; (2) to construct a multidimensional ethical framework integrating the perspectives of multiple stakeholders, including students, parents, teachers, and school administrators; and (3) to offer practical implementation guidelines that account for the technological, economic, and cultural circumstances of diverse educational institutions.

Scope and Limitations

This review addresses AI surveillance technologies used in K-12 schools, namely facial recognition, engagement analytics, and automated proctoring. Although the framework is grounded in the U.S. regulatory context (FERPA), international perspectives, including the European Union's GDPR, China's Personal Information Protection Law (PIPL), and emerging models in other jurisdictions, are incorporated to broaden its applicability. Limitations include the rapid pace of change in AI technology and the relative dearth of longitudinal empirical research on the psychological effects of school surveillance.

Methodology

Search Strategy

Four electronic databases were systematically searched: PubMed, ERIC (Education Resources Information Center), Web of Science, and IEEE Xplore. Publications from January 2015 through December 2024 were included. Search terms combined the following concepts: (artificial intelligence OR AI OR facial recognition OR biometric OR automated proctoring) AND (school OR education OR K-12 OR classroom) AND (ethics OR privacy OR surveillance OR rights). Searches were refined using Boolean operators, and the reference lists of included articles were screened manually to identify additional relevant studies.

Inclusion and Exclusion Criteria

Studies were selected if they: (1) investigated AI surveillance technologies in K-12 or higher education contexts; (2) addressed ethical, legal, or social aspects of educational surveillance; (3) were published in peer-reviewed journals or conference proceedings; and (4) were published in English. Studies were excluded if they: (1) focused on technical implementation rather than ethical considerations; (2) addressed non-educational surveillance contexts; or (3) were opinion pieces, editorials, or other non-scholarly materials.

Data Extraction and Synthesis

Extracted data included authors, year of publication, study design, selection criteria, type of AI surveillance studied, ethical issues identified, stakeholder perspectives, and regulatory considerations. A thematic synthesis approach was used: extracted data were coded and sorted into themes reflecting ethical principles. Two researchers coded the data independently, resolving discrepancies through discussion and consensus. Inter-rater reliability was assessed using Cohen's kappa (κ = 0.84).
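For reference, Cohen's kappa compares observed coder agreement with the agreement expected by chance given each coder's marginal label frequencies. A minimal sketch in Python; the two coders' labels below are illustrative, not the review's actual coding data:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning categorical codes."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independence, from each coder's marginal rates.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two reviewers to ten article excerpts.
a = ["privacy", "bias", "privacy", "autonomy", "bias",
     "privacy", "autonomy", "bias", "privacy", "bias"]
b = ["privacy", "bias", "privacy", "autonomy", "privacy",
     "privacy", "autonomy", "bias", "privacy", "bias"]
print(round(cohens_kappa(a, b), 2))  # → 0.84 for this illustrative data
```

Values above roughly 0.80 are conventionally taken to indicate strong agreement, which is why the κ = 0.84 reported here supports the reliability of the coding.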

Quality Assessment

Study quality was evaluated using the Mixed Methods Appraisal Tool (MMAT) for empirical studies and SANRA (the Scale for the Assessment of Narrative Review Articles) for theoretical articles. Only studies scoring at least 50% on the quality criteria were retained for the final synthesis. Of 127 articles initially identified, 47 met the inclusion criteria and quality threshold.

Literature Review

The Evolving Landscape of AI Ethics

The rapid development of AI technologies has generated a substantial body of scholarly work on ethical governance. Central concepts in this literature are transparency and explainability, identified as prerequisites for trustworthy AI systems12,13. In surveillance contexts, opaque algorithmic decision-making makes accountability difficult and limits the ability of affected individuals to contest outcomes14.

Algorithmic bias is an ethical concern of particular importance. Buolamwini and Gebru (2018) demonstrated that commercial facial analysis systems had error rates of up to 34.7 percent for darker-skinned females versus 0.8 percent for lighter-skinned males. These disparities have been replicated across commercial systems in subsequent research15. In schools, such biases can systematically disadvantage minority students, leading to disproportionate suspensions or denials of entry16.
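Audits of the kind behind these findings typically begin by stratifying error rates across demographic groups in a labeled test set. A minimal sketch, assuming a hypothetical audit log whose group labels and fields are illustrative rather than drawn from any real system:

```python
from collections import defaultdict

def error_rates_by_group(audit_log):
    """Misidentification rate per demographic group from labeled audit records.

    Each record is (group, predicted_match, actual_match); the field names
    and groups are hypothetical stand-ins for an auditor's test data.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in audit_log:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative audit sample (not real data): 100 records per group.
log = ([("group_a", True, True)] * 96 + [("group_a", True, False)] * 4
       + [("group_b", True, True)] * 80 + [("group_b", True, False)] * 20)

rates = error_rates_by_group(log)
print(rates)  # → {'group_a': 0.04, 'group_b': 0.2}
```

Reporting per-group rates rather than a single aggregate accuracy figure is what makes disparities like the 34.7 versus 0.8 percent gap visible in the first place.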

Educational Ethics and Student Welfare

The literature on educational ethics emphasizes institutions' duty to protect students while fostering their autonomy and dignity17,18. Critical pedagogy researchers argue that surveillance contradicts the core aim of education because it positions students as subjects to be controlled rather than agents to be empowered19,20. Empirical evidence supports these concerns: a longitudinal cohort study of 2,300 students found that perceived surveillance was negatively associated with both intrinsic motivation (r = -0.42, p < .001) and creative risk-taking (r = -0.38, p < .001)21.

The psychological impact of surveillance on adolescent development deserves particular attention. Studies show that awareness of surveillance induces self-censorship and reduces exploratory learning22. A study of high school students subject to classroom surveillance reported higher anxiety (Cohen's d = 0.67) and a lower sense of belonging (Cohen's d = 0.54) relative to control groups23.

Stakeholder Perspectives

Empirical evidence shows that stakeholder attitudes toward AI surveillance in schools vary considerably. A national survey of 3,200 parents found that 58% supported security technology in some form, but only 31% endorsed facial recognition specifically, citing concerns about data security and future misuse24. Teachers' views were similar: in a survey of 850 teachers, 64% felt that surveillance technologies added administrative burden, while only 23% perceived genuine improvements in safety25.

Students' views are highly relevant, as they are the most directly affected by surveillance. A qualitative study of 45 high school students revealed themes of distrust, performative compliance, and erosion of authentic learning relationships26. Minority students reported heightened anxiety about inaccurate algorithmic decisions and unfair disciplinary consequences27.

Regulatory Frameworks: A Comparative Analysis

In the United States, educational data are regulated primarily through FERPA (20 U.S.C. § 1232g), which grants parents the right to inspect educational records and prohibits their disclosure without consent. However, FERPA's 1974 enactment predates biometric surveillance, and the Act's coverage of AI-generated data is unclear10. The Children's Online Privacy Protection Act28 provides additional protections for children under 13 but does not directly address school surveillance.

The European Union's GDPR9 includes more comprehensive guarantees, such as an express restriction on automated decision-making (Article 22) and heightened protections for children's data (Recital 38). The GDPR's data minimization principle (Article 5(1)(c)) explicitly limits the volume of data collected, directly constraining the broad data collection typical of AI surveillance. Enforcement in educational contexts nevertheless remains uneven across member states29. China's model differs sharply: widespread AI monitoring in schools has become a normalized element of broader social governance. The Personal Information Protection Law30 establishes consent rules and data protection rights, but enforcement remains weak, and exceptions for educational institutions are interpreted liberally31. Other jurisdictions, including Australia, Canada, and India, are developing frameworks to balance technological innovation with privacy protection, but school-specific regulation remains incomplete30.

Proposed Ethical Framework

Based on the systematic evaluation of the literature, this section presents a five-dimensional ethical framework of interrelated principles for assessing AI surveillance technologies in education. Each principle is operationalized through evaluation criteria and practical implementation guidance.

Proportionality

The principle of proportionality requires that surveillance be reasonable and serve an explicitly defined, lawful purpose in pursuit of predetermined educational goals32,33. Ethical analysis involves three dimensions: (1) Legitimate Purpose: the surveillance objective must serve a genuine, serious purpose, such as preventing physical harm or ensuring assessment integrity, rather than mere administrative convenience; (2) Necessity: schools must demonstrate that no less intrusive alternative, such as improved counseling services or restorative justice practices, would achieve the same goal; and (3) Proportionate Response: schools must demonstrate that projected benefits significantly outweigh potential harms.

Implementation Guidance: School districts should adopt formal review procedures requiring documented justification across all three dimensions before any surveillance technology is procured. Ongoing deployments should be reassessed annually to confirm they remain proportionate to the needs originally identified.

Transparency and Informed Consent

Transparency operates at several levels, each necessary for good governance. Policy Transparency requires open communication with students, parents, and staff about what data are gathered, why they are collected, how long they are retained, and who has access. System Transparency requires that the operation of AI systems be explained to non-technical stakeholders in comprehensible terms, including the systems' limitations and error rates. Decision Transparency mandates clear processes wherever AI systems affect students, such as disciplinary flags, so that affected individuals can learn the reasons behind decisions and exercise their due process rights.

Implementation Guidance: Schools should draft standardized disclosure documents tested for accessibility across stakeholder groups. Consent processes must be meaningful rather than exercises in procedural compliance, ensuring that parents and students understand the implications of biometric data collection. Where possible, opt-out mechanisms should be offered without academic penalty.

Justice and Fairness

This principle extends beyond non-discrimination to require affirmative measures preventing AI surveillance from perpetuating or amplifying existing disparities34,16. Requirements include Algorithmic Auditing: pre-deployment and ongoing audits to detect and address biases related to race, gender, ethnicity, disability, and socioeconomic status, conducted by independent third-party auditors with published findings. Equity Impact Assessment involves active inquiry into whether surveillance deployment disproportionately affects students from marginalized backgrounds, examining disciplinary referral patterns and access denial rates. Redress Mechanisms require timely, accessible processes through which students can contest AI-influenced decisions and correct inaccurate data, ensuring that the burden of errors does not fall disproportionately on disadvantaged groups.

Implementation Guidance: Schools should contractually require vendors to provide bias audit documentation. Disparate impact analyses should be conducted annually and the results published. Appeals must be resolved within a specified period (e.g., 10 business days), with an option for independent review.
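One simple screen for the annual disparate impact analysis compares each subgroup's adverse-outcome rate (e.g., disciplinary flags) against the most favorable rate. A minimal sketch; the subgroup names, rates, and the 1.25 review threshold (the inverse of the familiar four-fifths heuristic) are illustrative, not legal standards:

```python
def adverse_impact_ratios(rates):
    """Each subgroup's adverse-outcome rate relative to the lowest rate.

    For adverse outcomes such as disciplinary flags, a ratio well above 1
    suggests a subgroup is flagged disproportionately often.
    """
    best = min(rates.values())
    return {group: rate / best for group, rate in rates.items()}

# Hypothetical annual disciplinary-flag rates per student subgroup.
flag_rates = {"subgroup_1": 0.05, "subgroup_2": 0.12, "subgroup_3": 0.06}

ratios = adverse_impact_ratios(flag_rates)
needs_review = sorted(g for g, r in ratios.items() if r > 1.25)
print(needs_review)  # → ['subgroup_2'] under these illustrative rates
```

A screen like this identifies candidates for investigation only; confirming inequity requires the independent audits and contextual analysis the principle describes.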

Data Minimization and Protection

Grounded in privacy-by-design principles, this dimension requires that data collection be limited to what is necessary for precisely defined ends35,36. Collection Limitation requires gathering only the data types needed to achieve defined goals, for example, confirming attendance without assessing emotional states. Storage Limitation entails strict retention policies under which data are destroyed once their purpose is fulfilled, with defined maximum retention periods. Security Obligations recognize the special sensitivity of biometric information, mandating strong technical and organizational controls including encryption, access controls, and breach notification procedures.

Implementation Guidance: Schools should conduct data mapping exercises to identify all data flows associated with surveillance. Retention schedules should be established with automated deletion measures. Vendor contracts should specify security requirements and breach liability.
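An automated deletion measure can be as simple as a scheduled job that compares each record's age against a per-category retention schedule. A minimal sketch; the categories and retention periods are hypothetical, and actual periods would follow district policy and applicable law:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods per record category.
RETENTION = {
    "entry_log": timedelta(days=30),
    "attendance_record": timedelta(days=365),
    "biometric_template": timedelta(days=0),  # delete upon unenrollment
}

def expired(records, now=None):
    """Return records whose retention window has elapsed.

    Each record is a dict with 'category' and 'created_at'
    (a timezone-aware datetime).
    """
    now = now or datetime.now(timezone.utc)
    return [r for r in records
            if now - r["created_at"] > RETENTION[r["category"]]]

records = [
    {"id": 1, "category": "entry_log",
     "created_at": datetime(2024, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "category": "attendance_record",
     "created_at": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]
# With a reference date of 1 March 2024, only the 60-day-old entry log expires.
to_delete = expired(records, now=datetime(2024, 3, 1, tzinfo=timezone.utc))
```

Running such a job on a fixed schedule, with its deletions logged, gives auditors verifiable evidence that the stated retention policy is actually enforced.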

Student Autonomy and Developmental Integrity

This principle recognizes education's core role in developing autonomous, critically minded citizens and requires weighing the effects of surveillance on that goal. Schools should consider the Chilling Effect, evaluating whether surveillance creates an atmosphere that suppresses the intellectual risk-taking and open inquiry essential to learning and identity development. The Agency versus Control assessment asks whether a technology builds genuine accountability and trust, or merely compels compliance through external monitoring. Respect for Personhood obliges systems to treat students as participants in their own education rather than passive subjects to be managed, honoring students' rights and inherent dignity.

Implementation Guidance: Schools should incorporate student well-being measures into surveillance evaluations, such as regular surveys assessing perceived autonomy, sense of belonging, and quality of the learning environment. Surveillance systems showing adverse developmental effects should be modified or discontinued.

Principle | Evaluation Criteria | Implementation Measures
Proportionality | Legitimate purpose, necessity, balanced response | Formal review process, documented justification, annual reassessment
Transparency | Policy, system, and decision transparency | Standardized disclosures, meaningful consent, opt-out mechanisms
Justice & Fairness | Algorithmic auditing, equity impact assessment | Independent audits, disparate impact analysis, appeals processes
Data Minimization | Collection, storage, and security limitations | Data mapping, retention schedules, security requirements
Student Autonomy | Chilling effect, agency vs. control, respect for personhood | Well-being assessments, periodic surveys, modification protocols
Table 1 | Summary of Ethical Framework Principles and Implementation Criteria

Application of the Framework: Hypothetical Case Analysis

To illustrate practical application of the framework, this section examines two hypothetical cases representing typical AI surveillance applications in education. These scenarios are constructed from implementations documented in the literature4,5 and do not depict specific institutions. The analysis reveals gaps in existing practices and identifies circumstances under which modified implementations might be ethically acceptable.

Case 1: Facial Recognition for School Entry

Scenario Description: A hypothetical school district deploys facial recognition technology (FRT) cameras at every school entrance to automatically track attendance, identify non-students, and streamline entry.

Principle | Analysis
Proportionality | Mixed assessment. Targeted FRT may be justified for security purposes (identifying individuals subject to restraining orders). Daily attendance scanning, however, is a disproportionate response when less invasive alternatives (ID cards, manual checks) exist. Security arguments often rest on rare, extreme incidents that may not justify routine intrusion.
Transparency | Significant concerns. Standard enrollment forms may bury biometric consent in boilerplate language. Informed consent must be meaningful, covering permanent facial template retention, permitted uses, breach risk, and retention periods. Unequal power relations limit genuine choice.
Justice & Fairness | High risk of failure. Higher false-positive rates have been reported for women and people with darker skin37. Marginalized students face disproportionate risk of wrongful rejection and security escalation, experiences that compound existing bias. Without rigorous independent audits, the system is inequitable by design.
Data Minimization | Likely failure. Biometric faceprints exceed what attendance or safety goals require; both can be met without comparable intrusion. The risk of function creep (tracking movement via entry data, loitering detection) is high. RFID cards achieve similar ends with far less privacy impact.
Student Autonomy | Negative impact. Biometric entry scanning recasts the school from a trusted community into a defended facility that treats students as threats. Research links surveillance awareness to reduced belonging and psychological safety23. This erodes the educational climate needed for developmental well-being.
Table 2 | Framework Application to Facial Recognition System

Overall Assessment: Universal FRT for attendance and general security fails multiple ethical criteria. A modified implementation could nevertheless become acceptable under specific conditions: (1) FRT restricted to identifying particular known threats (e.g., individuals subject to court orders) rather than universal enrollment; (2) alternative entry points for students and families declining biometric enrollment; (3) annual bias audits with published findings; (4) strict, automatically enforced retention limits; and (5) meaningful consent procedures with genuine options to opt out.

Case 2: AI-Driven Engagement Analytics

Scenario Description: A hypothetical system analyzes students' facial expressions, eye movements, and posture via classroom cameras and microphones, giving teachers real-time engagement scores for every learner.

Principle | Analysis
Proportionality | Grossly disproportionate. Student disengagement is a complex educational and social phenomenon. AI surveillance is a heavy-handed response that pathologizes normal behaviors (daydreaming, fidgeting) and presumes the complex internal process of learning can be quantified. Less intrusive pedagogical methods (formative assessment, teacher observation, student feedback) would be more effective and appropriate.
Transparency | Inherently unequal power dynamics. Students are unlikely to grasp that every frown, eye movement, or posture change is being algorithmically interpreted; this amounts to covert emotional surveillance. Prior disclosure does not establish genuine consent given pressure to perform engagement for the algorithm. Continuous psychological assessment cannot proceed with meaningful student consent.
Justice & Fairness | Substantial bias risk. Algorithms trained on normative behaviors can misread neurodiverse expressions or culturally distinct patterns of attention. Autistic students may avoid eye contact due to sensory sensitivity; students from cultures emphasizing deferential gaze may be wrongly flagged. The system systematically disadvantages neurodivergent and culturally diverse students.
Data Minimization | Complete failure. Capturing facial expressions and body language collects the most personal, non-academic data available: students' emotional states. Purpose limitation fails: "engagement improvement" can slide into participation grading, attention-deficit screening without consent, or flagging students for "attitude problems."
Student Autonomy | Severely compromised. Constant monitoring induces performative compliance rather than genuine intellectual interest; students may focus on appearing engaged rather than learning. Surveillance of emotional expression is fundamentally dehumanizing to the teacher-student relationship and socializes students to seek algorithmic approval rather than intrinsic motivation.
Table 3 | Framework Application to Engagement Analytics System

Overall Assessment: Engagement analytics systems fail all five ethical criteria and exemplify "ethics washing": applying a positive framing ("engagement") to pervasive, harmful surveillance that conflicts with education's fundamental aims. Unlike facial recognition, no modified implementation appears ethically acceptable. The very premise that learning engagement can and should be monitored algorithmically is incompatible with educational principles of autonomy, trust, and genuine motivation.

Discussion

Institutional Implementation Mechanisms

Translating the framework from theoretical principles into operational practice requires institutional mechanisms that embed ethical evaluation in routine decision-making. Without such mechanisms, the framework remains aspirational rather than operational. This section outlines concrete implementation plans.

One proposed institutional mechanism is the Ethics Review Committee. Modeled on Institutional Review Boards (IRBs) in research38, schools should form committees that assess AI surveillance proposals before purchase or implementation. These committees need diverse representation: administrators, teachers, parents, technology specialists, and, crucially, students. Diverse perspectives help counter institutional blind spots and ensure that those most directly affected by surveillance have a voice in governance decisions. The committee would evaluate proposed technologies against all five framework principles, documenting its findings and recommendations. Procurement should be contingent on committee approval, and deployed systems should be reviewed periodically.

Particular attention should be paid to student involvement in technology governance. As the primary subjects of surveillance, students have unique insight into its effects on learning and peer relationships39,11. Meaningful involvement may include student seats on advisory committees, student-conducted surveys on technology impacts, and student contributions to acceptable use policies. Such involvement transforms students from mere objects of surveillance into agents of governance, consistent with the framework's emphasis on autonomy and agency.

Teacher Training and Professional Development

Effective implementation requires training teachers in what AI surveillance systems can and cannot do, and in the ethical principles governing their use25. Professional development programs should address: (1) technical literacy: how AI surveillance systems work, their accuracy limitations, and their failure modes; (2) ethical reasoning: identifying and responding to problematic surveillance applications; (3) student support: addressing student anxieties about surveillance; and (4) alternative pedagogical strategies that achieve educational goals without surveillance. Without adequate teacher training, schools deploying AI surveillance risk not only misuse of the technology but also damage to teacher-student relationships that undermines effective education.

Operational Transparency and Due Process

Operational transparency requires accessible channels of information about surveillance practices. Schools should publicly document deployed surveillance technologies, data collection practices, storage rules, and access authorizations40. This documentation must be readable by diverse stakeholder groups, avoiding technical jargon that limits understanding.

Due process underpins justice and fairness. Students should have clear avenues to contest AI-generated outcomes affecting their interests, whether disciplinary sanctions, access denials, or other adverse decisions. Appeals processes must include defined response periods, human review of AI-generated decisions, opportunities to correct erroneous data, and independent review of disputed outcomes. Without robust due process, surveillance systems risk perpetuating injustice without accountability.

Independent Auditing and Accountability

Internal governance alone cannot guarantee continued adherence to ethical standards; external accountability mechanisms are also required41,42. Schools should engage independent auditors for regular reviews covering demographic disparities in surveillance outcomes, algorithmic bias in system operations, adherence to stated data practices, and impact on educational climate. Collaboration with universities, civil liberties organizations, or independent evaluators can provide the objectivity needed for credible assessment. Audit reports should be made public, contributing to evidence-based policymaking at local and broader levels.

Resource and Implementation Considerations

Implementation must account for resource constraints, especially in under-resourced schools and districts43. Several considerations deserve attention. Cost-benefit analyses should count privacy-protecting measures, auditing, and due process mechanisms as implementation costs, not just the initial acquisition of the technology. Schools must then ask whether these total costs exceed the benefits relative to non-technological alternatives.
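The total-cost point can be made explicit with a small worked comparison. All figures below are hypothetical placeholders for illustration, not estimates from the reviewed literature.

```python
def total_surveillance_cost(acquisition, privacy_safeguards, auditing, due_process_admin):
    """Total cost of ownership: acquisition is only one component."""
    return acquisition + privacy_safeguards + auditing + due_process_admin


# Hypothetical annualized figures, in arbitrary currency units.
tech_cost = total_surveillance_cost(
    acquisition=50_000,
    privacy_safeguards=20_000,
    auditing=15_000,
    due_process_admin=10_000,
)

non_tech_alternative = 60_000  # e.g. additional staff supervision

# The technological option is justified only if its benefit margin over the
# alternative exceeds this gap.
cost_gap = tech_cost - non_tech_alternative
```

In this hypothetical case the surveillance system's full cost is nearly double its acquisition price, which is precisely the component a narrow cost-benefit analysis would count.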

Under-resourced and rural settings present particular challenges. Institutions lacking the technical or financial capacity to implement effective privacy protections should be especially cautious about adopting AI surveillance. The framework's proportionality principle implies that where adequate safeguards cannot be provided, the technology should not be introduced, regardless of its potential benefits.

Vendor relationships must be actively managed. School contracts should require disclosure of algorithmic bias audit findings, data security terms with specified liability, prohibition of secondary uses of data, and cooperation with independent audits. Legislation such as Washington State's can enhance schools' bargaining leverage, as can collective purchasing through district consortia or state-wide purchasing agreements.

Future Directions and Adaptive Governance

AI technology evolves rapidly, and emerging capabilities such as predictive behavioral analytics, emotion recognition, and continuous biometric monitoring raise new ethical issues3. The proposed framework should therefore be treated as dynamic rather than fixed. Governance mechanisms should include periodic framework review (e.g., biannual) to assess continued relevance; horizon scanning for new technologies and their ethical implications; stakeholder consultation procedures for framework updates; and sunset provisions requiring that surveillance authorizations be periodically reaffirmed to remain in effect.

Legislative and regulatory developments also warrant attention. Several U.S. states (e.g., New York, California) have adopted or proposed bans on facial recognition in schools. The European Union's proposed AI Act classifies some educational AI systems as high-risk, subjecting them to stricter governance44. Schools should monitor regulatory developments and adjust their practices accordingly, recognizing that minimum legal compliance may fall short of best practice.

Conclusion

The proliferation of AI surveillance systems in education raises far-reaching ethical issues that current regulatory frameworks address poorly. This systematic review has brought AI ethics and educational ethics together to develop an integrated approach for assessing surveillance technologies in schools. The framework's five principles, proportionality, transparency, justice and fairness, data minimization, and student autonomy, provide a comprehensive lens through which any proposed or deployed surveillance system can be evaluated in the school context.

Applying the framework to typical surveillance scenarios reveals that many existing practices are ethically deficient. Facial recognition systems used for general attendance or security frequently violate the principles of proportionality, transparency, justice, and data minimization, although appropriately limited, threat-specific deployments may reach an acceptable form of implementation. Engagement analytics systems exhibit deeper ethical failures, conflicting with the educational principles of student autonomy and authentic learning.

Nevertheless, the framework should not be read as blanket opposition to educational technology. Rather, it offers a systematic way to distinguish technologies that genuinely serve education from those that do more harm than good. The framework supports innovation while ensuring that innovation does not contradict educational values.

Putting the framework into practice requires institutional mechanisms: ethics review committees, student involvement in governance, teacher professional development, open communication, due process, and independent auditing. These mechanisms turn guidance into functioning practice and institutionalize ethical evaluation within decision-making.

Several limitations should be acknowledged. The framework draws primarily on Western ethical traditions and U.S./EU regulatory frameworks; adaptation may be necessary for other cultural and legal contexts. The scarcity of longitudinal empirical studies on school surveillance limits the evidence base for evaluating some framework principles. Future research should rigorously assess the effects of surveillance on student development, academic performance, and the learning environment.

The stakes of this analysis extend beyond individual school decisions. How educational institutions respond to AI surveillance will shape how students understand privacy, autonomy, and institutional trust. Schools that model thoughtful, values-based technology governance contribute to students' civic learning and their readiness to participate in an increasingly monitored democratic society. Conversely, schools that normalize surveillance without ethical deliberation may unwittingly teach students that surveillance is, and should be, routine rather than a subject of democratic debate.

The ultimate question is not whether technology will be used in schools but whether it will be used in ways that preserve schools' core educational purpose: developing autonomous, critically thinking individuals prepared for active participation in democratic society. The framework proposed here offers a path to technology adoption that serves, rather than subverts, that purpose.

Statements and Declarations

Conflict of Interest: The authors declare no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

Data Availability: This is a systematic review article. The data supporting the findings are available within the article and its cited references.

References

  1. F. Pedro, M. Subosa, A. Rivas, P. Valverde. Artificial intelligence in education: Challenges and opportunities for sustainable development. UNESCO (2019).
  2. S. Vincent-Lancrin, R. Van der Vlies. Trustworthy artificial intelligence (AI) in education: Promises and challenges. OECD Education Working Papers No. 218 (2020). https://doi.org/10.1787/a6c90fa9-en.
  3. B. Williamson. Big data in education: The digital future of learning, policy and practice. SAGE Publications (2017).
  4. M. Andrejevic, N. Selwyn. Facial recognition technology in schools: Critical questions and concerns. Learning, Media and Technology 45, 115-128 (2020). https://doi.org/10.1080/17439884.2020.1686014.
  5. N. Selwyn. What's the problem with learning analytics? Journal of Learning Analytics 6, 11-19 (2019). https://doi.org/10.18608/jla.2019.63.3.
  6. P. M. Regan, J. Jesse. Ethical challenges of edtech, big data and personalized learning: Twenty-first century student sorting and tracking. Ethics and Information Technology 21, 167-179 (2019). https://doi.org/10.1007/s10676-018-9492-2.
  7. S. Livingstone, M. Stoilova. 4Cs: Classifying online risk to children. CO:RE Short Report Series on Key Topics (2021). https://doi.org/10.21241/ssoar.71817.
  8. Family Educational Rights and Privacy Act of 1974, 20 U.S.C. § 1232g; 34 CFR Part 99.
  9. General Data Protection Regulation (GDPR), Regulation (EU) 2016/679.
  10. E. Zeide. The structural consequences of big data-driven education. Big Data 5, 164-172 (2017). https://doi.org/10.1089/big.2016.0061.
  11. P. Prinsloo, S. Slade. Student data privacy and institutional accountability in an age of surveillance. In Using data to improve higher education: Research, policy and practice 197-214, Springer (2014). https://doi.org/10.1007/978-94-6209-794-0_12.
  12. B. D. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, L. Floridi. The ethics of algorithms: Mapping the debate. Big Data & Society 3, 1-21 (2016). https://doi.org/10.1177/2053951716679679.
  13. L. Floridi, J. Cowls, M. Beltrametti, R. Chatila, P. Chazerand, V. Dignum, E. Vayena. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 28, 689-707 (2018). https://doi.org/10.1007/s11023-018-9482-5.
  14. N. Diakopoulos. Accountability in algorithmic decision making. Communications of the ACM 59, 56-62 (2016). https://doi.org/10.1145/2844110.
  15. I. D. Raji, J. Buolamwini. Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 429-435 (2019). https://doi.org/10.1145/3306618.3314244.
  16. R. Benjamin. Race after technology: Abolitionist tools for the new Jim Code. Polity Press (2019).
  17. K. A. Strike. Ethical leadership in schools: Creating community in an environment of accountability. Corwin Press (2006).
  18. N. Noddings. Caring: A relational approach to ethics and moral education (2nd ed.). University of California Press (2013).
  19. H. A. Giroux. Critical pedagogy in dark times. Praxis Educativa 17, 3-14 (2013).
  20. P. Freire. Pedagogy of the oppressed. Continuum (1970).
  21. C. P. Niemiec, R. M. Ryan. Autonomy, competence, and relatedness in the classroom: Applying self-determination theory to educational practice. Theory and Research in Education 7, 133-144 (2009). https://doi.org/10.1177/1477878509104318.
  22. E. Weinstein, R. L. Selman. Digital stress: Adolescents' personal accounts. New Media & Society 18, 391-409 (2016). https://doi.org/10.1177/1461444814543989.
  23. E. Taylor, T. Rooney, A. Artopoulos. Surveillance of students and the digital divide: A national study. Surveillance & Society 18, 474-490 (2020). https://doi.org/10.24908/ss.v18i4.13519.
  24. B. Auxier, L. Rainie, M. Anderson, A. Perrin, M. Kumar, E. Turner. Americans and privacy: Concerned, confused and feeling lack of control over their personal information. Pew Research Center (2019). https://www.pewresearch.org/internet/2019/11/15/americans-and-privacy-concerned-confused-and-feeling-lack-of-control-over-their-personal-information/.
  25. J. Manolev, A. Sullivan, R. Slee. The datafication of discipline: ClassDojo, surveillance and performative classroom culture. Learning, Media and Technology 44, 36-51 (2019). https://doi.org/10.1080/17439884.2018.1558237.
  26. A. Hope. Schoolchildren, governmentality and national e-safety policy discourse. Discourse: Studies in the Cultural Politics of Education 36, 343-353 (2015). https://doi.org/10.1080/01596306.2013.871237.
  27. C. Gilliard, H. Culik. Digital redlining, access, and privacy. Common Sense Education (2016). https://www.commonsense.org/education/articles/digital-redlining-access-and-privacy.
  28. Children's Online Privacy Protection Act of 1998, 15 U.S.C. §§ 6501-6506.
  29. L. A. Bygrave. Article 22. Automated individual decision-making, including profiling. In C. Kuner, L. A. Bygrave, C. Docksey (Eds.), The EU General Data Protection Regulation (GDPR): A Commentary 522-542, Oxford University Press (2020).
  30. G. Greenleaf. Global data privacy laws 2021: Despite COVID delays, 145 laws show GDPR dominance. Privacy Laws & Business International Report 169, 3-5 (2021).
  31. Y. Chen. China's Personal Information Protection Law: A comparative analysis. Computer Law & Security Review 44, 105651 (2022). https://doi.org/10.1016/j.clsr.2022.105651.
  32. K. Macnish. An eye for an eye: Proportionality and surveillance. Ethical Theory and Moral Practice 18, 529-548 (2015). https://doi.org/10.1007/s10677-014-9537-5.
  33. J. Milaj. Privacy, surveillance, and the proportionality principle: The need for a method of assessing privacy implications of technologies used for surveillance. International Review of Law, Computers & Technology 30, 115-130 (2016). https://doi.org/10.1080/13600869.2016.1145582.
  34. C. Roig Salvat. Discrimination in artificial intelligence. In The Cambridge Handbook of Information Technology, Life Sciences and Human Rights 123-140, Cambridge University Press (2021).
  35. A. Romanou. The necessity of the implementation of privacy by design in sectors where data protection concerns arise. Computer Law & Security Review 34, 99-110 (2018). https://doi.org/10.1016/j.clsr.2017.05.021.
  36. A. Cavoukian. Privacy by design: The 7 foundational principles. Information and Privacy Commissioner of Ontario (2011).
  37. J. Buolamwini, T. Gebru. Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research 81, 1-15 (2018).
  38. S. R. Jordan. Designing artificial intelligence review boards: Creating risk metrics for review of AI. 2019 IEEE International Symposium on Technology and Society (ISTAS) 1-6 (2019). https://doi.org/10.1109/ISTAS48451.2019.8938013.
  39. D. K. Citron. The surveilled student. Stanford Law Review 76, 1439-1510 (2024).
  40. M. Lindh, J. Nolin. Information we collect: Surveillance and privacy in the implementation of Google Apps for Education. European Educational Research Journal 15, 644-663 (2016). https://doi.org/10.1177/1474904116654917.
  41. C. Lähteenmäki, A. Hakkala, J. Koskinen. Evaluating the impact of mass surveillance through ethical theories. Ethics and Information Technology 27, 12 (2025). https://doi.org/10.1007/s10676-024-09800-9.
  42. V. Singh. Ethical considerations in deploying AI systems in public domains: Addressing the ethical challenges of using AI in areas like surveillance and healthcare. Turkish Journal of Computer and Mathematics Education 15, 442-460 (2024).
  43. V. Eubanks. Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin's Press (2018).
  44. European Commission. Proposal for a regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act). COM(2021) 206 final (2021).
