To Use or Not to Use: The Risks of the Adoption of Facial Recognition Technologies in Smart Classrooms

Abstract

The rapid adoption of generative AI tools in legal education has sparked debates on establishing norms for AI use. These tools assist with assignments, the translation of legal documents, and research. Smart classrooms, with intelligent tutoring and emotion recognition technology, enhance teaching and learning but also pose risks such as system errors, algorithmic discrimination, and decision opacity. This article examines the construction and benefits of smart classrooms, real-world integration challenges, and the effectiveness and risks of facial recognition technology, especially for diverse learning styles. It concludes with best practices and recommendations to mitigate risks and promote a secure, effective educational environment.

Share and Cite:

da Silva, A. P., Feferbaum, M., & Salvador, T. G. (2025) To Use or Not to Use: The Risks of the Adoption of Facial Recognition Technologies in Smart Classrooms. Beijing Law Review, 16, 945-962. doi: 10.4236/blr.2025.162047.

1. Introduction: New Technologies Transforming Classrooms

The hype around generative AI in recent years has made students' use of this technology a major focus of academic discussions about technology in teaching, leaving little room to discuss the technologies that universities and teachers use to evaluate those same students. Once evaluations rely on the perception of students' learning, the use of technology at this point of the teaching-learning process also needs to be discussed, examining both its potential and its risks1.

The adoption of new technologies in the classroom is not a recent phenomenon within the context of schools and universities. (Kaur et al., 2022) There exists a lengthy trajectory of technological integration across various layers of education, sharing goals of enhancing learning outcomes and optimizing the available resources within an educational institution.

Kaur et al. have documented numerous initiatives over the past years involving the incorporation of technological tools in the development of instructional materials, measurement of student attention and engagement, attendance tracking, exercise and exam grading, as well as the optimization of air conditioning resources and classroom lighting sensors. Technologies such as augmented reality, learning management systems, data mining, and learning analytics are cited by the authors as examples of the diverse applications of technology in the classroom environment. (Kaur et al., 2022)

Classrooms are evolving into multi-modal audio/visual systems capable of digitizing the behaviors of both students and teachers, transmitting data on movements, facial expressions, voice, and other signals to a neural-network-based artificial intelligence system that provides real-time feedback on the performance of teachers and students. It may seem like a scene from a science fiction film set in a distant future, but it is becoming a reality in schools and universities across different countries worldwide.

Bala mentions SensorStar Labs, a New York-based company specializing in real-time analysis of facial features as a means of assessing students in the classroom. Beyond the infrastructure of cameras and microphones installed in a classroom, the company has developed an artificial intelligence system capable of interpreting students’ behavior during classes, classifying them based on levels of engagement and attention. Through the company’s system, teachers receive information about the students’ attention levels during the class, enabling them to adopt pedagogical strategies to engage disengaged students. (Bala, 2020)

In various educational settings, the use of AI and facial recognition technologies has raised significant concerns. In 20192, some classrooms in China initially introduced facial recognition for security and attendance monitoring, but it quickly expanded to assess student performance, sparking privacy and academic freedom concerns. Similarly, in 20203 UCLA publicly proposed using an AI facial recognition system for campus security, a plan it dropped after the digital rights group Fight for the Future found false positives and bias, particularly against people of color.

The Anima Education Group in Brazil4 piloted an AI grading system, prompting an investigation by the National Consumer Secretariat (SENACON) over transparency and student rights. Despite regulatory gaps, the project continued with improved communication and review channels. Meanwhile, Bocconi University in Italy5 faced fines from the Italian Data Protection Authority (Garante) for GDPR violations after using an AI system to monitor student exams, citing issues with inadequate notification, lack of legal basis for data processing, and coercive consent due to pandemic conditions. These cases underscore the critical need for ethical considerations and robust regulations in implementing AI technologies in educational institutions.

Integrating new technologies, such as artificial intelligence systems, would transform traditional classrooms into smart classrooms, expanding the array of tools available for teachers and educational institutions to achieve their objectives in terms of learning outcomes and resource management. However, there are relevant risks to be considered when adopting artificial intelligence systems to analyze student behavior.

The first concern pertains to potential chilling effects on both students and teachers, shaping their behaviors to align with the criteria of the systems acquired by educational institutions. The classroom environment may become more rigid than desired, diminishing the diversity of behaviors and attitudes within the learning space. The second concern involves technology-specific issues, such as problems associated with AI system errors and biases present in the data utilized by the system.

Research by Intelligent.com6 shows that both students and teachers are normalizing the use of technology, whether to complete school tasks or to design them. Some respondents reported needing technology to help them create those tasks. However, it is common ground that AI, like any technology, is not neutral. Several efforts around the world currently seek to correct or limit the negative impact that bias in technologies such as AI has on human activity, since this technology has gone through a development boom and has become a focus of concern for regulators.

However, reflection on the risks and precautions to be considered when adopting technologies associated with 'smart classrooms' has not received the same attention. Many academic works7 on the subject have focused on the technical construction of smart classrooms and the pedagogical opportunities offered by these new learning environments. While acknowledging the importance of these studies, we believe it is crucial to reflect on the risks inherent to the transformation from traditional classrooms to smart classrooms. Extreme care is needed when technology is used in the evaluation of students. How can a fair and faithful evaluation be ensured? What tools can be used to secure the numerous benefits of these technologies in teaching while limiting the harms?

The objective of this article is to explore the risks associated with the implementation of smart classrooms, dividing our analysis between risks related to the dynamics of teaching and learning and risks associated with the adoption of student assessment systems based on facial attribute analysis, an approach that varies across facial recognition systems. Our reflection focuses on the student assessment dimension, with an emphasis on facial analysis technologies and their specific characteristics.

In the first chapter, the concept of the smart classroom is discussed, considering what it means to build a smart classroom and its potential benefits for higher education. The second chapter discusses the use of facial recognition and its failures to identify people, questioning the effectiveness of this type of technology and the risks that arise when this monitoring technology is used in teaching, given students' different learning styles. In the third chapter, good practices and recommendations for mitigating the identified risks are highlighted. We conclude with an overview of the paper, highlighting the complexity of the discussion on the successful use of facial recognition technologies in education.

2. Smart Classrooms and Monitoring Technologies

“A smart classroom is made up of various components that work together to provide an interactive and engaging learning environment that enhances teaching techniques, develops students’ abilities, raises their academic level, and allows them to engage more actively in the learning process.” (Kaur et al., 2022)

A smart classroom can rely on various technological resources, such as interactive whiteboards and audio/video, management, and mobility tools, which complement each other. In addition, innovations in pedagogy relating to content, student participation, and assessment are also part of a smart classroom. The aim of the architectural and pedagogical aspects of a smart classroom is to reduce the distance between students and teachers, make it easier for teachers to teach, and improve the teaching and learning environment.

Saini and Goel define smart classrooms as a technology-assisted closed environment capable of enhancing the teaching capabilities of instructors and the learning experience of students. Thus, the raison d'être of smart classrooms is not merely the incorporation of technologies into the classroom but rather the attainment of improved teaching and learning outcomes through the integration of new technologies. Not coincidentally, smart classrooms are approached based on their impact on content, student interaction and engagement, assessments, and the comfort of the physical environment in which classes will take place. (Saini & Goel, 2021)

Considering that a standard smart classroom improves material presentation, student participation, student-teacher contact, and the physical surroundings, while also providing resources for taking attendance, assessing students, and allowing real-time reviews, one can say that smart communication and participation in smart classrooms rest largely on technological features. So-called smart classrooms can be assessed through features such as motion sensing, face recognition, eye gaze, and noise detection, captured by cameras, microphones, and sensors installed in the classroom, as studied by many authors.

Kaur et al. mapped, through a literature survey, the main fields of the teaching and learning experience in a smart classroom. As seen in Figure 1 below, they are (i) smart material, (ii) smart communication and participation, (iii) smart evaluation, and (iv) smart physical surroundings. Each of these aspects of a smart classroom is formed by daily tasks that a teacher can automate with the help of technology. (Kaur et al., 2022)

Among the first technologies associated with smart classrooms are learning analytics tools (Ferguson, 2012), which analyze students' actions in classroom interactions, exams, assignments, and posts in school systems to create recommendations for improving their academic performance. In this article, we focus on the smart evaluation aspect of smart classrooms, as it is the feature most readily automated by facial and body recognition technologies.

Figure 1. Taxonomy of smart classroom literature (Kaur et al., 2022).

Data mining tools (Romero & Ventura, 2013) have joined forces with learning analytics, expanding the scope of data analysis and incorporating assessments of the quality of information conveyed in the classroom and interactions between teachers and students. An example of this is the assessment of time consumption for specific tasks (Papamitsiou & Economides, 2016). With data mining tools, it became possible to evaluate the time a teacher spent presenting a task and the time students spent completing the task in the classroom. Additionally, an analysis of exercises and exams was conducted to assess the degree of information retention by students.
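
As a concrete illustration of this kind of time-on-task analysis, the sketch below computes how long a teacher spent presenting a task and how long a student took to complete it from a hypothetical timestamped event log; the event names and log format are our own assumptions, not those of any particular learning analytics product.

```python
from datetime import datetime

# Hypothetical event log: (timestamp, actor, event) records as a learning
# management system might export them. Field and event names are illustrative only.
events = [
    ("2024-03-04 10:00:00", "teacher", "task_presented"),
    ("2024-03-04 10:07:30", "teacher", "task_assigned"),
    ("2024-03-04 10:07:45", "student_17", "task_started"),
    ("2024-03-04 10:21:10", "student_17", "task_submitted"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

def duration(log, actor, start_event, end_event):
    """Return the elapsed seconds between two events for one actor."""
    start = next(parse(t) for t, a, e in log if a == actor and e == start_event)
    end = next(parse(t) for t, a, e in log if a == actor and e == end_event)
    return (end - start).total_seconds()

# Minutes the teacher spent presenting the task, and minutes one student
# spent completing it.
print(duration(events, "teacher", "task_presented", "task_assigned") / 60)    # 7.5
print(duration(events, "student_17", "task_started", "task_submitted") / 60)  # ~13.4
```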

With recent advancements in computer vision processing, which apply machine learning techniques to extract information from images and to improve image quality in digital camera technology, there has been an increased interest in integrating facial recognition technologies into the educational environment. Facial recognition technology involves extracting points from an individual’s face in an image, identifying unique proportions between these points, and using this information for subsequent comparisons. This allows for computational analysis of similarities (for identification purposes), classification of expressions (for emotion analysis), and other applications (Andrejevic, 2022).

In other words, facial recognition systems operate (Andrejevic, 2022) by analyzing proportions between parts of a person's face, examining shapes and distances between points (e.g., the distance between the tip of the nose and the upper lip) and creating geometric coordinates that together form a facial map, or what is referred to as a "face-print." The advantage of the face-print is that it serves as an excellent identifier, as most individuals possess unique proportions in terms of facial geometric coordinates.
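
To make the notion of a face-print more tangible, the following minimal sketch builds a vector of pairwise distances between facial landmarks, normalized by the inter-ocular distance so that the result does not depend on image scale. The landmark names and coordinates are illustrative assumptions; in a real system they would come from an upstream landmark detector.

```python
import math

def face_print(landmarks):
    """Build a simple 'face-print': all pairwise distances between facial
    landmarks, normalized by the inter-ocular distance so the vector does
    not depend on image scale. `landmarks` is a dict of (x, y) points that
    an upstream detector is assumed to have produced."""
    reference = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    names = sorted(landmarks)
    vector = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            vector.append(math.dist(landmarks[a], landmarks[b]) / reference)
    return vector

# Illustrative landmark coordinates (in pixels) for one detected face.
sample = {
    "left_eye": (120.0, 95.0),
    "right_eye": (180.0, 96.0),
    "nose_tip": (150.0, 140.0),
    "upper_lip": (150.0, 168.0),
}
print(face_print(sample))
```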

For facial feature analysis to be conducted, the establishment of a biometric database is necessary. Biometric data is defined (Madiega & Mildebrath, 2021) as measurements of human body attributes, such as iris scanning, fingerprinting, and gait identification, among others. The construction of biometric data can be achieved through various methods, differing in their precision in assigning a body measurement to an individual.

Biometric construction technologies are used (Madiega & Mildebrath, 2021) to identify, verify, or confirm an individual's identity. This can be done in two ways: through physiological traits (external appearance) or behavioral characteristics (how the person behaves in space). Recognition through physiological attributes relies on morphological identifiers, such as fingerprints, hand shape, retina and iris measurements, and facial structure, among others. Recognition through behavioral characteristics involves the analysis of voice, signature, gait, and distinctive gestures, among others.

In this context, Madiega and Mildebrath (Madiega & Mildebrath, 2021) define computer facial recognition systems as part of biometric technologies, as they handle the detection of a face in an image, undergoing more complex analyses such as verification, identification, and classification of an individual’s facial attributes. Verification, for example, involves comparing two biometric templates with the expectation that they belong to the same person. In contrast, in identification, the facial image template of an individual is compared with one or more templates stored in a database to verify the level of similarity between the images.

Unlike verification and identification, classification is a more computationally complex activity to perform. Beyond comparing images, such as comparing the distance proportions between the points of one face and another, here the computational system aims to classify the distances between points into expressions (e.g., a smile) or even emotions (e.g., sadness).
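
The sketch below contrasts the three operations described above: verification (1:1 comparison), identification (1:N search), and classification (mapping a template to an expression label), using toy numeric templates and a simple similarity score. The threshold, the templates, and the nearest-prototype classifier are illustrative assumptions; production systems rely on trained models rather than hand-built prototypes.

```python
import math

def similarity(a, b):
    """Inverse of the Euclidean distance between two templates (higher = more alike)."""
    return 1.0 / (1.0 + math.dist(a, b))

def verify(template_a, template_b, threshold=0.8):
    """Verification: do two templates belong to the same person? (1:1)"""
    return similarity(template_a, template_b) >= threshold

def identify(probe, database):
    """Identification: which enrolled template is most similar to the probe? (1:N)"""
    return max(database, key=lambda name: similarity(probe, database[name]))

def classify_expression(template, prototypes):
    """Classification: map the template to the nearest expression prototype,
    e.g. 'smile' or 'neutral'."""
    return max(prototypes, key=lambda label: similarity(template, prototypes[label]))

# Toy templates (e.g. normalized distance vectors from a face-print step).
db = {"student_a": [1.0, 2.1, 0.9], "student_b": [1.4, 1.7, 1.2]}
probe = [1.02, 2.05, 0.93]
print(verify(probe, db["student_a"]))   # True for this toy threshold
print(identify(probe, db))              # "student_a"
print(classify_expression([0.4, 0.6], {"smile": [0.5, 0.6], "neutral": [0.1, 0.2]}))
```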

While facial recognition technologies for verification and identification already demonstrated reliable applications in the early 2000s (Madiega & Mildebrath, 2021), evolving to be incorporated into products throughout the first two decades of the 21st century, classification expanded its usage through the integration of facial recognition systems with technologies from the field of artificial intelligence, especially machine learning techniques such as deep learning, and computer vision algorithms. Artificial intelligence algorithms, as such, have increased the speed of analysis and the accuracy of results.

This type of technology is increasingly being used in schools and universities. Because they check and classify students' facial and body expressions, these systems have been hyped as a way of making the teacher's job easier through technology. Emotion recognition systems (ERSs) can, in fact, help teachers keep up with all students in the classroom, whether to assess if they are awake or paying attention to the teacher. As many authors have already noted, however, many methods for detecting student inattention in the classroom are based on video and image recognition methods, such as tracking students' eyes, face, head, and shoulders, as well as monitoring eye contact and head nods.

As mapped by Kaur et al., the construction of smart classrooms has integrated student attendance monitoring through personal device tracking and facial recognition technologies (Kaur et al., 2022). Patel and Priya note that initially, the adoption of these technologies was associated with security concerns; however, over time, the tools have been incorporated into pedagogical functions (Patel & Priya, 2014). Beyond the control of student attendance, the analysis of facial features has become an opportunity to assess the level of attention and engagement of students in activities proposed by instructors.

3. The Different Learning Styles and the Risks of Automatic Evaluation of Learning in Smart Classrooms

There are pedagogical implications of using facial recognition technologies in the classroom that deserve deeper reflection. Kenneth Saltman, for instance, argues that the logic of quantifying students’ emotions and classifying them in terms of classroom engagement levels commodifies the relationship between teacher and student. (Saltman, 2016) According to the author, commodification would reduce the student’s learning process to a passive consumption of monitored knowledge. Teaching, in turn, would become a constant quest to capture the student’s attention, measurable by computerized systems. As a result, the classroom would gradually lose its characteristic as a space for experimentation, debate, exchange, and investigation.

Ben Williamson argues that the use of facial recognition in the classroom may condition students’ behavior by reducing the space to recognizable and rewardable psychological attributes by a computational system. In this sense, the classroom would become an environment where biometrics are used as a control tool, rewarding predetermined patterns of what constitutes engagement or not. (Williamson, 2017). This scenario worsens when considering that facial recognition systems were not developed based on the social and cultural contexts of developing countries.

Evidence of this can be found in the poor performance of facial recognition systems in identifying and classifying non-white faces. According to Luke Stark, the databases used by facial recognition systems have not been representative of the racial diversity among students. (Stark, 2019) Furthermore, the performance of these systems is also poor in identifying individuals from gender and sexual minority groups, with a higher number of cases in these groups where one student's face is confused with another's, or where engagement assessment is not recorded because the system fails to detect a student's face.

Furthermore, there is a risk that students’ learning process gradually evolves into forms of manipulating the system as students identify patterns in their behavior that may yield improvements in their evaluation and patterns that may result in reductions in their grades (Andrejevic, 2022). According to Antoine Bousquet, the method used by facial recognition systems is limited, reducing the student’s behavior in the classroom to a single frame of their face (Bousquet, 2018). For the author, student engagement in the classroom is much more complex than the geometric abstraction of proportions collected from their face at certain moments during a class. There are issues involving students’ body language, the sensitivity of the teacher, and the characteristics of other students in the classroom, which should be part of the engagement analysis but escape facial recognition systems.

The inherent risk of students becoming aware of the use of technology in the classroom is the encouragement to behave unnaturally (Andrejevic, 2022) and to initiate a testing process to identify how they can or cannot behave in a monitored environment. Some may even acquire the ability to “game the system,” reproducing expressions that will be well-rated by systems, such as eye contact with the teacher, even if the expressions do not signify the student’s attention. On a deeper level, the use of this technology can dehumanize the face as an indicator of student engagement in the classroom, as it will no longer be the responsibility of the teacher to assess the students’ expressions in the classroom.

There is a fundamental concern regarding the models of artificial intelligence systems that perform facial recognition in the classroom. This is because these models will correlate a student’s facial expressions with a system for classifying levels of engagement, organizing expressions that demonstrate inattention, disinterest, and disdain, among other emotions, as low levels of engagement, and expressions that demonstrate attention, concentration, focus, among other emotions, which would indicate high levels of engagement. Students and teachers must be aware of how these models operate and what they reward and punish.

Particularly for students, there is a risk of what is called “social de-skilling,” where students may no longer develop social interaction and interpersonal skills due to monitoring in the classroom and university common areas. Hartzog and Selinger (Hartzog & Selinger, 2018) argue that monitoring associated with student assessment can create an oppressive environment where students may develop fears of expressing themselves, engaging with their peers, and communicating with their instructors.

Higher education students have different learning styles (Zapalska & Dabb, 2002)8, which can influence their academic performance and their satisfaction with their courses. Learning styles are students' individual preferences about how they acquire, process, and apply new knowledge. Some of the most common styles are visual, auditory, reading/writing, and kinesthetic. Each style has advantages and disadvantages and requires specific teaching and assessment strategies.

Visual learners learn best through images, graphs, diagrams, and videos. They find it easy to memorize visual information and understand spatial relationships. Auditory learners learn best through sounds, music, lectures, and discussions. They can easily grasp sound information and understand the intonation and emotion of the voice. Reading/writing students learn best through texts, books, articles, and notes. They find it easy to express their ideas in writing and understand logical arguments. Kinesthetic learners learn best through movements, gestures, experiments, and hands-on activities. They find it easy to use their hands and bodies to solve problems and understand abstract concepts.

An example of this gray area in standardizing student reactions is students who avoid eye contact with teachers compared to those who seek contact with the teacher. It is reasonable to assume that the student making eye contact is more attentive to what is being said by the teacher. However, this is not always the case, as there are different student profiles and different learning styles. The student who prefers not to make eye contact may simply prefer taking notes or may be a shy person.

Teachers need to balance their teaching methods to cater to the different learning styles of their students. For example, they can use a variety of media and formats to present information, such as slides, videos, audio clips, or interactive software. They can also use a variety of assessment methods to evaluate learning outcomes, such as quizzes, essays, presentations, or portfolios. They can also encourage students to use their preferred learning styles to study and review the material, such as making flashcards, recording notes, summarizing texts, or practicing skills.

However, teachers also need to be careful not to overemphasize or stereotype learning styles. Learning styles are not fixed or innate traits that determine how students learn best. Rather, they are preferences that can change over time and in different situations. Moreover, learning styles are not mutually exclusive or independent. Students can have multiple or mixed preferences that vary across domains and tasks. Furthermore, learning styles are not determinants of academic success or failure. Students can learn effectively using different or unfamiliar modes of instruction if they are motivated and engaged.

As mentioned, each type of learner has its strengths and weaknesses, and the same logic should apply to the evaluation phase of learning. It is crucial to have an open mind when we are assessing the level of comprehension and understanding of the subject the students are showing. Not everyone will show in the same way that they are keeping up with the class.

In an environment where the aim is to control behaviors through standardizing student reactions, one questions to what extent standardization will accommodate different learning styles, encompassing the complexity of each individual’s learning process. Privileging one learning style (i.e., visual learners) over others means disregarding students with great potential for development, which does not seem consistent with the goals of higher education. How can facial recognition and intelligence systems be compatible with different learning styles of students?

If teachers need to adopt a flexible and inclusive approach to teaching that respects the diversity of learning styles in their classrooms, we need to mirror this need to the technologies used to help teachers study their student activity by measuring academic success and even predicting future performance. It is important to bear in mind some technological assumptions so that we don’t face negative outcomes from tools that should be helping our teachers.

Many technological tools that teachers use to evaluate students’ learning levels are not designed to address the diversity of learning styles. For example, online quizzes and tests may favor students who are good at reading/writing but disadvantage those who learn better by seeing, hearing, or doing. Similarly, multimedia presentations and videos may appeal to visual and auditory learners, but not to those who prefer to read or write their own notes. Moreover, some technological tools may not provide enough feedback or interaction for kinesthetic learners, who need to move and manipulate objects to learn.

However, a teacher can establish which method (and therefore which characteristics the technological tools will evaluate) they want to privilege and be transparent about it. By doing so, they are relieved of the responsibility of completely changing their teaching style to accommodate all types of learning styles while, at the same time, setting criteria and helping students to be sure of what is being prioritized.

A difficult aspect of using technological tools inside the classroom is the possibility of these tools misreading and/or misjudging. Whether due to technological failure, flaws in the tool's programming, or even the way a student's feelings are translated outward through body posture, errors in the assessment of students' learning can occur. It is quite common to find academic papers that construct a classroom emotion recognition algorithm by classifying visual emotions to improve the quality of classroom teaching (Yuan, 2022; Kim et al., 2018; Chen & Jin, 2015; Gu et al., 2016; Kerkeni, 2017). However, it is not easy to find papers that address the confidence level of these readings of body posture when we are trying to understand what is going on inside students' heads.
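
One way to take the confidence of such readings seriously is to gate automated classifications behind a confidence threshold and route uncertain cases to the teacher. The sketch below assumes a hypothetical classifier output of (student, label, confidence) tuples and an arbitrary cut-off; it is not drawn from any of the systems cited above.

```python
# Hypothetical output of an emotion/engagement classifier for three students:
# each reading is (student_id, predicted_label, model_confidence in [0, 1]).
readings = [
    ("student_01", "engaged", 0.93),
    ("student_02", "distracted", 0.41),
    ("student_03", "bored", 0.55),
]

CONFIDENCE_FLOOR = 0.75  # arbitrary cut-off; a real deployment would calibrate it

def triage(readings, floor=CONFIDENCE_FLOOR):
    """Keep only high-confidence readings; route the rest to human review
    instead of letting a low-confidence guess enter a student's evaluation."""
    accepted, needs_review = [], []
    for student, label, confidence in readings:
        target = accepted if confidence >= floor else needs_review
        target.append((student, label, confidence))
    return accepted, needs_review

accepted, needs_review = triage(readings)
print("recorded automatically:", accepted)
print("sent to teacher review:", needs_review)
```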

Therefore, teachers should be aware of the limitations and potential of the technological tools they use to assess student learning, taking into account the diversity of learning styles that exist. In addition, teachers should clearly communicate to students the assessment criteria and methods adopted, as well as the skills and competencies expected. In this way, teachers can promote a more inclusive and effective education that respects individual differences and encourages the development of all types of learners. It is important to emphasize that there is no perfect technology, so the tools' outputs must always be checked and reviewed so that they help rather than hinder the teacher's day-to-day work.

4. To Use or Not to Use: Recommendations for Responsible Use and Alternatives to the Adoption of Facial Recognition Technologies

It is a fact that facial recognition technology has been widely used in various contexts, such as security, entertainment, health, and education. However, the use of facial recognition also brings ethical, social, and legal challenges, which must be considered before its adoption. In this chapter, we present some recommendations for responsible use and alternatives to the adoption of facial recognition in education. The chapter is divided into two parts: in the first, we discuss the technological recommendations, which involve aspects such as data quality, accuracy, privacy, and security; in the second, we address the pedagogical recommendations, which refer to the role of the teacher, student diversity, formative assessment, and learner autonomy. Finally, we point out some flaws in the assessment of students by technological systems alone, which can jeopardize the teaching-learning process and the integral formation of students.

4.1. Technological Recommendations

The first recommendation is about being transparent. The university needs to clearly explain what technology it’s using, how it works, who’s using it, how it’s used, and its limitations. Communication with the academic community is crucial, not just to announce the use of technology, but also to discuss its benefits and risks. Workshops, instructional videos, and training sessions for teachers and staff can help everyone understand and use the technology properly and ethically.

It is important to note that many personal data protection laws embody this recommendation through the principle of transparency. In Brazil, for instance, the General Personal Data Protection Law (Lei nº 13.709/2018) stipulates in Article 6, VI, that the personal data controller (the university) must offer the data subject (students) clear, precise, and easily accessible information regarding the processing of their personal data. The European General Data Protection Regulation (GDPR), in Article 5 (1) (a), also establishes the controller's duty of transparency, detailing its application in Articles 12, 13, and 14. These articles mandate that the information made available to the data subject (e.g., the identity of the controller, categories of data, source of the information, purposes, the data subject's rights of access, etc.) must be provided in a concise, transparent, intelligible, and easily accessible manner.

There is no specific set of recommendations from European data protection authorities explicitly addressing the use of facial detection and recognition technologies in university classrooms. However, significant concerns regarding transparency surround the application of the same technology in other contexts, such as airports and public spaces. The European Data Protection Board9 (EDPB) issued an opinion following a request from the French Data Protection Authority (CNIL) to emphasize the importance of informing passengers that their biometric data is being processed in airports, highlighting the inherent risks of false negatives, bias, and discrimination associated with the technology. In the case involving Clearview AI, the Italian Data Protection Authority10 deemed it reckless for the company to process images available online for the development of its facial recognition tool, ordering the cessation of data collection and the deletion of the data collected up to that point.

Faculty, staff, and students should receive information regarding the risks associated with the use of this technology within the university environment, particularly how technical characteristics (e.g., accuracy rates) can generate undesirable outcomes (e.g., discrimination, invisibility, etc.). While understanding the risks does not inherently solve the problem, given their potential to materialize into harm, it serves as a crucial starting point for raising awareness among university community members. Furthermore, knowledge of these risks can also empower community members to reflect on their behavior within the university space, revealing themselves only when deemed necessary or when unavoidable.

The second recommendation focuses on dealing with errors. Facial recognition isn’t perfect, especially for minority groups like black, indigenous, or Asian people. The university should share the accuracy rate and potential errors of the tool. When errors occur, it’s important to inform and work with affected groups to find solutions. The university should encourage reporting and correction of errors and establish channels for human review of automated decisions to ensure corrections are made.

Facial recognition and detection technology is not a single entity but rather an umbrella term encompassing a set of technologies with distinct characteristics and functionalities. Consequently, while identifying an error (e.g., the non-detection of an Indigenous student's face) may be straightforward, ascertaining the root cause of the problem proves to be a complex task, necessitating the involvement of diverse professional expertise in the investigation (Andrejevic, 2022). Similarly, human oversight can yield positive outcomes in identifying errors, whether false positives or false negatives, thereby mitigating potential harm; however, it will not have a significant effect on identifying the underlying causes of an incident, as that investigation demands considerable time and resources.

However, even though identifying the causes of an error within facial detection and recognition technologies takes time, the investigation process should yield valuable learning. The first of these is learning through observation, gathering information on how community members felt upon discovering the error, and their opinions regarding the university’s response. The second is preventing the error from recurring, for example, by creating tests or simulations to verify what would happen under conditions similar to the incident in a classroom setting (Gasser & Mayer-Schönberger, 2024). Even if it is not possible to foresee or entirely prevent errors in facial detection and recognition technology, it is feasible to adopt technology governance measures that allow for the reduction of risks and their improved management within the university context.
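
A minimal sketch of the kind of test or simulation suggested here is shown below: it replays a labelled image set through a face detector and reports detection rates per demographic group, so that a recurrence of an incident (for example, systematic non-detection under low light) becomes visible before redeployment. The `detect_face` callable, the toy images, and the group labels are hypothetical stand-ins for a vendor's real API and a university's own test data.

```python
from collections import defaultdict

def detection_rates(detect_face, test_set):
    """Replay a labelled test set through the face detector and report the
    detection rate per self-declared demographic group, so that a repeat of
    an incident (e.g. systematic non-detection of one group) shows up before
    the system returns to the classroom. `detect_face` stands in for the
    vendor's API: it takes an image and returns True if a face was found."""
    found, total = defaultdict(int), defaultdict(int)
    for image, group in test_set:
        total[group] += 1
        if detect_face(image):
            found[group] += 1
    return {g: found[g] / total[g] for g in total}

# Toy stand-ins: "images" are dicts, and the fake detector fails on low light,
# mimicking the conditions logged for a hypothetical incident.
fake_detector = lambda img: img["brightness"] > 0.3
test_set = [
    ({"brightness": 0.8}, "group_a"),
    ({"brightness": 0.2}, "group_a"),
    ({"brightness": 0.9}, "group_b"),
    ({"brightness": 0.7}, "group_b"),
]
print(detection_rates(fake_detector, test_set))  # {'group_a': 0.5, 'group_b': 1.0}
```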

The third recommendation suggests conducting experiments and building a database for ongoing learning. Facial recognition technology evolves, and the university should perform ethical experiments in different contexts. Safely storing data generated by the technology is crucial for training and improving algorithms. Periodically renegotiating contracts with facial recognition providers ensures continuous improvements in quality and ethics, aligning with the university’s needs and values.

Negotiating or renegotiating contracts with the providers of these technologies is not always straightforward; in some instances, there is no room for negotiation at all. However, certain strategies can be adopted in scenarios with limited or no negotiating leverage. The first is to ascertain whether the same system is offered in European Union countries. In Europe, for example, prior testing is an obligation for providers of artificial intelligence systems classified as high-risk, mandating that these technology suppliers have (Article 19 of Regulation (EU) 2024/1689, the Artificial Intelligence Act) a risk management system, technical documentation with a record of tests, and guarantees that their high-risk system has undergone a conformity assessment. The second strategy involves evaluating the documents made available by the company, verifying whether prior testing was carried out as a good practice, and reviewing other technical information about the tool.

These recommendations are key for the responsible use of facial recognition in universities, aiming to ensure ethical, transparent, and fair use while respecting the rights and diversity of everyone involved. However, they are not sufficient. It is necessary to go through some pedagogical aspects of the use of facial recognition technologies in the classroom, and these aspects may suggest they shouldn’t even be used at all.

4.2. Pedagogical Recommendations

Although facial recognition technology provides benefits such as streamlined student identification, attendance monitoring, and personalized teaching, it also presents risks, including privacy violations, discrimination against minority groups, and potential threats to students’ autonomy and creativity. In light of these considerations, it becomes crucial for higher education institutions to adopt pedagogical guidelines for the responsible use of facial recognition in classrooms.

Again, one such recommendation emphasizes conducting controlled, targeted teaching experiments across the university. This entails limiting facial recognition usage to specific subjects chosen through ethical criteria and aligned with transparent pedagogical projects. These projects should engage students in discussions about the objectives, methods, benefits, and risks of incorporating technology in the classroom. Ensuring free and informed consent, respecting diversity and inclusion, and safeguarding personal and sensitive data are integral aspects of this approach.

Another key recommendation encourages universities not to confine facial recognition to narrow applications but to explore its broader pedagogical potential. For instance, the technology, its uses, and its risks can themselves become objects of student reflection, employed as a tool to foster critical thinking, ethical reflection, social awareness, and digital citizenship. Additionally, facial recognition can serve as a resource to promote student dialogue, collaboration, interdisciplinary learning, and creativity. Moreover, it can function as an instrument to enhance access, quality, equity, and diversity in higher education.

Student assessment through technology is a challenge for teachers, as it involves issues of authenticity, validity, reliability, and student learning style. Given this, it is necessary to look for alternatives that can capture student learning more comprehensively and diversely (Feferbaum & Klafke, 2021).

Observation through facial recognition systems is one way of assessing students that allows a teacher to follow the students’ learning process, checking their participation, interaction, collaboration, and attitude in the proposed activities. This method should be systematic and continuous, based on pre-established criteria shared with the students. Transparency is much needed for this method to be successful.

The syllabus for the course can already include assessment criteria, making it clear which types of learning styles will be favored in the course. This means that the teacher must be transparent about the expectations and requirements that students must meet to get a good grade. It also means that the teacher must recognize that there are different ways of learning and that not all students adapt to the same method. In addition, it is important to give students feedback on the aspects observed, acknowledge progress, and point out difficulties.

However, facial recognition technologies are not enough for a teacher who wants a complete and truthful evaluation of their students. There are layers of human interaction that a machine cannot fully comprehend, such as cooperation and teamwork in assignments.

Peer assessment (Feferbaum & Radomysler, 2021) is a way of assessing students that involves them in assessing each other, encouraging autonomy, responsibility, and cooperation among them. This method can be carried out through tools such as questionnaires, rubrics, scales, or comments that students fill in about the work or performance of their colleagues. It should be guided by clear and fair criteria that are known to the students from the outset. In addition, it is important to give students feedback on the quality and consistency of their peer evaluations.

Self-assessment (Feferbaum & Radomysler, 2021) is a way of assessing students that allows students to reflect on their own learning, identifying their strengths and weaknesses, their difficulties, and their goals. It can be done through questionnaires, rubrics, scales, or comments that students fill in about their own work or performance. Teachers should encourage this as a regular and formative practice. In addition, it is important to give students feedback on the honesty and relevance of their self-assessments.

Finally, the 360˚ evaluation (Feferbaum & Radomysler, 2021) is a way of evaluating students online that integrates different sources of information about student learning, such as teachers, classmates, family members, or other educational agents. It brings together an array of evaluation methods, such as teacher observation, peer assessment, and self-assessment, as previously discussed. It must be planned carefully, taking into account the objectives, context, and target audience of the evaluation. In addition, it is important to give students feedback on the different perspectives that make up their 360˚ evaluation.
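
As a simple illustration of how these sources could be combined, the sketch below computes a weighted 360˚ score from teacher observation, peer assessment, and self-assessment. The 0-10 scale and the weights are hypothetical; in practice they should follow the assessment criteria published in the syllabus and be shared with students in advance.

```python
# Illustrative 360-degree aggregation: combine teacher observation, peer assessment,
# and self-assessment into one score. Scale and weights are hypothetical.
WEIGHTS = {"teacher_observation": 0.5, "peer_assessment": 0.3, "self_assessment": 0.2}

def evaluation_360(scores, weights=WEIGHTS):
    """Weighted average of the available sources; missing sources are skipped
    and the remaining weights are re-normalized."""
    available = {k: w for k, w in weights.items() if k in scores}
    total_weight = sum(available.values())
    return sum(scores[k] * w for k, w in available.items()) / total_weight

student = {"teacher_observation": 8.0, "peer_assessment": 7.0, "self_assessment": 9.0}
print(round(evaluation_360(student), 2))  # 7.9
```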

5. Conclusion: Striking the Balance between Ethical Implementation and Pedagogical Considerations in Adopting Facial Recognition Technologies in Education

The use of facial recognition in education presents risks and challenges that need to be considered carefully and responsibly. It is not a neutral or infallible technology, but a tool that can affect privacy, security, diversity, and the quality of teaching and learning. We recommend that higher education institutions (HEIs) that intend to adopt this technology make a careful assessment of its objectives, benefits, limitations, and ethical, legal, and social implications. Additionally, we recommend that HEIs foster open and transparent dialogue with the academic community about the pros and cons of facial recognition, respecting the rights and opinions of all stakeholders.

In the first chapter, we discussed how the emergence of smart classrooms in higher education institutions, though new and much hyped, often overlooks comprehensive pedagogical design. In the second chapter, we brought to light the discussion on the risks of using facial recognition systems in education. We investigated the ethical, legal, and social implications of these technologies, such as privacy, consent, bias, accuracy, and accountability. This investigation was followed by a discussion on how to align the use of these technologies with students' different learning styles.

In the concluding chapter, we have outlined both technological and pedagogical recommendations for the responsible use of facial recognition technologies in education, emphasizing a dual approach to effectively address the identified risks and challenges inherent in the adoption of these technologies within the educational landscape. With this article, we hope to have been able to point out the complexity of the discussion on the successful use of facial recognition technologies in education, and we hope to have contributed guidelines for responsible use complemented by other pedagogical tools.

NOTES

1In this article, we analyze the implications of adopting digital technologies in higher education. To do this, we use some theoretical and empirical references from studies that deal with the use of technologies in educational contexts, especially in basic education. We clarify that this choice is due to the fact that most of the texts available on the subject deal with the application of technology in schools and that we used them because many of the characteristics of the technology and the consequences of adopting these technologies will be similar, although not identical, to those in higher education. Where they were different, we made adaptations for the university context.

2The use of artificial intelligence tools for analyzing facial features of students began in schools in China and expanded to encompass universities. Unlike the use of these tools for security and student attendance control purposes, their application for assessing students’ performance in the classroom has generated criticism in Chinese society. For more information, please refer to: https://edtechchina.medium.com/schools-using-facial-recognition-system-sparks-privacy-concerns-in-china-d4f706e5cfd0

3The article describes the implementation of a facial recognition system at the university. It outlines the reasons for adoption and critiques of the system, emphasizing how the errors of the chosen system could harm minority groups within the university. For more details on the case, please refer to: https://www.insidehighered.com/news/2020/02/21/ucla-drops-plan-use-facial-recognition-security-surveillance-other-colleges-may-be

4Following students’ complaints, the Consumer Protection and Defense Department, linked to the National Consumer Secretariat of the Brazilian Ministry of Justice, notified Laureate Education company for a violation of its students’ rights in their distance learning courses. According to the Consumer Protection and Defense Department, the company did not inform students about the adoption of the technology and the potential risks of its use in assessment activities. The prior communication of the technology adoption would be a right of students as consumers of education services, and the production of an impact report would be a right of students as personal data subjects. For more information on the case, please refer to: https://www.gov.br/mj/pt-br/assuntos/noticias/senacon-por-meio-do-dpdc-notifica-faculdades-por-utilizacao-de-inteligencia-artificial-na-correcao-das-atividades

5In September 2021, the Italian Data Protection Authority (Garante) fined Bocconi University EUR 200,000 for a violation of the General Data Protection Regulation (GDPR). The Garante’s decision is available at: https://www.gpdp.it/web/guest/home/docweb/-/docweb-display/docweb/9703988

6https://www.intelligent.com/8-in-10-teachers-approve-of-student-use-of-chatgpt-nearly-all-use-it-themselves/

7Through exploratory research on Google Scholar, the authors found an extensive list of works that focus on describing and discussing the construction of technological tools, without shedding light on discussions from a pedagogical point of view. Some examples of our findings are listed below:

Gligoric et al. (2015) discuss the construction of a smart classroom that detects and regulates the degree of interest in a lecture, enabling the lecturer to notice and regulate the level of involvement in a lecture.

Yang and Chen (2011) present a novel automatic feedback system for smart classrooms based on face and eye detection technology and a PTZ camera.

Yu, You, and Tsai (2012) argue that social awareness is the next-generation challenge of aware computing, but their work misses the concern with the risks and precautions to be considered in adopting technologies associated with 'smart classrooms'.

Bidwell and Fuchs (2011) present, in their paper, a video recording and behavior analysis framework as a first step toward an automated teacher feedback tool for measuring student engagement.

8Zapalska and Dabb present a literature review on "learning styles" and point out in their first section that these styles are the most commonly cited in the literature.

9https://www.edpb.europa.eu/our-work-tools/our-documents/opinion-board-art-64/opinion-112024-use-facial-recognition-streamline_en

10https://www.gpdp.it/web/guest/home/docweb/-/docweb-display/docweb/9751362

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Andrejevic, M., & Selwyn, N. (2022). Facial Recognition. Polity Press.
[2] Bala, N. (2020). The Danger of Facial Recognition in Our Children’s Classrooms. Duke Law & Technology Review, 18, 249-267.
[3] Bidwell, J., & Fuchs, H. (2011). Classroom Analytics: Measuring Student Engagement with Automated Gaze Tracking. Behavior Research Methods, 49, 113.
[4] Bousquet, A. (2018). The Eye of War: Military Perception from the Telescope to the Drone. University of Minnesota Press.
https://doi.org/10.5749/j.ctv6hp332
[5] Chen, S., & Jin, Q. (2015). Multi-Modal Dimensional Emotion Recognition Using Recurrent Neural Networks. In Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge (pp. 49-56). ACM.
https://doi.org/10.1145/2808196.2811638
[6] Feferbaum, M., & Klafke, G. F. (2021). Prova na frente das câmeras? É hora de mudar o foco! | Estratégias para avaliação de estudantes em cursos mediados por tecnologia. In M. Murashima (Ed.), Experiências na educação mediada por tecnologias (pp. 364-383). FGV Editora.
[7] Feferbaum, M., & Radomysler, C. N. (2021). Entre conexões e desconexões. Metodologias ativas e humanização como pilares de um ensino mediado por tecnologia. In M. Murashima (Ed.), Experiências na educação mediada por tecnologias (pp. 147-163). FGV Editora.
[8] Ferguson, R. (2012). Learning Analytics: Drivers, Developments and Challenges. International Journal of Technology Enhanced Learning, 4, 304-317.
https://doi.org/10.1504/ijtel.2012.051816
[9] Gasser, U., & Mayer-Schönberger, V. (2024). Guardrails: Guiding Human Decisions in the Age of AI. Princeton University Press.
[10] Gligoric, N., Uzelac, A., Krco, S., Kovacevic, I., & Nikodijevic, A. (2015). Smart Classroom System for Detecting Level of Interest a Lecture Creates in a Classroom. Journal of Ambient Intelligence and Smart Environments, 7, 271-284.
https://doi.org/10.3233/ais-150303
[11] Gu, Y. et al. (2016). Speech Emotion Recognition Using Voiced Segment Selection Algorithm. In Proceedings of the 22nd European Conference on Artificial Intelligence (pp. 1682-1683). The Association for Computing Machinery.
[12] Hartzog, W., & Selinger, E. (2018). Facial Recognition Is the Perfect Tool for Oppression. Medium.
https://medium.com/@hartzog/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66
[13] Kaur, A., Bhatia, M., & Stea, G. (2022). A Survey of Smart Classroom Literature. Education Sciences, 12, Article No. 86.
https://doi.org/10.3390/educsci12020086
[14] Kerkeni, L., Serrestou, Y., Mbarki, M., Raoof, K., & Mahjoub, M. A. (2017). A Review on Speech Emotion Recognition: Case of Pedagogical Interaction in Classroom. In 2017 International Conference on Advanced Technologies for Signal and Image Processing (ATSIP) (pp. 1-7). IEEE.
https://doi.org/10.1109/atsip.2017.8075575
[15] Kim, Y., Soyata, T., & Behnagh, R. F. (2018). Towards Emotionally Aware AI Smart Classroom: Current Issues and Directions for Engineering and Education. IEEE Access, 6, 5308-5331.
https://doi.org/10.1109/access.2018.2791861
[16] Madiega, T., & Mildebrath, H. (2021). Regulating Facial Recognition in the EU. European Parliamentary Research Service, European Union.
https://www.europarl.europa.eu/RegData/etudes/IDAN/2021/698021/EPRS_IDA(2021)698021_EN.pdf
[17] Papamitsiou, Z., & Economides, A. A. (2016). Learning Analytics for Smart Learning Environments: A Meta-Analysis of Empirical Research Results from 2009 to 2015. In Learning, Design, and Technology (pp. 1-23). Springer International Publishing.
https://doi.org/10.1007/978-3-319-17727-4_15-1
[18] Patel, U. A., & Priya, S. (2014). Development of a Student Attendance Management System Using RFID and Face Recognition: A Review. International Journal of Advance Research in Computer Science and Management Studies, 2, 109-119.
[19] Romero, C., & Ventura, S. (2013). Data Mining in Education. WIREs Data Mining and Knowledge Discovery, 3, 12-27.
https://doi.org/10.1002/widm.1075
[20] Saini, M. K., & Goel, N. (2021). How Smart Are Smart Classrooms? A Review of Smart Classroom Technologies. ACM Computing Surveys, 52, Article No. 130.
https://doi.org/10.1145/3365757
[21] Saltman, K. (2016). Scripted Bodies. Routledge.
[22] Stark, L. (2019). Facial Recognition Is the Plutonium of AI. XRDS: Crossroads, the ACM Magazine for Students, 25, 50-55.
https://doi.org/10.1145/3313129
[23] Williamson, B. (2017). Big Data in Education. Sage.
[24] Yang, S., & Chen, L. (2011). A Face and Eye Detection Based Feedback System for Smart Classroom. In Proceedings of 2011 International Conference on Electronic & Mechanical Engineering and Information Technology (pp. 571-574). IEEE.
https://doi.org/10.1109/emeit.2011.6023166
[25] Yu, Y.-C., You, S.-C. D., & Tsai, D.-R. (2012). Social Interaction Feedback System for the Smart Classroom. In Proceedings of the 2012 IEEE International Conference on Consumer Electronics (ICCE) (pp. 500-501). IEEE.
[26] Yuan, Q. (2022). Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification. Computational Intelligence and Neuroscience, 2022, Article ID: 6453499.
https://doi.org/10.1155/2022/6453499
[27] Zapalska, A. M., & Dabb, H. (2002). Learning Styles. Journal of Teaching in International Business, 13, 77-97.
https://doi.org/10.1300/j066v13n03_06

Copyright © 2025 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.