1. Introduction
The rapid proliferation of Artificial Intelligence (AI) in recent years has revolutionised numerous sectors, with education being one of the most significantly impacted. AI offers a range of possibilities, from reducing the administrative burden on educators to enhancing the learning experience for students through personalised educational tools. However, this technology also presents challenges, including the potential for increased academic dishonesty and the ethical implications of its integration into the educational system.

Understanding the attitudes of students and educators towards AI is crucial for determining the most effective ways to incorporate these technologies into educational environments. This knowledge is vital not only for the advancement of AI in education but also for ensuring that its application enhances, rather than hinders, the learning process. As AI continues to evolve, it is imperative to assess whether it is at a stage where it can be seamlessly embedded into educational practices without introducing more challenges than it resolves.

The present study aims to explore the perceptions of high school students regarding the use of AI for academic purposes. By examining these perceptions, this research seeks to contribute to the ongoing discourse on the role of AI in education and provide insights into how AI can be tailored to better serve educational objectives. This understanding may help educators and policymakers decide how best to integrate AI into the educational system, ensuring that its adoption is both effective and ethically sound.
2. Literature Review
2.1. The Ethics of Using AI in Education
The integration of AI into educational settings has raised numerous ethical considerations that must be addressed to ensure responsible use by students. Huang (2023) highlights significant concerns regarding data privacy, emphasizing that the extensive use of student data by AI systems could lead to breaches of confidentiality. Similarly, Choi (2024) discusses the implications of AI for academic integrity, focusing in particular on the risks of plagiarism and cheating facilitated by AI tools. This viewpoint is supported by Kwak, Ahn, and Seo (2022), who argue that AI tools can easily be misused by students to generate content or answers without fully understanding the material, thereby undermining educational outcomes and ethical standards.
Eden et al. (2024) add to this discourse by exploring the broader impacts of AI on student engagement and decision-making processes, suggesting that the use of AI might inadvertently promote passivity in learning. This could hinder the development of critical thinking and problem-solving skills if students become overly reliant on AI systems for answers. Furthermore, ethical considerations extend to ensuring that AI technologies are transparent and accountable in their use, particularly in assessments and decision-making processes, as noted by Azzam and Charles (2024). These authors emphasize the importance of establishing ethical guidelines and fostering ethical awareness among students to navigate the complexities of AI use responsibly.
Moreover, equitable access to AI resources is another significant ethical concern, as disparities in access could exacerbate existing educational inequalities. Eden et al. (2024) argue that without careful attention to equitable distribution, AI could widen the gap between students with varying levels of access to technology. Azzam and Charles (2024) further discuss the need for educational policies that ensure all students have equal opportunities to benefit from AI technologies, thereby promoting fairness and inclusivity in AI-enhanced education.
Based on these discussions, our study investigates high school students’ perceptions of these ethical implications, particularly focusing on data privacy, academic integrity, and equitable access. Unlike previous studies that primarily address these issues from a theoretical standpoint, our research provides empirical insights into how students perceive these ethical concerns in real-world educational settings. This focus on students’ perspectives is crucial, as it highlights the need for ethical guidelines that are informed by the stakeholders most affected by these technologies.
2.2. How Are Students Using AI?
AI technologies offer various applications in educational settings, enhancing personalized learning experiences and academic performance. Pendy (2023) and Alshehhi, Alzouebi, and Charles (2024) discuss how AI tools are utilized in personalized learning, tailoring educational experiences to individual students’ needs and preferences. This personalization allows for targeted support and guidance, which can improve students’ understanding and retention of educational content. Lin (2023) further elaborates on the use of AI-powered tools, such as chatbots and intelligent tutoring systems, which provide students with interactive learning experiences and real-time feedback.
Moreover, Wen (2024) examines the impact of AI on student writing and language skills, noting that AI applications like grammar checkers and writing assistants help students refine their language abilities. Hazaymeh (2024) supports this by highlighting how AI tools enhance English language learning through activities such as vocabulary building and grammar exercises. These studies suggest that AI technologies have the potential to significantly enhance students’ learning experiences by providing tailored support and fostering skill development across various subject areas.
AI is also increasingly employed in non-traditional learning environments to foster skills such as critical thinking and problem-solving. For instance, Thomas (2024) discusses the role of AI in formal and informal education, suggesting that AI tools can support both structured learning and exploratory educational activities. Gupta (2024) adds that AI systems can help teachers identify individual learning needs and adapt instructional strategies accordingly, which ultimately enhances student outcomes.
Our study extends this body of research by exploring how high school students are currently using AI tools in their academic work, including tasks such as solving mathematical problems and writing essays. By focusing on the specific applications of AI that students find most beneficial, we provide a more nuanced understanding of AI’s role in contemporary education and identify areas where its use can be optimized to support student learning.
2.3. Students’ Perceptions of AI
Students’ perceptions of AI in education are influenced by various factors, including their experiences with AI tools and their readiness for AI integration into educational settings. Kim (2023) and Irvin et al. (2011) highlight the importance of experiential learning in shaping students’ attitudes towards AI. Their research suggests that hands-on experiences with AI can positively influence students’ perceptions, enhancing their confidence in using AI tools and their understanding of the technology’s relevance to their future endeavors. Peng et al. (2023) also emphasize the importance of students’ perceptions in determining the success of AI integration, noting that positive perceptions can lead to greater acceptance and engagement with AI technologies.
Chiu et al. (2022) examine the impact of AI education on students’ thinking skills and technical abilities, arguing that AI-focused curricula can enhance cognitive skills and prepare students for an AI-driven future. This perspective aligns with the findings of Tonbuloğlu (2023), who evaluated the use of AI applications in online education and found that these tools can significantly improve student engagement and learning outcomes.
Our study builds on these findings by investigating high school students’ perceptions of AI’s impact on education quality, ethical considerations, and equitable access. By exploring these perceptions, we aim to provide insights into how students view the integration of AI in their learning environments and identify factors that influence their acceptance and use of AI technologies. This research contributes to the broader discourse on AI in education by highlighting the need for policies that promote ethical use, equitable access, and the development of critical thinking skills, ensuring that AI enhances, rather than hinders, educational outcomes.
In summary, the existing literature on AI in education highlights several critical areas, including ethical considerations, the applications of AI tools, and students’ perceptions of AI. While previous research has provided valuable insights into these areas, there is a need for more empirical studies that focus on students’ perspectives and experiences with AI technologies. Our study aims to fill this gap by providing a comprehensive analysis of high school students’ perceptions of AI, offering new insights into the challenges and opportunities associated with AI integration in education. By doing so, we hope to contribute to the development of more effective and ethical AI practices that support student learning and foster a positive educational environment.
3. Methodology
This pilot study investigated how secondary school students perceive the use of AI for academic purposes, guided by the research question: “How do high school students perceive the use of AI for schoolwork?” The study employed a cross-sectional survey design, which is appropriate for capturing students’ perceptions at a specific point in time (Cohen, Manion, & Morrison, 2002). The survey method was chosen for its efficiency in collecting data from a large number of respondents and its ability to provide quantitative insights into student perceptions (James et al., 2021).

The sample consisted of 57 secondary school students from Newcastle, England, primarily in Year 10, recruited through social media platforms including Snapchat, WhatsApp, and Discord. Selection was non-random and relied on voluntary participation, which may limit the generalisability of the findings to a broader population. However, the sample size exceeded the initial target of 30 participants, providing a more robust data set for analysis. Despite this, the sample’s homogeneity in age and academic level suggests that the results may not be fully representative of younger or older student populations (ibid).
3.1. Data Collection Instrument
Data were collected using a structured questionnaire designed in Google Forms. The questionnaire comprised multiple-choice questions, Likert scale items, and an optional open-ended question (see Charles & Charles, 2024). The questions were crafted to be concise and neutrally worded, aiming to minimise bias and ambiguity:
1) What grade/year group are you currently in?
2) How often do you use AI in your schoolwork?
3) Which AI tools/brands do you use for your schoolwork?
4) Why do you use AI tools for your schoolwork?
5) What specific tasks do you use your AI for?
6) How do you feel about the ethics of using AI in schoolwork?
7) Do you think using AI tools hinders the quality of your education?
8) Do you believe all students have equal access to AI tools for education?
9) Please share your thoughts on how AI tools have impacted your learning experience and any ethical concerns you have regarding their use in education.
The survey focused on several key areas: access to AI tools, dependency on AI for academic tasks, the frequency and purpose of AI use, and the types of AI technologies employed by the students. The data collection was conducted online to facilitate broad participation and ensure anonymity, encouraging honest and accurate responses from the participants.

To ensure the validity of the research, the questionnaire was developed based on existing literature and theoretical frameworks related to AI in education. Content validity was enhanced through an expert review, in which an educator familiar with AI and survey methodologies provided feedback on the survey items. This process helped refine the questions to better align with the research objectives and ensure they accurately captured the constructs of interest.
3.2. Data Analysis Procedure
Data were analysed using R Studio, a comprehensive software environment for statistical computing. The raw survey data, collected via Google Forms, was first cleaned to remove incomplete responses, ensuring the dataset was suitable for analysis. We employed a series of statistical tests to examine the relationships between variables. We began with descriptive statistics to summarize the distribution of responses across different survey items. For instance, frequency tables and bar plots were used to illustrate the distribution of AI usage among students and their perceptions of its impact on education quality.
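As a concrete illustration of this first step, a frequency table of the kind described above can be produced in a few lines. The study’s actual analysis was carried out in R Studio; the Python sketch below uses hypothetical responses, not the study’s data:

```python
from collections import Counter

# Hypothetical cleaned responses to "How often do you use AI in your schoolwork?"
# (illustrative only; the study's real analysis was performed in R Studio)
usage = ["Often", "Sometimes", "Never", "Often", "Rarely",
         "Sometimes", "Often", "Never", "Sometimes", "Often"]

# Frequency table summarising the distribution of responses,
# sorted from most to least common category
freq = Counter(usage)
for category, count in sorted(freq.items(), key=lambda kv: -kv[1]):
    print(f"{category:<10} {count:>3}  ({100 * count / len(usage):.0f}%)")
```

In practice the same summary would feed directly into a bar plot of usage frequencies, as was done for the figures reported below.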
To investigate the relationships between variables, we conducted Chi-square tests of independence, which are appropriate for our categorical data. The Chi-square test is a non-parametric test used to determine whether there is a significant association between two categorical variables. In our study, it was applied to examine the relationship between AI usage frequency and perceived impact on education quality, as well as between perceived impact on education quality and beliefs about equitable access to AI tools. For example, the Chi-square test showed a statistically significant relationship between how often students use AI and whether they think it hinders the quality of their education, χ2(28, N = 57) = 46.44, p = .015. In other words, if AI usage frequency and perceived impact were truly independent, an association this strong would be observed by chance less than 2% of the time, suggesting a meaningful association between these variables. Overall, these analyses helped us to uncover key patterns in students’ perceptions and usage of AI tools, contributing valuable insights to the field of AI in education.
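The mechanics of the Chi-square test of independence can be sketched in pure Python: expected counts are computed from the row and column totals under the null hypothesis of independence, and the statistic sums the squared deviations. The 2 × 2 contingency table below is hypothetical (the study’s own tables were larger and were analysed in R Studio):

```python
def chi_square_independence(table):
    """Chi-square test of independence on a contingency table.

    `table` is a list of rows of observed counts; returns the test
    statistic and the degrees of freedom (rows - 1) * (cols - 1).
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under the null hypothesis of independence
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return stat, df

# Hypothetical table: rows = frequent vs. infrequent AI users,
# columns = believes AI hinders education quality (yes / no)
observed = [[10, 5],
            [4, 11]]
stat, df = chi_square_independence(observed)
# With df = 1, the critical value at alpha = .05 is 3.841, so a larger
# statistic indicates a significant association at that level.
print(f"chi2({df}) = {stat:.2f}")
```

In a real analysis one would obtain the p-value directly from R’s `chisq.test()` or Python’s `scipy.stats.chi2_contingency`, which implement the same computation.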
3.3. Ethical Considerations
Ethical considerations were paramount in the design and implementation of this study. Informed consent was obtained from all participants before they engaged with the survey, with clear information provided about the study’s purpose, the voluntary nature of participation, and the anonymity of their responses. Given the age of the participants, care was taken to ensure that the language used in the consent form was accessible and understandable. The study adhered to strict confidentiality protocols, ensuring that individual responses could not be traced back to specific participants. Data were stored securely, and access was restricted to the research team to protect the privacy of the participants.

In summary, this study utilised a well-structured survey method to explore the perceptions of secondary school students regarding the use of AI in education. Through careful attention to sample selection, data collection, validity, reliability, and ethical considerations, the research aimed to provide a reliable and ethically sound contribution to the understanding of AI’s role in contemporary education.
4. Results
The survey data were analysed using R Studio, employing descriptive and inferential statistics to explore high school students’ perceptions of AI in their academic work. Key findings from the data analysis are summarised below.
1) AI Usage Frequency and Perceived Impact on Education Quality:
The Chi-square test of independence revealed a statistically significant relationship between how often students use AI and their perception of its impact on the quality of their education, χ2(28, N = 57) = 46.44, p = .015. As shown in Figure 1, students who frequently use AI tools tend to be more critical of its impact, often perceiving it as potentially hindering educational quality. In contrast, students who use AI less frequently or not at all are less likely to believe that AI negatively affects their education. This finding suggests a nuanced relationship between AI usage frequency and perceived educational outcomes.
Figure 1. Usage frequency vs. perceived impact on education quality.
2) Perceived Impact on Education Quality and Equitable Access to AI Tools:
There was a statistically significant relationship between students’ perceptions of AI’s impact on education quality and their beliefs about equitable access to AI tools, χ2(21, N = 57) = 76.64, p < .05. Figure 2 illustrates that students who perceive AI as negatively impacting educational quality are more likely to believe that access to AI tools is not distributed equitably among students. This finding indicates concerns about fairness and the potential for AI to exacerbate educational disparities.
3) AI Tools and Their Specific Applications:
The analysis also showed a significant relationship between the types of AI tools students use and the specific academic tasks they employ these tools for, χ2(437, N = 57) = 522.35, p < .05. Figure 3 depicts that the most commonly used AI tools were Snapchat AI and ChatGPT, which were primarily utilized for solving mathematical problems and writing essays. This suggests that students select AI tools based on their specific academic needs, indicating a targeted use of technology to support their learning.
Figure 2. Perceived impact on education quality vs. perceived equal access.
Figure 3. AI usage frequency vs. ethical views.
4) AI Usage Frequency and Ethical Views:
Although the relationship between AI usage frequency and students’ ethical views was not statistically significant, χ2(12, N = 57) = 13.92, p = .30, Figure 3 also provides insights into students’ ethical perspectives. The majority of students, regardless of how often they use AI, believe that the ethicality of AI depends on the context of its use. This situational ethical stance suggests that students are aware of the complexities surrounding the use of AI in education and are cautious about its potential misuse.
These findings provide valuable insights into high school students’ perceptions and usage of AI tools, highlighting the complexity of integrating AI into educational settings. They suggest that while AI can offer significant benefits for certain academic tasks, its impact on education quality and equity is perceived variably depending on usage patterns and access.
5. Discussion
The results of this study offer several insights into the perceptions of high school students regarding the use of AI in their academic activities. By analysing the data presented in three key figures, we can better understand how students’ frequency of AI usage correlates with their views on the quality of education, ethical considerations, and perceptions of equitable access to AI tools.
5.1. AI Usage Frequency vs. Perceived Impact on Education Quality
Figure 1 illustrates the relationship between the frequency of AI usage and students’ perceptions of its impact on the quality of their education, for which the chi-square test revealed a statistically significant association. This finding suggests that how often students use AI is linked to their views on whether AI enhances or detracts from their educational experience.

From the distribution of responses, it is evident that students who use AI more frequently tend to be more critical of its impact on their education. Nearly half of the participants believed that AI tools hinder the quality of their education, though they manage this perceived hindrance through traditional study methods. This perspective may stem from concerns that AI tools, while helpful for specific tasks like solving mathematical problems or writing essays, could lead to over-reliance and a reduction in the development of critical thinking skills. The small but notable percentage of students who do not use AI raised concerns about its potential to decrease educational quality, particularly in mastering fundamental skills.

Conversely, a significant portion of students who use AI moderately or rarely reported that AI had no substantial effect on their educational outcomes. This group’s responses may reflect a balanced use of AI, in which students integrate AI tools into their study routines without becoming overly dependent on them. Their belief that AI does not hinder their education suggests that moderate, informed use of AI may help students navigate academic challenges without compromising educational quality.
5.2. Perceived Impact on Education Quality vs. Perceived Equal Access
Figure 2 shows the relationship between students’ perceptions of AI’s impact on education quality and their beliefs about equal access to AI tools, for which the analysis found a statistically significant relationship. This indicates that students who perceive AI as negatively impacting educational quality are also more likely to believe that access to AI tools is not equitably distributed. This perception may be rooted in the idea that students without access to AI tools are at a disadvantage compared with peers who use these tools regularly. The belief in unequal access might amplify concerns about the fairness of AI’s impact on education, with students fearing that those with limited access are unfairly compared to those who can leverage AI to enhance their academic performance.
Interestingly, students who believe that AI positively impacts educational quality are more likely to view access to AI tools as equitable. This correlation suggests that students who benefit from AI may assume that similar benefits are available to all, potentially overlooking the disparities in access that exist within educational settings. The almost even division in responses regarding equitable access highlights the ongoing debate about the inclusivity of AI in education, pointing to the need for policies that ensure all students can benefit from AI technologies.
5.3. AI Usage Frequency vs. Ethical Views
Figure 3 presents the relationship between how frequently students use AI and their ethical views on its use in education. While the relationship was not statistically significant, the heat map provides valuable insights into students’ ethical perspectives. It shows that the majority of students who use AI “Often” believe that the ethicality of AI depends on how and when it is used. This situational or conditional ethical stance suggests that frequent users of AI are more likely to consider the context in which AI is applied, recognising that its ethical implications can vary depending on the task at hand. For instance, students may view AI as ethical when used for simple tasks like jogging their memory or understanding complex topics, but may question its ethicality in contexts where it could lead to academic dishonesty or over-reliance.
Similarly, a significant number of students who “Never” use AI share this conditional ethical view, indicating that even those who refrain from using AI recognize the nuances in its ethical implications. This suggests a broader understanding among students that the ethicality of AI is not black and white but rather dependent on specific circumstances. The relatively low numbers of students who believe AI should not be used to complete tasks or who have significant ethical concerns about its use in education further highlight that the dominant view among students is one of conditional acceptance. Most students appear to believe that AI has the potential to be used ethically if applied correctly, though they are cautious about its potential misuse.
Ultimately, the findings from this study suggest that high school students hold complex and nuanced views on the use of AI in education. Their perceptions are influenced by how often they use AI, how they believe it impacts their education, and their views on the ethicality and equity of AI access. While there is a recognition of the potential benefits of AI, particularly in enhancing efficiency and accuracy in academic tasks, there is also a significant concern about its impact on educational quality and fairness. Students are wary of over-reliance on AI and the possibility that unequal access to these tools could exacerbate existing educational disparities. These insights underline the importance of fostering a balanced approach to AI integration in education, one that emphasises ethical use, equitable access, and the development of critical thinking skills. Educators and policymakers must ensure that AI is used to complement, rather than replace, traditional educational practices, providing all students with the tools they need to succeed in an AI-enhanced learning environment.
5.4. Ethical Considerations and Societal Impact
The integration of AI in education presents several ethical challenges, particularly around issues of equal access and data privacy. These concerns are not merely technical or logistical but have profound societal implications that require careful consideration to ensure that AI technologies enhance educational outcomes without exacerbating existing inequalities or compromising privacy.
The deployment of AI tools in educational settings raises significant concerns about equitable access. As highlighted by Eden et al. (2024), AI can potentially widen the gap between students with varying levels of access to technology, leading to increased educational disparities. This is particularly true in under-resourced schools or regions where access to advanced technologies may be limited due to economic constraints. The findings from our study, as depicted in Figure 2, show that students who perceive AI as having a negative impact on educational quality are also more likely to believe that access to AI tools is not distributed equitably among students. This perception underscores a critical societal issue: the digital divide.
The digital divide refers to the gap between those who have easy access to computers and the Internet, and those who do not. In the context of AI, this divide could lead to significant disparities in educational outcomes. Students with access to AI tools can leverage these technologies for personalized learning, thus gaining a significant advantage over their peers who lack such resources. This disparity can have long-term societal impacts, including reduced social mobility and increased inequality. As AI becomes more integrated into educational practices, it is imperative to develop policies and strategies that ensure all students, regardless of their socio-economic background, have equal access to these tools.
Data privacy is another critical ethical issue associated with the use of AI in education. AI systems often require vast amounts of data to function effectively, including personal data from students. Huang (2023) and Choi (2024) emphasize the risks associated with data breaches and the unauthorized use of personal information. The potential for misuse of student data raises serious ethical concerns about privacy and consent, particularly in environments where students may not fully understand or have control over how their data is used.
The societal implications of these privacy concerns are far-reaching. In educational settings, the loss or misuse of personal data can lead to a loss of trust in educational institutions and technologies, potentially hindering the adoption of beneficial AI tools. Furthermore, breaches of student data could expose sensitive information, leading to long-term consequences such as identity theft or discrimination based on personal data analytics.
Our study’s findings reflect a nuanced understanding of these issues among students. As shown in Figure 3, many students express conditional ethical views on AI use, suggesting awareness of the context-dependent nature of data privacy concerns. This indicates a need for robust data governance frameworks that prioritize the protection of student information, ensuring transparency and accountability in how AI tools collect, store, and use data.
Beyond the immediate concerns of access and privacy, the societal impacts of AI in education include shaping future workforce dynamics and influencing socio-economic stratification. As AI tools become more prevalent, students with skills in utilizing these technologies may find themselves better prepared for future job markets that increasingly value digital literacy and proficiency with AI. This could lead to a widening gap between students who have been trained with AI and those who have not, reinforcing existing social stratifications.
Additionally, there is the potential for AI to perpetuate or even exacerbate biases. AI systems trained on biased data can reproduce and amplify these biases, leading to discriminatory outcomes in educational settings, such as biased grading or reinforcement of stereotypes. Addressing these issues requires a commitment to developing AI systems that are fair, transparent, and inclusive, as well as educating both educators and students on the ethical use of AI technologies.
The ethical implications of AI in education are profound and multifaceted, touching on issues of equity, privacy, and broader societal impacts. As AI continues to integrate into educational practices, it is essential to ensure that these technologies are used ethically and equitably. Policymakers, educators, and technology developers must work collaboratively to create frameworks that support the responsible use of AI, ensuring that it serves as a tool for enhancing educational outcomes without compromising ethical standards or social equity.
By addressing these ethical challenges head-on, we can harness the potential of AI in education while safeguarding against its risks, fostering an inclusive and equitable learning environment for all students.
6. Conclusion
This pilot study explored the perceptions of high school students regarding the use of AI in their academic work, shedding light on their views about the ethical implications, the impact on educational quality, and issues surrounding equitable access to AI tools. The findings revealed a complex landscape where students’ experiences with AI and their corresponding beliefs are deeply intertwined. Notably, students who frequently use AI are more critical of its impact on education, highlighting concerns about over-reliance and the potential degradation of critical thinking skills. Additionally, there is a significant perception that access to AI tools is not equitably distributed, which may exacerbate existing educational disparities.
As a pilot study, this research was designed to be exploratory in nature, aiming to identify key themes and trends that could inform future, more comprehensive studies. However, several limitations should be acknowledged. The sample size was relatively small and homogenous, consisting primarily of Year 10 students from a single geographical location, which limits the generalisability of the findings. The study also relied on self-reported data, which may be subject to biases such as social desirability or recall bias. Despite these limitations, important lessons have been learned that will guide the development of a more robust study in the near future. Future research will aim to include a larger, more diverse sample to enhance the generalisability of the findings. Additionally, we plan to incorporate qualitative methods, such as interviews or focus groups, to gain deeper insights into students’ thought processes and the contextual factors influencing their perceptions. These improvements may help to refine our understanding of how AI is perceived and used by students, ultimately contributing to the development of educational practices that maximise the benefits of AI while addressing its challenges.
In conclusion, while this pilot study has provided valuable initial insights into the complex relationship between AI usage and student perceptions, it also underscores the need for continued research in this evolving field. As AI technologies continue to integrate into educational settings, it is imperative that educators and policymakers remain attuned to students’ experiences and concerns, ensuring that the use of AI enhances learning outcomes in an ethical and equitable manner.