Assessment of Students’ Tasks in the Era of Artificial Intelligence: Evidence from Higher Learning Institutions
1. Introduction and Background
The integration of Artificial Intelligence (AI) in higher education has significantly transformed teaching and learning practices [1]. AI is regarded as an effective tool for teaching and learning in higher education institutions because it enhances self-directed learning, provides personalized feedback, and promotes innovative pedagogical approaches; even so, the assessment of learning achievement remains paramount [2] [3]. However, as AI-generated content becomes increasingly sophisticated, concerns arise about the authenticity and ethics of assessment practices in this era of AI [4] [5]. Educators, particularly lecturers in higher learning institutions, face the dilemma of whether they are assessing the learner’s own intellectual work or AI-generated work [6].
This dilemma calls for the establishment of guiding principles and policies that ensure fair, ethical, and responsible assessment in the context of AI-supported learning [7] [8]. As higher education institutions continue to embrace digital transformation, there is an urgent need to understand stakeholders’ perceptions and to develop frameworks that govern AI-supported assessment ethically in universities, including Tanzanian universities [4] [9].
2. Statement of the Problem
While AI tools enhance learning and assessment efficiency, they also challenge traditional notions of academic integrity and fairness [10] [11]. In many higher education institutions, especially in Tanzania, there are no clear guidelines distinguishing acceptable AI-assisted work from unethical use [11] [12]. The absence of ethical policies leads to inconsistent teaching and assessment practices, confusion among educators and lecturers, and a potential compromise of academic standards by weakening learners’ critical thinking and innovative abilities [11] [13] [14].
Therefore, this study seeks to investigate stakeholders’ perceptions of AI use in assessment, identify principles for ethical assessment, and propose policies that can guide responsible and equitable assessment practices in higher education [9] [11] [15].
3. Theories Guiding the Study
The study is guided by two theories, namely the Ethical-Socio-Constructivist Theory and the Academic Integrity Theory (Normative Ethics Framework).
3.1. Ethical-Socio-Constructivist Theory
The Ethical-Socio-Constructivist Theory, drawn from Vygotsky’s socio-constructivist theory and strengthened by ethical principles in education, fits this study because it holds that knowledge is socially constructed [16]. Socio-constructivist theory posits that learning is socially mediated and that knowledge is constructed through interaction, reflection, and guided support [17] [18].
Within this framework, digital tools such as Artificial Intelligence (AI) act as mediating artifacts in the learning process rather than replacing the learner’s cognitive role [16].
In the context of this study, AI tools such as ChatGPT, Grammarly, and Turnitin function as mediation tools, similar to textbooks, calculators, or learning management systems. The concern, therefore, is not the existence of AI itself, but how AI is ethically integrated into assessment so that learning remains authentic, meaningful, and reflective of the learner’s own understanding [19] [20]. Thus, this theory explains how AI supports teaching and learning while also justifying the need for ethical boundaries to preserve genuine knowledge construction within higher education assessment practices in the AI era.
3.2. Academic Integrity Theory (Normative Ethics)
To address the ethical dimension of assessment in higher learning institutions in the AI era, the study is further grounded in Academic Integrity Theory, informed by normative ethical principles such as honesty, fairness, accountability, transparency, and responsibility [21] [22].
These ethical principles are core constructs of contemporary academic integrity frameworks and are recognized in higher education governance and policy development, particularly in relation to digital and AI-supported assessment practices [23]. Consequently, Academic Integrity Theory provides a suitable ethical lens for examining responsible AI use, guiding policy formulation, and safeguarding fairness and credibility in higher education assessment.
4. Objectives of the Study
4.1. General Objective
To investigate assessment practices in higher education in the era of Artificial Intelligence in relation to ethics.
4.2. Specific Objectives
i) To understand the perceptions of students and educators on assessment in higher education in the AI era.
ii) To identify guidelines that govern/could govern assessment in the AI era in higher education institutions.
iii) To suggest policies that could be useful in facilitating ethical and responsible assessment in the AI era.
iv) To suggest ethical practices that ensure fairness and academic integrity of assessment in the AI era.
5. Research Questions
i) What are the perceptions of higher education students and educators regarding assessment in the AI era?
ii) What principles govern/could govern ethical assessment in the AI era in higher education institutions?
iii) What are the policies that could be useful in facilitating ethical and responsible assessment in the AI era?
iv) What ethical practices ensure the integrity of assessment in the AI era?
6. Significance of the Study
This study contributes to the responsible use of AI in higher education by providing insights into how stakeholders perceive AI in assessment [24] [25]. It assists in developing guiding principles for ethical and fair AI-supported assessment of higher education learners [8] [26]. In addition, the study proposes institutional and national policies to guide AI use in academic assessment [23] [27]. The aim is to enhance academic integrity, accountability, and transparency in AI-supported learning, and to ensure that assessment reflects learners’ genuine abilities [28] [29]. The findings benefit policymakers, university administrators, educators, and students by providing guidance and promoting a culture of responsible AI usage in academic contexts [30] [31].
7. Scope and Delimitation
The study focused on selected higher education institutions, namely Mbeya University of Science and Technology, Catholic University of Mbeya, Sokoine University of Agriculture, and Jordan University College of Morogoro. It targeted lecturers and students who are directly involved in teaching, learning, and assessment processes. The study did not cover AI applications at the secondary or basic education levels, because learners at those levels are not allowed to have mobile phones or smartphones and computer services there are limited. The participants were sixty in total (45 students and 15 educators); this number is too small for broad generalization, yet the authors believe that the findings shed light for researchers to begin conducting further studies on assessment in the AI era, and that universities may use them as a benchmark for setting policies and principles guiding the use of AI in teaching and learning.
8. Methodology
8.1. Research Design
To ensure thorough data collection and analysis, a mixed-methods approach was used, integrating quantitative and qualitative research methodologies. This methodological integration enables researchers to examine the phenomenon from several angles, offering both depth and breadth of insight. Quantitative techniques, including questionnaires, made it possible to gather quantifiable data from a larger sample, and statistical analysis enabled the researchers to identify trends and patterns in students’ and educators’ opinions of AI in assessment [32]. Meanwhile, qualitative approaches, such as open-ended questions and interviews, offered a more in-depth understanding of participants’ attitudes, experiences, and opinions regarding ethical assessment practices in the AI era [33].
This method’s strength is its capacity to triangulate results from several data sources, improving the study’s validity and reliability [34]. By combining numerical data with narrative accounts, the study captures both people’s opinions about AI-supported assessment and the reasons behind them. Moreover, it situates quantitative trends within the real-world experiences of students and educators, enabling the researchers to develop comprehensive knowledge of the ethical potential and difficulties presented by AI in higher education. Mixed-methods research is especially appropriate for complex educational problems where both empirical data and interpretive insights are required to guide practice and policy [35].
8.2. Population and Sampling
The study’s target population comprised university educators and students who are actively involved in teaching, learning, and assessment activities within higher education institutions [36]. This population is suitable because both groups are directly affected by the use of artificial intelligence (AI) in educational assessment and can offer insightful information about its ethical, pedagogical, and practical consequences. Ary et al. [36] state that choosing participants with pertinent expertise and experience ensures that the information gathered appropriately captures the realities of the study setting.
Respondents familiar with institutional assessment procedures and AI applications such as ChatGPT, Grammarly, Gemini AI, and Turnitin were chosen using a purposive sampling technique. Purposive sampling is appropriate for qualitative and mixed-methods studies because it allows the researcher to deliberately include people who can offer rich and pertinent information [37]. Therefore, the inclusion criteria focused mainly on those who had used AI for teaching, learning, or assessment during the previous academic year.
A sample of 45 students drawn from three public universities and two private universities (9 students from each university) was chosen for the quantitative phase to fill out questionnaires intended to gauge perceptions and ethical issues related to AI-supported assessment. These universities were chosen on the basis of their specializations, which include science programs (preparing engineers, doctors, agricultural officers, etc.), education programs (preparing teachers for different specializations), social sciences, and economics. Moreover, the universities involved have admitted large numbers of students for three consecutive years; admitting many students means drawing students from different parts of the country and employing many lecturers with different views and ways of thinking. This was done to obtain perceptions and opinions from a range of teaching experiences that can be representative of current assessment practice in higher learning institutions in Tanzania in the AI era.
According to Cohen, Manion, and Morrison [38], this sample size enabled the researchers to analyse the collected data while preserving representativeness. For the qualitative phase, interview guides were administered to fifteen lecturers (three from each university) to learn more about their experiences, difficulties, and suggestions for ethical AI-supported assessment. According to Creswell and Plano Clark [32], combining quantitative breadth with qualitative depth improves the validity and thoroughness of mixed-methods study findings.
By ensuring a balanced representation of viewpoints, this method enables the assessment of both narrative insights and numerical trends. The study’s validity was strengthened by the involvement of several stakeholder groups, which also helped to build well-informed, context-sensitive principles and policies for the ethical use of AI in higher education evaluation.
8.3. Data Collection Instruments and Analysis
8.3.1. Questionnaire
Forty-five students were given questionnaires to complete in order to gather quantifiable information about their opinions and experiences regarding the ethical conduct of assessment in the age of artificial intelligence (AI). Questionnaires can effectively capture large-scale patterns and trends in stakeholders’ opinions, enabling statistical summarization and comparison [32]. Items were created in Likert-scale and multiple-choice formats to gauge the degree of agreement or disagreement on important ethical, pedagogical, and policy-related concerns pertaining to assessment in the AI era. Moreover, open-ended questions were included to deepen understanding of the quantitative data by allowing respondents to give their own insights [32].
8.3.2. Interviews
Fifteen lecturers were interviewed to gather qualitative insights into the ethical dilemmas, guiding principles, and policy consequences of assessment in the AI era. Interviews are well suited to examining participants’ lived experiences, opinions, and attitudes in depth because they offer the freedom to probe further based on responses while preserving consistency across interviews [33]. The interviews concentrated on understanding the practical applications of AI tools, the difficulties in determining whether an assessment practice is ethical or unethical, and suggestions for institutional and national policies.
8.4. Data Analysis
Quantitative Data: Descriptive statistics such as percentages and charts were used to analyse the survey responses. This made it possible to identify prevailing patterns and trends in stakeholders’ experiences and views of assessment in the AI era [39].
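As a minimal sketch only, using hypothetical responses rather than the study’s actual data, the percentage summaries reported in this kind of descriptive analysis can be produced from raw Likert responses as follows:

```python
from collections import Counter

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree);
# these values are illustrative, not the study's data.
responses = [5, 4, 4, 3, 2, 4, 5, 4, 1, 4]

labels = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral",
          4: "Agree", 5: "Strongly agree"}

counts = Counter(responses)
total = len(responses)

# Percentage of respondents per category, rounded to one decimal place
for value in sorted(labels):
    pct = 100 * counts.get(value, 0) / total
    print(f"{labels[value]}: {pct:.1f}%")
```

The same tallies can then feed a bar or pie chart to visualize prevailing patterns across items.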
Qualitative Data: The six-step method described by Braun and Clarke [40] was used for the thematic analysis of the interview data. This entails becoming familiar with the data, coding, generating themes, reviewing themes, defining and naming themes, and presenting the results. Thematic analysis made it possible to surface recurring patterns, important insights, and a detailed understanding of stakeholders’ ethical concerns, guiding principles, and policy recommendations on the ethical conduct of assessment.
8.5. Validity and Reliability
Before data collection, the questionnaire was pre-tested on a small sample of participants to help guarantee validity and reliability. Clarity, linguistic appropriateness, and the items’ capacity to capture the target constructs were evaluated during pre-testing [35]. Triangulating data from both quantitative and qualitative methodologies further strengthens the findings’ trustworthiness, because patterns found in one data set may be verified or expanded upon by the other [36]. The reliability of the thematic analysis was also strengthened via audit trails, peer debriefing, and member-checking during the qualitative phase.
9. Findings and Discussion
9.1. Perceptions of Students and Educators on Assessment in Higher Education in the AI Era
Artificial Intelligence (AI) is perceived as advanced technology that allows machines to assist in thinking, learning, and making decisions in ways similar to human beings. During the study, the majority of participants disclosed that AI tools such as ChatGPT, Grammarly, Grok, DeepSeek, Poe, and Turnitin have transformed teaching, learning, and assessment in higher education.
Several respondents acknowledged that they are aware of AI and understand its applicability. They contended that AI assists in various academic activities, including writing, editing, designing, drawing graphics, preparing PowerPoint presentations, translation, setting examination questions, supporting students with special learning needs, paraphrasing, and many others. Their perceptions fell into two categories, positive and negative, presented below.
9.1.1. Positive Perceptions
Teaching better
During the interviews, respondents disclosed that AI helps teachers teach better and students learn faster through personalized learning support. In this context, some educators argued that AI helps not only to search for resources but also to direct the user to reliable ones when given clear and precise directions, giving the user an opportunity to read and verify each source. Below is an anchor example:
…When AI is perfectly directed it gives high percentage of perfect answers, but in most cases when it is imperfectly directed the probability of giving unreliable answers is very high (Interview J November, 2025).
Providing instant feedback
Respondents reported that AI provides automated grading, checks assignments, and gives instant feedback, saving teachers’ time, especially for tasks submitted in soft copy for grading. Educators who participated in the interviews also disclosed that it is not easy to determine whether tasks were generated by AI when they are submitted in hard copy, but for soft copies this can be easily determined.
…You know in teaching learners in higher leaning institutions we need to change our traditional ways of assessing their tasks otherwise you will be grading an AI work if you get a hard copy. But for the soft copy you can easily understand by checking plagiarism and AI work (Interview G October, 2025).
AI in academic writing
Respondents said AI enhances academic writing, research, and access to learning materials by providing organized and summarized content. They noted that AI helps by providing structures for what is required, which lessens the writer’s burden of thinking about a suitable structure for his or her work. They also declared that:
…Tools such as ChatGPT help with idea generation and explanations, Grammarly improves grammar, and Turnitin enhances academic integrity by checking for plagiarism (Interview F November, 2025).
AI in learning
Some respondents said AI makes learning more efficient, data-driven, and accessible. One respondent clarified that it is easier to search for materials with AI than to go to the library; AI is therefore well suited to areas where physical resources are limited but the internet works well.
9.1.2. Negative Perceptions
Weakening critical thinking
In the interviews, respondents argued that AI allows students to retrieve simple, skeletal answers, which students use without deep reasoning, weakening their critical thinking. This is in line with the studies in [9] [11], which warned of rote learning that might be caused by unethical use of AI.
Reducing the ability to think independently
During the interviews, respondents disclosed that over-reliance on AI may reduce students’ ability to think independently, analyze, or solve problems creatively. Some learners cannot perform even simple tasks without AI tools and see themselves as incomplete without AI.
Production of inaccurate answers
One interview respondent disclosed that AI sometimes produces inaccurate responses that require further verification. This is problematic because some learners believe that everything from AI is accurate, which is not always true.
…You ask something in AI you get an incorrect answer, you ask a source of it, you get a wrong source, so verification is needed to get accurate answers (Interview M October, 2025).
Dishonest practice
Dishonest practice was observed by one of the respondents, who declared that students humanize AI-generated content to evade detection tools. This is dangerous to academic practice, since the assessor risks assessing content that was generated by AI and then humanized. A related threat is that AI undermines the reading culture and genuine intellectual engagement among students.
9.2. Suggested Principles That Govern/Could Govern Ethical Assessment in the AI Era in Higher Education Institutions
Participants proposed several guiding principles to ensure fairness, accountability, and integrity in AI-enabled assessment systems. In the questionnaires, 51.2% agreed that ethical principles must be included in assessment, 25.6% strongly agreed, 11.6% disagreed, 7% strongly disagreed, and 4.7% were neutral. See the summary in Figure 1 below:
9.2.1. Transparency
Participants suggested that students must be informed when and how AI is used in academic learning and assessment processes. Universities and colleges should organize training for students to learn how to use AI ethically. In the questionnaire, 32.6% strongly agreed and 44.2% agreed that universities should train both students and lecturers on responsible AI use, while 11.6% were neutral, 4.7% disagreed, and 7% strongly disagreed. This is evidence that training is important for both students and educators; see the summary in Figure 2 below:
Figure 2. Training students and lecturers.
Conducting training on AI use will help learners understand their roles in learning while integrating AI rather than relying on it. Moreover, educators should clearly explain to learners how AI is used in grading and giving feedback.
9.2.2. Fairness
Fairness was another principle raised by participants: AI should not disadvantage students who lack access to advanced tools. This challenge is especially real when students use AI tools without guidance from their facilitators. Respondents also stipulated that assessment of students’ tasks must evaluate understanding, not the ability to use AI. The open question is how to measure students’ understanding while knowing that they also use AI and may try to humanize its output.
9.2.3. Integrity
Questions and tasks should engage students in ways that reflect their own thinking and experiences rather than inviting generic answers. Respondents argued that questions that are contextualized and reflect real-life experiences cannot easily be answered by AI. Therefore, educators must change their styles of asking questions in this AI era. Learners must likewise be guided to properly acknowledge when they have used AI.
9.2.4. Privacy and Data Protection
AI systems normally work with many different sources, and information submitted to them may later be retrieved by anyone who requests it. This endangers data privacy and offers little protection. Therefore, student data used by AI systems must be protected from misuse and kept private. One respondent in the interviews said:
…I asked AI to reshape my Curriculum Vitae and improve grammar in my document finally my Curriculum Vitae can be accessed by any body who knows me through AI and my documents can be accessed in AI as well… (Interview H November, 2025).
This is dangerous, especially if malicious actors seek to misuse the exposed documents.
9.2.5. Accuracy and Reliability
AI sometimes generates data that is inaccurate and unreliable. The respondents suggested that AI tools must be regularly monitored and updated to ensure the accuracy of their output. This will simplify learning, as learners will receive accurate and reliable answers that help them learn better and understand the material.
9.2.6. Accountability
In assessing learners’ achievement, human oversight must remain central; educators should verify AI-generated judgments using AI-detection tools such as ZeroGPT and Turnitin. This helps hold learners accountable for what they present, and encourages them to validate sources and include reflective examples that show the reality of the existing environment.
9.3. Policies Suggested to Promote Responsible Assessment Practices in the AI Era
Many participants noted that their universities lack specific AI guiding policies; instead, they rely on traditional examination or ICT policies, which are largely irrelevant to AI usage. The suggested policies are given below.
9.3.1. Institutional Recommendations
Respondents showed the necessity for each academic institution to develop clear and comprehensive AI-use policies specific to teaching, learning, and assessment. In the questionnaires, 53.5% agreed that such policies need to be developed, 20.9% strongly agreed, 14% were neutral, and 11.6% strongly disagreed. This shows that higher learning institutions really need guiding policies. See the summary in Figure 3 below.
In the interviews, respondents went further and suggested tolerable levels of AI usage in students’ tasks: some proposed 0% to 35% as the acceptable range, with more than 40% considered excessive. Others said the Ministry of Education must establish national-level guidelines on AI use in higher education, and in secondary schools as well, capping usage at 40%. Asked why 40%, they argued that AI is mainly used to generate content directed by the individual searching, not by itself; after obtaining what is required, the individual must contextualize the information and relate it to their own intent, so when the work is subjected to AI detection, some percentage of AI will inevitably appear in the text. Others considered 0% to 30% the tolerable range.
The authors also hold that the tolerance percentage should depend on the nature of the task given by the teacher. For assignments, for example, 0% to 15% is suggested, combining the qualitative suggestions and the quantitative average from the findings, since learners are given ample time to explore things at their own pace and from their own experience.
For research work, 0% to 25% is suggested, averaged from the findings and the qualitative suggestions, because AI in research is used to obtain genuine sources of information, including references, which are later validated by the researcher; in that case, the detected percentage of AI usage will fall around the suggested range. According to the authors, providing a specific agreed range will reduce the likelihood of learners using AI tools to humanize AI-generated work; instead, they will use their own abilities as human beings.
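The tiered tolerance ranges proposed above can be expressed as a simple rule. In this sketch, the threshold values follow the ranges suggested in the findings (15% for assignments, 25% for research work, 35% as a general ceiling); the function and dictionary names are illustrative only, not an institutional standard:

```python
# Upper bounds of tolerable AI-detected content, following the ranges
# suggested in the findings; these thresholds are illustrative, not policy.
TOLERANCE = {
    "assignment": 15.0,   # 0% - 15%
    "research": 25.0,     # 0% - 25%
    "general": 35.0,      # 0% - 35%
}

def within_tolerance(task_type: str, ai_percentage: float) -> bool:
    """Return True if the AI-detected percentage falls inside the
    suggested tolerable range for the given task type."""
    limit = TOLERANCE.get(task_type, TOLERANCE["general"])
    return 0.0 <= ai_percentage <= limit

# Example: a research report flagged at 22% AI content is tolerable,
# while an assignment flagged at 22% exceeds the suggested range.
print(within_tolerance("research", 22.0))    # True
print(within_tolerance("assignment", 22.0))  # False
```

A rule of this shape would let institutions apply a detector’s percentage consistently across task types instead of judging each case ad hoc.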
The Ministry of Education should work closely with universities and colleges to ensure equal access to AI tools for all students, avoiding inequality among learners. It should also supervise the integration of AI policies with academic integrity frameworks to minimize the misuse of AI.
9.3.2. Implementation Strategies
All academic institutions should provide software tools such as Turnitin, Copyleaks, or GPTZero to all students so that AI use can be detected early and students use AI ethically, rather than waiting for work to be submitted before checking it. Educators should not wait for their students to fall into the trap; proper advice is needed for educators and students alike so that learning does not become a struggle. Respondents also suggested that AI policies be aligned with 21st-century skills requirements, including collaboration, creativity, critical thinking, and communication. Finally, they suggested regular updates of the policies as AI technologies evolve.
9.4. Ethical Practices Suggested to Ensure the Integrity of Assessment in the AI Era
Respondents recommended practical strategies to maintain ethical standards by focusing on the students, educators, and institutions:
9.4.1. For Students
Respondents suggested that every institution should clearly disclose the types of AI tools allowed and the level of assistance permitted in the tasks given. The questionnaires produced a mixture of responses: 46.5% agreed that AI-generated work should be accepted for assessment, 20.9% were not sure, and 32.6% said no. This is an indicator that learners themselves need guidance in AI usage; see the summary in Figure 4 below.
Figure 4. AI-generated work is accepted for assessment.
The authors argue that institutions should advocate the use of AI for paraphrasing, language polishing, or idea generation, not for producing full assignments, which will eventually be detected. Finally, learners should acknowledge AI contributions in footnotes, endnotes, or declaration statements.
9.4.2. For Educators
Educators should design process-based assessments, including oral defenses, in-class writing, draft submissions, and authentic practical tasks; these assess the student’s ability to explain and defend submitted work. Moreover, training instructors to recognize AI-generated patterns and verify student understanding was recommended. Using AI-detection tools cautiously, and never as the sole evidence of misconduct, was another ethical practice suggested by the participants.
9.4.3. For Institutions
Respondents suggested that higher learning institutions should organize continuous training on AI literacy, ethical use, and associated risks. Institutions should also organize seminars that stress the importance of promoting critical thinking and discouraging over-reliance on AI, which is claimed to lower it.
Institutions were encouraged to maintain human oversight in grading to ensure fairness, rather than making decisions without considering the probability that learners applied AI to generate the content. However, participants contended that if tasks are contextualized, AI cannot easily replicate them; this must always be impressed upon educators as well as learners.
10. Conclusion
The study concludes by calling upon education stakeholders, the ministries guiding the provision of higher education, and the higher learning institutions themselves to collaborate and to check regularly the progress not only of teaching and learning but also of the mode of assessment, by comparing expected outcomes with actual outcomes. In this era of AI, policies guiding its ethical use are of paramount importance. In the meantime, universities can set bylaws defining the tolerable percentage of AI in students’ work, as suggested by the research participants and the authors (0% to 15%, 0% to 25%, or 0% to 35%, depending on the task). Finally, conducting seminars for educators and students will increase awareness of the use of AI tools without affecting their creativity and critical thinking abilities.
Acknowledgements
The participants and the authorities who gave permission to conduct this study are highly acknowledged; without them, nothing would have been accomplished. Thank you.
Data Availability
Not all data have been presented in the text; the presented data are those describing the findings. The unrepresented data can be made available on request from the corresponding author. This is due to restrictions imposed by the ethics committee to protect participants’ privacy.
Ethical Considerations
The researcher asked for permission to conduct research from the responsible authorities in Tanzania.
Ethical Approval
The permission was approved by the Office of Research and Publications at the Catholic University of Mbeya on 11th September, 2025.
Consent to Participate
This study involved participants above 18 years old, and each participant filled out the consent form; thus, there was no need to seek permission from their guardians/parents. Participants were given prior instructions and freedom of participation.