Beyond Problem-Solving: The Future of Learning in an AI-Driven World

Abstract

As generative AI becomes increasingly integrated into higher education, its influence is reshaping what it means to learn, teach, and think. This paper questions whether education should continue prioritizing problem-solving in an era where machines excel at it. Drawing from constructivist theory, Bloom’s Taxonomy, and the Cattell-Horn-Carroll model of intelligence, the paper proposes a shift from utilitarian models of learning to a framework that emphasizes conceptual agility, dialectical reasoning, and meaning-making. We explore how AI may alter cognitive development, risk passive knowledge acquisition, and deepen achievement gaps—while also offering opportunities for enhanced scaffolded learning. Ultimately, this paper argues for a human-centered, inquiry-based model of education that redefines learning beyond the mastery of problems toward the cultivation of imagination, ethical reasoning, and epistemic curiosity.

Share and Cite:

Burns, L. (2025) Beyond Problem-Solving: The Future of Learning in an AI-Driven World. Creative Education, 16, 520-534. doi: 10.4236/ce.2025.164031.

1. Introduction

The advent of generative AI, such as ChatGPT, has introduced profound shifts in educational landscapes, particularly in higher education. While these tools offer unprecedented efficiency and capabilities, they also raise urgent questions about the integrity of the learning process. As educators, our role is to examine these implications through the lens of established educational theories and cognitive science to preserve and enhance the core goals of education. However, AI is not merely a tool that enhances or disrupts traditional learning—it compels us to reconsider the fundamental purpose of education itself. If knowledge retrieval and problem-solving are increasingly automated, what remains uniquely human in the learning process?

Central to this discourse is the distinction between learning by doing and mimicking the appearance of learning—a distinction that underscores the importance of engaging both fluid and crystallized intelligence in meaningful ways. Yet, the assumption that “learning by doing” remains the gold standard of education must also be questioned. The concept of learning by doing is rooted in experiential and constructivist learning theories, most notably developed by John Dewey, Jean Piaget, and Lev Vygotsky. Dewey’s work emphasizes active engagement with materials and real-world applications, arguing that deep learning occurs when students actively participate in their own education rather than passively receiving information. Similarly, Vygotsky’s zone of proximal development and Piaget’s stages of cognitive development stress the importance of guided, hands-on engagement in fostering intellectual growth.

These theorists emphasized active engagement and scaffolded cognitive growth, but their frameworks emerged in a pre-AI era. If AI now mediates engagement and scaffolding, should we still prioritize “learning by doing”, or must we seek a new paradigm that accounts for AI as both collaborator and disruptor?

Wang et al. (2024) offer a comprehensive synthesis of AI applications in education, highlighting the growing efficacy of adaptive learning, intelligent tutoring, and personalized feedback systems. Their review underscores the field’s methodological diversity and the potential for AI to support conceptual growth, not just efficiency.

Some scholars argue that AI enhances scaffolded learning experiences (Fortner & Katzarska-Miller, 2024), while others warn that it fosters cognitive shortcuts, diminishing students’ ability to engage in deep, effortful thinking (Gammoh, 2024). This paper extends beyond the question of skill acquisition and academic integrity to ask: How does education cultivate intellectual resilience, imagination, and meaning-making in a world where machines now “think” alongside us?

2. Rethinking Engagement and Equity in an AI-Mediated World

2.1. The Limits of Learning by Doing

While learning by doing is widely recognized as a cornerstone of meaningful education, some scholars question whether it primarily serves a utilitarian function—preparing students for specific tasks rather than fostering intellectual and imaginative growth. The question arises: Can active learning be decoupled from real-world application, or is its effectiveness inherently tied to problem-solving and concrete experiences? Some critiques suggest that traditional formulations of experiential learning may underemphasize engaged learning as an open-ended, exploratory process rather than merely a hands-on approach aimed at practical competence.

If the primary goal of education is not just the mastery of skills but also the expansion of imagination, ethical reasoning, and conceptual agility, then learning by doing might require a broader conceptualization—one that allows for deep cognitive engagement even in abstract or speculative domains. This perspective is particularly relevant in the age of generative AI, where knowledge production is increasingly automated, making the line between practical and intellectual engagement blurrier. If AI can generate solutions, insights, and even creative works, should the role of education shift away from doing toward interrogating—encouraging students to critique, interpret, and challenge automated knowledge rather than merely applying it? Palmer (2011) suggests that true learning emerges from the ability to hold contradictions reflectively rather than reactively, a skill that AI cannot replicate. The rise of AI challenges us to ask: Should education still center on solving problems, or should it instead prioritize the capacity to frame questions, navigate complexity, and explore meaning?

Additionally, the traditional focus on experiential learning often assumes that knowledge is best acquired through its direct application. However, generative AI disrupts this assumption by offering knowledge without the need for a prior struggle to obtain it. This raises concerns that overreliance on AI could erode students’ ability to engage in slow, effortful cognitive work. If AI can provide instant, structured knowledge, what happens to the cognitive endurance required for deep, effortful learning? Scholars such as Gammoh (2024) warn that AI fosters cognitive shortcuts, which may diminish students’ capacity for sustained engagement with complex intellectual challenges.

As we explore AI’s impact on learning frameworks, it becomes crucial to ask: What forms of learning best prepare students not just for work, but for deeper, more expansive intellectual engagement? This paper assesses generative AI’s effect on learning, critical thinking, and assessment integrity. Specifically, we explore whether AI serves as an effective educational scaffold that supports deep learning or whether it encourages a reliance on automation that weakens students’ intellectual engagement.

Furthermore, we will introduce competing perspectives on AI’s role in education. While some scholars argue that AI can augment learning through scaffolded problem-solving and adaptive learning environments (Fortner & Katzarska-Miller, 2024), others raise concerns that AI tools may encourage passive learning habits and limit students’ ability to engage in deep, effortful cognitive work (Gammoh, 2024).

By addressing these perspectives, we will explore the broader implications of AI’s integration into education, particularly its impact on higher-order cognitive processes. To that end, this paper examines AI’s alignment with constructivist learning theories and how it either disrupts or complements these frameworks. Ultimately, we argue that educators must approach AI’s adoption thoughtfully, ensuring that it serves as a tool for enhancing, rather than replacing, the cognitive and experiential rigor that defines meaningful learning.

2.2. Educational Disparities and AI’s Double Edge

Educational achievement gaps have long been a systemic issue, with students in the lowest quartile falling further behind their higher-performing peers (Malkus, 2025). These disparities are influenced by socioeconomic factors, access to resources, and the effectiveness of traditional pedagogical approaches. The introduction of generative AI raises pressing questions: does it offer an opportunity to mitigate these gaps, or does it risk reinforcing them by providing advantages primarily to students who are already academically successful?

Generative AI presents a paradox in addressing achievement gaps. On the one hand, AI-powered tutoring systems and adaptive learning platforms can provide personalized educational support to struggling students, offering real-time feedback and scaffolding that aligns with their specific needs (Batista et al., 2024). Supporting this perspective, Thomas et al. (2024) found that hybrid human-AI tutoring significantly improved student proficiency in low-income middle school settings, particularly benefiting lower-achieving students. These findings suggest that thoughtfully designed AI integration can serve as an equalizer when human oversight is preserved. Such tools can be particularly beneficial in addressing learning loss and offering individualized remediation that traditional classroom instruction may not always provide. By tailoring educational content dynamically, AI has the potential to democratize access to high-quality learning experiences. A recent meta-analysis found a significant positive effect of AI interventions on students’ academic achievement across diverse subject areas and educational levels, with an overall effect size of 0.924 (Dong, Tang, & Wang, 2025). This reinforces claims that, under the right conditions, AI can meaningfully enhance formal learning outcomes. However, this optimism must be tempered by a recognition of the disparities in AI accessibility—students from well-funded schools and households with high technological literacy are far more likely to benefit from AI-enhanced learning than their lower-income counterparts, exacerbating educational inequalities.

A historical comparison with previous technological interventions highlights both the potential and limitations of generative AI. The introduction of calculators, spellcheckers, and MOOCs (Massive Open Online Courses) followed a similar pattern—each was initially heralded as a democratizing force in education, yet their benefits were often disproportionately realized by learners with existing academic strengths. MOOCs, for instance, offered broad access to high-quality content, but completion rates were highest among already well-educated individuals. Likewise, while spellcheckers improved surface-level writing mechanics, they did little to enhance deep literacy skills. Generative AI could follow this trajectory, where students who engage with it critically and strategically reap the most rewards, while others may passively consume AI-generated content without developing essential learning skills.

Beyond accessibility concerns, AI also raises fundamental pedagogical questions about engagement and cognitive effort. If generative AI can instantly summarize complex texts, solve problems, and generate explanations, there is a risk that students may rely on AI as a shortcut rather than engaging in the deep cognitive struggle necessary for genuine learning. This risk is particularly pronounced for students already at an academic disadvantage—if they are encouraged to depend on AI-generated content rather than actively developing their own analytical and reasoning skills, AI could inadvertently perpetuate, rather than alleviate, educational inequities.

Thus, the challenge for educators is to integrate AI in ways that support equity rather than exacerbate existing disparities. This requires intentional design in AI-driven curricula, ensuring that tools are not only accessible but also structured to promote deep engagement rather than passive consumption. AI should function as a scaffold that encourages students to reflect, critique, and build upon its outputs, rather than merely absorbing AI-generated responses. Furthermore, educators must develop pedagogical strategies that emphasize AI literacy—ensuring that students learn how to question, validate, and refine AI outputs rather than accepting them at face value.

Future sections will explore these themes further, evaluating how AI aligns with educational theories and whether it fosters genuine intellectual growth. Ultimately, the success of AI in education will depend not on its technological capabilities alone but on how it is intentionally implemented to cultivate critical, engaged, and self-directed learners.

3. Beyond Problem-Solving: Cognitive and Taxonomic Shifts

The Cattell-Horn-Carroll (CHC) theory of cognitive abilities provides a crucial framework for understanding the cognitive implications of generative AI. CHC theory distinguishes between fluid intelligence (Gf)—the ability to solve novel problems and adapt to new situations—and crystallized intelligence (Gc)—the accumulated knowledge and expertise drawn from past learning experiences (Flanagan & Dixon, 2014; Schipolowski, Wilhelm, & Schroeders, 2014).

Fluid intelligence (Gf) plays a critical role in problem-solving, reasoning, and adaptability in novel situations. Students develop Gf by engaging in critical thinking exercises, pattern recognition tasks, and abstract reasoning challenges that require them to synthesize information and derive solutions independently. However, generative AI introduces a fundamental shift in this process, as it provides instant access to structured information, eliminating the cognitive struggle that often underpins Gf development. When students rely on AI-generated responses instead of working through complex problem-solving steps, they may experience a gradual decline in cognitive endurance, as their ability to engage with uncertainty and ambiguity is reduced. This raises the deeper question: should education continue to center on problem-solving, or should it pivot toward fostering question-framing, conceptual exploration, and meaning-making as core intellectual competencies?

Conversely, crystallized intelligence (Gc), which encompasses knowledge accumulation, vocabulary, and factual recall, may be artificially enhanced by AI. The vast informational capabilities of generative AI tools enable students to access, summarize, and apply large amounts of data with unprecedented speed. AI can act as a knowledge repository, aiding students in retrieving information efficiently, which may support faster learning cycles. However, this advantage raises concerns about surface-level engagement—if students do not critically evaluate AI-generated content or synthesize it into meaningful knowledge structures, they may develop a fragmented understanding of concepts rather than deep mastery. This phenomenon echoes previous critiques of education’s reliance on memorization rather than true cognitive integration. AI does not inherently promote learning—it merely accelerates access to information. Without deliberate intellectual engagement, students may amass knowledge but fail to cultivate wisdom.

The interplay between Gf and Gc in AI-enhanced learning environments creates both opportunities and challenges. Traditionally, the dynamic interaction of these two forms of intelligence supports cognitive flexibility, enabling students to apply prior knowledge (Gc) to solve new problems (Gf). However, AI use may shift this balance, leading to an increase in passive knowledge acquisition (Gc) while diminishing active problem-solving skills (Gf). Gerlich (2025) found a significant negative correlation between frequent AI tool usage and critical thinking, mediated by cognitive offloading. These findings raise important questions about the unintended consequences of AI convenience, particularly the erosion of deep cognitive engagement. A large-scale survey by Lee et al. (2025) found that confidence in GenAI was inversely related to the likelihood of users engaging in critical thinking. While GenAI can increase efficiency, it may simultaneously reduce the perceived effort required for reflective reasoning, shifting users from deep problem-solving to oversight roles. If AI handles pattern recognition and hypothesis generation, students may become less engaged in formulating their own analytical frameworks, potentially weakening their ability to transfer skills to unfamiliar contexts.

Yet, perhaps the most pressing critique is whether problem-solving itself should remain the focal point of education. If AI increasingly automates the mechanics of problem-solving, the human cognitive enterprise may need to pivot toward intellectual curiosity, ethical reasoning, and the ability to frame and interrogate meaningful questions. Should education focus on navigating uncertainty, synthesizing complex ideas, and making sense of contradictions rather than merely deriving correct solutions? This shift would require a reevaluation of what it means to be an educated person in an era where machines perform many cognitive tasks once deemed uniquely human.

To mitigate these risks, educators must design AI-integrated learning environments that require students to actively process information, reflect on their cognitive strategies, and engage in problem-solving tasks without over-relying on AI-generated solutions. One potential approach involves structured AI-assisted inquiry, where students use AI tools for preliminary research but are required to critique, refine, and justify AI-generated outputs. Another strategy is deliberate engagement in problem-based learning (PBL), where AI serves as a supplementary tool rather than the primary source of cognitive effort. By embedding AI in learning structures that prioritize deep engagement over passive knowledge retrieval, educators can preserve the balance between fluid and crystallized intelligence, ensuring that students develop both adaptability and expertise in meaningful ways.

Bloom’s Taxonomy has long been a foundational framework for structuring educational objectives, yet it remains fundamentally biased toward utilitarian learning—emphasizing productivity, task completion, and hierarchical cognitive progression over intellectual exploration. The model, rooted in a structured movement from knowledge acquisition to application, analysis, and creation, assumes that learning is best measured by problem-solving efficiency rather than by the ability to navigate complexity, frame meaningful questions, or engage in speculative thought. As Buckminster Fuller (1969) envisioned in Operating Manual for Spaceship Earth, technological progress should ideally liberate humanity from the necessity of labor. In this light, education’s purpose shifts—not merely to produce problem-solvers, but to cultivate individuals capable of conceptualizing the unknown.

4. From Problem-Solving to Possibility: Rethinking Cognitive Development in an AI Era

4.1. A New Model for AI-Integrated Learning

While generative AI has demonstrated remarkable capabilities in automating certain forms of structured problem-solving, especially in domains with clearly defined parameters (e.g., code generation, language translation, mathematical computation), its performance is far less robust in ill-structured domains requiring contextual judgment, ethical reasoning, or ambiguity resolution. Current models still struggle with nuance, contradiction, and tasks that require transfer across contexts—areas where human cognition remains essential. Therefore, this paper does not argue that AI can—or should—replace human problem-solving altogether, but that its growing influence repositions the value of problem-framing, meaning-making, and epistemic curiosity as uniquely human educational priorities. This paper proposes an alternative framework that accounts for the evolving role of AI in cognitive development. Instead of reinforcing hierarchical skill acquisition, learning should prioritize:

  • Dialectical learning: Treating contradictions as sites of intellectual expansion rather than problems to be solved.

  • Framing over solving: Teaching students to identify, scope, and redefine problems rather than simply working toward preordained solutions.

  • Imagination and play: Recognizing that the highest forms of intelligence involve creative exploration, speculative reasoning, and engagement with uncertainty rather than task completion.

4.2. Rethinking Bloom’s Taxonomy

The revised Bloom’s Taxonomy provides a framework for understanding how AI can either support or hinder meaningful learning. Research by Pujawan et al. (2022) highlights how Bloom’s Taxonomy-oriented learning activities improve both scientific literacy and critical thinking skills. The traditional hierarchy—moving from lower-order thinking skills (remembering, understanding) to higher-order skills (analyzing, evaluating, and creating)—suggests that AI tools should primarily assist with foundational tasks while leaving complex cognitive work to students (Arends, 2021). However, as generative AI increasingly mediates all levels of cognitive engagement, we must reconsider whether Bloom’s framework, which prioritizes problem-solving and task completion, is sufficient for preparing students for a world where AI can automate many cognitive functions.

AI excels at lower-order cognitive tasks, particularly in the remembering and understanding stages. Students can use AI to retrieve facts, summarize information, and clarify complex topics, allowing them to develop a foundational knowledge base more efficiently. However, while AI can expedite knowledge acquisition, it may also reduce the need for deep comprehension, as students might rely too heavily on AI-generated explanations rather than engaging critically with the material themselves. This raises a critical concern: Should education continue emphasizing knowledge retrieval and application, or should it pivot toward fostering interpretation, conceptual synthesis, and intellectual curiosity?

At the applying level, AI-powered intelligent tutoring systems offer adaptive learning experiences that personalize instruction and provide practice problems tailored to student progress. These AI-driven approaches can help students reinforce their understanding and transfer knowledge to new contexts. However, there is a risk that students may become passive recipients of AI-generated applications rather than developing their own independent problem-solving strategies, leading to a decline in active cognitive effort. If AI can suggest solutions and optimize workflows, the fundamental challenge is ensuring that students remain intellectually engaged rather than outsourcing cognitive labor to machines.

AI’s role in analyzing is more complex. While AI can structure data, categorize information, and even highlight key themes in large datasets, true analysis requires students to distinguish between credible and unreliable sources, evaluate arguments, and synthesize disparate ideas. If students rely solely on AI to perform these functions, they may develop superficial analytical abilities without engaging in deeper cognitive reflection. This underscores the need for epistemic agility, or the ability to navigate, critique, and refine knowledge rather than passively accepting AI-generated outputs. Educators should design assignments that require students to critique AI-generated insights, compare alternative perspectives, and engage in manual analysis of concepts to strengthen independent thinking.

The evaluating stage of Bloom’s Taxonomy involves forming judgments, weighing evidence, and making informed decisions—skills that are crucial for developing intellectual autonomy. AI can assist in this process by generating counterarguments, prompting students to refine their reasoning, and offering alternative perspectives. However, if students uncritically accept AI-generated conclusions, they risk overlooking biases in AI models and failing to cultivate independent evaluative skills. Educators must encourage students to question AI’s assumptions, scrutinize its responses, and justify their own viewpoints with robust evidence. The goal should be to use AI as a dialectical partner rather than a definitive source of knowledge—allowing students to refine their reasoning through active interrogation rather than passive acceptance.

Finally, at the creating level, AI’s role is highly debated. On the one hand, AI can enhance creativity by suggesting novel ideas, generating preliminary drafts, and assisting with brainstorming. On the other hand, critics argue that AI-produced content often lacks originality and depth, as it is derived from pre-existing patterns rather than true innovation. To preserve authentic creative engagement, educators should structure assignments where AI serves as a collaborative tool rather than a primary content generator, ensuring that students take an active role in shaping, refining, and personalizing their work. Creativity must remain an iterative, deeply human process that integrates imagination, contradiction, and self-expression—aspects that AI cannot fully replicate.

5. Embedding AI in a Human-Centered Learning Model

Figure 1. Bloom’s taxonomy revised—course design. Note: Modified by Anton Tolman, Ph.D., based on Anderson and Krathwohl (2001) and Bloom (1956). Licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.

To counteract the risks associated with passive AI use, educators must design assessments that require active engagement at the highest levels of cognitive development. This could include:

  • Iterative assignments that require multiple drafts, self-reflection, and peer feedback to ensure deep engagement with the material.

  • Structured AI disclosure, where students must explain how they used AI, evaluate its strengths and limitations, and justify their final work.

  • Assignments emphasizing epistemic exploration, where students actively construct and defend ideas rather than merely demonstrating factual recall or automated synthesis.

By embedding AI into interactive and thought-provoking learning experiences, educators can harness its benefits while maintaining the intellectual rigor necessary for higher education. However, if education is to remain meaningful in an AI-enhanced world, we must move beyond frameworks that emphasize cognitive efficiency and instead cultivate deeper forms of intellectual exploration, conceptual agility, and human creativity. The key is not to replace traditional learning methods with AI but to expand educational objectives to ensure that human intelligence remains indispensable (Figure 1).

6. Reclaiming the Role of Educators and the Socratic Method in the Age of AI

The American Psychological Association (2023) emphasizes the importance of fostering deep learning, critical thinking, and engagement. Generative AI presents both opportunities and challenges to these goals, particularly when used without pedagogical oversight. While AI can enhance personalized learning and cognitive exploration (Fortner & Katzarska-Miller, 2024), its indiscriminate use risks diminishing students’ capacity for deep engagement and highlights the need for clearer AI guidelines, faculty training, and ethical considerations in AI-assisted education.

Rather than outright banning AI, educators must reclaim their role as curators of learning, guiding students in thoughtful and responsible AI use. Instead of perceiving AI as a threat, instructors should explore pedagogically sound frameworks that leverage AI to enhance student engagement while maintaining academic integrity. A structured approach to AI-assisted learning ensures that students develop AI literacy—learning to use AI as a cognitive tool for augmentation rather than a substitute for intellectual effort.

7. Strategies for AI Integration in Deep Learning

One effective strategy for AI integration is AI-assisted Socratic questioning. Burns, Stephenson, and Bellamy (2016; see also Meadows, Rose, Burns, & Bellamy, 2021) emphasize the effectiveness of the Socratic method in fostering deep epistemological shifts among students. This method thrives on ambiguity, dialogue, and open-ended inquiry, encouraging students to grapple with complex questions and challenge their assumptions. In contrast, AI often provides certainty and definitive answers, which can hinder deep learning if students passively accept responses without engaging in critical evaluation.

By incorporating AI into structured debates and critical discussions, educators can foster intellectual agility, challenging students to analyze AI-generated perspectives, refine their arguments, and explore complex topics with greater depth. AI can serve as a devil’s advocate, generating counterarguments that compel students to defend or refine their positions. This approach ensures that students interact with AI dynamically, treating it as an intellectual sparring partner rather than an authoritative source of knowledge.

Another promising application is AI-enhanced peer feedback systems. AI can assist in evaluating drafts, identifying logical inconsistencies, and providing structural recommendations, allowing students to receive immediate formative feedback before peer or instructor review. However, passive acceptance of AI-generated suggestions risks undermining students’ analytical engagement. Educators must ensure that students critically assess AI feedback, articulate their reasoning for revisions, and refine their work beyond algorithmic optimization.

8. Repositioning AI as a Catalyst for Inquiry, Not an Answer Key

To reclaim their role as facilitators of deep learning, educators should implement AI within pedagogical frameworks that prioritize active engagement, critical analysis, and ethical reflection. This involves:

  • Developing transparent AI policies that promote responsible use without fostering cognitive dependency.

  • Training faculty on AI’s epistemological limitations, equipping them with the tools to guide students in meaningful AI integration.

  • Redesigning assessments to emphasize inquiry over answers, ensuring that students engage with AI as a thinking partner rather than an information retrieval tool.

Additionally, AI should be used to prompt deeper questioning rather than simply supplying answers. Instructors can design assignments where students analyze AI-generated responses, evaluating biases, strengths, and limitations. This fosters metacognition, compelling students to critically engage with AI outputs rather than accepting them at face value. AI can also be incorporated into debate formats, where it presents contrasting viewpoints that students must interrogate, further developing their analytical reasoning skills.

By embedding AI into inquiry-based learning models, educators can balance technological efficiency with intellectual rigor, ensuring that students remain active participants in the learning process. A well-structured approach to AI-enhanced education should cultivate conceptual synthesis, ethical reasoning, and intellectual resilience—the very qualities that make human cognition indispensable in an AI-driven world.

9. Ethical and Institutional Considerations in AI-Integrated Education

The ethical and institutional challenges of AI in higher education extend beyond student performance. Batista, Mesquita, and Carnaz (2024) highlight concerns about assessment integrity and the need for stronger institutional guidelines to prevent academic dishonesty. Gammoh (2024) further emphasizes the risks of overreliance on AI, including diminished critical thinking skills and the potential erosion of academic standards. These findings underscore the urgency of developing clear, adaptive policies that balance AI’s potential benefits with safeguards against academic shortcuts that undermine intellectual growth.

Rather than prohibiting AI tools outright, institutions should develop a nuanced ethical framework that promotes AI literacy over restriction. AI literacy involves teaching students to engage critically with AI, recognize its limitations, and distinguish between responsible augmentation and intellectual overreliance. Transparency should be central to these efforts: universities should require students to disclose their use of AI in academic work, ensuring accountability while fostering ethical engagement with technology.

10. Redefining Academic Integrity in the AI Era

A key ethical consideration is differentiating between AI-assisted and AI-generated work. AI-assisted learning—such as brainstorming, grammar correction, and preliminary drafting—can enhance students’ intellectual engagement when appropriately supervised. However, AI-generated assignments that bypass critical thinking processes pose a significant threat to academic integrity. Universities should establish clear, enforceable guidelines defining acceptable AI use, such as using AI for research synthesis but not for content creation. This distinction must be reinforced through transparent institutional policies, ensuring that students understand where AI serves as a tool versus where it risks supplanting intellectual effort.

11. AI-Resistant Assessment Models

Institutions must also rethink assessment models to mitigate AI misuse. Traditional take-home essays or online quizzes are increasingly vulnerable to AI-generated content. Instead, universities should prioritize assessments that demand iterative engagement and deep cognitive work, such as:

  • Oral presentations and viva-style defenses, where students must articulate their reasoning beyond AI-generated content.

  • Handwritten reflections and in-class assessments, which require direct student engagement.

  • Portfolio-based evaluations, incorporating self-reflection on how AI was used in research and writing processes.

By embedding AI-resistant assessments, educators can ensure that students remain active participants in learning rather than passive consumers of AI outputs.

12. Equipping Faculty for AI-Ethical Decision-Making

Universities should also prioritize faculty training on AI integration. Educators must be equipped with tools to detect AI-generated work, develop critical engagement strategies, and integrate AI into pedagogy meaningfully. Faculty development programs should focus on:

  • Teaching epistemic vigilance, ensuring students do not accept AI-generated information uncritically.

  • Developing ethical AI policies that align with institutional values while allowing for pedagogical flexibility.

  • Encouraging AI-driven inquiry, where students are asked to evaluate, refine, and critically challenge AI-generated content.

By fostering a balanced approach to AI in education, institutions can uphold academic integrity while preparing students to navigate an AI-driven world responsibly. Ethical AI integration should focus on preserving intellectual rigor, ensuring transparency, and reinforcing the role of educators as facilitators of deep engagement rather than gatekeepers of knowledge. The goal is not to resist AI’s presence in education but to ensure it augments rather than replaces the core cognitive processes that define meaningful learning.

13. Conclusion: The Future of Learning beyond Problem-Solving

Generative AI challenges us to reimagine the educational process without compromising its foundational goals. As educators, we must reaffirm the value of intellectual struggle, deep reflection, and meaning-making, recognizing that the cognitive labor involved in writing, problem-solving, and questioning is not a byproduct of education but its very essence. AI does not simply present a technological challenge; it compels a fundamental rethinking of what it means to learn. If problem-solving can be automated, education must pivot toward expanding human intellectual capacities beyond mere efficiency.

By moving beyond traditional models like “learning by doing” and Bloom’s Taxonomy, educators can reimagine learning as an ongoing dialectical process—one that values ambiguity, contradiction, and the pursuit of wisdom. The challenge ahead is not simply how to incorporate AI into existing frameworks, but how to expand the very purpose of education itself in a world where knowledge is abundant but meaning remains elusive.

Ultimately, the role of educators is not just to transmit knowledge but to curate learning experiences that challenge students to think, create, and engage. AI, if used wisely, can support this mission, but it cannot replace it. Higher education must remain anchored in deep learning, where students are not passive receivers of information but active participants in shaping knowledge itself. To navigate this shift, faculty must embrace AI with intention, ensuring it serves as a tool for inquiry rather than a substitute for intellectual effort.

Beyond adapting to AI, the deeper task is preserving the essence of what makes education transformational: the cultivation of curiosity, ethical reasoning, and conceptual synthesis. By fostering environments where students grapple with complex ideas, engage in meaningful dialogue, and embrace intellectual uncertainty, educators can ensure that technology enhances, rather than diminishes, the irreplaceable human elements of education.

Education is not merely a transmission of facts but a journey of intellectual discovery. By reaffirming our commitment to deep learning, we uphold the integrity of education and empower students to become thoughtful, adaptable, and engaged contributors to society in an AI-integrated world.

Final question: If we are no longer learning to solve problems, what are we learning for?

Acknowledgements

My profound thanks to an initial reviewer who challenged me to question the current educational paradigm.

This document benefited from the use of ChatGPT 4.0, which assisted in refining ideas and enhancing language. All core concepts and cited works are the original contributions of the author.

ChatGPT’s contributions include:

  • Helping to expand and refine ideas from the initial draft.

  • Assisting with language enhancement and structural suggestions.

  • Aiding in the exploration of additional perspectives on the topic.

I critically evaluated and integrated all AI-generated content, ensuring its alignment with my conceptual goals and academic standards. ChatGPT was utilized as a tool. All interpretations and conclusions are solely those of the author.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] American Psychological Association, APA Board of Educational Affairs Working Group to Revise the APA Principles for Quality Undergraduate Education in Psychology (2023). Principles for Quality Undergraduate Education in Psychology.
https://www.apa.org/about/policy/principles-quality-undergraduate-education-psychology.pdf
[2] Anderson, L. W., & Krathwohl, D. R. (2001). A Taxonomy for Learning, Teaching, and Assessment: A Revision of Bloom’s Taxonomy of Educational Objectives. Longman.
[3] Arends, B. (2021). Bloom’s Taxonomy: Benefits and Limitations. Intentional College Teaching.
[4] Batista, J., Mesquita, A., & Carnaz, G. (2024). Generative AI and Higher Education: Trends, Challenges, and Future Directions from a Systematic Literature Review. Information, 15, Article 676.
https://doi.org/10.3390/info15110676
[5] Bloom, B. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals. Vol. 1: Cognitive Domain. McKay.
[6] Burns, L. R., Stephenson, P. L., & Bellamy, K. (2016). The Socratic Method: Empirical Assessment of a Psychology Capstone Course. Psychology Learning & Teaching, 15, 370-383.
https://doi.org/10.1177/1475725716671824
[7] Dong, L., Tang, X., & Wang, X. (2025). Examining the Effect of Artificial Intelligence in Relation to Students’ Academic Achievement: A Meta-Analysis. Computers and Education: Artificial Intelligence, 8, Article 100400.
https://doi.org/10.1016/j.caeai.2025.100400
[8] Flanagan, D. P., & Dixon, S. G. (2014). The Cattell-Horn-Carroll (CHC) Theory of Cognitive Abilities. In C. R. Reynolds, K. J. Vannest, & E. Fletcher-Janzen (Eds.), Encyclopedia of Special Education. Wiley.
https://doi.org/10.1002/9781118660584.ese0431
[9] Fortner, M., & Katzarska-Miller, I. (2024). Using Generative AI to Promote the APA’s Five Goals for Undergraduate Majors. Teaching of Psychology.
https://doi.org/10.1177/00986283241264793
[10] Fuller, R. B. (1969). Operating Manual for Spaceship Earth. Southern Illinois University Press.
[11] Gammoh, L. A. (2024). ChatGPT in Academia: Exploring University Students’ Risks, Misuses, and Challenges in Jordan. Journal of Further and Higher Education, 48, 608-624.
https://doi.org/10.1080/0309877x.2024.2378298
[12] Gerlich, M. (2025). AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies, 15, Article 6.
https://doi.org/10.3390/soc15010006
[13] Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers. In Association for Computing Machinery (Ed.), CHI ’25: Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-23). ACM.
[14] Malkus, N. (2025). Testing Theories of Why: Four Keys to Interpreting US Student Achievement Trends. American Enterprise Institute.
https://www.aei.org/research-products/report/testing-theories-of-why-four-keys-to-interpreting-us-student-achievement-trends/
[15] Meadows, A. M., Rose, M., Burns, L. R., & Bellamy, K. (2021). Using Discussion to Teach Capstone Courses in Psychology. Creative Education, 12, 122-139.
https://doi.org/10.4236/ce.2021.121009
[16] Palmer, P. J. (2011). Healing the Heart of Democracy: The Courage to Create a Politics Worthy of the Human Spirit (p. 84). Wiley.
[17] Pujawan, I. G. N., Rediani, N. N., Antara, I. G. W. S., Putri, N. N. C. A., & Bayu, G. W. (2022). Revised Bloom Taxonomy-Oriented Learning Activities to Develop Scientific Literacy and Creative Thinking Skills. Jurnal Pendidikan IPA Indonesia, 11, 47-60.
https://doi.org/10.15294/jpii.v11i1.34628
[18] Schipolowski, S., Wilhelm, O., & Schroeders, U. (2014). On the Nature of Crystallized Intelligence: The Relationship between Verbal Ability and Factual Knowledge. Intelligence, 46, 156-168.
https://doi.org/10.1016/j.intell.2014.05.014
[19] Thomas, D. R., Lin, J., Gatz, E., Gurung, A., Gupta, S., Norberg, K. et al. (2024). Improving Student Learning with Hybrid Human-AI Tutoring: A Three-Study Quasi-Experimental Investigation. In Association for Computing Machinery (Ed.), Proceedings of the 14th Learning Analytics and Knowledge Conference (pp. 404-415). ACM.
https://doi.org/10.1145/3636555.3636896
[20] Wang, S., Wang, F., Zhu, Z., Wang, J., Tran, T., & Du, Z. (2024). Artificial Intelligence in Education: A Systematic Literature Review. Expert Systems with Applications, 252, Article 124167.
https://doi.org/10.1016/j.eswa.2024.124167

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.