<?xml version="1.0" encoding="UTF-8"?><!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "http://dtd.nlm.nih.gov/publishing/3.0/journalpublishing3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" dtd-version="3.0" xml:lang="en" article-type="research-article">
 <front>
  <journal-meta>
   <journal-id journal-id-type="publisher-id">
    jss
   </journal-id>
   <journal-title-group>
    <journal-title>
     Open Journal of Social Sciences
    </journal-title>
   </journal-title-group>
   <issn pub-type="epub">
    2327-5952
   </issn>
    <issn pub-type="ppub">
    2327-5960
   </issn>
   <publisher>
    <publisher-name>
     Scientific Research Publishing
    </publisher-name>
   </publisher>
  </journal-meta>
  <article-meta>
   <article-id pub-id-type="doi">
    10.4236/jss.2025.1311030
   </article-id>
   <article-id pub-id-type="publisher-id">
    jss-147484
   </article-id>
   <article-categories>
    <subj-group subj-group-type="heading">
     <subject>
      Articles
     </subject>
    </subj-group>
    <subj-group subj-group-type="Discipline-v2">
     <subject>
      Business 
     </subject>
      <subject>
       Economics
      </subject>
      <subject>
       Social Sciences
      </subject>
     <subject>
       Humanities
     </subject>
    </subj-group>
   </article-categories>
   <title-group>
     <article-title>
      Archaic Methods in a Data Rich World: Why Educational Research Must Embrace AI Research Methods
     </article-title>
   </title-group>
   <contrib-group>
    <contrib contrib-type="author" xlink:type="simple">
     <name name-style="western">
      <surname>
       Mgonja
      </surname>
      <given-names>
       Thomas
      </given-names>
     </name>
    </contrib>
   </contrib-group> 
   <aff id="affnull">
    <addr-line>
      Faculty of Mathematics &amp; Data Science, Emirates Aviation University, Dubai, United Arab Emirates
    </addr-line> 
   </aff> 
   <pub-date pub-type="epub">
    <day>
     30
    </day> 
    <month>
     10
    </month>
    <year>
     2025
    </year>
   </pub-date> 
   <volume>
    13
   </volume> 
   <issue>
    11
   </issue>
   <fpage>
    515
   </fpage>
   <lpage>
    522
   </lpage>
   <history>
    <date date-type="received">
     <day>
      13
     </day>
     <month>
      October
     </month>
     <year>
      2025
     </year>
    </date>
    <date date-type="published">
     <day>
      22
     </day>
     <month>
      October
     </month>
     <year>
      2025
     </year> 
    </date> 
    <date date-type="accepted">
     <day>
      22
     </day>
     <month>
      November
     </month>
     <year>
      2025
     </year> 
    </date>
   </history>
   <permissions>
     <copyright-statement>
      © Copyright 2025 by authors and Scientific Research Publishing Inc.
     </copyright-statement>
     <copyright-year>
      2025
     </copyright-year>
    <license>
     <license-p>
      This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/
     </license-p>
    </license>
   </permissions>
   <abstract>
     Educational research stands at a crossroads that is both methodological and philosophical. The field must decide whether to remain anchored in a toolkit built for small samples and linear assumptions or to integrate approaches suited to high-dimensional, complex, and nonlinear data. This commentary argues for methodological bilingualism that combines the strengths of established quantitative, qualitative, and mixed traditions with advances in machine learning and modern causal inference. The commentary reviews literature on learning analytics and AI in education, highlights developments in causal machine learning and model interpretability, and examines the political economy of data that shapes what counts as robust evidence. The commentary ultimately asks whether educational researchers will lead the integration of AI methods in ways that uphold justice and rigor, or whether they will cede authority to corporate actors who define the future of learning on their own terms.
   </abstract>
   <kwd-group> 
    <kwd>
     AI in Education
    </kwd> 
    <kwd>
      Machine Learning
    </kwd> 
    <kwd>
      Educational Research Methods
    </kwd> 
    <kwd>
      Causal Inference
    </kwd> 
    <kwd>
      Explainable AI
    </kwd>
   </kwd-group>
  </article-meta>
 </front>
 <body>
  <sec id="s1">
   <title>1. Introduction</title>
    <p>Educational research faces a critical question: can yesterday’s methods still yield trustworthy evidence in a world flooded with complex, real-time data, or is it time to embrace new ways of knowing that reflect the scale, speed, and messiness of contemporary learning? Traditional approaches, grounded in small samples, clear models, and controlled settings, have long been valued for their rigor and clarity (<xref ref-type="bibr" rid="scirp.147484-6">
     Byrne, 2016
    </xref>; <xref ref-type="bibr" rid="scirp.147484-1">
     Agresti, 2018
     </xref>). But do these methods still hold when educational life plays out across learning platforms, social media, and institutional systems? This brings us to a paradigm choice: continue investigating education through legacy methods or explore it through tools that can make sense of high-dimensional digital traces. Emerging work in learning analytics and AI suggests the latter path is not only possible but necessary (<xref ref-type="bibr" rid="scirp.147484-25">
     Siemens &amp; Long, 2011
    </xref>; <xref ref-type="bibr" rid="scirp.147484-29">
     Zawacki-Richter et al., 2019
    </xref>). The aim of this commentary is to clarify how integrating causal and interpretable machine learning methods can revitalize educational research by bridging predictive and explanatory traditions and by reclaiming methodological authority for scholars within education rather than corporate technologists.</p>
  </sec><sec id="s2">
   <title>2. Literature Review</title>
   <p>The review of the literature is organized to trace the methodological evolution in educational research, beginning with the foundational roles of statistical, qualitative, and mixed methods traditions. Then the review examines the growing integration of machine learning (ML) and natural language processing (NLP), highlighting their technical utility and epistemic significance. Finally, the review explores the broader implications of this shift, focusing on the emergence of hybrid expertise, the need for interpretability, and the ethical and political stakes of AI adoption in education.</p>
    <p><bold>An Epistemic Shift in Educational Research</bold></p>
   <p>Foundational statistical approaches such as regression and structural equation modeling have shaped educational research for decades. They anchor causal inference, ensure clarity in research design, and allow scholars to communicate evidence transparently (<xref ref-type="bibr" rid="scirp.147484-1">
     Agresti, 2018
    </xref>; <xref ref-type="bibr" rid="scirp.147484-6">
     Byrne, 2016
    </xref>). Qualitative traditions likewise provide indispensable contextual depth, cultural interpretation, and narrative insight (<xref ref-type="bibr" rid="scirp.147484-11">
     Creswell &amp; Poth, 2018
     </xref>). Mixed methods research intentionally integrates the strengths of both traditions to produce findings that are simultaneously rigorous and contextually meaningful (<xref ref-type="bibr" rid="scirp.147484-10">
     Creswell &amp; Clark, 2017
     </xref>). These approaches remain central to educational inquiry. However, their limitation lies in scope: they were not designed for the vast, multimodal, and dynamic datasets that define twenty-first-century education. For this reason, empirical reviews have begun to document a sharp increase in studies applying Artificial Intelligence (AI) research methods, specifically ML, in higher education.</p>
   <p>
    <xref ref-type="bibr" rid="scirp.147484-29">
     Zawacki-Richter et al. (2019)
    </xref> synthesized 146 studies and highlighted trends in classification, clustering, and NLP while noting limited theoretical integration. Similarly, systematic reviews of dropout prediction show that random forests, support vector machines, and boosting methods often outperform logistic regression when analyzing engagement and demographic data (e.g., <xref ref-type="bibr" rid="scirp.147484-2">
     Andrade-Girón et al., 2023
    </xref>; <xref ref-type="bibr" rid="scirp.147484-15">
     Hellas et al., 2018
    </xref>; <xref ref-type="bibr" rid="scirp.147484-20">
     Lottering et al., 2020
     </xref>). Furthermore, when supervised models (models that map inputs to known outputs) are applied to institutional data, they offer practical utility. For example, <xref ref-type="bibr" rid="scirp.147484-28">
     Xu et al. (2019)
    </xref> showed that decision tree ensembles can classify students’ likelihood of dropout in online environments with high accuracy, while <xref ref-type="bibr" rid="scirp.147484-16">
     Joksimović et al. (2018)
    </xref> used learning analytics to reveal writing processes in student essays at a scale impossible with manual coding. These findings suggest that ML methods are not only technically useful but also epistemically necessary when the research problem involves large numbers of variables and nonlinear interactions.</p>
   <p>More critically, these studies illustrate that ML is not limited to prediction; it can provide new entry points for interpretive work by uncovering the underlying themes in the ways people communicate and behave. In fact, NLP expands the frontier of qualitative scaling. Topic modeling, originally formalized by <xref ref-type="bibr" rid="scirp.147484-3">
     Blei et al. (2003)
    </xref>, has been widely adopted in education to analyze large discussion corpora. More recent applications demonstrate how topic modeling and transformer-based language models can enrich understanding of collaborative learning by uncovering latent discourse structures (<xref ref-type="bibr" rid="scirp.147484-8">
     Chiu &amp; Fujita, 2014
    </xref>; <xref ref-type="bibr" rid="scirp.147484-12">
     Dowell &amp; Kovanović, 2022
     </xref>). These tools do not replace qualitative/interpretive inquiry but rather broaden its scope. But who are the scholars driving this methodological shift? Are these studies primarily the work of computer scientists applying algorithms to education, or of educational researchers deeply grounded in theory? The answer is both; although the number of such scholars is limited, their convergence is quietly reshaping the field.</p>
    <p>Many of the most influential contributions to learning analytics and educational data mining now emerge from interdisciplinary teams where ML is integrated with robust understanding of learning theory, pedagogy, and equity. Scholars like Ryan Baker, Dragan Gašević, and Alyssa Wise exemplify this fusion: their research not only advances technical methods but also interrogates what constitutes meaningful learning and fair assessment. However, scholars with such hybrid expertise are so rare that even educational research journals struggle to find qualified reviewers who can engage with both the technical and pedagogical dimensions of submitted work. That said, this hybridity reflects a broader epistemic shift in which knowledge production demands more than computational skill; it requires theoretical grounding in how people learn, teach, and interact with complex systems. As a result, methodological innovation in education is not merely a technical enhancement but a redefinition of what counts as valid insight. Supporting this shift, <xref ref-type="bibr" rid="scirp.147484-23">
     Perrotta and Selwyn (2020)
    </xref> advocate for a relational perspective on AI in education, emphasizing that computational tools must be embedded in institutional contexts and educational values. Without this anchoring, research risks becoming either narrowly technical or detached from the pedagogical realities it seeks to transform.</p>
    <p><bold>Causality, Transparency, and the Urgency of Data Politics in Education</bold></p>
    <p>A persistent critique of ML is that it excels in prediction but falters in explanation (<xref ref-type="bibr" rid="scirp.147484-5">Breiman, 2003</xref>). Yet recent advances in causal ML provide tools that directly address this gap. Double or debiased ML enables estimation of treatment effects in high-dimensional settings (<xref ref-type="bibr" rid="scirp.147484-7">Chernozhukov et al., 2018</xref>). Double (debiased) machine learning is a statistical framework that isolates causal treatment effects while controlling for many confounders by combining regularized machine-learning estimators with econometric orthogonalization. Causal forests also extend ensemble methods to identify heterogeneous treatment effects, allowing researchers to ask not only whether an intervention works but also for whom and under what conditions (<xref ref-type="bibr" rid="scirp.147484-26">
     Wager &amp; Athey, 2018
    </xref>). For instance, a university might deploy a machine-learning system to predict which students are likely to drop out based on clickstream and demographic data. Such a predictive model flags risk but does not reveal why students disengage. A causal analysis, by contrast, could estimate how much targeted tutoring or mentoring changes the probability of persistence, thereby informing policy decisions rather than merely ranking students by risk.</p>
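    <p>The partialling-out logic of double/debiased ML can be sketched in a few lines. The data below are simulated, the true effect is set by hand, and plain least squares stands in for the nuisance learners; in practice any regularized ML model can fill that role. All names and numbers are invented for illustration.</p>

```python
import numpy as np

# Simulated example: estimate the effect of a treatment (say, hours of
# tutoring) on an outcome while controlling for confounders X, via
# cross-fitted partialling out (Chernozhukov et al., 2018).
rng = np.random.default_rng(0)
n, p, theta = 2000, 5, 2.0                    # theta is the true effect
X = rng.normal(size=(n, p))                   # confounders (e.g., prior record)
d = X @ rng.normal(size=p) * 0.5 + rng.normal(size=n)        # treatment
y = theta * d + X @ rng.normal(size=p) + rng.normal(size=n)  # outcome

def fit_predict(X_train, target, X_test):
    """Nuisance learner: ordinary least squares here, any regressor in general."""
    coef, *_ = np.linalg.lstsq(X_train, target, rcond=None)
    return X_test @ coef

# Cross-fitting: residualize each half using models trained on the other half.
idx = rng.permutation(n)
folds = (idx[: n // 2], idx[n // 2 :])
d_res, y_res = np.empty(n), np.empty(n)
for train, test in (folds, folds[::-1]):
    d_res[test] = d[test] - fit_predict(X[train], d[train], X[test])
    y_res[test] = y[test] - fit_predict(X[train], y[train], X[test])

# Residual-on-residual regression recovers the causal effect estimate.
theta_hat = float(d_res @ y_res / (d_res @ d_res))
print(round(theta_hat, 2))  # should land close to the true theta
```

    <p>The key design choice is that the treatment and outcome are each predicted from the confounders on held-out data, so the final regression sees only the variation that the confounders cannot explain.</p>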
   <p>A causal forest is an adaptation of the random-forest algorithm that estimates heterogeneous treatment effects for individual observations, revealing how an intervention’s impact varies across subgroups. These methods align with pressing educational priorities such as personalized learning and equity-driven interventions (<xref ref-type="bibr" rid="scirp.147484-17">
     Knaus, 2022
    </xref>). Most importantly, education is a high-stakes context where transparency is essential; black box models raise risks of misinterpretation and a loss of accountability (<xref ref-type="bibr" rid="scirp.147484-19">
     Lipton, 2018
    </xref>). Explainable AI (XAI) methods such as SHAP (<xref ref-type="bibr" rid="scirp.147484-21">
     Lundberg &amp; Lee, 2017
    </xref>) and LIME (<xref ref-type="bibr" rid="scirp.147484-24">
     Ribeiro et al., 2016
    </xref>) attempt to mitigate these risks by providing case-specific insights into how models arrive at predictions. SHAP (SHapley Additive exPlanations) is an interpretability technique that attributes each model prediction to specific input variables, allowing researchers to quantify how strongly each feature influences an outcome. For example, SHAP values can reveal whether prior GPA or peer engagement most strongly influences predicted success, and such findings can then be situated in established theories of academic integration (<xref ref-type="bibr" rid="scirp.147484-14">
     Gašević et al., 2015
    </xref>). Interpretability in this sense is both an epistemic safeguard and an ethical imperative.</p>
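    <p>The Shapley attribution that SHAP approximates can be computed exactly when the feature set is tiny. The sketch below does so for an invented three-feature dropout-risk model; the model, the student record, and the cohort means are all hypothetical. For a linear model each attribution reduces to the coefficient times the feature's deviation from the background, which makes the output easy to check.</p>

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, background):
    """Exact Shapley attribution of f(x) against a background point."""
    n = len(x)

    def value(coalition):
        # Features in the coalition take the instance's values;
        # the rest are held at the background (e.g., cohort means).
        z = [x[i] if i in coalition else background[i] for i in range(n)]
        return f(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(set(subset) | {i}) - value(set(subset)))
        phi.append(total)
    return phi

# Invented risk scorer over (prior GPA, weekly logins, forum posts):
# a transparent stand-in for the black-box predictor an XAI tool explains.
def risk(z):
    gpa, logins, posts = z
    return 0.9 - 0.15 * gpa - 0.02 * logins - 0.05 * posts

student = [2.1, 3.0, 1.0]        # one hypothetical at-risk student
cohort_mean = [3.0, 5.0, 4.0]    # background reference point
phi = shapley_values(risk, student, cohort_mean)
# Efficiency property: the attributions sum to f(x) - f(background).
```

    <p>Each φ value reads as that feature's push on the predicted risk relative to the cohort. Real SHAP implementations approximate this enumeration, which grows exponentially in the number of features.</p>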
   <p>However, the stakes extend far beyond methodology. With the rise of platform-driven education, student data is increasingly commodified for commercial ends, a process <xref ref-type="bibr" rid="scirp.147484-9">
     Couldry and Mejias (2019)
    </xref> term data colonialism. <xref ref-type="bibr" rid="scirp.147484-27">
     Williamson (2017)
    </xref> similarly warns of the corporatization of educational research as learning analytics infrastructures migrate into vendor-controlled ecosystems. These critiques are no longer speculative. Recent analyses show that educational technologies are already shaping how evidence is defined and used in practice, often privileging corporate priorities over pedagogy (<xref ref-type="bibr" rid="scirp.147484-4">
     Bond et al., 2024
    </xref>; <xref ref-type="bibr" rid="scirp.147484-22">
     Miao &amp; Holmes, 2021
    </xref>). The acceleration of large language models further amplifies these risks, making the urgency of interpretability and accountability in education greater than ever (<xref ref-type="bibr" rid="scirp.147484-13">
     Gan et al., 2023
    </xref>). For this reason, critical scholarship emphasizes the need to reclaim interpretive sovereignty (e.g., <xref ref-type="bibr" rid="scirp.147484-18">
     Knox, 2020
    </xref>). Thus, educators must ask themselves: who benefits when classroom data is extracted, packaged, and sold? If scholars remain silent, they risk becoming passive agents in the commodification of learning they claim to critique. The question is not whether AI methods have a place in education, but whether researchers will have the courage to shape their development and application in ways that protect student dignity and advance meaningful learning over corporate interests.</p>
  </sec><sec id="s3">
   <title>3. Discussion</title>
   <p>The integration of ML into educational research demands enthusiasm tempered with caution. Methodologically, ML expands the field’s capacity to analyze scale, capture heterogeneity, and produce interpretable causal estimates. Philosophically, it demands reflection on what counts as valid explanation and who holds authority over knowledge. The most promising path forward is methodological bilingualism, where researchers are trained in both conventional design logics and computational methods (<xref ref-type="bibr" rid="scirp.147484-23">
     Perrotta &amp; Selwyn, 2020
     </xref>). Doctoral programs must evolve accordingly: they must abandon the comfort of outdated toolkits and confront the reality that without programming, algorithmic literacy, causal ML, and data ethics at their core, they risk graduating scholars fluent in yesterday’s methods but illiterate in tomorrow’s evidence.</p>
    <p>Additionally, faculty hiring and tenure processes should recognize scholarship that bridges traditions rather than siloing computational work. Journals should encourage submissions that combine interpretive theory with ML analyses and reward methodological transparency. Institutions should also practice transparency by investing in open data infrastructures and resisting the outsourcing of analytic capacity to corporations. Without such integration, the field risks bifurcation. Computational researchers may drift toward technical optimization detached from pedagogy, while traditional researchers may struggle to remain relevant in a world defined by data traces. The worst outcome is that educational research cedes authority entirely to platform vendors, allowing data colonialism to shape policy. The best outcome is that scholars reclaim interpretive sovereignty by embedding computational methods in educational values, ensuring that the digital transformation strengthens rather than weakens the field’s commitments to justice and equity.</p>
    <p>Beyond methodological reform, the integration of AI in education raises critical concerns about data privacy, algorithmic bias, and the responsible governance of student information. Current international frameworks such as UNESCO’s (2021) Recommendation on the Ethics of Artificial Intelligence and the Organization for Economic Co-operation and Development’s (OECD) AI Principles stress transparency, human oversight, and fairness as non-negotiable standards. Educational researchers should therefore align analytic pipelines with these ethical benchmarks, ensuring informed consent, auditability of models, and safeguards against demographic bias in training data. While the present commentary advocates methodological reform in education, it offers a conceptual synthesis of theoretical and methodological perspectives rather than new empirical evidence. This limitation invites future research that empirically tests the propositions advanced here, for example by comparing conventional regression, causal-ML, and interpretability techniques on shared educational datasets. Such validation would clarify not only performance differences but also practical implications for policy and pedagogy.</p>
  </sec><sec id="s4">
   <title>4. Conclusion</title>
    <p>Educational research stands at the cusp of a methodological revolution. Traditional quantitative, qualitative, and mixed methods remain indispensable, but they are increasingly insufficient on their own. ML and causal inference provide tools that can address complexity, scale, and heterogeneity in ways that extend rather than replace established traditions. Interpretability and fairness must remain non-negotiable, and critical attention to the political economy of educational data is essential. The way forward requires deliberate synthesis, cultivating bilingual scholars who can move fluently between regression tables and neural networks, between ethnographic coding and topic modeling, between explanation and prediction. Only then can educational research remain both scientifically rigorous and socially just in a data-rich century. The cost of inaction is irrelevance: if educational research clings to archaic methods, it risks being bypassed by corporate analytics and policy regimes that redefine knowledge without educators at the table.</p>
  </sec>
 </body><back>
  <ref-list>
   <title>References</title>
    <ref id="scirp.147484-1">
     <label>1</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Agresti, A. (2018). Statistical Methods for the Social Sciences (5th ed.). Pearson.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-2">
     <label>2</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Andrade-Girón, D., Sandivar-Rosas, J., Marín-Rodriguez, W., Susanibar-Ramirez, E., Toro-Dextre, E., Ausejo-Sanchez, J. et al. (2023). Predicting Student Dropout Based on Machine Learning and Deep Learning: A Systematic Review. EAI Endorsed Transactions on Scalable Information Systems, 10, 1-11. https://doi.org/10.4108/eetsis.3586
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-3">
     <label>3</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Blei, D. M., Ng, A. Y., &amp; Jordan, M. I. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research, 3, 993-1022.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-4">
     <label>4</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E. et al. (2024). A Meta Systematic Review of Artificial Intelligence in Higher Education: A Call for Increased Ethics, Collaboration, and Rigour. International Journal of Educational Technology in Higher Education, 21, Article No. 4. https://doi.org/10.1186/s41239-023-00436-z
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-5">
     <label>5</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Breiman, L. (2003). Statistical Modeling: The Two Cultures. Quality Control and Applied Statistics, 48, 81-82.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-6">
     <label>6</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Byrne, B. M. (2016). Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming (3rd ed.). Routledge.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-7">
     <label>7</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W. et al. (2018). Double/Debiased Machine Learning for Treatment and Structural Parameters. The Econometrics Journal, 21, C1-C68. https://doi.org/10.1111/ectj.12097
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-8">
     <label>8</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Chiu, M. M., &amp; Fujita, N. (2014). Statistical Discourse Analysis: A Method for Modeling Online Discussion Processes. Journal of Learning Analytics, 1, 61-83. https://doi.org/10.18608/jla.2014.13.5
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-9">
     <label>9</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Couldry, N., &amp; Mejias, U. A. (2019). The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. Stanford University Press.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-10">
     <label>10</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Creswell, J. W., &amp; Clark, V. L. P. (2017). Designing and Conducting Mixed Methods Research. SAGE Publications.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-11">
     <label>11</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Creswell, J. W., &amp; Poth, C. N. (2018). Qualitative Inquiry and Research Design: Choosing among Five Approaches (4th ed.). SAGE.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-12">
     <label>12</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Dowell, N., &amp; Kovanović, V. (2022). Modeling Educational Discourse with Natural Language Processing. In The Handbook of Learning Analytics (2nd ed., pp. 105-119). SOLAR. https://doi.org/10.18608/hla22.011
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-13">
     <label>13</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Gan, W., Qi, Z., Wu, J., &amp; Lin, J. C. (2023). Large Language Models in Education: Vision and Opportunities. In 2023 IEEE International Conference on Big Data (BigData) (pp. 4776-4785). IEEE. https://doi.org/10.1109/bigdata59044.2023.10386291
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-14">
     <label>14</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Gašević, D., Dawson, S., &amp; Siemens, G. (2015). Let’s Not Forget: Learning Analytics Are about Learning. TechTrends, 59, 64-71. https://doi.org/10.1007/s11528-014-0822-x
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-15">
     <label>15</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Hellas, A., Ihantola, P., Petersen, A., Ajanovski, V. V., Gutica, M., Hynninen, T. et al. (2018). Predicting Academic Performance: A Systematic Literature Review. In Proceedings Companion of the 23rd Annual ACM Conference on Innovation and Technology in Computer Science Education (pp. 175-199). ACM. https://doi.org/10.1145/3293881.3295783
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-16">
     <label>16</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Joksimović, S., Kovanović, V., Gašević, D., Dawson, S., &amp; Siemens, G. (2018). Using Learning Analytics to Uncover Student Writing Processes in Online Learning Environments. Journal of Learning Analytics, 5, 110-129.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-17">
     <label>17</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Knaus, M. C. (2022). Double Machine Learning-Based Programme Evaluation under Unconfoundedness. The Econometrics Journal, 25, 602-627. https://doi.org/10.1093/ectj/utac015
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-18">
     <label>18</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Knox, J. (2020). Artificial Intelligence and Education in China. Learning, Media and Technology, 45, 298-311. https://doi.org/10.1080/17439884.2020.1754236
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-19">
     <label>19</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Lipton, Z. C. (2018). The Mythos of Model Interpretability. Communications of the ACM, 61, 36-43. https://doi.org/10.1145/3233231
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-20">
     <label>20</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Lottering, R., Hans, R., &amp; Lall, M. (2020). A Machine Learning Approach to Identifying Students at Risk of Dropout: A Case Study. International Journal of Advanced Computer Science and Applications, 11, 417-422. https://doi.org/10.14569/ijacsa.2020.0111052
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-21">
     <label>21</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Lundberg, S. M., &amp; Lee, S. I. (2017). A Unified Approach to Interpreting Model Predictions. In I. Guyon, et al. (Eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017 (pp. 4765-4774). NIPS 2017.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-22">
     <label>22</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Miao, F., &amp; Holmes, W. (2021). Artificial Intelligence and Education. Guidance for Policy-Makers. United Nations Educational, Scientific and Cultural Organization (UNESCO).
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-23">
     <label>23</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Perrotta, C., &amp; Selwyn, N. (2020). Deep Learning Goes to School: Toward a Relational Understanding of AI in Education. Learning, Media and Technology, 45, 251-269. https://doi.org/10.1080/17439884.2020.1686017
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-24">
     <label>24</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Ribeiro, M. T., Singh, S., &amp; Guestrin, C. (2016). Why Should I Trust You? Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). ACM.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-25">
     <label>25</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Siemens, G., &amp; Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review, 46, 30-40.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-26">
     <label>26</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Wager, S., &amp; Athey, S. (2018). Estimation and Inference of Heterogeneous Treatment Effects Using Random Forests. Journal of the American Statistical Association, 113, 1228-1242. https://doi.org/10.1080/01621459.2017.1319839
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-27">
     <label>27</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Williamson, B. (2017). Big Data in Education: The Digital Future of Learning, Policy and Practice. SAGE Publications Ltd. https://doi.org/10.4135/9781529714920
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-28">
     <label>28</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Xu, B., Chen, N. S., &amp; Chen, G. (2019). Effects of Teacher Role on Student Engagement in Online Learning Environments. Computers &amp; Education, 131, 49-60.
     </mixed-citation>
    </ref>
    <ref id="scirp.147484-29">
     <label>29</label>
     <mixed-citation publication-type="other" xlink:type="simple">
      Zawacki-Richter, O., Marín, V. I., Bond, M., &amp; Gouverneur, F. (2019). Systematic Review of Research on Artificial Intelligence Applications in Higher Education—Where Are the Educators? International Journal of Educational Technology in Higher Education, 16, 1-27. https://doi.org/10.1186/s41239-019-0171-0
     </mixed-citation>
    </ref>
  </ref-list>
 </back>
</article>