<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN" "JATS-journalpublishing1-4.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="research-article" dtd-version="1.4" xml:lang="en">
  <front>
    <journal-meta>
      <journal-id journal-id-type="publisher-id">jss</journal-id>
      <journal-title-group>
        <journal-title>Open Journal of Social Sciences</journal-title>
      </journal-title-group>
      <issn pub-type="epub">2327-5960</issn>
      <issn pub-type="ppub">2327-5952</issn>
      <publisher>
        <publisher-name>Scientific Research Publishing</publisher-name>
      </publisher>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.4236/jss.2026.142015</article-id>
      <article-id pub-id-type="publisher-id">jss-149608</article-id>
      <article-categories>
        <subj-group>
          <subject>Article</subject>
        </subj-group>
        <subj-group>
          <subject>Business</subject>
          <subject>Economics</subject>
          <subject>Social Sciences</subject>
          <subject>Humanities</subject>
        </subj-group>
      </article-categories>
      <title-group>
        <article-title>The Perceived Objectivity of AI and Its Impact on Decision Authority in HR and Administrative Contexts</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name name-style="western">
            <surname>Karanfiloska</surname>
            <given-names>Marijana</given-names>
          </name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
      </contrib-group>
      <aff id="aff1"><label>1</label> SBS Swiss Business School, Kloten-Zurich, Switzerland </aff>
      <author-notes>
        <fn fn-type="conflict" id="fn-conflict">
          <p>The author declares no conflicts of interest regarding the publication of this paper.</p>
        </fn>
      </author-notes>
      <pub-date pub-type="epub">
        <day>02</day>
        <month>02</month>
        <year>2026</year>
      </pub-date>
      <pub-date pub-type="collection">
        <month>02</month>
        <year>2026</year>
      </pub-date>
      <volume>14</volume>
      <issue>02</issue>
      <fpage>239</fpage>
      <lpage>248</lpage>
      <history>
        <date date-type="received">
          <day>21</day>
          <month>01</month>
          <year>2026</year>
        </date>
        <date date-type="accepted">
          <day>10</day>
          <month>02</month>
          <year>2026</year>
        </date>
        <date date-type="published">
          <day>13</day>
          <month>02</month>
          <year>2026</year>
        </date>
      </history>
      <permissions>
        <copyright-statement>© 2026 by the authors and Scientific Research Publishing Inc.</copyright-statement>
        <copyright-year>2026</copyright-year>
        <license license-type="open-access">
          <license-p>This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (<ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link>).</license-p>
        </license>
      </permissions>
      <self-uri content-type="doi" xlink:href="https://doi.org/10.4236/jss.2026.142015">https://doi.org/10.4236/jss.2026.142015</self-uri>
      <abstract>
        <p>Artificial intelligence has become deeply embedded in human resource management and administrative decision-making, particularly in areas such as recruitment, performance evaluation, promotion, and workforce analytics. While these technologies are often introduced to enhance efficiency and consistency, their organisational impact extends beyond process optimisation. This opinion paper argues that AI systems increasingly shape decision authority in HR and administrative contexts because they are perceived as objective, neutral, and rational. This perceived objectivity legitimises reliance on algorithmic outputs, subtly redistributes authority away from human judgement, and complicates accountability without formal changes to responsibility structures. Drawing on contemporary literature on algorithmic control, trust in AI, and governance, the paper critically examines how objectivity narratives reshape organisational decision-making and proposes a human-centric reassertion of judgement as a governance imperative.</p>
      </abstract>
      <kwd-group kwd-group-type="author-generated" xml:lang="en">
        <kwd>Artificial Intelligence</kwd>
        <kwd>Perceived Objectivity</kwd>
        <kwd>Decision Authority</kwd>
        <kwd>Human Resource Management</kwd>
        <kwd>Algorithmic Governance</kwd>
        <kwd>Organisational Accountability</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec1">
      <title>1. Introduction</title>
      <p>The adoption of artificial intelligence in HR and administrative contexts has accelerated rapidly in recent years. Automated systems now support or structure recruitment decisions, performance evaluations, promotion pathways, and workforce planning. These technologies promise efficiency, speed, and consistency, particularly in organisational environments characterised by time pressure and high application volumes. As a result, AI-based tools are often presented as pragmatic solutions to long-standing administrative challenges.</p>
      <p>The governance implications discussed in this paper intensify where AI is applied to normative, career-shaping decisions such as hiring, promotion, or dismissal, while the automation of routine administrative tasks primarily raises questions of efficiency and procedural consistency rather than authority redistribution.</p>
      <p>Much of the early discussion surrounding AI in HR focused on automation as a functional improvement. The dominant narrative emphasised faster processing, reduced administrative burden, and the mitigation of individual bias. However, as AI systems have become more deeply embedded in organisational decision processes, it has become increasingly evident that their influence extends beyond technical assistance. AI does not simply support decisions. It contributes to defining how decisions are justified, defended, and perceived as legitimate. In earlier work, automation in recruitment has been critically examined as a contemporary manifestation of Tayloristic management logic, where efficiency and standardisation take precedence over judgement, creativity, and human discernment ([<xref ref-type="bibr" rid="B9">9</xref>]). From this perspective, AI-driven systems risk replicating historical patterns of managerial homogeneity by privileging measurable and reproducible criteria. Yet the implications of AI adoption reach further than recruitment alone. They touch on the foundational question of decision authority within organisations.</p>
      <p>Decision authority refers not only to who formally decides, but also to whose judgement is trusted, whose reasoning is considered legitimate, and whose responsibility is recognised when outcomes are contested. In HR and administrative contexts, decision authority has traditionally rested with human actors, even when structured by procedures and frameworks. The introduction of AI complicates this arrangement. While formal responsibility often remains human, algorithmic outputs increasingly serve as epistemic reference points that shape decisions before human deliberation begins.</p>
      <p>This paper argues that perceived objectivity plays a central role in this shift. AI systems gain organisational authority not because they are inherently superior decision-makers, but because they are widely perceived as neutral, rational, and free from personal agendas. This perception fosters reliance, reduces contestation, and quietly redistributes authority within decision processes. By focusing on perceived objectivity rather than technical performance, this paper seeks to contribute to ongoing debates about AI governance, accountability, and human judgement in HR and administrative decision-making.</p>
      <p>The argument that follows traces how narratives of objectivity translate into organisational authority, examines their consequences for decision control, accountability, and adaptability, and concludes by advancing human judgement as a governance imperative in AI-supported contexts.</p>
    </sec>
    <sec id="sec2">
      <title>2. Objectivity as an Organisational Ideal</title>
      <p>Objectivity has long occupied a privileged position in administrative and HR discourse. In bureaucratic traditions, objectivity is associated with fairness, consistency, and rational governance. Decisions grounded in measurable criteria and standardised procedures are often perceived as more legitimate than those relying on individual judgement or discretion. This ideal has shaped administrative systems for decades and continues to influence contemporary organisational practices.</p>
      <p>In this context, AI appears as a technological embodiment of objectivity. Algorithms process large volumes of data, apply consistent rules, and generate outputs that seem detached from personal preferences or emotional influences. This understanding aligns with a broader body of research that conceptualises algorithmic systems as socio-technical governance instruments whose authority emerges not from technical superiority alone, but from institutional narratives of neutrality, efficiency, and rational control ([<xref ref-type="bibr" rid="B2">2</xref>]; [<xref ref-type="bibr" rid="B8">8</xref>]). As a result, AI-based systems are frequently framed as instruments that enhance objectivity rather than as tools that embed new forms of judgement. However, critical scholarship has challenged this perception. [<xref ref-type="bibr" rid="B13">13</xref>] has shown that algorithmic systems often function as opaque decision-making structures whose authority rests precisely on their apparent neutrality. When decision logic is hidden or inaccessible, outputs acquire legitimacy through technical mystique rather than through transparent reasoning. Objectivity, in this sense, becomes less a property of the system and more a narrative that shields it from scrutiny. Similarly, [<xref ref-type="bibr" rid="B3">3</xref>] have demonstrated that systems presented as neutral frequently reproduce structural biases embedded in data and institutional practices. The claim of objectivity does not eliminate value judgements; it renders them less visible. In HR and administrative settings, this invisibility is particularly consequential, as decisions often carry significant implications for individuals’ careers, livelihoods, and professional identities.</p>
      <p>The organisational appeal of objectivity, therefore, rests on a paradox. While objectivity is invoked to ensure fairness and accountability, it can simultaneously limit critical engagement with the assumptions and consequences of decision systems. AI intensifies this paradox by operationalising objectivity in technical form, thereby reinforcing the belief that algorithmic outputs deserve deference.</p>
    </sec>
    <sec id="sec3">
      <title>3. AI and the Reallocation of Decision Authority</title>
      <p>The growing reliance on AI in HR and administration has implications for organisational power structures. [<xref ref-type="bibr" rid="B10">10</xref>] describe algorithmic systems as a new contested terrain of control, where authority is negotiated rather than explicitly transferred. Algorithms do not replace managers or administrators outright, yet they reshape the conditions under which decisions are made. In practice, AI systems often structure decision environments by prioritising certain variables, ranking candidates, flagging risks, or recommending actions. These outputs frame subsequent human deliberation. Even when final decisions are formally made by individuals, the cognitive and organisational weight of algorithmic recommendations can be substantial. Over time, this dynamic can lead to a reallocation of authority from human judgement to algorithmic framing.</p>
      <p>Evidence from consumer decision-making contexts suggests that AI tools gain influence not through persuasion or relational trust, but through perceived functional legitimacy ([<xref ref-type="bibr" rid="B5">5</xref>]). Users are inclined to trust AI-generated recommendations when they appear coherent, consistent, and free from personal interest. In organisational settings, this dynamic translates into a tendency to accept algorithmic outputs as rational baselines against which deviations must be justified. In HR and administrative contexts, this shift has subtle but important consequences. Human decision-makers may retain formal authority, yet their role increasingly involves validating or implementing algorithmic recommendations rather than actively shaping decisions. Authority thus migrates without explicit organisational acknowledgement. This process rarely provokes resistance because it aligns with existing values of efficiency and objectivity.</p>
    </sec>
    <sec id="sec4">
      <title>4. Trust, Perceived Neutrality, and Algorithmic Preference</title>
      <p>Trust plays a central role in the acceptance of AI in organisational decision-making. Contemporary research on human trust in AI highlights that trust often emerges from perceptions of consistency, reliability, and objectivity rather than from transparency or explainability ([<xref ref-type="bibr" rid="B7">7</xref>]). In organisational contexts, these perceptions can outweigh concerns about opacity or limited understanding.</p>
      <p>Empirical studies on algorithm appreciation further reinforce this insight. [<xref ref-type="bibr" rid="B12">12</xref>] demonstrate that individuals frequently prefer algorithmic judgement to human judgement, even when performance differences are minimal or unknown. This preference is partly driven by the belief that algorithms are less biased and more impartial than humans. Such findings have direct relevance for HR and administrative decision-making, where fairness and neutrality are highly valued. Perceived neutrality thus functions as a powerful trust mechanism. When AI systems are framed as objective tools, their outputs acquire an aura of correctness that discourages critical evaluation. This does not imply blind trust, but rather a lowered threshold for acceptance. In practice, algorithmic recommendations often become default options, particularly in environments where time pressure and standardisation are prioritised. This dynamic reinforces the reallocation of decision authority discussed earlier. Trust in AI does not replace human responsibility, but it reshapes the conditions under which responsibility is exercised. Decisions become harder to contest, not because contestation is formally prohibited, but because algorithmic outputs appear self-evident.</p>
    </sec>
    <sec id="sec5">
      <title>5. Accountability without Control: The Governance Paradox</title>
      <p>The increasing authority of AI systems in HR and administration raises pressing questions about accountability. While human actors remain formally responsible for decisions, their capacity to meaningfully interrogate algorithmic outputs is often limited. This tension creates what may be described as a governance paradox: responsibility persists without full control.</p>
      <p>In organisational practice, this decoupling of responsibility and control complicates processes of justification and contestation. Employees affected by AI-supported decisions may encounter ambiguous grievance pathways when the decisive rationale rests on an opaque or system-generated assessment rather than a clearly articulated human judgement. Similarly, organisations may find that procedural reliance on algorithmic recommendations offers limited protection when decisions require defensible explanation, as responsibility ultimately remains assigned to human actors.</p>
      <p>Legal and regulatory frameworks have struggled to resolve this issue. [<xref ref-type="bibr" rid="B14">14</xref>] argue that existing data protection regimes do not establish a robust right to explanation for automated decision-making. Even when explanations are provided, they often remain insufficient for substantive accountability. This limitation is particularly problematic in HR contexts, where individuals affected by decisions may seek justification or redress. The analysis of the black box society in [<xref ref-type="bibr" rid="B13">13</xref>] further illustrates how opacity enables authority without transparency. When algorithms operate as inscrutable systems, organisations can invoke technical complexity as a defence against scrutiny. This dynamic undermines the very ideals of objectivity and fairness that justify AI adoption in the first place. In HR and administrative decision-making, the governance paradox manifests in practical dilemmas. Managers may defer to algorithmic outputs to protect themselves from accusations of bias or inconsistency. Administrators may rely on AI recommendations to demonstrate procedural compliance. In doing so, decision authority becomes diffused, and accountability becomes increasingly difficult to localise.</p>
    </sec>
    <sec id="sec6">
      <title>6. Standardisation, Inequality, and Administrative Risk</title>
      <p>Beyond questions of individual accountability, the perceived objectivity of AI carries broader organisational and societal risks that extend into patterns of inequality and institutional inertia. [<xref ref-type="bibr" rid="B3">3</xref>] demonstrate that algorithmic systems may generate disparate outcomes even when they are designed and deployed with ostensibly neutral intentions. These systems rely on historical data, predefined criteria, and proxy variables that reflect existing social and organisational structures. When such structures are encoded into automated decision-making, standardisation does not eliminate inequality but can instead stabilise and amplify it.</p>
      <p>In administrative and HR contexts, standardisation often privileges profiles, trajectories, and behaviours that have been historically dominant or institutionally rewarded. AI systems trained on past recruitment, promotion, or performance data are therefore more likely to reproduce these patterns. The perceived objectivity of algorithmic outputs makes this reproduction less visible and less contestable. Decisions appear rational and evidence-based, even when they systematically disadvantage individuals whose backgrounds, career paths, or competencies fall outside established norms. [<xref ref-type="bibr" rid="B4">4</xref>] provides compelling illustrations of how automated systems in public administration entrench disadvantage while presenting themselves as efficient and rational. Although HR contexts differ from welfare administration in mandate and scope, the underlying mechanisms are comparable. When automated systems govern access to employment opportunities, leadership development, or internal mobility, they shape life chances in ways that extend beyond the organisational boundary. The administrative logic of efficiency and consistency may thus produce consequences that contradict organisational commitments to inclusion, fairness, and social responsibility.</p>
      <p>From an organisational perspective, these dynamics represent a strategic rather than merely ethical risk. Standardisation can deliver short-term gains in efficiency and predictability, yet it may simultaneously erode the diversity of perspectives and experiences that organisations require in order to adapt to complex and volatile environments. Organisational research has long warned that excessive standardisation, even when efficiency-driven, can undermine adaptive capacity by privileging historical patterns over contextual judgement and interpretive flexibility ([<xref ref-type="bibr" rid="B1">1</xref>]; [<xref ref-type="bibr" rid="B11">11</xref>]). When perceived objectivity discourages questioning and reflection, organisations may become increasingly reliant on historical patterns to navigate future challenges. Innovation and adaptability depend on the capacity to recognise when established criteria no longer serve emerging realities. The suppression of contestation can weaken organisational learning. Decisions that are framed as objectively correct leave little room for critical dialogue or reinterpretation. Over time, this can foster compliance rather than engagement, reducing the organisation’s ability to identify blind spots or emerging risks. In this sense, AI-driven standardisation does not merely affect individual outcomes but reshapes organisational culture, reinforcing stability at the expense of responsiveness and creative problem-solving.</p>
      <p>By way of illustration, consider an organisation that deploys an AI-supported promotion screening system trained on historical performance and career progression data. While formally neutral, the system consistently ranks candidates with non-linear career paths, interdisciplinary profiles, or atypical experience lower than those who resemble previously promoted managers. Over time, decision-makers come to rely on these rankings as rational baselines, narrowing the internal leadership pipeline and reinforcing homogeneity. The resulting rigidity may remain invisible in the short term, yet it gradually constrains the organisation’s capacity to respond to new strategic, technological, or societal demands.</p>
      <p>When standardised outcomes are presented as neutral and authoritative, AI systems can thus obscure the strategic costs of rigidity. Organisations that fail to recognise this risk may find themselves optimised for past conditions while remaining ill-equipped to respond to new social, technological, and market dynamics.</p>
    </sec>
    <sec id="sec7">
      <title>7. Human Judgement in an AI-Supported Organisation</title>
      <p>Critiquing the growing authority of AI in organisational decision-making does not imply a rejection of technological support. Rather, it calls for deliberate and explicit organisational choices regarding the role and status of human judgement. AI systems can undoubtedly enhance analytical capacity, consistency, and efficiency, particularly in complex administrative environments. However, when algorithmic outputs begin to define what counts as legitimate judgement by default, the function of human decision-makers risks being reduced to validation rather than deliberation.</p>
      <p>Research on human-AI interaction indicates that unstructured reliance on AI systems encourages acceptance rather than reflective engagement ([<xref ref-type="bibr" rid="B6">6</xref>]). When this insight is translated into organisational settings, it highlights a critical governance challenge. HR and administrative decision processes often involve normative judgements about competence, potential, fairness, and organisational fit. These judgements cannot be fully specified through predefined criteria without loss of contextual sensitivity. If AI-generated recommendations are treated as authoritative baselines, human actors may engage less in critical evaluation and more in procedural confirmation. This dynamic is particularly consequential because human responsibility remains formally intact. Managers and administrators are still accountable for decisions, yet the epistemic basis of those decisions increasingly originates from algorithmic systems. Without intentional process design, this creates a mismatch between responsibility and agency. Human decision-makers may feel compelled to align with algorithmic recommendations in order to demonstrate rationality, consistency, or compliance, even when they harbour reservations. Over time, such dynamics can weaken professional judgement and reduce confidence in discretionary decision-making. In practice, such a positioning can be supported through lightweight governance mechanisms, for example by requiring brief written justifications when algorithmic rankings are adopted or deliberately overridden in HR decisions. These procedural moments do not aim to second-guess technical outputs, but to render human reasoning explicit and accountable rather than implicit or assumed.</p>
      <p>[<xref ref-type="bibr" rid="B9">9</xref>] argues that recruitment processes require a balance between automation and human-centric evaluation to avoid the replication of standardised managerial profiles. This argument can be extended beyond recruitment to a wide range of administrative domains, including performance assessment, promotion, talent development, and organisational restructuring. In each of these areas, decisions involve not only measurable indicators but also interpretive assessments that depend on organisational context, ethical considerations, and future-oriented judgement. Preserving space for human deliberation, therefore, requires more than symbolic oversight. Organisations must actively design decision processes that invite justification, disagreement, and contextual interpretation. AI outputs should be framed explicitly as decision inputs rather than decision outcomes. This framing signals that algorithmic recommendations are provisional and contingent, not authoritative truths. Such an approach supports reflective engagement and reinforces the legitimacy of human judgment without discarding the analytical benefits of AI. Reasserting human judgement should not be understood as a return to intuition or subjectivity in opposition to data-driven decision-making. It represents a governance choice that recognises the limitations of objectivity narratives and the organisational value of accountable reasoning. Human judgement remains essential precisely because organisational decisions are embedded in social relations, power structures, and ethical commitments that technical models cannot fully capture.</p>
      <p>When human judgement is positioned as an integral component of AI-supported decision-making rather than as an exception or override mechanism, organisations can mitigate the risks associated with perceived objectivity. Such an approach strengthens accountability, supports organisational learning, and preserves the capacity for critical reflection. In this sense, human judgement is not a barrier to effective administration but a necessary condition for responsible authority in AI-mediated organisations.</p>
    </sec>
    <sec id="sec8">
      <title>8. Conclusion</title>
      <p>AI has become a powerful and influential presence in HR and administrative decision-making not simply because it automates processes, but because it embodies and institutionalises claims of objectivity. This perceived objectivity operates as a legitimising force. It shapes how decisions are framed, how authority is exercised, and how responsibility is allocated within organisations. By presenting algorithmic outputs as neutral, rational, and evidence-based, organisations normalise reliance on AI while rendering the resulting redistribution of decision authority less visible and less contested.</p>
      <p>The central challenge identified in this paper lies not in automation itself, but in the organisational narratives that accompany AI adoption. When AI is framed primarily as an objective decision aid, its outputs acquire epistemic authority that subtly displaces human judgement. Contestation becomes harder to justify, deviations appear irrational, and accountability becomes diffused across technical systems and organisational procedures. In such contexts, decision-makers may remain formally responsible while becoming increasingly constrained in their capacity to exercise meaningful judgement. This dynamic has implications that extend beyond individual decisions. Over time, the combination of perceived objectivity and standardisation risks reshaping organisational cultures. It can privilege consistency over interpretation, compliance over reflection, and historical patterns over future-oriented reasoning. Organisations that rely uncritically on algorithmic objectivity may find themselves optimised for efficiency while becoming less capable of adaptation, learning, and innovation. What appears as rational governance in the short term may thus undermine strategic resilience in the long term. Addressing these risks requires a shift in how AI is positioned within organisational decision architectures. Efficiency and consistency are legitimate goals, but they cannot serve as the sole criteria for evaluating AI-supported decision-making. Organisations must engage more explicitly with questions of authority, legitimacy, and accountability. This involves recognising that AI systems do not merely inform decisions but actively participate in defining what counts as acceptable judgement.</p>
      <p>Preserving human judgement in AI-supported contexts should therefore be understood as a governance imperative rather than a resistance to technological progress. Human judgement remains indispensable because organisational decisions are inherently normative, contextual, and future-oriented. They involve evaluations of potential, fairness, responsibility, and risk that cannot be fully captured by algorithmic criteria. Reasserting human judgement allows organisations to retain ownership of these evaluative dimensions and to align decisions with their ethical commitments and strategic aspirations.</p>
      <p>Ultimately, responsible authority in AI-mediated organisations depends on the willingness to interrogate objectivity claims and to design decision processes that maintain space for reflection, justification, and disagreement. By doing so, organisations can harness the analytical strengths of AI without surrendering the very capacities that underpin accountable and resilient decision-making.</p>
    </sec>
    <sec id="sec9">
      <title>Acknowledgements</title>
      <p>During the preparation of this manuscript, the author used ChatGPT 5.1 solely for the purpose of language improvement. The author has reviewed and edited the output and takes full responsibility for the content of this publication.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <title>References</title>
      <ref id="B1">
        <label>1.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Adler, P. S., &amp; Borys, B. (1996). Two Types of Bureaucracy: Enabling and Coercive. <italic>Administrative</italic><italic>Science</italic><italic>Quarterly,</italic><italic>41,</italic> 61-89. https://doi.org/10.2307/2393986 <pub-id pub-id-type="doi">10.2307/2393986</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2307/2393986">https://doi.org/10.2307/2393986</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Adler, P.</string-name>
              <string-name>Borys, B.</string-name>
            </person-group>
            <year>1996</year>
            <pub-id pub-id-type="doi">10.2307/2393986</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B2">
        <label>2.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Alaimo, C., &amp; Kallinikos, J. (2021). Managing by Data: Algorithmic Categories and Organizing. <italic>Organization</italic><italic>Studies,</italic><italic>42,</italic> 1385-1407. https://doi.org/10.1177/0170840620934062 <pub-id pub-id-type="doi">10.1177/0170840620934062</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1177/0170840620934062">https://doi.org/10.1177/0170840620934062</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Alaimo, C.</string-name>
              <string-name>Kallinikos, J.</string-name>
            </person-group>
            <year>2021</year>
            <pub-id pub-id-type="doi">10.1177/0170840620934062</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B3">
        <label>3.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Barocas, S., &amp; Selbst, A. D. (2016). Big Data’s Disparate Impact. <italic>California Law Review</italic><italic>,</italic><italic>104</italic><italic>,</italic> 671-732.</mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Barocas, S.</string-name>
              <string-name>Selbst, A.</string-name>
            </person-group>
            <year>2016</year>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B4">
        <label>4.</label>
        <citation-alternatives>
          <mixed-citation publication-type="book">Eubanks, V. (2018). <italic>Automating</italic><italic>Inequality</italic><italic>:</italic><italic>How</italic><italic>High</italic><italic>-</italic><italic>Tech Tools Profile</italic><italic>,</italic><italic>Police</italic><italic>,</italic><italic>and</italic><italic>Punish</italic><italic>the</italic><italic>Poor</italic><italic>.</italic> St. Martin’s Press.</mixed-citation>
          <element-citation publication-type="book">
            <person-group person-group-type="author">
              <string-name>Eubanks, V.</string-name>
            </person-group>
            <year>2018</year>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B5">
        <label>5.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Gerlich, M. (2025a). The Shifting Influence: Comparing AI Tools and Human Influencers in Consumer Decision-Making. <italic>AI,</italic><italic>6,</italic> Article 11. https://doi.org/10.3390/ai6010011 <pub-id pub-id-type="doi">10.3390/ai6010011</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/ai6010011">https://doi.org/10.3390/ai6010011</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Gerlich, M.</string-name>
            </person-group>
            <year>2025</year>
            <elocation-id>11</elocation-id>
            <pub-id pub-id-type="doi">10.3390/ai6010011</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B6">
        <label>6.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Gerlich, M. (2025b). From Offloading to Engagement: An Experimental Study on Structured Prompting and Critical Reasoning with Generative Ai. <italic>Data,</italic><italic>10,</italic> Article 172. https://doi.org/10.3390/data10110172 <pub-id pub-id-type="doi">10.3390/data10110172</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3390/data10110172">https://doi.org/10.3390/data10110172</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Gerlich, M.</string-name>
            </person-group>
            <year>2025</year>
            <elocation-id>172</elocation-id>
            <pub-id pub-id-type="doi">10.3390/data10110172</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B7">
        <label>7.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Glikson, E., &amp; Woolley, A. W. (2020). Human Trust in Artificial Intelligence: Review of Empirical Research. <italic>Academy</italic><italic>of</italic><italic>Management</italic><italic>Annals,</italic><italic>14,</italic> 627-660. https://doi.org/10.5465/annals.2018.0057 <pub-id pub-id-type="doi">10.5465/annals.2018.0057</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.5465/annals.2018.0057">https://doi.org/10.5465/annals.2018.0057</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Glikson, E.</string-name>
              <string-name>Woolley, A.</string-name>
            </person-group>
            <year>2020</year>
            <pub-id pub-id-type="doi">10.5465/annals.2018.0057</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B8">
        <label>8.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Janssen, M., &amp; Kuk, G. (2016). The Challenges and Limits of Big Data Algorithms in Technocratic Governance. <italic>Government</italic><italic>Information</italic><italic>Quarterly,</italic><italic>33,</italic> 371-377. https://doi.org/10.1016/j.giq.2016.08.011 <pub-id pub-id-type="doi">10.1016/j.giq.2016.08.011</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.giq.2016.08.011">https://doi.org/10.1016/j.giq.2016.08.011</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Janssen, M.</string-name>
              <string-name>Kuk, G.</string-name>
            </person-group>
            <year>2016</year>
            <pub-id pub-id-type="doi">10.1016/j.giq.2016.08.011</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B9">
        <label>9.</label>
        <citation-alternatives>
          <mixed-citation publication-type="journal">Karanfiloska, M. (2024). The Tayloristic Trap: The Failure of HR Automation in Hiring and Its Impact on Organisational Outcomes. <italic>Open</italic><italic>Journal</italic><italic>of</italic><italic>Business</italic><italic>and</italic><italic>Management,</italic><italic>12,</italic> 4254-4259. https://doi.org/10.4236/ojbm.2024.126213 <pub-id pub-id-type="doi">10.4236/ojbm.2024.126213</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.4236/ojbm.2024.126213">https://doi.org/10.4236/ojbm.2024.126213</ext-link></mixed-citation>
          <element-citation publication-type="journal">
            <person-group person-group-type="author">
              <string-name>Karanfiloska, M.</string-name>
            </person-group>
            <year>2024</year>
            <pub-id pub-id-type="doi">10.4236/ojbm.2024.126213</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B10">
        <label>10.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Kellogg, K. C., Valentine, M. A., &amp; Christin, A. (2020). Algorithms at Work: The New Contested Terrain of Control. <italic>Academy</italic><italic>of</italic><italic>Management</italic><italic>Annals,</italic><italic>14,</italic> 366-410. https://doi.org/10.5465/annals.2018.0174 <pub-id pub-id-type="doi">10.5465/annals.2018.0174</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.5465/annals.2018.0174">https://doi.org/10.5465/annals.2018.0174</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Kellogg, K.</string-name>
              <string-name>Valentine, M.</string-name>
              <string-name>Christin, A.</string-name>
            </person-group>
            <year>2020</year>
            <pub-id pub-id-type="doi">10.5465/annals.2018.0174</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B11">
        <label>11.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Leonardi, P. M. (2011). When Flexible Routines Meet Flexible Technologies: Affordance, Constraint, and the Imbrication of Human and Material Agencies1. <italic>MIS</italic><italic>Quarterly,</italic><italic>35,</italic> 147-167. https://doi.org/10.2307/23043493 <pub-id pub-id-type="doi">10.2307/23043493</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.2307/23043493">https://doi.org/10.2307/23043493</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Leonardi, P.</string-name>
            </person-group>
            <year>2011</year>
            <pub-id pub-id-type="doi">10.2307/23043493</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B12">
        <label>12.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Logg, J. M., Minson, J. A., &amp; Moore, D. A. (2019). Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. <italic>Organizational</italic><italic>Behavior</italic><italic>and</italic><italic>Human</italic><italic>Decision</italic><italic>Processes,</italic><italic>151,</italic> 90-103. https://doi.org/10.1016/j.obhdp.2018.12.005 <pub-id pub-id-type="doi">10.1016/j.obhdp.2018.12.005</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1016/j.obhdp.2018.12.005">https://doi.org/10.1016/j.obhdp.2018.12.005</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Logg, J.</string-name>
              <string-name>Minson, J.</string-name>
              <string-name>Moore, D.</string-name>
            </person-group>
            <year>2019</year>
            <pub-id pub-id-type="doi">10.1016/j.obhdp.2018.12.005</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B13">
        <label>13.</label>
        <citation-alternatives>
          <mixed-citation publication-type="book">Pasquale, F. (2015). <italic>The</italic><italic>Black</italic><italic>Box</italic><italic>Society.</italic> Harvard University Press. https://doi.org/10.4159/harvard.9780674736061 <pub-id pub-id-type="doi">10.4159/harvard.9780674736061</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.4159/harvard.9780674736061">https://doi.org/10.4159/harvard.9780674736061</ext-link></mixed-citation>
          <element-citation publication-type="book">
            <person-group person-group-type="author">
              <string-name>Pasquale, F.</string-name>
            </person-group>
            <year>2015</year>
            <pub-id pub-id-type="doi">10.4159/harvard.9780674736061</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
      <ref id="B14">
        <label>14.</label>
        <citation-alternatives>
          <mixed-citation publication-type="other">Wachter, S., Mittelstadt, B., &amp; Floridi, L. (2017). Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. <italic>International</italic><italic>Data</italic><italic>Privacy</italic><italic>Law,</italic><italic>7,</italic> 76-99. https://doi.org/10.1093/idpl/ipx005 <pub-id pub-id-type="doi">10.1093/idpl/ipx005</pub-id><ext-link ext-link-type="uri" xlink:href="https://doi.org/10.1093/idpl/ipx005">https://doi.org/10.1093/idpl/ipx005</ext-link></mixed-citation>
          <element-citation publication-type="other">
            <person-group person-group-type="author">
              <string-name>Wachter, S.</string-name>
              <string-name>Mittelstadt, B.</string-name>
              <string-name>Floridi, L.</string-name>
            </person-group>
            <year>2017</year>
            <pub-id pub-id-type="doi">10.1093/idpl/ipx005</pub-id>
          </element-citation>
        </citation-alternatives>
      </ref>
    </ref-list>
  </back>
</article>