Artificial Intelligence Empowering Human Resource Management in the New Liberal Arts Paradigm: Theoretical Research and Practical Exploration
1. Introduction
Traditional liberal arts education emphasizes specialised training within narrowly defined professional tracks, resulting in distinct disciplinary boundaries and high barriers between fields. This model often leaves graduates with narrow knowledge structures and can limit the development of interdisciplinary capabilities. Emerging technologies, from the Internet to AI, have profoundly transformed production and lifestyles while generating complex challenges that transcend single-discipline frameworks. Addressing these challenges demands multidisciplinary collaboration, giving rise to the “New Liberal Arts” initiative. This approach integrates modern information technology into traditional humanities curricula to foster interdisciplinary convergence between the arts and sciences. Through comprehensive, cross-disciplinary learning, it broadens students’ intellectual horizons and stimulates innovative thinking.
Within this New Liberal Arts framework, the development of HRM programmes faces dual challenges: integrating cross-disciplinary knowledge systems and comprehensively cultivating integrated competencies. From “Internet Plus” to AI-enabled approaches, emerging technologies have permeated every stage of HRM—recruitment, development, deployment, and retention. This trend is propelling traditional HR practices towards a data-driven, intelligent decision-making paradigm characterized by digital and intelligent HRM models (Mao, 2024).
Currently, the application of emerging technological methods in HRM is gaining momentum, primarily concentrated in recruitment and training domains. Regarding talent acquisition, modern technologies can significantly enhance the precision of CV-to-role matching, assisting recruiters in swiftly identifying and screening suitable candidates, thereby improving recruitment efficiency. Concurrently, AI interview systems are increasingly adopted by enterprises. By creating virtual interviewer personas that mimic human communication patterns, these systems conduct preliminary candidate assessments, thereby boosting talent screening efficiency within HR departments. In employee training, AI can establish bespoke training databases based on individual capabilities and career plans, generating personalised development programmes. Furthermore, AI facilitates the simulation of real-world work scenarios for interactive training with staff. This not only enhances professional skills but also optimises the immersiveness and effectiveness of training, thereby comprehensively improving the learning experience (He et al., 2024).
Against the backdrop of digital era development and the concurrent advancement of New Liberal Arts initiatives, systematically examining the opportunities and challenges arising from AI applications in HRM holds significant theoretical value and practical relevance. How to construct a teaching system that aligns with contemporary developments and closely meets industrial demands has become an urgent issue in current professional development. This study aims to summarise existing research on the application of AI technology in HRM, further identify key variables commonly used in empirical studies within this field, and elucidate the insights and pathways provided by the New Liberal Arts development philosophy for the construction of teaching systems in HRM programmes.
2. Review of AI Technology Applications across Different Modules in HRM
2.1. AI-Driven Recruitment Management
Within recruitment processes in HRM, AI technology has achieved near-complete penetration across the entire workflow, and its technical sophistication continues to advance. During CV screening, AI can automatically parse vast volumes of applications and match them to job roles, significantly enhancing screening efficiency. Some organisations have further integrated speech recognition and sentiment analysis models to assist in evaluating interview recordings, conducting in-depth analyses of candidates’ communication skills and emotional states (Ruo, 2025; Su, 2025). Current research also moves beyond traditional keyword matching by incorporating deep learning models: Su (2025) applies models such as BERT, CNN-LSTM, and XGBoost to perform deep semantic analysis of CV text, converting candidate information into vectors and outputting matching scores, thereby resolving the tendency of keyword-based screening to overlook potential talent. During candidate communication, AI assistants not only provide real-time responses five to ten times faster than human agents but also dynamically update candidate lists and match job information to applicant requirements, optimizing the interaction experience (Ruo, 2025).
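By way of illustration, the brief Python sketch below approximates this idea of vectorising CV and job text and scoring their semantic match with a pre-trained sentence-embedding model. The model name, example texts, and scoring rule are illustrative assumptions and stand in for the more elaborate BERT, CNN-LSTM, and XGBoost pipeline reported by Su (2025).

```python
# Minimal sketch: semantic CV-to-job matching via sentence embeddings.
# Requires the sentence-transformers package; model name and texts are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for a BERT-style encoder

job_description = (
    "Marketing manager: 5+ years of campaign management, "
    "strong data analytics and team leadership skills."
)
cv_texts = {
    "candidate_a": "Led digital campaigns for 6 years; built dashboards in SQL and Tableau.",
    "candidate_b": "Junior graphic designer with 1 year of experience in branding.",
}

job_vec = model.encode(job_description, convert_to_tensor=True)
for name, cv in cv_texts.items():
    cv_vec = model.encode(cv, convert_to_tensor=True)
    score = util.cos_sim(job_vec, cv_vec).item()  # cosine similarity of the two embeddings
    print(f"{name}: match score = {score:.2f}")
```

A production system would typically combine such similarity features with structured signals (skills, tenure, qualifications) in a learned ranking model, but the sketch captures the core shift from keyword matching to semantic matching.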
The application of AI in recruitment has also raised a series of new challenges, primarily concerning data privacy and security, the fairness of algorithmic decision-making, and the design of human-machine collaboration models. Candidates interacting with AI recruitment assistants may question whether AI interviews involve appearance bias or can accurately recognise dialects, so incorporating human-centric elements into AI interviews has become a key focus. In HRM, designing AI around the “people-oriented principle” means ensuring that AI consistently serves as an auxiliary tool that meets human needs and safeguards employee rights, rather than becoming the primary decision-maker. Its implementation unfolds around the logic of transparency, accountability, rights, and control. On transparency, AI must be explainable in core scenarios such as recruitment screening and performance evaluation, for example by providing candidates with specific reasons for CV rejection (e.g., mismatch with role requirements) and explaining to employees the behavioural data underpinning performance ratings, thereby preventing distrust stemming from “black-box” decision-making.
On accountability, a clear mechanism must be established: when AI exhibits decision-making biases (such as implicit discrimination or data misjudgements), the enterprise bears overall compliance responsibility, the HR team is accountable for the accuracy of input data, and the AI supplier is responsible for algorithmic fairness, so that responsibility cannot be shifted between parties. On rights, robust appeal channels are needed: when employees contest AI decisions, they should be able to submit appeals through designated entry points (e.g., HR system modules), and human teams (such as HR Business Partners and department heads) must review such appeals within stipulated timeframes and provide written feedback explaining the basis for the review, guaranteeing closed-loop resolution of disputes. On control, human oversight must remain paramount throughout: AI should undertake only auxiliary tasks such as initial CV screening and basic data aggregation, while critical decisions regarding hiring, promotion, or dismissal remain under human control.
Finally, strict safeguards must protect sensitive employee data (e.g., national ID numbers and remuneration details), adhering to the principle of data minimisation and applying tiered access controls, so that AI enhances HR efficiency while employee rights remain protected. Candidates’ CVs, assessment data, and background information likewise constitute private data, and enterprises must ensure compliant storage and processing of such information (Liang et al., 2025).
The application of AI in recruitment has shifted HRM from “transactional processing” towards “human-machine collaboration”. From the perspective of professional HRM education, this necessitates moving beyond traditional theoretical frameworks. On one hand, conventional teaching emphasizes recruitment processes, interview techniques, CV analysis, and legal compliance. Yet in the digital era, students must focus on designing AI with “human-centric” principles—such as creating efficient yet empathetic AI recruitment workflows that maintain warmth in automated communications. Concurrently, the “New Liberal Arts” framework demands greater integration of practical technology into teaching. Instructors can utilize specialized virtual simulation training labs to introduce simplified AI recruitment tools. Through hands-on configuration, students observe algorithmic biases inherent in recruitment and experiment with corrections via data cleansing and parameter adjustments. These opportunities and challenges in AI recruitment practice offer valuable insights and direction for reforming HRM curricula.
2.2. Personalized Intelligent Training Systems
As organizational environments evolve, employees must continually update their skills and acquire new knowledge for businesses to sustain core competitiveness. Traditional, one-size-fits-all training models struggle to meet employees’ personalized career development needs. AI, however, can analyse data to tailor training programmes to individual employees. This not only supports employee growth but also effectively underpins the organization’s sustainable development strategy.
Employee training constitutes a pivotal step in an organization’s human resource development and enhancement of organizational effectiveness. Scholars investigating the upgrading pathways of employee training through AI technology have concluded that during the planning phase, AI can analyse employee data to tailor personalized learning pathways. In the implementation phase, it can generate interactive scenario simulations for practical tasks and automatically produce supporting training materials. During the support and evaluation phase, it functions as an intelligent assistant providing instant consultation and continuous learning feedback. This suite of functions collectively forms an end-to-end employee development support system, enhancing the relevance and effectiveness of training whilst improving staff retention rates (Sui et al., 2024; Guo, 2025). Furthermore, regarding training repositories, AI technology can integrate complex information such as meeting minutes and operational expertise, continuously optimizing and updating the knowledge base to foster a dynamic, interactive, and perpetually evolving learning environment (Luo, 2025).
In practical teaching, these technologies provide new tools and methodologies to enhance students’ training design capabilities. For instance, students must understand how to utilize generative AI to rapidly generate, adapt, and update training content—including scenario-based cases, micro-lesson scripts, and assessment questions—while intelligently filtering learning materials from vast resources to best meet requirements. These competencies will form the core competitive edge for future human resources professionals.
2.3. “AI+” Performance Evaluation
With the advancement of AI, performance management will integrate human-centred approaches with AI capabilities. Employees can receive personalized guidance from AI virtual coaches, while HR departments utilize AI for real-time performance empowerment and dynamic talent matching. This collaboration establishes a performance chain characterized by aligned objectives, meaningful dialogue, and equitable rewards, fostering mutual success for both business operations and employee development. “AI+” performance management delivers real-time, personalized feedback and development guidance, enhancing assessment accuracy and timeliness while assisting employees in charting future career trajectories. For practical teaching, this necessitates cultivating students’ data analysis capabilities to enable informed talent management decisions in future roles. The curriculum must cover analysing employees’ workplace communications through natural language processing to deliver timely developmental advice and predict potential performance risks, thereby shifting from reactive feedback to proactive intervention.
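As a deliberately simplified classroom illustration of this idea, the sketch below scores the sentiment of fictional employee messages and flags cases for a human follow-up. The message data, the lexicon-based sentiment model, and the alert threshold are illustrative assumptions; any real use would additionally require employee consent, anonymisation, and human review rather than automated judgement.

```python
# Classroom sketch: flagging potential performance/engagement risks from message sentiment.
# Uses NLTK's VADER lexicon; employee IDs, messages and the alert threshold are illustrative.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

messages = {
    "emp_001": ["Happy to pick up the new client project.", "Great sprint review today."],
    "emp_002": ["I'm overloaded again this week.", "Another deadline moved up with no notice."],
}

ALERT_THRESHOLD = -0.3  # average compound sentiment below this triggers a manager check-in

for emp, msgs in messages.items():
    scores = [sia.polarity_scores(m)["compound"] for m in msgs]
    avg = sum(scores) / len(scores)
    status = "follow up (manager check-in)" if avg < ALERT_THRESHOLD else "no action"
    print(f"{emp}: avg sentiment = {avg:+.2f} -> {status}")
```

The point for students is the workflow (signal, threshold, human intervention) rather than the specific model, which illustrates the shift from reactive feedback to proactive intervention described above.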
2.4. The “AI+” HRBP Model
The HRBP model signifies the evolution of the human resources function beyond traditional administrative support, assuming multifaceted responsibilities as strategic partner, change agent, and employee advocate. Its typical organisational structure is the three-pillar model, comprising Human Resources Business Partners (HRBPs), Shared Service Centres (HRSSCs), and Centres of Expertise (HRCOEs). Within this framework, HRBPs embed in business units to perform the partnering roles described above, HRSSCs primarily handle standardised processes to enhance operational efficiency, and HRCOEs focus on designing systems and planning policies. The three pillars operate synergistically to deliver comprehensive human resources solutions for the organisation.
With the deepening application of AI technology, AI-driven Shared Service Centres (AI-HRSSCs) can efficiently handle tasks such as recruitment searches, rostering optimisation, payroll processing, and records management through intelligent knowledge bases, voice interaction systems, optical character recognition technology, and automated databases. At the business partner level, AI-HRBP leverages big data and sentiment analysis to assist in human resource planning, employee relations coordination, training needs assessment, and management system development, significantly enhancing responsiveness to business requirements. At the centre of expertise level, AI-HRCOE constructs predictive models, integrates expert systems with policy case libraries, and simulates human resource development trends to optimize policy formulation. It provides data-driven decision-making recommendations for complex management issues (Zhang & Huo, 2021).
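To make the predictive-modelling role attributed to AI-HRCOE concrete for students, the minimal sketch below fits a turnover-risk classifier on synthetic employee records with scikit-learn. The feature names, data, and model choice are illustrative assumptions rather than a description of any particular system.

```python
# Illustrative sketch: a simple turnover-risk model of the kind an AI-HRCOE might build.
# Synthetic records and feature choices are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per employee: [tenure_years, overtime_hours_per_week, engagement_score_0_to_1]
X = np.array([
    [0.5, 12, 0.3],
    [4.0,  2, 0.8],
    [1.0, 10, 0.4],
    [6.0,  3, 0.9],
    [2.0,  9, 0.5],
    [8.0,  1, 0.7],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = left within a year, 0 = stayed

model = LogisticRegression().fit(X, y)

# Score a current employee and flag elevated turnover risk for HR follow-up.
new_employee = np.array([[1.5, 11, 0.35]])
risk = model.predict_proba(new_employee)[0, 1]
print(f"Predicted turnover risk: {risk:.0%}")
```

Such toy models are useful only for teaching the workflow of features, labels, predicted risk, and human follow-up; real deployments require far larger datasets, validation, and the fairness and privacy safeguards discussed above.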
This transformation also imposes new demands on the cultivation of HRM professionals. In practical training, students must not only master how to understand business needs through interviews and observation, but also learn to utilize AI tools to analyse business data—including project progress, market trends, and performance metrics—thereby proactively diagnosing talent management risks within business departments. Examples include gaps in succession planning for core positions and bottlenecks in team collaboration efficiency.
3. Systematic Challenges in AI-Driven HRM
While AI brings transformative potential to HRM, its implementation is fraught with multifaceted challenges that extend beyond individual modules. These challenges can be systematically categorized into technical, ethical, organizational, and legal dimensions, providing a clearer framework for understanding the risks and prerequisites for successful AI integration.
3.1. Technical Challenges
The foundation of effective AI is quality data. HR departments often grapple with incomplete, siloed, or low-quality data, leading to unreliable algorithmic outputs (“garbage in, garbage out”). Furthermore, the “black-box” nature of many complex AI models, such as deep learning networks, poses a significant hurdle. The lack of explainability in decisions—like why a candidate was rejected or a performance score was assigned—can erode trust and complicate debugging.
3.2. Ethical Challenges
The ethical dimension is arguably the most consequential. Algorithmic bias is a critical concern: if AI is trained on historical data that reflects past discriminatory hiring or promotion practices, it can perpetuate and even amplify these biases. Ensuring fairness, accountability, and transparency in AI-driven decisions is therefore a major ethical imperative. Additionally, the extensive data collection required for personalized HR services raises serious questions about employee privacy and informed consent.
3.3. Organizational Challenges
The introduction of AI often triggers employee anxiety about job displacement, deskilling, or a loss of autonomy. This “AI anxiety” can manifest as resistance to change and lower engagement. Cultivating a digital-ready culture that views AI as a tool for augmentation rather than replacement is essential. This requires significant change management, upskilling initiatives, and redefining roles to foster effective human-machine collaboration.
3.4. Legal Challenges
The regulatory landscape for AI in HR is still evolving. Organizations must navigate compliance with emerging laws concerning data protection, algorithmic transparency, and non-discrimination. The absence of clear legal precedents for liability in cases of algorithmic error creates uncertainty for businesses.
4. Summary of Empirical Research Variables
Currently, the application of AI technology primarily impacts employees at psychological and behavioral levels.
Psychologically, research primarily examines employees’ perceptions of fairness regarding AI-driven managerial decisions. Pei et al. (2021) demonstrated through procedural justice theory that employees generally perceive AI algorithmic decisions as less transparent than those made by human managers, leading to lower perceived procedural fairness. Concurrently, organizational inclusivity moderates this relationship: employees in less inclusive environments experience a more pronounced negative impact on perceived fairness from AI decisions. Lou (2025) conducted research in recruitment scenarios, further revealing that only 34.6% of job seekers perceived AI systems as fair. This figure is significantly lower than the acceptance rate for traditional manual screening, with lower-educated and older groups being more likely to question the fairness of AI systems. Tortorella et al. (2025) recognized the importance of organizational culture, with lean organizations emphasizing a people-centred ethos that encourages employee participation in continuous improvement. AI is more readily embraced within such a cultural environment and can enhance staff engagement. These findings impose new requirements on practical teaching. Routine teaching content must transcend basic technical operations, emphasizing guidance for students to deeply contemplate ethical issues arising from AI applications. For instance, courses could employ simulated case studies to explore mitigating negative employee sentiments triggered by AI through enhancing algorithmic transparency, optimizing feedback mechanisms, and fostering inclusive organizational cultures.
At the behavioral level, research focuses on employees’ adaptive behaviour and innovation performance within AI environments. Ye et al. (2025) found that when experiencing AI-related stress, employees employ distinct work restructuring strategies that influence their dual innovation behaviour, with risk propensity playing a key moderating role in this process. Zhang and Chen (2024) revealed that AI anxiety exerts a double-edged sword effect on employee innovation through work restructuring. Furthermore, multiple studies indicate that employees’ digital self-efficacy and self-leadership are key variables in helping staff proactively address AI challenges and stimulate digital creativity (Chi et al., 2025; Xie et al., 2025; Wang et al., 2025). Su (2025) found that organizational training effectively alleviates anxiety and resistance stemming from technological complexity.
The empirical findings on key variables such as AI anxiety, perceptions of procedural fairness, and digital self-efficacy provide a critical evidence-based foundation for designing the HRM curriculum. Understanding these psychological and behavioral responses is not merely academic; it must directly inform pedagogical content and approaches. For instance:
Courses must address AI anxiety by incorporating modules that demystify how AI works, showcasing its role as an assistive tool, and teaching change management techniques that students can later use to support employees.
The recurring issue of low perceived procedural fairness necessitates that teaching emphasizes the “human-in-the-loop” principle. Practical exercises should focus on designing transparent HR processes where AI provides recommendations, but human managers provide explanations and handle exceptions, thereby enhancing fairness perceptions.
To foster digital self-efficacy and innovation behavior, the curriculum should move beyond theory to include hands-on projects where students use AI tools for tasks like analyzing employee sentiment data or designing a digital upskilling pathway. This builds their confidence and competence in leveraging technology to solve real HR problems.
These studies provide concrete focal points for practical teaching. Instructional design should prioritise cultivating students’ ability to “empower employees and stimulate innovation.” Through project tasks such as “Employee Empowerment Scheme Design,” students should be trained to formulate training plans, design personalized development pathways, and assist employees in achieving co-evolution with AI through work redesign—all aimed at enhancing employees’ digital self-efficacy. This requires students not only to comprehend the technology itself but also to master intervention strategies for guiding positive employee psychology and helping staff adapt to their environment.
5. Example Teaching Module: AI Recruitment Algorithm Audit
To translate the paper’s core themes into practical learning, a simulation-based teaching module is outlined here for reference. In this project, students design and then critically audit a hypothetical AI-powered recruitment algorithm for a given company.
Phase 1: Algorithm Design. Student teams first design an algorithm’s logic for screening resumes for a specific role (e.g., a marketing manager). They must define the key criteria (skills, experience, education) and their relative weights.
Phase 2: Bias Testing and Fairness Audit. Teams are then provided with a synthetic dataset of resumes that includes protected attributes (e.g., gender, age, university tier). They must run their algorithm and analyze the output for potential disparate impact, checking whether the selection rate for any demographic group falls markedly below that of the others (a code sketch illustrating this check follows Phase 4).
Phase 3: Mitigation and Redesign. Based on their audit findings, students must propose and implement technical and procedural modifications to mitigate identified biases. This could involve adjusting weightings, using bias-detection toolkits, or designing a “de-biasing” intermediate step.
Phase 4: Ethical Reflection and Reporting. Finally, students submit a report justifying their design choices, detailing the audit results, explaining their mitigation strategies, and outlining a governance plan for the algorithm’s ongoing monitoring in an organizational context. This project directly addresses challenges like algorithmic fairness and transparency while cultivating critical technical, analytical, and ethical competencies.
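For reference, the minimal sketch below shows the kind of artefact student teams might produce across Phases 1 to 3: a weighted screening rule, a selection-rate audit using the four-fifths (80%) rule, and a simple re-weighting mitigation. The candidate records, criterion weights, score cutoff, and threshold are synthetic classroom assumptions, not a prescribed implementation.

```python
# Teaching sketch for the audit module: weighted screening (Phase 1),
# adverse-impact check using the four-fifths rule (Phase 2),
# and a simple re-weighting mitigation (Phase 3).
# All weights, records and thresholds are synthetic and illustrative.

CANDIDATES = [
    {"id": 1, "gender": "F", "skills": 0.9,  "experience": 0.4, "education": 0.8},
    {"id": 2, "gender": "M", "skills": 0.7,  "experience": 0.9, "education": 0.6},
    {"id": 3, "gender": "F", "skills": 0.8,  "experience": 0.5, "education": 0.9},
    {"id": 4, "gender": "M", "skills": 0.75, "experience": 0.8, "education": 0.5},
    {"id": 5, "gender": "F", "skills": 0.7,  "experience": 0.3, "education": 0.7},
    {"id": 6, "gender": "M", "skills": 0.5,  "experience": 0.9, "education": 0.4},
]

def screen(weights, cutoff=0.65):
    """Return the ids of candidates whose weighted score reaches the cutoff."""
    selected = set()
    for c in CANDIDATES:
        score = sum(weights[k] * c[k] for k in weights)
        if score >= cutoff:
            selected.add(c["id"])
    return selected

def selection_rates(selected):
    """Selection rate per gender group."""
    rates = {}
    for g in {"F", "M"}:
        group = [c for c in CANDIDATES if c["gender"] == g]
        hits = [c for c in group if c["id"] in selected]
        rates[g] = len(hits) / len(group)
    return rates

def impact_ratio(rates):
    """Four-fifths rule: a ratio below 0.8 signals potential adverse impact."""
    lo, hi = min(rates.values()), max(rates.values())
    return (lo / hi) if hi else 1.0

# Phase 1: initial weights lean heavily on prior experience.
weights_v1 = {"skills": 0.2, "experience": 0.6, "education": 0.2}
rates_v1 = selection_rates(screen(weights_v1))
print("v1 rates:", rates_v1, "impact ratio:", round(impact_ratio(rates_v1), 2))

# Phase 3: rebalance weights toward demonstrable skills and re-audit.
weights_v2 = {"skills": 0.5, "experience": 0.25, "education": 0.25}
rates_v2 = selection_rates(screen(weights_v2))
print("v2 rates:", rates_v2, "impact ratio:", round(impact_ratio(rates_v2), 2))
```

Comparing the two impact ratios lets students see how a single design choice, here the over-weighting of prior experience, can produce adverse impact, and why the governance plan in Phase 4 should require such audits to be repeated whenever weights or data change.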
6. Research Summary and Future Outlook
This paper systematically reviews the latest theoretical research and practical applications in the field of “AI + HRM,” focusing on specific AI applications within core modules such as recruitment, training, performance evaluation, and HR Business Partnership (HRBP), alongside their dual implications. Research indicates that AI technology, leveraging data-driven and intelligent decision-making mechanisms, significantly enhances the efficiency and personalization of HR services. It demonstrates strong application potential particularly in scenarios such as CV screening, precise training content recommendations, and dynamic performance feedback. However, the widespread penetration of AI has also given rise to issues, including insufficient algorithmic transparency, weakened perceptions of fairness among employees, heightened data security risks, and obstacles to human-machine collaboration, increasingly highlighting its double-edged sword characteristics.
Within empirical research, existing literature predominantly focuses on employees’ cognitive and behavioural responses to AI, such as perceptions of procedural fairness, AI-related anxiety, job redesign behaviours, and innovation performance. This reveals the complex psychological logic and behavioural mechanisms underlying technological adoption. These findings provide crucial theoretical underpinnings and empirical evidence for understanding AI’s practical impact within organizational contexts.
Crucially, against the backdrop of interdisciplinary convergence advocated by the New Liberal Arts initiative, this paper further explores the implications of AI technological advancement for the practical teaching framework of HRM programmes. Findings indicate that future talent development in HRM must transcend the limitations of traditional liberal arts education models. It necessitates establishing an interdisciplinary practical teaching system that integrates technological application capabilities, data analytics literacy, ethical considerations, and humanistic values. This approach cultivates students’ comprehensive competencies to navigate complex management scenarios in the intelligent era.
Looking ahead, research and practice in “AI + HRM” must advance in depth across the following dimensions. Technologically, efforts should strengthen the explainability and operational transparency of AI algorithms, developing more inclusive and equitable intelligent management systems to guard against algorithmic bias and discrimination. Organisationally, a new “human-machine collaboration” model for HRM should be established, clearly delineating the roles of AI and human practitioners to enhance overall management decision quality and employee service experience. Ethics and governance require robust data security safeguards and privacy protection mechanisms, alongside the formulation of ethical guidelines and policy frameworks for AI deployment in HR contexts. Theoretical research should broaden its scope to examine AI’s profound impacts on employee career trajectories, organizational cultural evolution, and the reconfiguration of labour relations, exploring differentiated effects across cultural and industry contexts. Practical innovation should drive the integrated application of AI with emerging technologies such as VR/AR and blockchain, fostering a more intelligent, immersive, and credible HRM ecosystem.
Practical teaching and talent cultivation are pivotal to addressing these challenges and seizing developmental opportunities. Future efforts should focus on deepening interdisciplinary curriculum integration, systematically promoting cross-fertilisation between core HRM courses and subjects like data science, AI fundamentals, and business ethics. Innovative teaching models should be developed, vigorously promoting practice-based learning through real-world corporate projects. Leveraging virtual simulation and VR/AR technologies to construct highly realistic management decision-making scenarios will enable students to enhance their technical application skills and humanistic literacy while tackling practical challenges such as “algorithmic bias correction” and “human-machine collaboration process design”. A new ecosystem for industry-education integration must be established, strengthening deep collaboration with HR technology enterprises. Joint initiatives should include establishing practical teaching bases, developing distinctive teaching cases, and co-building specialised faculty teams to ensure practical teaching content dynamically aligns with industry technological trends and talent demand standards. Ethical cultivation must be reinforced by embedding technology ethics education throughout practical teaching, fostering students’ sensitivity and sense of responsibility towards data privacy protection and algorithmic fairness.
The deep integration of AI and HRM has become an irreversible developmental trend. Future efforts must seek a dynamic equilibrium between efficiency and equity, technology and humanism, innovation and regulation, thereby achieving synergistic advancement in technology-enabled empowerment and people-centred management.
Funding
Project 1:
Anhui University of Applied Technology 2024 Institutional Quality Engineering Project:
Project Title: Exploring the Construction of Practical Teaching Systems for Vocational Higher Education Institutions’ Human Resource Management Programmes in the Context of New Liberal Arts and Rural Revitalisation.
Project Number: 2024xjjxyjy28.
Project 2:
Anhui University of Applied Technology 2024 Institutional Quality Engineering Key Project:
Project Title: Research on Digital and Intelligent Upgrading Pathways for Vocational University HR Programmes Based on the DPSIR Model.
Project Number: 2024xjjxyjz18.