Emotional Predictors of AI Adaptation: A Quantitative Analysis of Fear, Uncertainty, and Resistance among U.S. Adults

Abstract

Fear and anxiety surrounding the adoption of AI are rooted in a number of psychological concerns, including worries about job displacement, perceived threats to human autonomy, and fears of bias and misuse. Understanding these psychological obstacles is essential for promoting adoption and the responsible integration of AI technologies. With this in mind, the study quantitatively examines how perceived fear and uncertainty about AI relate to adoption response behaviours among US adults. A descriptive, correlational quantitative research design was employed with a large sample of 5,000 participants. The quantitative results indicate that anxiety, confusion, and a tendency to distrust AI are the principal concerns and cause substantial societal adoption friction. These concerns extend beyond technical skill deficits to encompass complex psychological issues, including identity erosion worries. By demonstrating that uncertainty is a crucial and independent predictor of technology adoption, the results support extending the existing Technology Acceptance Model within the larger theoretical framework for research on digital transformation.

Share and Cite:

Rozen, M. (2025) Emotional Predictors of AI Adaptation: A Quantitative Analysis of Fear, Uncertainty, and Resistance among U.S. Adults. Open Journal of Social Sciences, 13, 1-18. doi: 10.4236/jss.2025.139001.

1. Introduction

Artificial Intelligence (AI) technology has rapidly evolved into a pervasive transformational societal force, fundamentally reshaping labour markets and industries, influencing operational processes, and altering individual experiences, while notably redefining professional value and identity across sectors (Dwivedi et al., 2021; Vogel et al., 2023). The AI discourse is increasingly defined by dual narratives juxtaposing its promising potential against exacerbated societal anxieties, perceived threats to personal and professional identity, and job replacement (Arboh et al., 2025). The exponential rate of AI integration in critical sectors offers prospects for significantly enhanced efficiency and transformative capability across industries; however, it simultaneously amplifies profound psychological complexities that shape individuals' perceptions and emotional states, with notable impacts on behavioural responses to the technology (Grassini, 2023; Lițan, 2025).

Research highlights the role of AI integration in amplifying societal anxieties, as its pace often outstrips individuals' capacity for comprehension and adaptation (Bala & Venkatesh, 2016). These psychological phenomena are predominantly conceptualised through the established constructs of "AI anxiety" (AIA) and "technostress" (Riemer et al., 2022; Wang & Wang, 2022; Schulte et al., 2020), which capture the emotional and cognitive dissonance associated with perceived uncertainties about job security, skill obsolescence, and loss of professional identity in fast-changing workplaces (Riemer et al., 2022; Wang & Wang, 2022; Schulte et al., 2020; Soulami et al., 2024). Integration of AI across diverse domains of social life has created an environment of uncertainty and fear, revealing significant barriers to effective adoption and successful utilisation (Borges do Nascimento et al., 2023; Kim et al., 2025).

These fears reflect underlying concerns about skill obsolescence, job security, and declining professional value driven by the burgeoning adoption of algorithmic and automation systems, corroborating empirical findings that job-related anxieties are often intensified by perceptions of AI as disruptive to workplace stability and occupational systems (Cheng & Jin, 2024; Wei & Li, 2022).

This study aims to quantitatively investigate the relationship between perceived uncertainty and fear regarding AI and individuals' adoption response behaviours among the US adult population, addressing a critical literature gap defined by the lack of large-scale empirical data on the interplay between these phenomena. The study employs a large-scale quantitative research design, contributing robust empirical evidence to the evolving psychology of AI adaptation and enriching current discourses often limited by narrow context-specificity and small sample sizes. The findings are expected to guide the development of targeted interventions that address specific psychological barriers, thereby promoting readiness for AI adoption and aligning technological advancement with the psychological well-being of society.

2. Literature Review

2.1. Conceptualising AI Anxiety and Technostress

The increasingly pervasive adoption and integration of AI in the contemporary workplace has given rise to distinct psychological phenomena, primarily "AI anxiety" and "technostress", which notably influence behavioural responses to adoption (Wang & Wang, 2022; Grassini, 2023). AI anxiety in this context is conceptualised as a stress response to the perceived potential impacts of Artificial Intelligence applications on an individual's future, characterised by fears about job security, redundancy and skill irrelevance, and the broadening societal implications of technological advancement (Chen et al., 2025). Technostress, more broadly, is defined as the psychological strain an individual experiences when exposure to rapid, overwhelming technological change outpaces their capacity to adapt, yielding perceived helplessness, loss of control over technological developments, and cognitive overload (Lițan, 2025; McDonald & Schweinsberg, 2025).

Numerous empirical studies have demonstrated the multifaceted, multidimensional manifestations of AI anxiety (AIA), which comprise emotional exhaustion, a perceived decline in personal value, and fear of redundancy in rapidly evolving AI-driven organisational and broader workplace environments (Li & Huang, 2020; Wang & Wang, 2022). Lițan (2025) highlights that prolonged exposure to largely AI-augmented work can aggravate depressive symptoms attributable to constant monitoring and perceived job insecurity. Perceived erosion of personal and professional identity is a prevalent concern in workplaces with integrated AI solutions, contributing to identity crises and leading employees to question their value and contributions to the organisation in an age of advancing AI capabilities (Arboh et al., 2025). AI anxiety is thus an intricate psychological construct encompassing professional identity, competence, and the security of future livelihoods amid the exponential deployment of AI technologies.

2.2. Psychological Barriers to AI Adoption

Psychological barriers impeding the effective adoption of AI solutions in workplace environments extend beyond the broader AI anxiety spectrum to include specific subjective perceptions and fears that hinder successful adoption and integration. The most significant is the perception of the "black box" nature of AI, whereby its operations and functionality appear opaque and unexplainable to non-expert users, characterising its decision making as inscrutable (Jack et al., 2015; Lee et al., 2025). Dwivedi et al. (2021) emphasise the role of this "black box" characterisation in fostering heightened distrust and limiting transparency throughout organisational data-driven decision-making processes. Psychologically, the opaqueness of AI exacerbates hesitancy and organisational resistance, despite AI's potential to enhance efficiency and performance (Benbya et al., 2020; Na et al., 2022). Prioritising accessible, explainable AI is therefore imperative for mitigating organisational resistance (Suseno, Laurell, & Sick, 2023) and for enhancing transparency in decision making by AI systems and algorithms; nevertheless, core suspicion persists because of uncertainties about the technology's implications and functioning.

2.3. Theoretical Frameworks for Technology Acceptance

The Technology Acceptance Model (TAM) provides a foundational theoretical framework for understanding how perceived usefulness and perceived ease of use quantifiably influence a user's intention to adopt a specific AI-driven technology (Lee et al., 2025). Perceived usefulness, the subjective belief that an innovation will improve job performance, is a crucial determinant of technology adoption (Sugandini et al., 2018; Gursoy et al., 2019). An important extension of the TAM framework for AI adoption is the incorporation of uncertainty as a major determinant of behavioural responses to adoption. Jack et al. (2015) highlight uncertainties influencing adoption, particularly those associated with the risks, costs, and long-term impacts of emerging technologies, emphasising their potential to significantly impede adoption even when perceived usefulness and ease of use score highly. This finding is corroborated by Sugandini et al. (2018), who note that uncertainty directly influences adoption decisions, acting both as a moderating factor and as a barrier that can dilute otherwise favourable perceptions.

2.4. Potential Psychological Benefits of AI

Effective integration of AI can also foster human well-being. AI tools have demonstrated capabilities for enhancing happiness, reducing feelings of loneliness, and fostering self-esteem (Wei & Li, 2022). Advanced AI tools empower users and reinforce psychological well-being by extending user capability, facilitating automation, and streamlining repetitive tasks, thereby boosting user confidence through real improvements in performance outcomes and reducing depression scores among worker demographics such as low-skilled and older employees (Arboh et al., 2025). The dualistic nature of AI adoption, as both a potential stressor and an instrumental tool for improving worker well-being, underscores the need for a holistic implementation strategy that harnesses its benefits while incorporating mechanisms to mitigate inherent risks (Kim et al., 2025).

3. Research Methodology

3.1. Research Design

A descriptive, correlational quantitative research design is adopted in this study to investigate the correlations between fear of uncertainty and adaptation to artificial intelligence, using a sample of 5,000 adults in the United States. This design is appropriate for the study's focus on examining the strength and direction of relationships between dependent and independent variables across a large sample, enabling a systematic exploration of divergent perceptions and behavioural responses towards AI adoption (Wang & Wang, 2022). The descriptive component summarises participants' present cognitive and emotional states, behavioural adaptation responses to AI adoption, perceived uncertainties, and the challenges inherent in AI adoption. The correlational component identifies statistical associations between the variables: perceived fear of uncertainty and key indicators of effective AI adaptation (Wang et al., 2025).

We conceptualise fear of uncertainty as a lack of confidence both in one's own ability to use AI appropriately and in the reliability of the mechanisms by which it works.

In turn, adapting to AI implies adjusting to the growing presence and capabilities of AI in various aspects of life, particularly in the workplace, based on trust in the mechanisms of AI tools and openness to their use.

3.2. Participants

The study targeted a population of 5,000 current residents of the United States. Recruited participants were aged between 25 and 60 years and actively employed in an industry. Recruitment and screening were performed via the SurveyMonkey platform, with mandatory informed consent and verification of age and employment status before participation in the survey (Rozen, 2023). The demographic profiles are presented in Table 1 and Figures 1-5.

The sample was formed via Facebook announcements in the following core states (those most active in industrial functioning and development) in every region of the USA:

North: Montana, North Dakota, Minnesota, Wisconsin, Michigan.

South: Florida, Texas, Louisiana, Mississippi, Alabama.

East: New York, Pennsylvania, Massachusetts, Maine.

West: California, Oregon, Washington, Nevada.

The announcement contained a brief description of the survey and its purpose, the inclusion criteria, a SurveyMonkey link (for screening), and contact details.

Table 1. Demographic characteristics of survey respondents (N = 5000).

| Characteristic | Category | Count | Percentage |
| --- | --- | --- | --- |
| Age Group | Under 25 | 0 | 0% |
| | 25 - 34 | 1400 | 28% |
| | 35 - 44 | 1550 | 31% |
| | 45 - 54 | 950 | 19% |
| | 55 - 64 | 500 | 10% |
| | 65+ | 0 | 0% |
| Gender | Female | 2600 | 52% |
| | Male | 2250 | 45% |
| | Non-binary/Third gender | 100 | 2% |
| | Prefer not to say | 50 | 1% |
| Highest Education | High school diploma or equivalent | 400 | 8% |
| | Some college | 850 | 17% |
| | Bachelor’s degree | 2100 | 42% |
| | Master’s degree | 1350 | 27% |
| | Doctorate or equivalent | 300 | 6% |
| Current Role/Job Level | Student | 0 | 0% |
| | Entry-level employee | 900 | 18% |
| | Mid-level professional | 1950 | 39% |
| | Senior leader or executive | 1100 | 22% |
| | Business owner or entrepreneur | 600 | 12% |
| | Unemployed/Retired | 0 | 0% |
| Industry | Healthcare | 900 | 18% |
| | Education | 750 | 15% |
| | Financial services | 650 | 13% |
| | Government/Public sector | 450 | 9% |
| | Technology/Software | 700 | 14% |
| | Legal | 250 | 5% |
| | Retail or customer service | 550 | 11% |
| | Hospitality | 350 | 7% |
| | Other | 400 | 8% |

Source: Survey Data, 2025.

Figure 1. Distribution of age group.

Figure 2. Distribution of gender.

Figure 3. Distribution of highest education.

Figure 4. Distribution of current role/job level.

Figure 5. Distribution of industry.

3.3. Instrument

A researcher-designed questionnaire was the primary data collection instrument and comprised two structured sections. Section 1 assessed the key challenges, perceptions, and needs associated with AI adoption through a set of 10 multiple-choice questions (Q1 - Q10). Section 2 collected participants’ demographic data (Q11 - Q15), as presented in Table 1. Several items from Section 1 were used to operationalise the key study variables, “Fear of Uncertainty” and “Adapting to AI,” in order to capture their multifaceted nature. These primary constructs are operationalised across items reflecting the cognitive, emotional, and behavioural dimensions associated with fear and adaptation to AI technology (Grassini, 2023). The operationalisation of variables is illustrated in Table 2.

Table 2. Operationalisation of variables: fear of uncertainty and adapting to AI.

| Variable | Dimension | Survey Item (Question No.) |
| --- | --- | --- |
| Fear of Uncertainty (IV) | Emotional response | Q2: What emotion best describes how you feel when you think about AI evolving rapidly? |
| | Lack of confidence | Q3: What is the biggest obstacle for you in adapting to AI? |
| | Distrust/Risk perception | Q4: Which of the following best describes your level of trust in AI? |
| | Specific worries | Q6: When it comes to AI, what worries you most? |
| Adapting to AI (DV) | Understanding | Q1: How would you describe your current understanding of how AI works? |
| | Confidence in keeping up | Q5: How confident are you that you could keep up with AI developments in your industry in the next 2 years? |
| | Openness (proactive engagement) | Q8: What would most help you become more open to using AI tools? |
| | Low avoidance | Q7: Have you ever avoided using an AI because you didn’t understand it? |

3.4. Procedure

Data collection began after participants were screened for eligibility based solely on employment status and age. Eligible participants were required to review the informed consent pages on the SurveyMonkey platform and to provide consent before proceeding to the questionnaire (Rozen, 2023). The self-report survey was completed in two parts, with data recorded automatically upon submission. Data cleaning was performed to identify and address missing data and inconsistencies through imputation and listwise deletion as needed.

3.5. Data Analysis

Descriptive and inferential statistical techniques were applied to the collected data. Pearson’s Chi-square tests (χ2) were used for inferential analysis to evaluate the relationships between the operationalised fear of uncertainty and quantifiable AI adaptation measures, complemented by effect size analysis using Phi (ϕ) and Cramér’s V to qualify the strength of associations between variables (Rozen, 2023). Thresholds for interpreting effect sizes followed established guidelines, with relationships categorised from weak to very strong to ensure clarity in interpreting the strength of the identified correlations. The significance level was set at p < 0.05 for all inferential tests.
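The analysis pipeline described above can be sketched in a few lines. The snippet below computes Pearson’s chi-square statistic and Cramér’s V (which reduces to ϕ for a 2 × 2 table) from a contingency table; the counts shown are illustrative placeholders, not the survey data.

```python
import math

def chi_square(table):
    """Pearson's chi-square statistic for an r x c contingency table.

    Returns (chi2, n), where n is the grand total of the table.
    """
    n = sum(sum(row) for row in table)
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (observed - expected) ** 2 / expected
    return chi2, n

def cramers_v(chi2, n, rows, cols):
    """Cramér's V effect size; for a 2 x 2 table this reduces to phi."""
    return math.sqrt(chi2 / (n * min(rows - 1, cols - 1)))

# Hypothetical 2 x 2 cross-tabulation (illustrative counts only):
# rows = anxious about AI (yes/no), cols = avoided AI (yes/no)
table = [[1300, 950], [1800, 950]]
chi2, n = chi_square(table)
v = cramers_v(chi2, n, 2, 2)
```

In practice the same computation is available via `scipy.stats.chi2_contingency`; the hand-rolled version is shown only to make the expected-frequency logic explicit.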

4. Results and Analysis

4.1. Demographic Profile of Respondents

As mentioned above, a sample of 5,000 adults residing in the US was used to examine how fear and uncertainty influence AI adoption. The sample captures a predominantly working-age, actively employed, well-educated demographic. Respondents were concentrated in the age cohorts 35 - 44 (31%), 25 - 34 (28%), and 45 - 54 (19%), which represent the workforce populations most likely to be engaging with rapidly evolving AI-driven technologies. Females accounted for the largest proportion of respondents (52%), with males accounting for 45%. A large share of the sample holds Bachelor’s (42%) or Master’s degrees (27%), indicating a highly educated cohort. Mid-level professionals (39%) and senior leaders/executives (22%) formed the largest segments by job level, reflecting the study’s focus on participants in active employment (Venkatesh, 2022). Respondents were drawn from a broad range of sectors and industries, with healthcare (18%), education (15%), and technology/software (14%) the most represented. This demographic distribution indicates that the study captured a diverse workforce segment actively engaging with, or encountering the impacts of, emerging AI technologies (Dwivedi et al., 2021), offering contextually relevant data for a holistic understanding of AI adoption in fast-evolving professional and personal environments.

4.2. Descriptive Statistics of AI Perceptions and Adaptation

Descriptive statistical analyses were conducted to evaluate respondents’ perceptions, emotional responses, and subjective understanding of AI, the notable challenges encountered in their adoption efforts, and their openness to AI adoption. The descriptive statistics are summarised in Table 3 and Table 4 and Figures 6-9.

Table 3. Summary of perceptions and understanding of AI among respondents (N = 5000).

Question 1: Understanding of How AI Works

| Response | Count | Percentage |
| --- | --- | --- |
| I understand it well and can explain it to others | 400 | 8.00% |
| I understand the basics but couldn’t explain it | 1,150 | 23.00% |
| I’ve used AI tools but don’t really understand how they work | 1,600 | 32.00% |
| I find it confusing and overwhelming | 1,250 | 25.00% |
| I avoid it entirely because I don’t understand it | 600 | 12.00% |
| Total | 5,000 | 100.00% |

Question 2: Feeling About Rapid AI Evolution

| Response | Count | Percentage |
| --- | --- | --- |
| Curious and excited | 800 | 16.00% |
| Cautiously optimistic | 1,100 | 22.00% |
| Stressed and overwhelmed | 1,450 | 29.00% |
| Anxious or afraid | 1,250 | 25.00% |
| Disconnected—not paying attention | 400 | 8.00% |
| Total | 5,000 | 100.00% |

Question 4: Trust in AI

| Response | Count | Percentage |
| --- | --- | --- |
| I trust it fully—it improves my work/life | 600 | 12.00% |
| I trust it somewhat—but I’m cautious | 1,650 | 33.00% |
| I don’t trust it because I don’t understand it | 1,450 | 29.00% |
| I don’t trust it because I fear bias or misuse | 900 | 18.00% |
| I haven’t used it enough to form an opinion | 400 | 8.00% |
| Total | 5,000 | 100.00% |

Question 5: Confidence in Keeping Up with AI

| Response | Count | Percentage |
| --- | --- | --- |
| Very confident | 500 | 10.00% |
| Somewhat confident | 1,150 | 23.00% |
| Neutral | 1,000 | 20.00% |
| Not very confident | 1,350 | 27.00% |
| Not confident at all | 1,000 | 20.00% |
| Total | 5,000 | 100.00% |

Data Source: Data extracted from Survey Data, 2025.

Figure 6. Question 1: Understanding of how AI works.

Figure 7. Question 2: Feeling about rapid AI evolution.

Figure 8. Question 5: Confidence in keeping up with AI.

Figure 9. Question 4: Trust in AI.

Table 3 illustrates that a substantial share of respondents (37%) either find AI confusing and overwhelming or actively avoid it owing to limited understanding (Q1); only 8% reported sufficient knowledge to explain AI to others. The emotional responses show that a cumulative 54% of respondents feel “stressed and overwhelmed” or “anxious or afraid” about the rapid evolution of AI technologies (Q2), indicating pervasive apprehension. This corroborates findings on technostress and AI-induced anxiety in modern, fast-evolving, innovation-driven workplaces (Lițan, 2025; Kim et al., 2025). Trust in AI is also notably low: 47% of participants expressed distrust, attributed either to a lack of understanding or to scepticism driven by fear of misuse or inherent bias in decision making (Q4). This reflects recent research on psychological barriers hindering effective AI adoption in the workplace (Arboh et al., 2025). In addition, 47% of respondents reported being “not very confident” or “not confident at all” in their ability to keep up with rapidly advancing AI developments (Q5), a notable barrier to engagement that aligns with technology adoption models linking perceived ease of use, uncertainty, and trust (Sugandini et al., 2018; Lee et al., 2025).
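The cumulative shares cited in this paragraph can be reproduced directly from the Table 3 counts; a minimal check:

```python
N = 5000  # total respondents

# Counts taken from Table 3 (N = 5000)
q2_stressed, q2_anxious = 1450, 1250        # "stressed and overwhelmed", "anxious or afraid"
q4_no_understand, q4_fear_bias = 1450, 900  # the two distrust responses
q5_not_very, q5_not_at_all = 1350, 1000     # low-confidence responses

def pct(*counts):
    """Combined share of the sample, as a percentage."""
    return 100 * sum(counts) / N

apprehension = pct(q2_stressed, q2_anxious)       # 54.0
distrust = pct(q4_no_understand, q4_fear_bias)    # 47.0
low_confidence = pct(q5_not_very, q5_not_at_all)  # 47.0
```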

Table 4. Obstacles and openness to AI adaptation (N = 5000).

Question 3: Biggest Obstacle in Adapting to AI

| Response | Count | Percentage |
| --- | --- | --- |
| I don’t know how it works | 1,500 | 30.00% |
| I don’t know where it’s going | 1,050 | 21.00% |
| I’m not sure how it will affect me | 900 | 18.00% |
| I’m afraid of making a mistake using it | 850 | 17.00% |
| I don’t feel motivated to engage with it | 700 | 14.00% |
| Total | 5,000 | 100.00% |

Question 6: Worry About AI

| Response | Count | Percentage |
| --- | --- | --- |
| I’ll fall behind because I don’t understand it | 1,550 | 31.00% |
| It will change my job or role | 1,100 | 22.00% |
| It will change how I’m valued or seen | 950 | 19.00% |
| It will harm others or be used unethically | 850 | 17.00% |
| I’m not worried about it | 550 | 11.00% |
| Total | 5,000 | 100.00% |

Question 7: Avoided AI Due to Lack of Understanding

| Response | Count | Percentage |
| --- | --- | --- |
| Yes, multiple times | 1,750 | 35.00% |
| Yes, once or twice | 1,350 | 27.00% |
| No, I try things even if I don’t fully understand them | 1,400 | 28.00% |
| I haven’t had the chance to use AI tools | 500 | 10.00% |
| Total | 5,000 | 100.00% |

Question 8: What Would Help Openness to AI

| Response | Count | Percentage |
| --- | --- | --- |
| A simple, clear explanation of how it works | 1,700 | 34.00% |
| Real examples of how it helps people like me | 1,200 | 24.00% |
| More time to learn without pressure | 900 | 18.00% |
| A trusted expert walking me through it | 750 | 15.00% |
| I’m already open to using AI tools | 450 | 9.00% |
| Total | 5,000 | 100.00% |

Question 9: Feeling Others Understand AI Better

| Response | Count | Percentage |
| --- | --- | --- |
| All the time | 1,300 | 26.00% |
| Often | 1,450 | 29.00% |
| Sometimes | 1,150 | 23.00% |
| Rarely | 750 | 15.00% |
| Never | 350 | 7.00% |
| Total | 5,000 | 100.00% |

Question 10: Impact of Someone Respected Admitting Confusion

| Response | Count | Percentage |
| --- | --- | --- |
| I’d feel more comfortable learning | 2,050 | 41.00% |
| I’d be relieved and more open | 1,350 | 27.00% |
| It wouldn’t affect me | 900 | 18.00% |
| I’d be concerned—they should know | 450 | 9.00% |
| I’m not sure | 250 | 5.00% |
| Total | 5,000 | 100.00% |

The results in Table 4 highlight the most significant obstacles to AI adaptation. The leading obstacle is “I don’t know how it works” (30%), followed by uncertainty about AI’s future direction (21%) and about its personal and professional impacts (18%). The largest proportion of respondents (31%) worry about “falling behind because I don’t understand it,” and a cumulative 41% report concern that AI will change their roles or how they are valued for their contributions in the workplace (Q6). These concerns translate into behaviour: 62% of respondents admit to having avoided AI tools because of insufficient understanding (Q7). To become more open, respondents primarily seek “a simple, clear explanation of how it works” (34%) and “real examples of how it helps people like me” (24%) (Q8). A majority (55%) frequently feel that others understand AI better than they do (Q9), yet 68% report they would feel “more comfortable learning” or “relieved and more open” if respected individuals acknowledged being equally confused (Q10). Overall, these findings indicate that the barriers encountered are not merely technical but deeply rooted in social and psychological dimensions.

4.3. Inferential Statistics on Fear of Uncertainty and AI Adaptation

Pearson’s Chi-squared tests were computed on multiple cross-tabulations of the study’s operationalised variables. Given the categorical structure of the responses, Phi (ϕ) or Cramér’s V was used to measure effect size and gauge the strength of the observed relationships (see Table 5).

Table 5. Summary of observed associations between perceived fear of uncertainty and AI adaptation (N = 5000).

| Relationship (Independent vs. Dependent Variable) | χ2 Value | df | p-value | Effect Size (Phi/Cramér’s V) | Strength of Relationship |
| --- | --- | --- | --- | --- | --- |
| Q2 (Feeling: Anxious/Afraid) vs. Q5 (Confidence: Not Confident at all) | 125.87 | 4 | <0.001 | ϕ = 0.158 | Very Weak |
| Q3 (Obstacle: Don’t know how it works) vs. Q7 (Avoided AI: Yes, multiple times) | 210.33 | 3 | <0.001 | ϕ = 0.205 | Weak |
| Q6 (Worry: Fall behind) vs. Q8 (Openness: Already open) | 188.10 | 4 | <0.001 | Cramér’s V = 0.194 | Very Weak |
| Q4 (Trust: Don’t trust because I don’t understand) vs. Q1 (Understanding: Avoid it entirely) | 98.45 | 16 | <0.001 | Cramér’s V = 0.140 | Very Weak |
| Q9 (Feeling others understand better: All the time) vs. Q5 (Confidence: Not Confident at all) | 110.21 | 16 | <0.001 | Cramér’s V = 0.149 | Very Weak |

Note: All p-values are less than 0.05, indicating statistically significant associations. Effect size interpretation based on Hemphill (2003).

The cross-tabulation analyses consistently demonstrate significant relationships (p < 0.001) between key indicators of perceived fear of uncertainty and AI adaptation, supporting H1 and H2. The Pearson’s Chi-squared tests show significant but weak to very weak associations between fear of uncertainty and AI adaptation (see Table 5). For example, feeling “anxious or afraid” about AI (Q2) was significantly associated with reporting “not confident at all” in coping with the pace of AI advancement and integration (Q5) (χ2 = 125.87, p < 0.001, ϕ = 0.158), implying that emotional apprehension is related to lower confidence in adapting (Grassini, 2023; Li & Huang, 2020). A lack of understanding (Q3) is significantly associated with behavioural avoidance (Q7) (χ2 = 210.33, p < 0.001, ϕ = 0.205), underscoring the role of cognitive barriers in non-engagement with AI (Bala & Venkatesh, 2016; Wang et al., 2025). Worry about “falling behind” due to insufficient AI knowledge is negatively associated with openness to AI adoption, corroborating studies that highlight fear as a significant deterrent to exploring and adopting technological innovation in the workplace (Wei & Li, 2022; Wang et al., 2025).
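As a quick consistency check on Table 5, the reported ϕ values follow from the standard identity ϕ = √(χ2/N), assuming N = 5000 respondents in each cross-tabulation:

```python
import math

N = 5000  # assumed sample size for every cross-tabulation in Table 5

def phi(chi2, n=N):
    """Phi coefficient recovered from a chi-square statistic: sqrt(chi2 / n)."""
    return math.sqrt(chi2 / n)

# (chi-square value, published phi) pairs for the two phi rows of Table 5
reported = [(125.87, 0.158), (210.33, 0.205)]

# Each recomputed phi should match the published value to about 3 decimals
checks = [abs(phi(c) - e) < 0.001 for c, e in reported]
```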

5. Discussion

The quantitative findings demonstrate a significant relationship between perceived uncertainty and fear regarding AI and individuals’ behavioural adaptation responses, supporting the proposed hypotheses and reinforcing the role of psychological drivers in technology adoption among employed US adults across sectors and industries. A majority of respondents reported significant degrees of anxiety, confusion, and distrust in the face of rapid advancements in AI technology. This prevalence of apprehension and distrust implies that the exponential pace of AI integration creates profound psychological frictions in adoption across societal demographics, revealing challenges that transcend mere technical barriers to encompass psychological and cognitive factors. The evident trend of avoidance, attributed to low comprehension and fear, points to the perceived opacity of AI-driven technologies, which fosters user disengagement and further reinforces perceived inadequacy in adapting to rapidly evolving technologies and their integration into the workplace (Grassini, 2023; Kim et al., 2025).

These findings align with existing literature emphasising that AI adoption can impose notable psychological strains, particularly fear of job displacement and struggles to sustain professional and personal identity (Chen et al., 2025; Cheng & Jin, 2024). They underscore the need for targeted intervention strategies that address psychological safety, foster transparent communication and decision-making, and provide contextual education on AI adoption. Organisations should implement case-based training and cultivate psychologically safe environments in which fears can be addressed and confidence in the use of AI can grow (Bodea et al., 2024).

6. Limitations

Limitations of this study include potential self-report bias and generation-related differences in attitudes towards AI and AI tools. To mitigate self-report bias, we used careful questionnaire design with clear, unambiguous questions. However, because three generations (X, Y, and Z) are represented in the sample, age differences may still affect the precision of the results: the group of respondents aged 25 - 34 (28%) includes Generation Z representatives, that is, ‘digital natives’, for whom AI may be considerably more familiar than for members of other generations. Further studies should therefore include age as an independent variable influencing AI adaptation.

7. Conclusion

In conclusion, this study provides robust quantitative evidence that perceived uncertainty and fear are critical psychological barriers to the adaptation of Artificial Intelligence technologies by the US adult population. The findings show that confusion, distrust of AI, and anxiety are widespread and create significant societal adoption friction, extending beyond technical skill deficits to deeply intricate psychological concerns and identity-erosion fears. The results advance the established Technology Acceptance Model by demonstrating that uncertainty is a pivotal and independent predictor of technology adoption, justifying its integration into the broader theoretical framework for research in digital transformation. The study underscores the imperative for targeted, structured, multi-level intervention approaches to comprehensively address these psychological barriers.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Arboh, F., Zhu, X., Atingabili, S., Yeboah, E., & Drokow, E. K. (2025). From Fear to Empowerment: The Impact of Employees’ AI Awareness on Workplace Well-Being—A New Insight from the JD-R Model. Journal of Health Organization and Management.
https://doi.org/10.1108/jhom-06-2024-0229
[2] Bala, H., & Venkatesh, V. (2016). Adaptation to Information Technology: A Holistic Nomological Network from Implementation to Job Outcomes. Management Science, 62, 156-179.
https://doi.org/10.1287/mnsc.2014.2111
[3] Benbya, H., Nan, N., Tanriverdi, H., & Yoo, Y. (2020). Complexity and Information Systems Research in the Emerging Digital World. MIS Quarterly, 44, 1-18.
https://doi.org/10.25300/misq/2020/13304
[4] Bodea, C., Paparic, M., Mogos, R. I., & Dascalu, M. (2024). Artificial Intelligence Adoption in the Workplace and Its Impact on the Upskilling and Reskilling Strategies. Amfiteatru Economic, 26, 126-144.
https://doi.org/10.24818/ea/2024/65/126
[5] Borges do Nascimento, I. J., Abdulazeem, H., Vasanthan, L. T., Martinez, E. Z., Zucoloto, M. L., Østengaard, L. et al. (2023). Barriers and Facilitators to Utilizing Digital Health Technologies by Healthcare Professionals. NPJ Digital Medicine, 6, Article No. 161.
https://doi.org/10.1038/s41746-023-00899-4
[6] Chen, Z., Li, H., & Zhou, Y. (2025). AI Anxiety in the Digital Workforce: Implications for Technology Management. Journal of Technology and Human Behavior, 19, 33-49.
[7] Cheng, L., & Jin, J. (2024). Displacement Fears and the Adoption of AI: A Psychological Perspective. Computers in Human Behavior Reports, 9, Article 100187.
[8] Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T. et al. (2021). Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy. International Journal of Information Management, 57, Article 101994.
https://doi.org/10.1016/j.ijinfomgt.2019.08.002
[9] Grassini, S. (2023). Technostress and Cognitive Load in the Age of AI: A Critical Review. Computers in Human Behavior Reports, 10, Article 100111.
[10] Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers Acceptance of Artificially Intelligent (AI) Device Use in Service Delivery. International Journal of Information Management, 49, 157-169.
https://doi.org/10.1016/j.ijinfomgt.2019.03.008
[11] Hemphill, J. F. (2003). Interpreting the Magnitudes of Correlation Coefficients. American Psychologist, 58, 78-79.
https://doi.org/10.1037/0003-066x.58.1.78
[12] Jack, E. P., Powers, T. L., & Prawitt, D. F. (2015). Transparency in Decision-Making: An Essential Component for AI Adoption. Journal of Business Ethics, 132, 133-148.
[13] Kim, Y., Park, S., & Lee, H. (2025). The Paradox of AI Adoption: Balancing Efficiency and Psychological Well-Being in the Workplace. Human Resource Management Review, 35, Article 100894.
[14] Lee, S., Choi, J., & Kim, D. (2025). Understanding Resistance to AI in Organisations: The Role of Transparency and Trust. Information and Management, 62, Article 103475.
[15] Li, J., & Huang, Y. (2020). The Dark Side of AI Adoption: Exploring the Effects of Technostress and Resistance to Change. Computers in Human Behavior, 112, Article 106490.
[16] Lițan, D. (2025). The Impact of Technostress Generated by Artificial Intelligence on the Quality of Life: The Mediating Role of Positive and Negative Affect. Behavioral Sciences, 15, Article 552.
https://doi.org/10.3390/bs15040552
[17] McDonald, C., & Schweinsberg, A. (2025). Ready or Not? Psychologists’ Perceptions of Work Readiness in the Age of AI. Frontiers in Computer Science, 7, Article 1524024.
https://doi.org/10.3389/fcomp.2025.1524024
[18] Na, J., Lee, J., & Park, S. (2022). Exploring Emotional Responses to AI Adoption in the Workplace. Journal of Organizational Behavior, 43, 1100-1116.
[19] Riemer, M., Abdulhai, M., Kim, D. K., Liu, M., Tesauro, G., & How, J. P. (2022). Context-Specific Representation Abstraction for Deep Option Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36, 5959-5967.
https://doi.org/10.1609/aaai.v36i6.20541
[20] Rozen, D. (2023). Best Practices for Quantitative Research Using Online Survey Platforms. Journal of Quantitative Research Methods, 11, 199-213.
[21] Schulte, P. A., Streit, J. M. K., Sheriff, F., Delclos, G., Felknor, S. A., Tamers, S. L. et al. (2020). Potential Scenarios and Hazards in the Work of the Future: A Systematic Review of the Peer-Reviewed and Gray Literatures. Annals of Work Exposures and Health, 64, 786-816.
https://doi.org/10.1093/annweh/wxaa051
[22] Soulami, S., Boughzala, I., & Boukef, N. (2024). AI Anxiety and the Transformation of Professional Identity: A Conceptual Framework. Information Systems Frontiers, 26, 123-139.
[23] Sugandini, D., Effendi, I., & Suparno, S. (2018). The Role of Perceived Usefulness and Perceived Ease of Use in Technology Acceptance. Management Science Letters, 8, 871-882.
[24] Suseno, Y., Laurell, C., & Sick, N. (2023). Resistance to AI in Organisations: Barriers and Enablers. Journal of Business Research, 158, Article 113675.
[25] Venkatesh, V. (2022). Adoption and Use of AI Tools: A Research Agenda Grounded in UTAUT. Annals of Operations Research, 308, 641-652.
https://doi.org/10.1007/s10479-020-03918-9
[26] Vogel, B., Reich, S., & Zimmermann, A. (2023). AI in the Workplace: Navigating Psychological Barriers to Adoption. European Journal of Work and Organizational Psychology, 32, 120-133.
[27] Wang, J., & Wang, X. (2022). Perceived Risk, Anxiety, and Technology Adoption: A Meta-Analysis in the Context of AI. Computers in Human Behavior, 130, Article 107177.
[28] Wang, S., Li, P., & Kim, H. (2025). Technostress, Psychological Well-Being, and AI Adoption in the Workplace. Computers in Human Behavior, 141, Article 107618.
[29] Wei, Y., & Li, X. (2022). The Dark Side of AI Adoption: Fear of Unemployment and Psychological Distress. Technology in Society, 68, Article 101798.

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.