Emotional Predictors of AI Adaptation: A Quantitative Analysis of Fear, Uncertainty, and Resistance among U.S. Adults
1. Introduction
The advent of innovative Artificial Intelligence (AI) technology has rapidly evolved into a pervasive transformational societal force, fundamentally reshaping labour markets and industries, influencing operational processes, and altering individual experiences, while redefining professional value and identity across sectors (Dwivedi et al., 2021; Vogel et al., 2023). The AI discourse is increasingly defined by dual narratives juxtaposing its promise with its role in exacerbating societal anxieties, perceived threats to personal and professional identities, and job replacement (Arboh et al., 2025). The rapid integration of AI into critical sectors offers prospects for significantly enhanced efficiency and transformative capability across industries; however, it simultaneously amplifies profound psychological complexities that shape individuals' perceptions and emotional states, with notable effects on behavioural responses to the technology (Grassini, 2023; Lițan, 2025).
Research highlights the role of AI integration in amplifying societal anxieties, as the pace of change often outstrips individuals' capacity for comprehension and adaptation (Bala & Venkatesh, 2016). These psychological phenomena are predominantly conceptualised within the evolving workforce through established constructs, specifically "AI anxiety" (AIA) and "technostress" (Riemer et al., 2022; Wang & Wang, 2022; Schulte et al., 2020). These constructs capture the emotional and cognitive dissonance commonly associated with perceived uncertainties regarding job security, skill obsolescence, and loss of professional identity in fast-changing workplaces (Riemer et al., 2022; Wang & Wang, 2022; Schulte et al., 2020; Soulami et al., 2024). The integration of AI across diverse domains of social life has created an environment of uncertainty and fear, revealing significant barriers to effective adoption and successful utilisation (Borges do Nascimento et al., 2023; Kim et al., 2025).
These emerging fears reflect underlying concerns about skill obsolescence, job security, and declines in professional value driven by the growing adoption of algorithmic and automation systems, corroborating empirical findings that job-related anxieties are intensified by perceptions of AI as disruptive to workplace stability and occupational systems (Cheng & Jin, 2024; Wei & Li, 2022).
This study quantitatively investigates the relationship between perceived uncertainty and fear regarding AI and individuals' adoption responses among the US adult population, addressing a critical literature gap defined by the lack of large-scale empirical data on the interplay between these phenomena. The study employs a large-scale quantitative research design, contributing robust empirical evidence to the evolving psychology of AI adaptation and enriching a discourse often constrained by limited context-specificity and small sample sizes. The findings are expected to inform the targeted development of custom-designed interventions that address specific psychological barriers, thereby promoting readiness for AI adoption and aligning technological advancement with the psychological well-being of society.
2. Literature Review
2.1. Conceptualising AI Anxiety and Technostress
The increasingly pervasive adoption and integration of AI in the contemporary workplace has given rise to distinct psychological phenomena, primarily "AI anxiety" and "technostress", which notably shape behavioural responses to adoption (Wang & Wang, 2022; Grassini, 2023). AI anxiety is conceptualised here as a dynamic stress response to the perceived potential impacts of Artificial Intelligence applications on an individual's future, characterised by fears about job security, redundancy and skill irrelevance, and the broadening societal implications of technological advancement (Chen et al., 2025). Technostress, in broader terms, is the psychological strain an individual may experience when exposed to rapid, overwhelming technological change that outpaces personal adaptation, yielding perceived helplessness, loss of control over technological developments, and cognitive overload (Lițan, 2025; McDonald & Schweinsberg, 2025).
Numerous empirical studies have demonstrated the multifaceted, multidimensional manifestations of the AI anxiety (AIA) phenomenon, which comprises emotional exhaustion, perceptions of declining personal value, and fear of redundancy in rapidly evolving AI-driven organisational and workplace environments (Li & Huang, 2020; Wang & Wang, 2022). Lițan (2025) highlights that prolonged exposure to largely AI-augmented work can aggravate depressive symptoms attributable to constant monitoring and perceived job insecurity. Perceived erosion of personal and professional identity is a prevalent challenge in workplaces with integrated AI solutions, contributing to identity crises and leading employees to question their value and contribution to the organisation in the age of advancing AI capabilities (Arboh et al., 2025). AI anxiety is thus an intricate psychological construct encompassing professional identity, competence, and security of future livelihoods in the context of exponential deployment of AI technologies.
2.2. Psychological Barriers to AI Adoption
Psychological barriers impeding effective and seamless adoption of AI in workplace environments extend beyond the broader AI anxiety spectrum, comprising specific subjective perceptions and fears that hinder successful adoption and integration. Among the most significant is the widely held perception of AI's "black box" nature, in which its operations and functionalities are opaque and unexplainable to lay users, characterising its decision making as non-transparent (Jack et al., 2015; Lee et al., 2025). Dwivedi et al. (2021) emphasise the role of this "black box" characterisation in fostering heightened distrust and limiting transparency across organisational, data-driven decision-making processes. Psychologically, the opaqueness of AI exacerbates hesitancy and organisational resistance, despite AI's potential to enhance efficiency and performance (Benbya et al., 2020; Na et al., 2022). It is therefore imperative to prioritise accessible, explainable AI to mitigate organisational resistance (Suseno, Laurell, & Sick, 2023) and to enhance transparency in decision making by AI systems and algorithms; nevertheless, core suspicion persists because of the uncertainties associated with the technology's functioning and implications.
2.3. Theoretical Frameworks for Technology Acceptance
The Technology Acceptance Model (TAM) provides a foundational theoretical framework for understanding how perceived usefulness and ease of use quantifiably influence a user's intention to adopt a specific AI-driven technology (Lee et al., 2025). Perceived usefulness, the subjective belief that an innovation will improve job performance, is a crucial determinant of technology adoption (Sugandini et al., 2018; Gursoy et al., 2019). An important extension of TAM for AI adoption is the incorporation of uncertainty as a major determinant of adoption-related behavioural responses. Jack et al. (2015) highlight uncertainties influencing adoption, particularly those associated with risks, costs, and the long-term impacts of emerging technologies, emphasising their potential to significantly impede adoption even when perceived usefulness and ease of use score highly. This finding is corroborated by Sugandini et al. (2018), who note that uncertainty directly influences adoption decisions, acting both as a moderating factor and as a barrier that can dilute otherwise favourable perceptions.
2.4. Potential Psychological Benefits of AI
Effective integration of AI can also foster human well-being. AI tools have demonstrated capabilities in enhancing happiness, reducing feelings of loneliness, and fostering self-esteem (Wei & Li, 2022). Advanced AI tools empower users and reinforce psychological well-being by extending user capability, automating and streamlining repetitive tasks, and boosting confidence through real improvements in performance outcomes; they have also been shown to reduce depression scores among worker demographics such as low-skilled and older employees (Arboh et al., 2025). The dualistic nature of AI adoption, as both a potential stressor and an instrumental tool for improving worker well-being, underscores the need for holistic, strategic implementation that harnesses its benefits while mitigating its inherent risks (Kim et al., 2025).
3. Research Methodology
3.1. Research Design
A descriptive, correlational quantitative research design is adopted to investigate the relationships between fear of uncertainty and adaptation to artificial intelligence using a sample of 5,000 adults in the United States. This design is appropriate for the study's focus on examining the strength and direction of relationships between the dependent and independent variables across a large sample, enabling a systematic exploration of divergent perceptions and behavioural responses towards AI adoption (Wang & Wang, 2022). The descriptive component summarises participants' current cognitive and emotional states, behavioural adaptation responses to AI adoption, perceived uncertainties, and challenges inherent in AI adoption. The correlational component identifies statistical associations between the variables: perceived fear of uncertainty and key indicators of effective AI adaptation (Wang et al., 2025).
We operationalise fear of uncertainty as a lack of confidence both in one's own ability to use AI appropriately and in the mechanisms by which AI works and their reliability.
In turn, adapting to AI implies adjusting to the growing presence and capabilities of AI in various aspects of life, particularly in the workplace, based on trust in the mechanisms of AI tools and openness to their use.
3.2. Participants
The study targeted a population of 5,000 current residents of the United States. Recruited participants were aged between 25 and 60 years and actively employed in an industry. Recruitment and screening were performed using the SurveyMonkey platform, with mandatory informed consent and verification of age and employment status before participation in the survey (Rozen, 2023). The demographic profiles are presented in Table 1 and Figures 1-5.
The sample was formed through Facebook announcements in the following core states (those most active in industrial functioning and development) in every region of the USA:
North: Montana, North Dakota, Minnesota, Wisconsin, Michigan.
South: Florida, Texas, Louisiana, Mississippi, Alabama.
East: New York, Pennsylvania, Massachusetts, Maine.
West: California, Oregon, Washington, Nevada.
The announcement contained a brief description of the survey and its purpose, the inclusion criteria, the SurveyMonkey link (for screening), and contact information.
Table 1. Demographic characteristics of survey respondents (N = 5000).
Characteristic | Category | Count | Percentage
Age Group | Under 25 | 0 | 0%
 | 25 - 34 | 1400 | 28%
 | 35 - 44 | 1550 | 31%
 | 45 - 54 | 950 | 19%
 | 55 - 64 | 500 | 10%
 | 65+ | 0 | 0%
Gender | Female | 2600 | 52%
 | Male | 2250 | 45%
 | Non-binary/Third gender | 100 | 2%
 | Prefer not to say | 50 | 1%
Highest Education | High school diploma or equivalent | 400 | 8%
 | Some college | 850 | 17%
 | Bachelor's degree | 2100 | 42%
 | Master's degree | 1350 | 27%
 | Doctorate or equivalent | 300 | 6%
Current Role/Job Level | Student | 0 | 0%
 | Entry-level employee | 900 | 18%
 | Mid-level professional | 1950 | 39%
 | Senior leader or executive | 1100 | 22%
 | Business owner or entrepreneur | 600 | 12%
 | Unemployed/Retired | 0 | 0%
Industry | Healthcare | 900 | 18%
 | Education | 750 | 15%
 | Financial services | 650 | 13%
 | Government/Public sector | 450 | 9%
 | Technology/Software | 700 | 14%
 | Legal | 250 | 5%
 | Retail or customer service | 550 | 11%
 | Hospitality | 350 | 7%
 | Other | 400 | 8%
Source: Survey Data, 2025.
Figure 1. Distribution of age group.
Figure 2. Distribution of gender.
Figure 3. Distribution of highest education.
Figure 4. Distribution of current role/job level.
Figure 5. Distribution of industry.
3.3. Instrument
A researcher-designed questionnaire was the primary data collection instrument and comprised two structured sections: Section 1 assessed the key challenges, perceptions, and needs associated with AI adoption through 10 multiple-choice questions (Q1 - Q10); Section 2 collected participants' demographic data (Q11 - Q15), as presented in Table 1. Several items from Section 1 were used to operationalise the study's key variables, "Fear of Uncertainty" and "Adapting to AI," capturing their intricate, heterogeneous, and multifaceted nature. These constructs are operationalised across items reflecting the cognitive, emotional, and behavioural dimensions associated with fear and adaptation to AI technology (Grassini, 2023). The operationalisation of variables is illustrated in Table 2.
Table 2. Operationalisation of variables: fear of uncertainty and adapting to AI.
Variable | Dimension | Survey Item (Question No.)
Fear of Uncertainty (IV) | Emotional response | Q2: What emotion best describes how you feel when you think about AI evolving rapidly?
 | Lack of confidence | Q3: What is the biggest obstacle for you in adapting to AI?
 | Distrust/Risk perception | Q4: Which of the following best describes your level of trust in AI?
 | Specific worries | Q6: When it comes to AI, what worries you most?
Adapting to AI (DV) | Understanding | Q1: How would you describe your current understanding of how AI works?
 | Confidence | Q5: How confident are you that you could keep up with AI developments in your industry in the next 2 years?
 | Openness (proactive engagement) | Q8: What would most help you become more open to using AI tools?
 | Low avoidance | Q7: Have you ever avoided using an AI because you didn't understand it?
3.4. Procedure
Data collection was initiated after participants were screened for eligibility based solely on employment status and age. Eligible participants were required to review informed-consent pages on the SurveyMonkey platform and provide consent before proceeding to the research questionnaire (Rozen, 2023). The self-report survey was completed in two parts, with automated recording of responses on submission. Data cleaning was performed to identify and address missing data and inconsistencies through imputation and listwise deletion as needed.
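A minimal sketch of the cleaning step described above is given below; the export file name, column names (q1-q15), and the imputation rule are illustrative assumptions rather than the study's actual code.

```python
import pandas as pd

# Hypothetical SurveyMonkey export with one column per questionnaire item.
df = pd.read_csv("survey_export.csv")

item_cols = [f"q{i}" for i in range(1, 16)]   # Q1-Q15

# Listwise deletion: drop respondents missing any Section 1 item (Q1-Q10).
core_items = item_cols[:10]
df = df.dropna(subset=core_items)

# Simple imputation for demographic items (Q11-Q15): fill with the modal category.
for col in item_cols[10:]:
    df[col] = df[col].fillna(df[col].mode().iloc[0])

print(f"{len(df)} complete responses retained")
```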
3.5. Data Analysis
Descriptive and inferential statistical techniques were applied to the collected data. Pearson's Chi-square tests (χ2) were used for inferential analysis to evaluate the relationships between the operationalised fear-of-uncertainty items and the AI adaptation measures, complemented by effect-size analysis using Phi (ϕ) and Cramér's V to qualify the strength of the associations between variables (Rozen, 2023). Thresholds for interpreting effect sizes followed established guidelines, with relationships categorised from very weak to very strong to ensure clarity when interpreting the strength of the identified associations. A significance level of p < 0.05 was set for all inferential tests.
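The inferential step can be illustrated with a short sketch. This is a hedged reconstruction assuming responses sit in a pandas DataFrame (df, from the cleaning step) with one categorical column per item, e.g. q2_emotion and q5_confidence; it is not the authors' analysis script.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(table: pd.DataFrame) -> float:
    """Cramér's V for an r x c contingency table (equals phi when min(r, c) = 2)."""
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

# Cross-tabulate one fear-of-uncertainty item against one adaptation item.
table = pd.crosstab(df["q2_emotion"], df["q5_confidence"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4g}, Cramér's V = {cramers_v(table):.3f}")
```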
4. Results and Analysis
4.1. Demographic Profile of Respondents
As noted above, a sample of 5,000 adults residing in the US was used to examine how fear and uncertainty influence AI adoption. The sample captures a diverse but predominantly working-age, actively employed, and well-educated demographic. Respondents were concentrated in the 35 - 44 (31%), 25 - 34 (28%), and 45 - 54 (19%) age groups, the cohorts most likely to be engaging with rapidly evolving AI-driven technologies in the workforce. Females accounted for the largest proportion of respondents (52%), with males accounting for 45%. A substantial proportion of the sample holds Bachelor's (42%) or Master's (27%) degrees, indicating a highly educated cohort. Mid-level professionals (39%) and senior leaders/executives (22%) accounted for the largest segments by job level, reflecting the study's focus on participants in active employment (Venkatesh, 2022). Respondents were drawn from a broad range of sectors and industries, with healthcare (18%) and financial services (13%) among the most strongly represented. This demographic distribution indicates that the study captured a diverse workforce segment that is actively engaging with, or encountering the impacts of, emerging AI technologies (Dwivedi et al., 2021), offering contextually relevant data and valuable insights for understanding AI adoption in fast-evolving professional and personal environments.
4.2. Descriptive Statistics of AI Perceptions and Adaptation
Descriptive statistical analyses were conducted to evaluate respondents' perceptions, emotional responses, and subjective understanding of AI, the notable challenges encountered in their adoption efforts, and their openness to AI adoption. The descriptive statistics are summarised in Table 3 and Table 4 and Figures 6-9.
Table 3. Summary of perceptions and understanding of AI among respondents (N = 5000).
Question 1: Understanding of How AI Works | Count | Percentage
I understand it well and can explain it to others | 400 | 8.00%
I understand the basics but couldn't explain it | 1,150 | 23.00%
I've used AI tools but don't really understand how they work | 1,600 | 32.00%
I find it confusing and overwhelming | 1,250 | 25.00%
I avoid it entirely because I don't understand it | 600 | 12.00%
Total | 5,000 | 100.00%

Question 2: Feeling About Rapid AI Evolution | Count | Percentage
Curious and excited | 800 | 16.00%
Cautiously optimistic | 1,100 | 22.00%
Stressed and overwhelmed | 1,450 | 29.00%
Anxious or afraid | 1,250 | 25.00%
Disconnected—not paying attention | 400 | 8.00%
Total | 5,000 | 100.00%

Question 4: Trust in AI | Count | Percentage
I trust it fully—it improves my work/life | 600 | 12.00%
I trust it somewhat—but I'm cautious | 1,650 | 33.00%
I don't trust it because I don't understand it | 1,450 | 29.00%
I don't trust it because I fear bias or misuse | 900 | 18.00%
I haven't used it enough to form an opinion | 400 | 8.00%
Total | 5,000 | 100.00%

Question 5: Confidence in Keeping Up with AI | Count | Percentage
Very confident | 500 | 10.00%
Somewhat confident | 1,150 | 23.00%
Neutral | 1,000 | 20.00%
Not very confident | 1,350 | 27.00%
Not confident at all | 1,000 | 20.00%
Total | 5,000 | 100.00%
Source: Survey Data, 2025.
Figure 6. Question 1: Understanding of how AI works.
Figure 7. Question 2: Feeling about rapid AI evolution.
Figure 8. Question 5: Confidence in keeping up with AI.
Figure 9. Question 4: Trust in AI.
Table 3 shows that a combined 37% of respondents either find AI confusing and overwhelming or avoid it entirely because of limited understanding (Q1), and only 8% report sufficient knowledge to explain AI to others. The emotional responses show that a cumulative 54% of respondents feel "stressed and overwhelmed" or "anxious or afraid" about the rapid evolution of AI technologies (Q2), indicating pervasive apprehension. This corroborates findings on technostress and AI-induced anxiety in fast-evolving, technology- and innovation-driven workplaces (Lițan, 2025; Kim et al., 2025). Trust in AI is also notably low: 47% of participants express distrust, attributed either to a lack of understanding or to fear of misuse and inherent biases in decision making (Q4). This reflects recent research on psychological barriers hindering effective AI adoption in the workplace (Arboh et al., 2025). In addition, 47% of respondents report being "not very confident" or "not confident at all" in their ability to keep up with rapidly advancing AI developments (Q5), a notable barrier to engagement consistent with technology adoption models that link perceived ease of use, uncertainty, and trust (Sugandini et al., 2018; Lee et al., 2025).
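For transparency, the cumulative percentages cited above are simple sums of the corresponding Table 3 response categories:

\[
37\% = 25\% + 12\% \;(\text{Q1}), \quad
54\% = 29\% + 25\% \;(\text{Q2}), \quad
47\% = 29\% + 18\% \;(\text{Q4}), \quad
47\% = 27\% + 20\% \;(\text{Q5}).
\]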
Table 4. Obstacles and openness to AI adaptation (N = 5000).
Question 3: Biggest Obstacle in Adapting to AI | Count | Percentage
I don't know how it works | 1,500 | 30.00%
I don't know where it's going | 1,050 | 21.00%
I'm not sure how it will affect me | 900 | 18.00%
I'm afraid of making a mistake using it | 850 | 17.00%
I don't feel motivated to engage with it | 700 | 14.00%
Total | 5,000 | 100.00%

Question 6: Worry About AI | Count | Percentage
I'll fall behind because I don't understand it | 1,550 | 31.00%
It will change my job or role | 1,100 | 22.00%
It will change how I'm valued or seen | 950 | 19.00%
It will harm others or be used unethically | 850 | 17.00%
I'm not worried about it | 550 | 11.00%
Total | 5,000 | 100.00%

Question 7: Avoided AI Due to Lack of Understanding | Count | Percentage
Yes, multiple times | 1,750 | 35.00%
Yes, once or twice | 1,350 | 27.00%
No, I try things even if I don't fully understand them | 1,400 | 28.00%
I haven't had the chance to use AI tools | 500 | 10.00%
Total | 5,000 | 100.00%

Question 8: What Would Help Openness to AI | Count | Percentage
A simple, clear explanation of how it works | 1,700 | 34.00%
Real examples of how it helps people like me | 1,200 | 24.00%
More time to learn without pressure | 900 | 18.00%
A trusted expert walking me through it | 750 | 15.00%
I'm already open to using AI tools | 450 | 9.00%
Total | 5,000 | 100.00%

Question 9: Feeling Others Understand AI Better | Count | Percentage
All the time | 1,300 | 26.00%
Often | 1,450 | 29.00%
Sometimes | 1,150 | 23.00%
Rarely | 750 | 15.00%
Never | 350 | 7.00%
Total | 5,000 | 100.00%

Question 10: Impact of Someone Respected Admitting Confusion | Count | Percentage
I'd feel more comfortable learning | 2,050 | 41.00%
I'd be relieved and more open | 1,350 | 27.00%
It wouldn't affect me | 900 | 18.00%
I'd be concerned—they should know | 450 | 9.00%
I'm not sure | 250 | 5.00%
Total | 5,000 | 100.00%
The results in Table 4 highlight the significant obstacles associated with adapting to AI. The most prominent obstacle is "I don't know how it works" (30%), followed by uncertainty about the future direction of AI (21%) and about its personal and professional impacts (18%) (Q3). The largest share of respondents (31%) worry about "falling behind because I don't understand it," while a cumulative 41% report concerns that AI will change their roles or how they are personally and professionally valued in the workplace (Q6). These concerns translate into behavioural responses: 62% of respondents admit to having avoided AI tools because of a lack of understanding (Q7). To promote openness, respondents primarily seek "a simple, clear explanation of how it works" (34%) and "real examples of how it helps people like me" (24%) (Q8). A majority (55%) frequently feel that others understand AI better than they do (Q9), yet 68% report they would feel "more comfortable learning" or "relieved and more open" if a respected individual admitted to being similarly confused (Q10). Overall, these findings emphasise that the barriers encountered are not merely technical but are deeply rooted in social and psychological dimensions.
4.3. Inferential Statistics on Fear of Uncertainty and AI Adaptation
Pearson's Chi-squared tests were computed on multiple cross-tabulations of the study's operationalised variables. Given the categorical structure of the responses, Phi (ϕ) or Cramér's V was used to measure effect size and to quantify the strength of the observed relationships (see Table 5).
Table 5. Summary of observed associations between perceived fear of uncertainty and AI adaptation (N = 5000).
Relationship (Independent vs. Dependent Variable) | χ2 Value | df | p-value | Effect Size (Phi/Cramér's V) | Strength of Relationship
Q2 (Feeling: Anxious/Afraid) vs. Q5 (Confidence: Not Confident at all) | 125.87 | 4 | <0.001 | ϕ = 0.158 | Very Weak
Q3 (Obstacle: Don't know how it works) vs. Q7 (Avoided AI: Yes, multiple times) | 210.33 | 3 | <0.001 | ϕ = 0.205 | Weak
Q6 (Worry: Fall behind) vs. Q8 (Openness: Already open) | 188.10 | 4 | <0.001 | Cramér's V = 0.194 | Very Weak
Q4 (Trust: Don't trust because I don't understand) vs. Q1 (Understanding: Avoid it entirely) | 98.45 | 16 | <0.001 | Cramér's V = 0.140 | Very Weak
Q9 (Feeling others understand better: All the time) vs. Q5 (Confidence: Not Confident at all) | 110.21 | 16 | <0.001 | Cramér's V = 0.149 | Very Weak
Note: All p-values are less than 0.05, indicating statistically significant associations. Effect size interpretation based on Hemphill (2003).
The cross-tabulation analyses consistently demonstrate significant relationships (p < 0.001) between key indicators of perceived fear of uncertainty and AI adaptation, supporting H1 and H2. The Pearson's Chi-squared tests show significant but weak to very weak associations between the fear-of-uncertainty and AI-adaptation variables (see Table 5). For example, feeling "anxious or afraid" about AI (Q2) is significantly associated with reporting "not confident at all" in keeping up with the pace of AI advancement and integration (Q5) (χ2 = 125.87, p < 0.001, ϕ = 0.158), implying that emotional apprehension is linked to lower confidence in adapting (Grassini, 2023; Li & Huang, 2020). A lack of understanding (Q3) is significantly associated with behavioural avoidance (Q7) (χ2 = 210.33, p < 0.001, ϕ = 0.205), underscoring the role of cognitive barriers in non-engagement with AI (Bala & Venkatesh, 2016; Wang et al., 2025). Worries about "falling behind" due to insufficient knowledge of AI are negatively associated with openness to AI adoption, corroborating studies that highlight fear as a significant deterrent to exploring and adopting technological innovations in the workplace (Wei & Li, 2022; Wang et al., 2025).
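As a check on the reported magnitudes, Phi (and Cramér's V when the smaller table dimension is 2) relates to the test statistic as the square root of χ2 over N; for the first association in Table 5:

\[
\phi = \sqrt{\frac{\chi^2}{N}} = \sqrt{\frac{125.87}{5000}} \approx 0.159,
\]

consistent with the tabled value of 0.158 and with its classification as a very weak association under the interpretive thresholds adopted in Section 3.5.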
5. Discussion
The quantitative findings of this study demonstrate a significant relationship between perceived uncertainty and fear regarding AI and individuals' behavioural adaptation responses, supporting the proposed hypotheses and reinforcing the role of psychological drivers in technology adoption among US adults in active employment across sectors and industries. A majority of respondents reported significant degrees of anxiety, confusion, and distrust in relation to the rapid advancement of AI technology. This prevalence of apprehension and distrust implies that the pace of AI integration creates profound psychological friction across demographic groups, revealing challenges that transcend technical barriers and extend to psychological and cognitive factors. The evident pattern of avoidance attributable to low comprehension and fear points to the perceived opacity of AI-driven technologies, which fosters user disengagement and reinforces perceived inadequacy in adapting to rapidly evolving technologies and their integration into workplace environments (Grassini, 2023; Kim et al., 2025).
These findings align with existing literature emphasising that AI adoption can impose notable psychological strain, notably fear of displacement from job roles and difficulty sustaining professional and personal identity (Chen et al., 2025; Cheng & Jin, 2024). The findings underscore the need for custom-designed, targeted intervention strategies that address specific psychological safety implications, foster transparent communication and decision making, and provide contextual education on AI adoption. Organisations should focus on case-based training and promote psychologically safe environments for addressing fears and building confidence in the use of AI (Bodea et al., 2024).
6. Limitations
Limitations of the study include potential self-report bias and age- (generation-) related attitudes towards AI and AI tools. To mitigate possible self-report bias, we applied careful questionnaire design, using clear, unambiguous questions. However, since three generations (X, Y, and Z) are represented in the sample, age differences may still affect the precision of the results: the group of respondents aged 25 - 34 (28%) includes Generation Z representatives, that is, 'digital natives', for whom AI may be considerably more familiar than for members of other generations. Further studies are therefore needed that include age as an independent variable influencing AI adaptation.
7. Conclusion
In conclusion, this study provides robust quantitative evidence that perceived uncertainty and fear are critical psychological barriers to the adaptation of Artificial Intelligence technologies among the US adult population. The quantitative findings show that confusion, distrust of AI, and anxiety are widespread and present significant adoption friction that extends beyond deficits in technical skills to encompass deeper psychological concerns and fears of identity erosion. The results contribute to the advancement of the established Technology Acceptance Model by demonstrating that uncertainty is a pivotal and independent predictor of technology adoption, justifying its integration into broader theoretical frameworks for digital transformation research. The study emphasises the need for targeted, multifaceted, structured, multi-level interventions to comprehensively address these psychological barriers.