1. Introduction
Artificial intelligence (AI), a branch of computer science, enables systems to perform tasks that require human-like intelligence, including learning, problem-solving, and adapting to new situations (Khaleel & Jebrel, 2024). Despite AI’s rapid integration into many domains, consumer perceptions of AI remain polarized, shaped by optimism or skepticism about its potential. While optimism often fosters adoption, extreme expectations, whether overly positive or negative, can lead to misconceptions that hinder research and deployment (Cave et al., 2019).
This review addresses critical questions about AI’s influence on consumer behavior, the factors driving acceptance or resistance, and strategies to mitigate challenges like algorithm aversion and bias. The paper is structured as follows: Section 2 discusses AI’s integration into consumer life, Section 3 reviews its accuracy and challenges, Section 4 examines consumer acceptance and strategies to increase it, and Section 5 concludes with future directions and ethical considerations.
1.1. Methodology
This study undertakes a systematic review of the literature on consumer acceptance of artificial intelligence (AI). A comprehensive search was conducted to identify peer-reviewed journal articles, industry reports, and relevant books published between 2010 and 2024. Keywords such as “algorithm aversion,” “consumer trust in AI,” and “AI decision-making” guided the selection process. Studies were included based on their focus on AI applications in consumer contexts and on consumers’ responses to these technologies. The review also critically evaluates strategies proposed to improve AI acceptance and address algorithmic bias.
While this review incorporates a broad range of sources, certain limitations must be acknowledged. The majority of the studies reviewed were published in English, which may introduce a language bias. Furthermore, the findings predominantly reflect trends observed in Western markets, potentially limiting their applicability to non-Western perspectives.
2. AI in Consumers’ Lives
Artificial intelligence has become an integral part of consumers’ daily lives, especially through personalized services such as recommendation systems. For example, platforms like Spotify and Netflix use sophisticated algorithms to analyze user data and provide tailored content suggestions, improving user engagement and satisfaction (Puntoni et al., 2021). In healthcare, applications built on frameworks such as Apple’s HealthKit aggregate health metrics from wearable devices and offer personalized wellness recommendations based on individual data. Similarly, AI-driven virtual assistants, such as Google Assistant and Amazon’s Alexa, provide users with tailored services, from managing daily schedules to controlling smart home devices.
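To illustrate the mechanism behind such recommenders, the following is a minimal content-based filtering sketch: items and the user are represented as feature vectors, and the items most similar to the user’s taste vector are suggested. All titles, feature dimensions, and weights here are hypothetical; production systems such as Spotify’s and Netflix’s are far more sophisticated.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend(user_profile, catalog, top_n=2):
    """Rank catalog items by similarity to the user's taste vector."""
    scored = [(title, cosine_similarity(user_profile, features))
              for title, features in catalog.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [title for title, _ in scored[:top_n]]

# Hypothetical feature vectors: (action, comedy, documentary)
catalog = {
    "Heist Night": (0.9, 0.2, 0.0),
    "Laugh Track": (0.1, 0.9, 0.0),
    "Deep Ocean": (0.0, 0.1, 0.9),
}
user_profile = (0.8, 0.3, 0.1)  # taste vector inferred from viewing history
print(recommend(user_profile, catalog))  # → ['Heist Night', 'Laugh Track']
```

The key design point for the present discussion is that everything the system “knows” about the consumer is encoded in that taste vector, which is precisely the data dependence that raises the privacy and trust concerns discussed below.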
These examples illustrate the potential of AI to enhance the consumer experience by automating tasks and reducing decision fatigue. However, the reliance on consumer data raises concerns about privacy and trust, further complicating AI acceptance.
3. AI’s Accuracy
While AI can offer remarkable accuracy in decision-making, particularly in fields like healthcare and finance, its use is not without challenges. Recent studies have shown that AI systems can perpetuate and even exacerbate biases, especially when trained on unrepresentative data sets. For example, facial recognition algorithms have been shown to have higher error rates when identifying people of color, a reflection of the data imbalance used to train these systems. Similarly, AI systems used in hiring processes have been found to disadvantage women and minority candidates, reproducing existing inequalities.
To mitigate these issues, several strategies have been proposed. One approach involves using more diverse and representative data sets to train algorithms, which helps reduce bias. Algorithm auditing—regularly assessing AI systems for fairness—can also help identify and correct biases before they become embedded in decision-making processes. Additionally, increasing transparency by providing explanations for algorithmic decisions (explainable AI or XAI) can help build trust and ensure accountability.
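To make the idea of algorithm auditing concrete, one of the simplest fairness checks compares the rate of favorable outcomes across demographic groups (a demographic-parity check). The sketch below runs that check over a hypothetical log of hiring decisions; real audits combine many metrics with statistical testing, and all group labels and data here are invented for illustration.

```python
def selection_rates(decisions):
    """Per-group rate of favorable outcomes.

    decisions: list of (group, outcome) pairs, where outcome 1 = favorable.
    """
    totals, positives = {}, {}
    for group, outcome in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit log of (group, hiring decision) pairs
log = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
       ("B", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(log))          # group A: 0.75, group B: 0.25
print(demographic_parity_gap(log))   # gap of 0.50 flags a disparity
```

A gap this large would prompt a closer look at the training data and model, in line with the mitigation strategies described above.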
4. Consumer Acceptance of AI
4.1. Factors Influencing Acceptance
Consumer trust in AI is shaped by perceptions of its capabilities. Studies reveal that while AI is trusted for analytical tasks, consumers prefer human input for decisions requiring emotional or intuitive judgment (Castelo et al., 2019). This preference stems from concerns that AI lacks the ability to consider unique personal contexts, learn from errors, or process qualitative nuances effectively (Longoni et al., 2019). Additionally, consumers’ trust is influenced by their relationship with service providers; transactional interactions favor AI, while friendship-like relationships resist it (von Walter et al., 2023).
4.2. Strategies to Increase Acceptance
Strategies to enhance AI acceptance include consumer education, personalization, and anthropomorphism. Educating users about how AI functions reduces aversion by demystifying its processes (Bonezzi & Ostinelli, 2022). Emphasizing human input in algorithm design and providing fallback options for human assistance can align AI use with relationship norms, fostering trust (von Walter et al., 2023).
Anthropomorphism—assigning human-like traits to AI—has also been effective in reducing psychological distance and improving evaluations. However, it risks perpetuating stereotypes and fostering misplaced trust, necessitating alternative approaches like emphasizing AI’s analytical strengths.
5. Conclusion
In summary, this review highlights the complex relationship between consumers and AI, emphasizing that consumer acceptance is shaped by trust, perceived capability, and relational dynamics. While AI excels in objective and data-driven tasks, resistance persists in more subjective and emotional contexts. To improve consumer acceptance, it is crucial to consider individual consumer beliefs, provide transparent explanations for AI decisions, and ensure that AI systems are developed with fairness in mind.
Looking ahead, emerging AI trends such as explainable AI (XAI) and affective computing offer promising avenues to enhance consumer trust. XAI, which aims to make AI decisions more understandable to users, could alleviate concerns about algorithmic “black boxes,” while affective computing could enable AI to better recognize and respond to human emotions, making interactions feel more natural. As AI continues to evolve, further research is needed to explore how these innovations can be integrated into consumer experiences in ways that foster trust and acceptance.
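As a simple illustration of the transparency XAI aims for, an inherently interpretable model can report each input feature’s contribution to its decision. The sketch below decomposes a linear score into per-feature terms; all weights and feature names are hypothetical, and post-hoc methods such as LIME or SHAP address the harder case of explaining opaque models.

```python
def explain_linear(weights, features, bias=0.0):
    """Decompose a linear model's score into per-feature contributions.

    Each feature's share of the score is simply weight * value, so the
    explanation is exact rather than approximated.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    # Rank features by the magnitude of their influence on the score
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

# Hypothetical credit-scoring model (names and weights are illustrative)
weights = {"income": 0.5, "debt_ratio": -0.8, "years_employed": 0.3}
features = {"income": 0.9, "debt_ratio": 0.4, "years_employed": 0.5}
score, ranked = explain_linear(weights, features)
print(f"score: {score:.2f}")  # 0.28
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```

An explanation of this form (“income raised your score by 0.45; debt ratio lowered it by 0.32”) is exactly the kind of output that could alleviate the “black box” concerns noted above.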
Acknowledgements
Ryan Yu thanks Professor Bonezzi of the NYU Stern School of Business for supervising this paper and for his terrific guidance and advice throughout its writing.