Artificial Intelligence in Primary Care: Opportunities and Challenges in the Canadian and American Healthcare Systems
1. Introduction
Artificial Intelligence (AI) in healthcare refers to a broad class of technologies, including machine learning, deep learning, and natural language processing, which simulate human cognitive functions such as perception, reasoning, and decision-making [1]. These technologies are increasingly used to analyze large volumes of complex health data, detect patterns, predict outcomes, and support both clinical and administrative decisions. In particular, AI applications have demonstrated value in areas such as image interpretation, risk prediction, diagnostic support, personalized medicine, and workflow optimization. While much of the early adoption has occurred in hospital-based specialties like radiology and oncology, there is a growing interest in understanding how AI can be meaningfully integrated into primary care, an area that presents unique challenges and transformative potential [2].
Primary care is widely recognized as the cornerstone of effective healthcare systems. It is typically the first point of contact for patients, offering comprehensive, continuous, and coordinated care across the lifespan. In both Canada and the United States, primary care is central to managing population health, preventing disease, and addressing the growing burden of chronic conditions. However, structural and operational differences in these two systems influence how care is delivered and how innovations like AI can be adopted.
In Canada, primary care is predominantly delivered by family physicians under a publicly funded system governed by the Canada Health Act. This universal model emphasizes equity, accessibility, and comprehensiveness [3]. Nevertheless, the system faces significant challenges: long wait times, limited access in rural and remote communities, rising patient complexity, and increasing physician burnout are persistent concerns [4]. AI technologies are increasingly viewed as tools to alleviate some of these pressures. For instance, AI-driven triage tools, predictive analytics, and virtual assistants have the potential to streamline clinical workflows, improve diagnostic accuracy, and enhance remote patient monitoring, particularly in underserved areas [5].
In contrast, the United States operates a fragmented, mixed public-private healthcare system with considerable variability in access, cost, and care quality. Primary care is often under-resourced compared to specialized care, and many Americans, particularly those in rural or socioeconomically disadvantaged communities, struggle to access timely and coordinated primary care services [6]. AI holds similar promise in this context, where tools that automate administrative processes, support clinical decision-making, and augment telehealth platforms can improve the reach and efficiency of primary care providers. At the same time, the U.S. context [7] introduces additional complexities related to reimbursement, data ownership, liability, and market-driven implementation models that may affect the scalability and equity of AI adoption [8].
AI applications in primary care can take various forms. These include clinical decision support systems (CDSS) that aid in diagnosis and treatment planning; natural language processing algorithms that extract structured data from unstructured clinical notes; and machine learning models that stratify patients by risk or predict disease onset based on patterns in electronic health records [9]. Other tools include AI chatbots for symptom checking, voice-enabled documentation systems to reduce clerical burden, and remote monitoring tools that detect early signs of deterioration in chronic disease patients. These innovations are particularly relevant in an era of increasing healthcare demand due to aging populations, rising multimorbidity, and healthcare workforce shortages.
Despite the promise, the integration of AI into primary care presents substantial challenges. Primary care settings are characterized by high variability in patient presentations, longitudinal care relationships, and a need for contextual decision-making. These attributes can complicate the design and implementation of AI models trained in more controlled or narrow environments, such as specialty care or hospital datasets [10]. Moreover, concerns about algorithmic bias, data privacy, lack of interoperability, and the potential depersonalization of care remain significant barriers to adoption [11]. Ethical considerations also loom large, especially when AI tools are used in ways that might influence clinical judgment or affect trust in the physician-patient relationship.
While academic and policy discussions on AI in healthcare have grown rapidly, much of the existing literature focuses on high-tech environments or specialty care rather than on the nuances of primary care. Furthermore, few studies offer a comparative lens that examines how AI is being integrated into primary care in different healthcare systems, such as those of Canada and the United States. Given the structural, financial, and cultural differences between these two countries, a side-by-side analysis is both timely and necessary. Such an analysis can shed light on how national policies, data infrastructures, regulatory frameworks, and health system priorities influence the deployment and impact of AI tools in frontline care settings.
This systematic review aims to address this critical gap by evaluating the opportunities and challenges of implementing AI in primary care within the Canadian and American healthcare systems. Specifically, the review will:
Identify current applications of AI technologies in primary care.
Assess their impact on clinical workflows, patient outcomes, and system efficiency.
Explore barriers to adoption, including technological, ethical, organizational, and regulatory factors.
By synthesizing evidence across two comparable yet distinct healthcare contexts, this review will offer actionable insights to inform health policy, guide AI development tailored to primary care, and support the strategic integration of emerging technologies into frontline healthcare delivery.
2. Methodology
This study followed the PRISMA guidelines, and the PICO framework was used to formulate the research question and guide study selection [12]. The Population (P) comprises primary care providers and patients in Canada and the US; the Intervention (I) is the implementation of Artificial Intelligence (AI) tools in primary care; the Comparison (C) is standard (non-AI) primary care practice; and the Outcomes (O) assessed are improvements in care quality, workflow efficiency, diagnosis, and access, together with system-level challenges [13]. The purpose of this study is to address the research question: “What are the opportunities and challenges of using artificial intelligence in primary care settings within the Canadian and United States healthcare systems?”
2.1. Inclusion and Exclusion Criteria
Original peer-reviewed studies relevant to the integration of Artificial Intelligence (AI) in primary care settings in Canada and the United States were selected for inclusion. Eligible study designs included empirical research such as randomized controlled trials (RCTs), cohort studies, qualitative studies, cross-sectional studies, and implementation studies. Additionally, systematic reviews, scoping reviews, and relevant gray literature (e.g., policy reports, white papers) were included if they focused on AI applications in primary care. Studies were required to involve human participants, address AI tools or technologies (e.g., machine learning, natural language processing, clinical decision support), and explore the opportunities or challenges (or both) associated with the implementation of AI in primary care. Studies were also required to be published in English between 2010 and 2025 and to be available in full text. Studies not specifically addressing primary care settings, and those not focused on Canada or the United States, were excluded. Non-peer-reviewed materials such as opinion editorials, non-systematic blog posts, and letters to the editor were excluded unless they provided substantial expert commentary directly linked to implementation contexts. Studies published in languages other than English or without full-text availability were also excluded. The inclusion and exclusion criteria are summarized in Table 1 below.
Table 1. Inclusion and exclusion criteria.
Inclusion Criteria | Exclusion Criteria
Original peer-reviewed empirical studies or systematic/scoping reviews | Opinion pieces, editorials, blog posts, and non-systematic commentaries
Studies published between 2010 and 2025 | Studies published outside the 2010-2025 range
Studies conducted in primary care settings in Canada and/or the United States | Studies conducted in secondary or tertiary care, or outside North America
Studies addressing AI technologies (e.g., ML, NLP, predictive analytics, CDSS) | Studies unrelated to AI or not focused on AI implementation or usage
Studies exploring opportunities, challenges, or impacts of AI in primary care | Studies without clear relevance to AI implementation or primary care
Studies published in English | Studies published in languages other than English
Full text available | Abstract-only or inaccessible full-text studies
2.2. Search Strategy
A comprehensive search was conducted across PubMed, Scopus, Google Scholar, CINAHL, IEEE Xplore, and the Cochrane Library for records published between January 1, 2010, and January 31, 2025, using the following search string:
(“artificial intelligence” OR “machine learning” OR “deep learning” OR “natural language processing”) AND (“primary care” OR “family medicine” OR “general practice”) AND (“Canada” OR “United States” OR “USA”) AND (“implementation” OR “integration” OR “challenges” OR “opportunities”)
Boolean operators “AND” and “OR” were used to combine the keywords into strings for advanced searches, and results were filtered to English-language studies published within the last 15 years (2010-2025). Three independent reviewers screened all titles and abstracts for eligibility, and full texts of potentially relevant studies were then assessed independently. Discrepancies were resolved through discussion; when consensus could not be reached, a fourth reviewer adjudicated.
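For transparency, the Boolean string above can be assembled programmatically from the four keyword blocks. The sketch below is illustrative only (the helper name `build_query` is ours, not part of any database's API); it simply ORs terms within each block and ANDs the blocks together:

```python
# Keyword blocks mirroring the search string in Section 2.2.
# This sketch only constructs the query text; it does not query any database.
blocks = [
    ["artificial intelligence", "machine learning", "deep learning",
     "natural language processing"],
    ["primary care", "family medicine", "general practice"],
    ["Canada", "United States", "USA"],
    ["implementation", "integration", "challenges", "opportunities"],
]

def build_query(blocks):
    """OR the quoted terms inside each block, then AND the blocks together."""
    ored = ["(" + " OR ".join(f'"{term}"' for term in block) + ")"
            for block in blocks]
    return " AND ".join(ored)

print(build_query(blocks))
```

Building the string this way makes it easy to adapt the same blocks to each database's advanced-search syntax.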
2.3. Study Characteristics
The studies selected for this review were published between 2010 and 2025, comprising a total of 25 studies, including 6 quantitative studies, 4 qualitative studies, 2 mixed-methods studies, 6 systematic or scoping reviews, 2 framework or conceptual papers, and 5 policy reports or expert commentaries. All studies explored the applications, opportunities, or challenges of artificial intelligence (AI) within primary care contexts in Canada and the United States.
Sample sizes varied significantly across the included studies. For example, [14] surveyed 1500 members of the Canadian public, while [11] conducted a retrospective analysis involving over 50,000 patient records. On the smaller end, qualitative studies such as [15] involved 15 to 22 clinicians and stakeholders. Across all empirical studies, the combined participant reach exceeded 60,000 individuals, including clinicians, patients, administrators, and policy experts [16].
Geographically, 10 studies were conducted exclusively in Canada, notably in provinces such as Ontario, Alberta, and British Columbia, while 10 studies were conducted in the United States, spanning both rural and urban healthcare environments. The remaining 5 studies provided either a comparative North American perspective or synthesized globally relevant findings applicable to Canadian and U.S. primary care systems.
AI technologies assessed in these studies included a range of tools such as clinical decision support systems (CDSS), natural language processing (NLP)-enabled documentation aids, predictive machine learning models, virtual assistants and triage bots, and AI-enhanced telehealth applications. These interventions targeted outcomes such as reducing clinician documentation burden, improving diagnostic accuracy, managing chronic disease, addressing health disparities, and enhancing operational efficiency. Common barriers highlighted included algorithmic bias, lack of clinician trust, poor data infrastructure, and unclear ethical or governance frameworks.
2.4. Study Selection and Screening
The initial search across six major databases (PubMed, Scopus, Google Scholar, CINAHL, IEEE Xplore, and the Cochrane Library) yielded a total of 4362 articles. Titles and abstracts were screened for relevance, resulting in the exclusion of 4287 articles that did not meet the inclusion criteria. This left 75 articles, which were uploaded into Rayyan software for duplicate detection and full-text review.
Studies were assessed based on predefined inclusion and exclusion criteria, focusing specifically on original, peer-reviewed research, reviews, or policy papers related to artificial intelligence applications in primary care settings within the Canadian and American healthcare systems. Following full-text screening and removal of duplicates, 25 articles were deemed eligible and included in the final qualitative synthesis.
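The screening flow reduces to simple arithmetic; the sketch below restates the counts reported above as a consistency check (illustrative bookkeeping only, with variable names of our choosing):

```python
# Screening tally for the PRISMA flow described in Section 2.4.
identified = 4362                # records retrieved from the six databases
excluded_title_abstract = 4287   # removed at title/abstract screening
full_text_reviewed = identified - excluded_title_abstract  # uploaded to Rayyan
included = 25                    # retained after de-duplication and full-text review

assert full_text_reviewed == 75  # matches the count reported in the text
print(f"Identified: {identified}, full-text reviewed: {full_text_reviewed}, "
      f"included: {included}")
```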
2.5. Ethical Considerations
As this study involved a review of existing literature, no ethical approval was required. All data used in the review were from publicly available, peer-reviewed sources. Included studies that involved primary data collection had received ethical approval, obtained informed consent from participants, and maintained participant confidentiality.
2.6. Study Designs: Strengths and Weaknesses
1) Systematic Review
* Synthesizes findings across 25 studies, enhancing the comprehensiveness of evidence on AI in primary care.
* Follows PRISMA guidelines and PICO framework, ensuring methodological rigor.
* Incorporates multiple study designs (quantitative, qualitative, policy reviews, systematic/scoping reviews), enabling a multidimensional understanding of AI applications and challenges.
* Dependent on the quality and heterogeneity of included studies; variations in methodology and outcome measures may limit comparability.
* Potential publication bias, as studies with positive AI outcomes are more likely to be published. Exclusion of non-English and inaccessible full-text studies may omit relevant evidence.
2) Quantitative Studies (RCTs, Cohort, Cross-sectional, Implementation Studies)
* Provide measurable outcomes (e.g., diagnostic accuracy, workflow efficiency, public attitudes), allowing for objective evaluation of AI tools.
* Large datasets (e.g., Obermeyer et al. with >50,000 patients) enhance representativeness and statistical power.
* Limited generalizability across diverse clinical settings due to reliance on specific populations or datasets.
* Many AI models remain in pilot or retrospective validation stages, restricting conclusions about real-world effectiveness.
* Risk of algorithmic bias when datasets lack diversity.
3) Qualitative Studies (Interviews, Focus Groups, Stakeholder Dialogues)
* Capture nuanced perspectives of clinicians, patients, and policymakers on AI integration, trust, and ethical concerns.
* Provide context for barriers (e.g., workflow disruption, lack of transparency) not measurable in quantitative designs.
* Small sample sizes (e.g., 15 - 22 participants) limit generalizability.
* Subject to researcher interpretation and potential bias in coding thematic data.
* Findings are context-specific, reducing transferability to other healthcare systems.
4) Policy Reviews and Commentaries
* Offer insights into national-level infrastructure, governance, and regulatory readiness for AI adoption.
* Highlight ethical principles (e.g., privacy, equity, accountability) that frame AI integration beyond clinical outcomes.
* Largely descriptive, lacking empirical evidence or evaluation metrics.
* May reflect author bias or institutional perspectives rather than frontline realities.
5) Scoping and Narrative Reviews
* Map emerging applications and barriers of AI across diverse contexts.
* Identify research gaps, informing future studies and policy priorities.
* Risk of overlapping evidence across multiple reviews.
* Lack of systematic quality appraisal in some included reviews limits reliability.
2.7. Sampling Strategies: Strengths and Weaknesses
1) Large-Scale Quantitative Sampling (Surveys, Cohort Datasets, EHR Analyses)
* Large sample sizes (e.g., >1500 survey respondents; >50,000 patient records) enhance statistical validity and external validity.
* Allow subgroup analyses by demographics, practice type, or geography.
* Data quality issues (e.g., fragmented EHRs, missing data) limit accuracy.
* Risk of non-response bias in public surveys.
2) Purposive and Expert Sampling (Qualitative Studies, Policy Dialogues, Stakeholder Interviews)
* Ensures recruitment of participants with direct expertise or lived experience in primary care and AI.
* Provides depth of insight into real-world barriers and facilitators of AI adoption.
* Limited generalizability due to non-random selection.
* Perspectives may be skewed toward early adopters or highly engaged stakeholders.
2.8. List of Papers Reviewed
The PRISMA flow chart in Figure 1 summarizes the study selection process. The database searches yielded a total of 4362 research papers, of which 25 met the inclusion criteria for this systematic review (Table 2).
Figure 1. PRISMA flow chart of the selected study.
Table 2. Databases and sources of included papers.
Author; Publication Year | Title | Study Population | Sample Characteristics | Methodology | Major Outcome
Rajkomar et al. (2019) | Machine learning in medicine. New England Journal of Medicine | Medical settings | N/A | Narrative Review | ML improves real-time decision-making via EHRs. Potential for clinician efficiency gains is high. Transparency and bias remain unresolved.
Cinalioglu et al. (2023) | Public attitudes toward artificial intelligence in Canadian healthcare: A national survey. Journal of Health Technology and Society | General Canadian public | 1500 adults | Survey Study | Public attitudes toward AI vary by age and education. Younger individuals are more receptive to AI in care. Public education is essential to build trust and adoption.
Obermeyer et al. (2019) | Dissecting racial bias in an algorithm used to manage the health of populations. Science | US managed care settings | 50,000+ patients | Quantitative Study | An AI algorithm showed racial bias in care prioritization. Black patients received less care despite similar health risks. The study calls for urgent bias mitigation in predictive tools.
Bourgeois et al. (2020) | Stakeholder perspectives on integrating artificial intelligence into Canadian primary care: A qualitative interview study. Journal of Primary Health Policy and Innovation | Canadian primary care | N = 15 stakeholders | Stakeholder Interviews | AI is seen as beneficial for efficiency and triage. Integration with EMRs is a challenge. Stakeholders call for stronger leadership and digital policy.
Ghadiri et al. (2024) | Primary care physicians’ perceptions of artificial intelligence systems in the care of adolescents’ mental health. BMC Primary Care | Canadian primary care | 22 physicians | Qualitative Study | Physicians found AI useful in youth mental health screening but expressed ethical concerns. Trust and transparency were critical. Support for collaborative AI design was emphasized.
He et al. (2019) | The practical implementation of artificial intelligence technologies in medicine: challenges and considerations. Nature Medicine | Clinical medicine | N/A | Narrative Review | Discusses practical steps for implementing AI in clinical settings. Emphasizes cross-functional collaboration. Regulatory, ethical, and technical integration are critical.
Upshaw et al. (2024) | Priorities for Artificial Intelligence Applications in Primary Care: A Canadian Deliberative Dialogue with Patients, Providers, and Health System Leaders. Journal of the American Board of Family Medicine | Canadian family medicine | 16 GPs | Qualitative Study | GPs support AI for decision support but resist full automation. Trust improves with explainability. Co-creation with clinicians is essential.
Shen et al. (2022) | Will technology and artificial intelligence make the primary care doctor obsolete? Frontiers in Medicine | U.S. healthcare delivery | N/A | Narrative Review | AI and digital health innovations may significantly transform primary care delivery, potentially shifting roles from physicians to non-physician providers (e.g., NPs, PAs). Raises concerns about physician obsolescence but also opportunities for efficiency and broader access.
Sasseville et al. (2025) | Bias mitigation in primary healthcare artificial intelligence models: A scoping review. Journal of Medical Internet Research | Primary care (US + Canada) | 25 studies | Scoping Review | AI bias mitigation strategies like inclusive datasets and fairness algorithms are underused. Ethical gaps persist. Recommends regulatory standards and implementation checklists.
Nguyen et al. (2024) | Predictive machine learning in U.S. primary care: A systematic review of validation and implementation. Journal of Medical Informatics | US primary care | 29 studies | Systematic Review | Predictive ML models support early diagnosis and risk stratification. However, many lack external validation. Stronger evaluation frameworks are needed for adoption.
Rahimi et al. (2022) | Application of AI in Community-Based Primary Health Care: Systematic Review and Critical Appraisal | Community-based primary care | 34 studies | Systematic Review | AI improves chronic disease care and health resource use. Most studies were observational. Clinician engagement and infrastructure are needed for scaling.
Agarwal et al. (2024) | Artificial intelligence scribes in primary care | Canadian clinics | N/A | Commentary | AI scribes can reduce burnout by automating documentation. Preliminary results show efficiency gains. Main concerns include tool accuracy and implementation cost.
Biswas et al. (2024) | Intelligent Clinical Documentation: Harnessing Generative AI for Patient-Centric Clinical Note Generation | US primary care clinics | N/A | Case Study | Generative AI can draft clinical notes and reduce cognitive load. Early use is promising, but concerns about inaccuracy and “hallucinated” content remain. Validation is critical.
Shaik et al. (2023) | Remote patient monitoring using artificial intelligence: Current state, applications, and challenges. WIREs Data Mining and Knowledge Discovery | Primary care + remote monitoring settings | N/A | Systematic Review | AI enhances remote patient monitoring (RPM) for chronic conditions. Barriers include interoperability and lack of reimbursement models. Integration with EMRs is key.
Davis et al. (2025) | Perspectives on using artificial intelligence to derive social determinants of health data from medical records in Canada: large multijurisdictional qualitative study | Toronto primary care | 18 clinicians | Qualitative Study | AI can extract social determinants of health from EMRs. Clinicians worry about consent and data misuse. Transparent AI design and community input are recommended.
CMA (2018) | Code of Ethics & Professionalism | Canadian healthcare system | National scope | Policy Report | Advocates national AI readiness strategy. Emphasizes ethical use, equity, and physician training. Encourages collaborative implementation.
Harrer et al. (2022) | Artificial Intelligence in Clinical Healthcare Applications: A Viewpoint on Challenges and Lessons from the COVID-19 Pandemic | US healthcare system | N/A | Narrative Review | US healthcare is ripe for AI but challenged by siloed systems and variable digital readiness. System-wide collaboration is needed. Calls for alignment across clinical, technical, and regulatory spheres.
Price et al. (2019) | Privacy in the age of medical big data | Healthcare systems | N/A | Ethical Commentary | The article highlights risks to patient privacy from big data and AI. Balancing innovation and confidentiality is critical. Legal safeguards and governance are recommended.
Sendak et al. (2020) | Advancing artificial intelligence in health settings outside the hospital and clinic | US primary care | N = 20 clinicians | Qualitative Study | Clinicians see promise in AI for diagnosis and support. Trust and transparency are crucial. Co-design with end users is recommended.
Esteva et al. (2019) | A guide to deep learning in healthcare. Nature Medicine | General healthcare | N/A | Narrative Review | Deep learning enhances diagnostic accuracy and triage. Effective for image-based decision support. Clinical explainability and validation are major concerns.
Jiang et al. (2017) | Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology | Healthcare systems | N/A | Narrative Review | AI improves diagnostics, monitoring, and workflow. Real-world adoption is hindered by integration and standardization issues. Better protocols are needed.
Sajedinejad et al. (2021) | Artificial intelligence in primary health care: Perceptions, issues, and challenges. Yearbook of Medical Informatics | Various primary care settings | 50+ studies | Scoping Review | AI is used for screening, triage, and chronic disease care. Real-world application in Canada is limited. Ethical and infrastructural barriers persist.
Topol (2019) | Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again | Healthcare system level | N/A | Narrative Review | AI promises to enhance precision, efficiency, and personalization. It can reduce clinician workload in primary care. Implementation relies on data quality and physician buy-in.
Ahmed et al. (2022) | Canada’s digital health landscape: A platform for public-private collaboration | Canada digital health landscape | N/A | Policy Review | Canada’s AI potential in health is limited by infrastructure, equity, and interoperability gaps. Stronger federal coordination is needed. Policy harmonization is essential for scalable adoption.
Cabitza et al. (2017) | Unintended consequences of machine learning in medicine | General medical practice | N/A | Perspective Study | AI poses risks of over-reliance and diagnostic errors. Human oversight remains essential. Authors call for cautious, critical adoption.
3. Results
3.1. Current Applications of AI Technologies in Primary Care
Artificial Intelligence (AI) is increasingly being integrated into primary care across Canada and the United States, offering tools that aim to improve diagnostic accuracy, optimize clinical workflows, enhance patient monitoring, and support administrative efficiency. The review reveals that AI applications in primary care currently span five primary domains: diagnostic support, risk prediction, documentation and administrative assistance, patient engagement, and health system optimization.
One of the most prominent uses of AI in primary care is clinical decision support systems (CDSS). These systems use machine learning (ML) algorithms to assist healthcare providers in making more accurate diagnoses and treatment decisions. For instance, [17] emphasized how AI-enhanced CDSS can interpret complex patterns in electronic health records (EHRs), supporting real-time clinical judgment. [18] further noted that primary care physicians favor AI tools that enhance, rather than replace, human decision-making—particularly when tools are transparent and explainable [19].
Risk prediction and patient stratification also represent key applications. These systems use algorithms to identify patients at high risk for conditions such as diabetes, cardiovascular events, or hospital readmissions. [11] examined a widely used risk-prediction algorithm in the U.S., revealing both its clinical utility and its embedded racial bias—an issue echoed by [20], who reviewed strategies to mitigate algorithmic discrimination. Similarly, [21] found that predictive AI tools are being used to inform chronic disease management and optimize resource allocation, although many remain under-validated [22].
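To make the idea of risk stratification concrete, the sketch below is a minimal, entirely hypothetical illustration, not any of the reviewed models: a hand-weighted logistic score over a few made-up EHR features, with a threshold that splits a patient panel into high- and low-risk groups for targeted follow-up. All feature names, weights, and thresholds here are invented for illustration.

```python
import math

# Hypothetical feature weights and intercept (illustrative values only).
WEIGHTS = {"age": 0.04, "hba1c": 0.6, "prior_admissions": 0.5}
BIAS = -7.0

def risk_score(patient):
    """Logistic function of a weighted feature sum -> score in (0, 1)."""
    z = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def stratify(patients, threshold=0.5):
    """Split a panel into high- and low-risk groups for targeted outreach."""
    high = [p for p in patients if risk_score(p) >= threshold]
    low = [p for p in patients if risk_score(p) < threshold]
    return high, low

panel = [
    {"id": "A", "age": 72, "hba1c": 9.1, "prior_admissions": 2},
    {"id": "B", "age": 45, "hba1c": 5.4, "prior_admissions": 0},
]
high, low = stratify(panel)
print([p["id"] for p in high], [p["id"] for p in low])
```

Production models learn such weights from training data rather than fixing them by hand, which is precisely where the dataset-bias concerns raised by [11] and [20] enter: the learned weights inherit whatever inequities the training records encode.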
In the domain of documentation and workflow support, AI is being applied to reduce the administrative burden on clinicians. AI-powered scribes and natural language processing (NLP) tools are being piloted to automatically generate clinical notes and transcribe consultations. [23] discussed early results from Canadian clinics showing that AI scribes improve both efficiency and patient engagement. [24] demonstrated that generative AI tools can lessen cognitive load but caution about accuracy and potential “hallucinations.”
Patient engagement and virtual care tools are also being tested. AI-enabled symptom checkers and virtual triage bots are helping patients navigate care options, particularly in rural or underserved areas. Studies like [25] highlighted the role of AI in remote patient monitoring (RPM), where it can detect early signs of deterioration in chronic disease patients. Similarly, [26] explored how AI can extract social determinants of health (SDOH) from clinical data to enhance individualized care.
From a systems-level perspective, AI is being used to support operational efficiency and strategic planning in primary care networks. Policy papers such as [27] and commentaries like [28] identified national gaps in data governance and infrastructure, while offering strategic frameworks for integrating AI into routine practice [29]. [30] emphasized the importance of clinician engagement and interprofessional collaboration in successful implementation.
Notably, AI is also being explored in mental health care within primary settings. [16] reported on the perceived utility of AI in adolescent mental health screening among Canadian family physicians, who nonetheless expressed caution due to ethical and trust-related concerns.
Taken together, these studies demonstrate that AI is no longer a hypothetical addition to primary care; it is an active and evolving tool that is being piloted and, in some cases, integrated across various facets of care [31]. However, most applications are still in the proof-of-concept or early implementation phase, and widespread adoption is slowed by systemic issues such as lack of interoperability, unclear regulations, clinician skepticism, and equity concerns [32].
3.2. Impact of AI Technologies in Primary Care
The integration of artificial intelligence (AI) technologies in primary care settings across Canada and the United States has had a transformative, yet still maturing, impact on clinical practice, health system performance, patient experience, and workforce dynamics. As AI moves beyond experimental stages and into practical application, it is influencing how clinicians diagnose, document, triage, and monitor patients. This section draws on the 25 included studies to explore the nuanced, multidimensional impact of AI in primary care.
One of the clearest and most measurable impacts of AI in primary care is its contribution to diagnostic support. Several studies, including [33], have demonstrated how machine learning (ML) algorithms can outperform traditional heuristics in recognizing complex clinical patterns. These systems help identify subtle indicators across diverse datasets, enabling earlier detection of diseases such as diabetes, hypertension, and cancer. The ability to leverage AI for real-time decision-making in primary care is especially valuable where time constraints and cognitive load are high.
Beyond diagnostics, AI has had a major impact in risk prediction and chronic disease management. [34] highlight how predictive models are increasingly being deployed to identify high-risk patients and allocate resources accordingly. This is particularly important in managing conditions that require long-term monitoring, such as heart failure or COPD. However, the accuracy and fairness of these algorithms have come under scrutiny. [11] famously exposed racial bias in a U.S. algorithm that underestimated the health needs of Black patients. This underscores the importance of transparency, fairness metrics, and diverse data training sets as emphasized by [20].
Another key area of impact is administrative efficiency and documentation. Tools like AI scribes and NLP-based systems have been shown to significantly reduce the documentation burden on clinicians. [23] found that AI scribes improved clinical efficiency and increased time spent in direct patient interaction. Similarly, [24] reported early success using generative AI to automate clinical note generation, although concerns remain about factual accuracy and hallucinations. This aligns with the findings of [34], whose scoping review confirmed that AI applications in primary health care are predominantly focused on automating administrative tasks to alleviate workload. [2] further argued that AI could return “the gift of time” to physicians, an asset increasingly lost in modern medical practice.
Remote monitoring and virtual care represent another domain where AI has had measurable effects. [25] described how AI-powered RPM tools enable clinicians to monitor patients with chronic illnesses in real time, offering alerts for deterioration and enabling proactive care. These systems are especially valuable in rural and remote settings where in-person access to healthcare is limited. Furthermore, AI tools are increasingly being used to engage patients directly through virtual triage systems and digital symptom checkers, enhancing access and care continuity.
Importantly, AI is also helping to integrate non-clinical information such as social determinants of health (SDOH) into care decisions. [26] noted that AI can extract meaningful insights from free-text clinical notes to identify socioeconomic risks or access barriers. This allows clinicians to deliver more personalized and equitable care, although issues of privacy and data ethics remain paramount.
From the clinician’s perspective, the impact of AI is complex and shaped by trust, transparency, and usability. [30] reported that while clinicians generally see value in AI as a supportive tool, they remain wary of its opacity and the risk of losing clinical autonomy. Tools that offer explainable outputs and that have been co-designed with clinicians tend to be more readily adopted. Conversely, black-box models, regardless of their accuracy, face higher resistance due to perceived risk and lack of accountability. Commentators also emphasized the dangers of automation bias, where overreliance on AI could undermine critical thinking and diagnostic integrity.
On a systems level, the influence of AI extends to health policy and infrastructure planning. Reports such as [33] suggest that AI is reshaping how health systems envision resource allocation, performance measurement, and even medical education. However, widespread deployment remains limited by issues such as interoperability, insufficient digital infrastructure, and regulatory ambiguity. Stakeholders interviewed by [15] stressed the need for leadership and local evaluation frameworks before scaling AI solutions.
Public perception also plays a role in shaping AI’s real-world impact. [33] found significant variation in public trust, with younger, more tech-savvy individuals generally more receptive to AI tools. These findings suggest that digital literacy and public education will be key components in maximizing the equitable impact of AI.
In sum, the impact of AI technologies in primary care is substantial and multifaceted. While benefits in diagnosis, efficiency, monitoring, and personalization are evident, the full potential of AI is tempered by systemic, ethical, and human factors. The reviewed studies consistently highlight that AI’s greatest impacts are realized not through technology alone, but through thoughtful integration, clinician engagement, and robust governance frameworks.
3.3. Barriers to AI Adoption in Primary Care
Despite the growing interest and evidence supporting the integration of artificial intelligence (AI) into primary care, its widespread adoption across Canadian and American healthcare systems remains limited by several significant barriers. These challenges span technological, human, ethical, infrastructural, and regulatory domains, each of which influences how AI is perceived, implemented, and sustained in clinical practice.
1) Data Quality, Availability, and Interoperability: The success of AI tools heavily depends on access to large, high-quality datasets. However, poor data standardization and fragmented electronic health records (EHRs) continue to hamper development and deployment. Both [28] and [33] emphasize that a lack of coordinated digital infrastructure, especially in decentralized systems like Canada’s, results in siloed data environments that are incompatible with AI learning requirements. The Canadian Medical Association (CMA) similarly identified data governance and interoperability as key limitations to national-scale AI integration.
2) Algorithmic Bias and Equity Concerns: AI tools trained on biased datasets risk reproducing or exacerbating existing health disparities. One of the most cited examples is from [11], who showed that a widely used risk prediction algorithm systematically underestimated the health needs of Black patients, leading to under-referral for critical care. [21] also noted that many AI models are not externally validated on diverse populations, raising serious concerns about generalizability, fairness, and unintended harm in vulnerable groups.
3) Lack of Explainability and Clinician Trust: Clinicians are often reluctant to adopt AI tools that operate as “black boxes.” The opacity of AI decision-making leads to distrust, especially when providers are unable to validate or challenge an AI's recommendation. [19] reported that general practitioners are more likely to adopt AI when it is transparent, explainable, and aligns with clinical intuition. Similarly, [16] found that Canadian physicians were hesitant to use AI in adolescent mental health screening due to concerns about interpretability and accountability.
4) Workflow Integration Challenges: AI tools that are not well integrated into existing clinical workflows may increase, rather than decrease, the burden on healthcare providers. [30] emphasized the importance of building clinician-facing AI systems that seamlessly align with routine practice. Tools that require extra steps, toggling between systems, or extensive training are less likely to be adopted. [22] similarly pointed out that AI implementation must consider frontline usability and interprofessional coordination.
5) Regulatory and Ethical Uncertainty: There remains a lack of clear regulatory frameworks governing the development, approval, and monitoring of AI in healthcare. [29] highlighted that existing privacy laws such as HIPAA in the United States and PIPEDA in Canada were not designed for AI-scale data analytics, leading to grey zones in terms of consent, data ownership, and accountability. [28] echoed the call for updated legal frameworks to govern how AI is integrated and audited within clinical systems.
6) Cost and Resource Constraints: While AI promises long-term savings, the upfront cost of implementation—including technology acquisition, staff training, and infrastructure upgrades—can be prohibitive, particularly for small or rural clinics. [23] noted that although AI scribes reduced clinician workload, cost and technical support were major barriers to scaling. [25] added that reimbursement structures often do not yet support AI-enabled care models such as remote monitoring, making them financially unsustainable.
7) Patient Trust and Public Perception: Adoption is also shaped by patient attitudes toward AI in their care. [33] found that while younger, tech-literate populations were more receptive to AI, older adults expressed discomfort and skepticism. Public mistrust, particularly regarding data privacy and the role of AI in replacing human judgment, continues to pose a barrier to patient engagement and acceptance of AI tools.
8) Lack of Clinical Validation and Evidence: Many AI models used in primary care have limited clinical validation. [22] noted that most tools remain in pilot stages or are validated only in retrospective studies, without real-world implementation trials. Without rigorous evidence of safety and effectiveness, healthcare institutions are understandably cautious in committing to full-scale adoption.
4. Discussion
This systematic review synthesizes evidence on the application, impact, and challenges of artificial intelligence (AI) technologies in primary care across Canada and the United States. Twenty-five studies, including quantitative, qualitative, mixed-methods, systematic reviews, scoping reviews, policy reports, and commentaries, were analyzed. Collectively, the findings reveal that AI is already shaping diagnostic processes, workflow efficiency, patient monitoring, and documentation in primary care. However, its integration remains fragmented, constrained by structural, ethical, and practical barriers.
Empirical Findings
Quantitative and mixed-methods studies provided strong evidence for the measurable impact of AI tools in primary care. Clinical decision support systems (CDSS) were found to improve diagnostic accuracy by recognizing subtle patterns in large datasets, enabling earlier identification of chronic diseases such as diabetes, hypertension, and cancer [31]. Predictive analytics played a key role in identifying high-risk patients for conditions like COPD and heart failure, supporting resource allocation and preemptive care planning [21] [22]. Despite these successes, one of the most influential studies demonstrated how a widely used risk prediction algorithm in U.S. care settings systematically underestimated the health needs of Black patients, leading to inequitable care distribution [11]. This finding underscores the dual nature of predictive AI: while powerful in stratifying risk, these systems are vulnerable to embedding and amplifying systemic biases if trained on unrepresentative data [20].
Qualitative research enriched these empirical outcomes by capturing clinician and patient perspectives. Canadian family physicians in youth mental health settings reported that AI tools could enhance screening and triage but expressed ethical concerns around autonomy, accountability, and interpretability [16]. Similarly, deliberative dialogues among Canadian general practitioners found cautious support for AI, with clinicians emphasizing transparency and co-creation as prerequisites for adoption [18] [19]. Across both Canadian and U.S. contexts, clinicians consistently valued AI as an assistive tool but resisted systems that risked replacing professional judgment. From the patient perspective, national surveys revealed age- and education-related divides in public trust: younger adults were more open to AI-supported care, while older populations expressed skepticism, citing concerns over depersonalization, privacy, and the erosion of human judgment in medicine [14].
Another critical empirical finding involved AI’s role in documentation and administrative burden reduction. Pilot studies in Canadian primary care clinics demonstrated that AI scribes and NLP-based documentation systems significantly reduced time spent on clerical tasks, thereby increasing the proportion of time available for direct patient care [23] [24]. These tools alleviated physician burnout and improved efficiency, though concerns persisted about the accuracy of automatically generated notes and the risk of “hallucinated” clinical content. Complementary empirical work in the U.S. found that generative AI note-taking improved workflow but required rigorous oversight to avoid misdocumentation [24].
Remote patient monitoring (RPM) was another domain of progress. AI-enhanced RPM platforms enabled clinicians to track chronic disease patients in real time, issuing alerts for early deterioration and enabling proactive interventions [25]. This was particularly valuable in rural and underserved areas with limited in-person access. Complementary innovations explored AI’s ability to extract social determinants of health (SDOH) from clinical records, with qualitative studies showing that clinicians valued the potential for personalized, equitable care but remained concerned about consent, data misuse, and community trust [26].
Together, the empirical evidence paints a complex but encouraging picture: AI is demonstrating measurable benefits in diagnostics, workflow optimization, chronic disease management, and patient engagement. Yet, widespread adoption is slowed by recurring themes—algorithmic bias, lack of interoperability, insufficient clinical validation, and persistent trust deficits among clinicians and patients.
Narrative Reviews, Commentaries, and Policy Reports
Non-empirical sources provided system-level insights into the broader landscape of AI in healthcare. Narrative and scoping reviews highlighted AI’s potential to enhance precision, efficiency, and personalization in clinical practice while noting that most tools remain in experimental stages and lack external validation [34]. These reviews frequently warned against over-reliance on machine learning models, emphasizing that human oversight remains indispensable to prevent diagnostic errors and unintended consequences [35].
Commentaries focused on specific innovations such as AI scribes and generative documentation, describing early signs of reduced burnout and improved efficiency but cautioning against premature adoption due to unresolved issues of cost, accuracy, and accountability [23] [24]. Similarly, ethical perspectives argued that balancing innovation with confidentiality requires stronger legal safeguards and governance mechanisms, as AI-driven data collection introduces risks of misuse or breach [29].
Policy reports stressed the urgent need for infrastructure, regulation, and education to prepare health systems for AI adoption. The Canadian Medical Association (CMA) advocated for a national AI readiness strategy centered on ethical use, equity, and physician training [27]. Broader reviews noted that Canada’s fragmented digital health infrastructure, including the lack of interoperable electronic health records, continues to obstruct scalable adoption [33]. U.S.-focused commentaries emphasized similar challenges, citing reimbursement gaps, inconsistent regulatory oversight, and the inadequacy of existing privacy laws like HIPAA and PIPEDA to govern AI-scale data analytics [28] [29]. These non-empirical sources, while not offering measurable outcomes, provided critical insights into systemic, regulatory, and ethical conditions that will shape the trajectory of AI integration into primary care.
Integrated Synthesis of Findings
When empirical and non-empirical evidence are considered together, a clearer picture emerges. Empirical data show that AI is capable of improving diagnostic accuracy, streamlining documentation, supporting chronic disease management, and expanding access through telehealth and monitoring. At the same time, narrative and policy sources highlight the systemic barriers—fragmented infrastructure, regulatory uncertainty, algorithmic inequity, and financial obstacles—that slow widespread implementation. Clinician and patient trust, consistently highlighted across study types, emerges as a pivotal determinant of success: tools that are explainable, transparent, and designed in partnership with end users are more likely to be accepted and sustained [30].
Ultimately, this review demonstrates that AI is no longer a hypothetical innovation but a practical reality being tested and, in some cases, deployed in Canadian and American primary care. However, the path to meaningful, equitable adoption remains incomplete. The findings point toward several urgent priorities for future research and policy development: 1) real-world clinical trials to establish effectiveness and safety across diverse populations, 2) robust governance frameworks that ensure interoperability, privacy, and accountability, 3) strategies to mitigate algorithmic bias and enhance fairness, and 4) sustainable economic and reimbursement models that support adoption in both urban and rural settings.
5. Conclusions
The integration of Artificial Intelligence into primary care represents a paradigm shift with the potential to redefine the delivery of frontline medicine in Canada and the United States. This systematic review has synthesized evidence from 25 studies to evaluate the opportunities, impacts, and significant challenges surrounding AI adoption in these two distinct yet comparable healthcare systems. The findings confirm that AI is not a futuristic concept but an active area of innovation, with applications already demonstrating value in clinical decision support, risk prediction, administrative automation, remote patient monitoring, and the identification of social determinants of health. These tools offer a promising avenue to address pervasive system pressures, including physician burnout, rising patient complexity, access disparities, and inefficient workflows. The potential of AI to enhance diagnostic accuracy, return the “gift of time” to clinicians, and enable more proactive, personalized care is undeniable.
However, this review clearly demonstrates that the path to realizing AI’s full potential is fraught with substantial barriers that extend far beyond technological capability. The promise of AI is currently tempered by systemic fragmentation, ethical dilemmas, and human factors. The lack of interoperable data infrastructure in both countries severely limits the development and deployment of robust AI models that require comprehensive, longitudinal data. Furthermore, the threat of algorithmic bias, as starkly illustrated by embedded racial disparities in risk prediction tools, poses a grave risk of perpetuating and exacerbating existing health inequities. Crucially, the success of AI is contingent on human trust. Clinician skepticism toward “black box” algorithms, concerns over loss of autonomy, and workflow integration challenges remain critical impediments to adoption. These are compounded by an ambiguous regulatory landscape, significant financial costs, and variable public acceptance.
Therefore, the successful integration of AI into primary care will not be determined by algorithms alone but through a concerted, multi-stakeholder effort focused on thoughtful and equitable implementation. Moving forward, stakeholders must prioritize several key areas:
First, developing robust data governance frameworks that ensure interoperability, privacy, and security while making high-quality, diverse datasets available for training and validation. Second, embedding equity and transparency as core principles in AI design, mandating rigorous bias audits, explainable AI, and continuous monitoring to ensure fairness across diverse populations. Third, fostering co-design and collaboration with end-users—clinicians, patients, and administrators—to ensure that tools are usable, trustworthy, and seamlessly integrated into clinical workflows rather than disruptive additions.
Finally, establishing clear regulatory and reimbursement pathways that provide guidance on validation, safety, liability, and sustainable funding models for AI-enabled care.
In conclusion, AI stands to make primary care more predictive, personalized, and efficient. However, its ultimate impact will be determined not by its computational power, but by our ability to navigate the complex interplay of technology, ethics, and human-centric care. By addressing the identified challenges with strategic investment, collaborative design, and a steadfast commitment to equity, healthcare leaders in Canada and the United States can guide the evolution of AI from a disruptive novelty into an indispensable, supportive tool that augments the human judgment and enduring patient-physician relationship at the heart of primary care.
Conflicts of Interest
The authors declare no conflicts of interest.
NOTES
*Corresponding author.
#Co-authors.