Comparative Analysis of Anticholinergic Burden and Cognitive Decline in Elderly Patients on Long-Term Neuroleptic Therapy

Abstract

Elderly individuals undergoing long-term neuroleptic therapy are increasingly vulnerable to cognitive decline, a condition that significantly impairs quality of life and increases healthcare burden. One contributing factor is the cumulative anticholinergic burden from prescribed antipsychotic medications. This study explores the relationship between anticholinergic load and cognitive impairment in aging patients using an interpretable machine learning framework. We developed a synthetic dataset of 1000 geriatric patient profiles with realistic distributions of clinical and demographic features, including age, comorbidity count, treatment duration, medication load, and baseline cognitive status; the binary target variable represented observed cognitive decline. Our approach involved robust preprocessing through KNN imputation, feature scaling, and one-hot encoding, followed by oversampling of the minority class using SMOTE. We trained and evaluated three predictive models (Random Forest, XGBoost, and Logistic Regression) using stratified cross-validation and hyperparameter tuning. Logistic regression outperformed the tree-based models and the voting ensemble, achieving the highest ROC AUC of 0.702 on the test set, an improvement of +0.035 over XGBoost and +0.037 over Random Forest. Feature importance analysis identified anticholinergic burden, age, and vascular disease as leading contributors to cognitive decline, and SHAP (SHapley Additive exPlanations) values offered interpretable insights into individual prediction dynamics and global feature relevance. The findings support the hypothesis that increased anticholinergic burden elevates cognitive risk and underscore the utility of transparent AI tools in medical decision-making. These results pave the way for integrating explainable machine learning into geriatric pharmacovigilance and cognitive health monitoring, with the potential to inform personalized treatment strategies and reduce adverse neurocognitive outcomes in vulnerable populations.

Share and Cite:

de Filippis, R. and Al Foysal, A. (2025) Comparative Analysis of Anticholinergic Burden and Cognitive Decline in Elderly Patients on Long-Term Neuroleptic Therapy. Open Access Library Journal, 12, 1-17. doi: 10.4236/oalib.1113511.

1. Introduction

Cognitive decline is a prevalent concern in elderly populations, particularly among those receiving long-term neuroleptic (antipsychotic) therapy [1]-[4]. Several studies have documented the cognitive risks associated with anticholinergic drug use, especially in geriatric cohorts receiving long-term neuroleptic therapy. For instance, Carrière et al. (2009) and Green et al. (2019) demonstrated a strong dose-response relationship between anticholinergic load and memory impairment. However, prior works often lack predictive frameworks that incorporate multiple interacting clinical factors, a gap this study aims to address using machine learning. Neuroleptics, while effective for managing psychiatric and behavioural disorders such as schizophrenia, bipolar disorder, and dementia-related agitation, are frequently associated with anticholinergic side effects [5]-[8]. These effects stem from the ability of certain medications to block acetylcholine, a neurotransmitter essential for memory, attention, and executive functioning [9]-[12]. The cumulative impact of such drugs—referred to as anticholinergic burden—has been strongly correlated with the onset and progression of cognitive impairment, especially in older adults with existing vulnerabilities [13]-[17]. As the aging population increasingly experiences polypharmacy, the challenge of quantifying and mitigating cognitive risks associated with multiple drug classes becomes more complex [18]-[22]. Traditional statistical models and clinical scoring systems may overlook intricate interactions between medications, comorbidities, and patient-specific factors [23]-[26]. Moreover, these methods often assume linear relationships and fail to adapt to diverse, high-dimensional healthcare data [27]-[31]. To address these limitations, this study introduces a machine learning (ML)-driven analytical framework that combines predictive modelling with explainable artificial intelligence (XAI) techniques [32]-[35]. Specifically, we investigate how the anticholinergic burden, in conjunction with other clinical features such as age, comorbidity count, medication load, and neuroleptic type, contributes to the probability of cognitive decline in elderly individuals. Using a rigorously constructed synthetic dataset reflecting realistic clinical distributions, we evaluate multiple ML models and apply SHAP (SHapley Additive exPlanations) to interpret their behaviour. Our goal is not only to enhance predictive accuracy but also to ensure transparency in how individual features influence model output—an essential step toward clinical trust and adoption. Ultimately, this work aims to support early identification of at-risk individuals and inform safer prescribing practices for elderly patients undergoing neuroleptic therapy.

2. Methodology

2.1. Dataset Generation

To conduct a controlled yet meaningful analysis, we developed a synthetic dataset that closely mimics the clinical characteristics of elderly patients undergoing long-term neuroleptic therapy. The decision to use synthetic data was driven by the need to simulate a sufficiently large and diverse population while maintaining control over variable distributions and minimizing potential privacy concerns associated with real-world health records [36]-[39]. The dataset consists of 1000 individual patient profiles, each representing a unique case with multiple clinical and demographic variables. The simulation process was guided by published epidemiological data and clinical expertise to ensure that the generated distributions align with realistic geriatric populations [40]-[44].

The core variables were generated as follows:

  • Age: Sampled from a normal distribution centred around 75 years with a standard deviation of 8, then truncated to fall within the 65 to 100-year range. This reflects the typical age spectrum for older adults on antipsychotic treatment and captures both younger seniors and the oldest-old demographic.

  • Anticholinergic Burden: Modelled using a gamma distribution, which effectively captures the skewed nature of medication exposure due to polypharmacy. Most patients carry a moderate burden, while a smaller subset exhibits disproportionately high levels—mirroring real clinical scenarios where some elderly individuals are prescribed multiple high-risk medications.

  • Cognitive Decline: This was engineered as a binary classification target (0 = No Decline, 1 = Decline) based on a logistic regression-based probability function. The function incorporated age, anticholinergic burden, comorbidity count, neuroleptic class, vascular disease, depression diagnosis, and medication count as inputs. Coefficients were selected based on existing literature and clinical hypotheses, thereby ensuring that the simulated label generation remained biologically and clinically plausible.

This structured and clinically informed synthetic dataset serves as a robust foundation for downstream machine learning tasks, enabling us to evaluate model performance, feature importance, and risk stratification strategies in a reproducible and scalable manner.
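
The sketch below illustrates how such a cohort can be simulated with NumPy. It is a minimal illustration rather than the study's generation script: the logistic-link coefficients are hypothetical placeholders chosen only to approximate the stated (~10%) prevalence of decline, and the auxiliary feature distributions (comorbidity count, medication count, binary diagnoses) are assumptions.

```python
# Minimal sketch of the synthetic cohort described above (illustrative only).
# The logistic-link coefficients below are hypothetical placeholders, not the
# literature-derived weights used in the study.
import numpy as np

rng = np.random.default_rng(42)
n = 1000

age = np.clip(rng.normal(75, 8, n), 65, 100)          # clipped to the 65-100 range
ach_burden = rng.gamma(shape=2.0, scale=1.5, size=n)  # right-skewed anticholinergic load
comorbidities = rng.poisson(2, n)                     # assumed count distribution
med_count = rng.poisson(6, n)                         # assumed polypharmacy level
vascular = rng.binomial(1, 0.3, n)                    # assumed prevalence
depression = rng.binomial(1, 0.25, n)                 # assumed prevalence
atypical = rng.binomial(1, 0.6, n)                    # neuroleptic class (1 = atypical)

logit = (-5.0 + 0.04 * (age - 75) + 0.45 * ach_burden + 0.20 * comorbidities
         + 0.10 * med_count + 0.50 * vascular + 0.40 * depression - 0.20 * atypical)
p_decline = 1 / (1 + np.exp(-logit))
cognitive_decline = rng.binomial(1, p_decline)        # binary target (0 = no decline, 1 = decline)
print("Simulated prevalence of decline:", cognitive_decline.mean())
```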

2.2. Features and Target Variable

The dataset was designed to capture a broad spectrum of clinical and demographic attributes that could influence the risk of cognitive decline in elderly patients undergoing neuroleptic therapy. These attributes were categorized into numeric (continuous) and categorical (discrete) variables to facilitate appropriate preprocessing, feature engineering, and model interpretation.

Numeric Features

1) Age: Measured in years, this variable reflects the biological aging of the patient. Given its well-documented role in neurodegeneration and vulnerability to medication side effects, age was treated as a continuous variable ranging from 65 to 100.

2) Anticholinergic Burden: A cumulative score representing the total anticholinergic load from prescribed medications. Higher values suggest greater interference with cholinergic neurotransmission, which is closely linked to cognitive performance.

3) Baseline MMSE (Mini-Mental State Examination): A widely accepted quantitative measure of cognitive function at the start of therapy. Scores range from 0 to 30, with lower scores indicating greater impairment. This feature acts as a proxy for cognitive reserve.

4) Treatment Duration (Years): Represents the length of time the patient has been on neuroleptic medications. Chronic exposure may amplify the neurocognitive risks posed by antipsychotic agents.

5) Comorbidity Count: Indicates the number of coexisting medical conditions, such as diabetes, hypertension, or chronic kidney disease. This variable reflects overall health complexity and frailty.

6) Medication Count: Captures the total number of medications being taken, offering a direct measure of polypharmacy. It is also indirectly related to anticholinergic burden and adverse drug interactions.

Categorical Features

1) Gender: Classified as ‘Male’ or ‘Female’. Gender differences in pharmacokinetics and cognitive aging are well-established and were included to observe any predictive disparities.

2) Neuroleptic Class: Differentiates between Typical (first-generation) and Atypical (second-generation) antipsychotics. These classes differ in receptor profiles and anticholinergic properties.

3) Depression Diagnosis: A binary indicator (0 = No, 1 = Yes) showing whether the patient has a clinically diagnosed depressive disorder. Depression itself is a known risk factor for cognitive decline.

4) Vascular Disease: Another binary feature representing the presence of cardiovascular or cerebrovascular conditions, which can independently contribute to neurocognitive impairment.

Target Variable: The target outcome, Cognitive Decline, is a binary classification label (0 = No Decline, 1 = Decline) generated using a logistic model that integrates multiple clinical risk factors. This outcome was chosen to reflect a practical clinical endpoint, enabling binary prediction tasks suitable for machine learning classification models.

2.3. Data Preprocessing

To ensure data quality, consistency, and compatibility with machine learning algorithms, we implemented a robust and modular data preprocessing pipeline. This pipeline was designed to handle missing values, normalize feature distributions, encode categorical variables, and address the substantial class imbalance present in the target variable.

  • Imputation

Missing data is a common issue in clinical datasets. To maintain dataset integrity while minimizing bias, we used different imputation strategies based on feature type:

  • Numeric Features: For continuous variables such as age, MMSE score, and medication count, we applied K-Nearest Neighbors Imputation (KNNImputer). This method estimates missing values based on the average of the nearest samples in feature space, preserving underlying data structure and local variability [45]-[48].

  • Categorical Features: For discrete variables like gender or neuroleptic class, we used mode imputation, replacing missing entries with the most frequently occurring category. This technique is effective for minimizing distortion in nominal distributions [49]-[52].

KNN imputation was selected for its robustness in preserving local feature structure, which is important in high-dimensional clinical datasets, while mode imputation maintains the most representative category for categorical features. Alternative methods, including mean and MICE imputation, were explored, but KNN showed better alignment with feature variability in our synthetic-data pilot runs.

  • Normalization

After imputation, all numeric features were scaled using StandardScaler, which transforms each variable to have zero mean and unit variance. This is critical for algorithms such as logistic regression and distance-based models that are sensitive to feature magnitude. Normalization ensures that variables like age and anticholinergic burden contribute proportionally to the model during training.

  • Encoding

Categorical variables (e.g., gender, neuroleptic class) were processed using OneHotEncoder, which converts each category into a binary vector representation. This technique prevents algorithms from assuming ordinal relationships between categories and enables fair model treatment of all categorical classes.

  • Resampling (Class Imbalance Handling)

Given the low prevalence of cognitive decline in the dataset (~10%), we applied Synthetic Minority Over-sampling Technique (SMOTE). This algorithm generates synthetic samples from the minority class by interpolating between existing examples, thereby improving the classifier’s ability to learn rare-event patterns without duplicating existing instances.
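
A minimal sketch of this preprocessing stage is shown below. The column names are illustrative (taken from the feature descriptions in Section 2.2, not from the study's code), and scikit-learn ≥ 1.2 is assumed.

```python
# Preprocessing sketch: KNN imputation + scaling for numeric features,
# mode imputation + one-hot encoding for categorical features.
from sklearn.compose import ColumnTransformer
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

numeric_cols = ["age", "anticholinergic_burden", "baseline_mmse",
                "treatment_duration", "comorbidity_count", "medication_count"]
categorical_cols = ["gender", "neuroleptic_class", "depression_dx", "vascular_disease"]

numeric_pipe = Pipeline([
    ("impute", KNNImputer(n_neighbors=5)),    # estimate gaps from nearest patient profiles
    ("scale", StandardScaler()),              # zero mean, unit variance
])
categorical_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),                       # mode imputation
    ("encode", OneHotEncoder(handle_unknown="ignore", sparse_output=False)),   # binary indicators
])

preprocessor = ColumnTransformer([
    ("num", numeric_pipe, numeric_cols),
    ("cat", categorical_pipe, categorical_cols),
])
```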

2.4. Model Development and Optimization

To predict the likelihood of cognitive decline in elderly patients undergoing neuroleptic therapy, we implemented a structured and comparative machine learning approach. The goal was to assess both predictive performance and interpretability across different algorithmic paradigms.

Model Selection

Three supervised classification models were developed within dedicated pipelines, each incorporating the same preprocessing and resampling components for fair comparison:

1) Logistic Regression: A widely used baseline model, particularly valued in clinical applications due to its interpretability. It assumes a linear relationship between features and the log-odds of the target. Despite its simplicity, it performed the best in this study, achieving a test-set ROC AUC of 0.702.

2) Extreme Gradient Boosting (XGBoost): A powerful tree-based ensemble model known for its ability to handle nonlinearity, interaction effects, and feature importance evaluation. This model achieved a ROC AUC of 0.669, offering moderate performance but lower transparency compared to logistic regression.

3) Random Forest: A bagging ensemble of decision trees designed to reduce overfitting and improve robustness. Though known for strong baseline performance across many tasks, it achieved a slightly lower ROC AUC of 0.667 in this context.

Pipeline Construction

Each model was embedded into a complete scikit-learn pipeline, integrating the preprocessing steps (as described in Section 2.3), SMOTE-based resampling, and the respective classifier. This modular design ensured reproducibility, ease of parameter tuning, and compatibility with cross-validation tools.
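
A sketch of one such pipeline is given below, assuming the preprocessor object from the preprocessing sketch in Section 2.3; imbalanced-learn's Pipeline is used so that SMOTE is applied only during fitting, never at prediction time.

```python
# One model pipeline: shared preprocessing + SMOTE resampling + classifier.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as ImbPipeline
from sklearn.linear_model import LogisticRegression

logreg_pipeline = ImbPipeline([
    ("preprocess", preprocessor),           # ColumnTransformer from Section 2.3
    ("smote", SMOTE(random_state=42)),      # oversample the minority (Decline) class during fit
    ("model", LogisticRegression(max_iter=1000)),
])
# The XGBoost and Random Forest pipelines are built the same way, swapping in
# XGBClassifier or RandomForestClassifier as the "model" step.
```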

Hyperparameter Optimization

For each model, a dedicated GridSearchCV routine was applied using 5-fold stratified cross-validation to ensure balanced evaluation across both classes. The search spanned hyperparameters such as regularization strength (logistic regression), maximum depth and learning rate (XGBoost), and tree complexity settings (Random Forest).
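
The snippet below sketches this tuning step for the logistic regression pipeline. The grid shown is an example rather than the study's exact search space, and X_train/y_train are assumed to come from a stratified train/test split.

```python
# Hyperparameter tuning with 5-fold stratified cross-validation, scored by ROC AUC.
from sklearn.model_selection import GridSearchCV, StratifiedKFold

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
param_grid = {"model__C": [0.01, 0.1, 1.0, 10.0]}   # regularization strength (example grid)
search = GridSearchCV(logreg_pipeline, param_grid, scoring="roc_auc", cv=cv, n_jobs=-1)
search.fit(X_train, y_train)                        # X_train, y_train assumed available
print(search.best_params_, round(search.best_score_, 3))
```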

Ensemble Model: To leverage complementary strengths across models, we constructed a soft voting ensemble classifier, combining the predictions from the three trained pipelines. This ensemble aggregated probability outputs and performed class prediction based on the average of model confidences.
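
A soft-voting ensemble of this kind can be assembled as sketched below, assuming xgb_pipeline and rf_pipeline have been built analogously to the logistic regression pipeline above.

```python
# Soft voting: average the predicted class probabilities of the three pipelines.
from sklearn.ensemble import VotingClassifier

ensemble = VotingClassifier(
    estimators=[("lr", logreg_pipeline), ("xgb", xgb_pipeline), ("rf", rf_pipeline)],
    voting="soft",
)
ensemble.fit(X_train, y_train)
```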

Model Selection Criteria: The final model was selected based on mean cross-validated ROC AUC scores across folds. Logistic regression emerged as the best-performing and most consistent model, offering both strong discrimination capability and clinical interpretability—critical features in real-world deployment.

3. Results

This section presents a comprehensive evaluation of model performance, predictive behaviour, and feature-level interpretability, focusing on the best-performing classifier—logistic regression. Each subsection integrates visual diagnostics, performance metrics, and explanatory insights to contextualize the model’s strengths and limitations.

3.1. Predictive Performance

Out of the three models evaluated—Logistic Regression, XGBoost, and Random Forest—logistic regression consistently achieved the best discrimination between patients with and without cognitive decline. The key evaluation metrics on the test set were:

  • Overall Accuracy: 84%, indicating strong general performance across both classes.

  • Precision (Decline): 0.29, suggesting that only 29% of patients predicted to have cognitive decline were actual decline cases.

  • Recall (Decline): 0.33, meaning the model correctly identified 33% of all patients who truly experienced cognitive decline.

  • F1 Score (Decline): 0.31, representing the harmonic mean of precision and recall, useful for imbalanced classification tasks.

Despite a strong accuracy score, the relatively low precision and recall for the minority class (Decline) highlight a key challenge: imbalanced datasets hinder minority detection, even when advanced techniques like SMOTE are applied. This underscores the importance of using AUC and class-specific metrics in addition to global accuracy for healthcare risk modelling [53] [54].
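
These class-specific metrics can be reproduced from the held-out test set as sketched below; variable names follow the earlier sketches, and X_test/y_test are assumed to be available.

```python
# Test-set evaluation of the tuned logistic regression pipeline ("search" from Section 2.4).
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_pred = search.predict(X_test)
y_prob = search.predict_proba(X_test)[:, 1]

print("Accuracy            :", accuracy_score(y_test, y_pred))
print("Precision (Decline) :", precision_score(y_test, y_pred))
print("Recall (Decline)    :", recall_score(y_test, y_pred))
print("F1 (Decline)        :", f1_score(y_test, y_pred))
print("ROC AUC             :", roc_auc_score(y_test, y_prob))
```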

3.2. Confusion Matrix Analysis

Figure 1 illustrates the confusion matrix of the logistic regression classifier. The matrix indicates excellent performance for the majority class (No Decline), with a high number of true negatives and few false positives. However, the model’s sensitivity to cognitive decline (true positives) remains limited, as evidenced by a moderate number of false negatives, i.e., instances where high-risk individuals were misclassified as low-risk. This behaviour reflects the inherent difficulty of capturing rare events when signal separation between classes is weak or masked by overlapping feature distributions. From a clinical standpoint, the cost of false negatives (failing to identify high-risk individuals) is notably more consequential than that of false positives. This warrants the future incorporation of cost-sensitive learning frameworks to minimize harm. Threshold tuning and reweighting strategies may also help balance sensitivity and specificity, as illustrated in the sketch below.

Figure 1. Confusion matrix for logistic regression. Each cell represents prediction accuracy per class.
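
As a simple illustration of the threshold-tuning point above, the probability cut-off of the tuned model can be lowered below the default 0.5 to trade precision for recall on the Decline class (a sketch, reusing y_prob and y_test from the evaluation snippet in Section 3.1).

```python
# Lowering the decision threshold increases sensitivity to Decline cases,
# at the cost of more false positives.
from sklearn.metrics import precision_score, recall_score

for threshold in (0.5, 0.4, 0.3, 0.2):
    y_pred_t = (y_prob >= threshold).astype(int)
    print(f"threshold={threshold:.1f}  "
          f"recall={recall_score(y_test, y_pred_t):.2f}  "
          f"precision={precision_score(y_test, y_pred_t, zero_division=0):.2f}")
```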

3.3. ROC Curve Interpretation

Figure 2 presents the Receiver Operating Characteristic (ROC) curve, a graphical representation of the trade-off between sensitivity (true positive rate) and 1-specificity (false positive rate) across classification thresholds.

  • The Area Under the Curve (AUC) was 0.68, signifying moderate discriminative ability.

  • This value indicates that, on average, the model has a 68% probability of ranking a randomly selected patient with cognitive decline higher than one without.

Although not exceeding conventional clinical thresholds (e.g., AUC ≥ 0.75), the curve demonstrates a useful early-warning signal capability, especially at lower false-positive rates, a valuable feature for screening tools that prioritize safety and pre-emptive monitoring.

Figure 2. ROC curve of the logistic regression model. The AUC value quantifies overall classification strength.

3.4. Logistic Regression Coefficients

Figure 3 showcases the standardized regression coefficients from the logistic model. This coefficient-based feature importance allows direct interpretability regarding both magnitude and direction of influence:

  • Anticholinergic Burden had the strongest positive coefficient, solidifying its role as the most critical contributor to cognitive decline.

  • Vascular Disease and Depression Diagnosis were also positively associated, aligning with known clinical comorbidities that elevate dementia risk.

  • Age, interestingly, carried a negative coefficient, which may seem counterintuitive. This could reflect statistical suppression, feature correlation (e.g., age vs. MMSE), or the synthetic population distribution where younger seniors with comorbidities might be disproportionately affected.

This linear breakdown offers clear, clinician-friendly insights and supports practical risk stratification.

Figure 3. Logistic regression coefficients with directionality. Features with positive values increase risk; negative values are protective.
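
The coefficients plotted in Figure 3 can be recovered from the tuned pipeline as sketched below, reusing the "search" object from the tuning snippet; feature names come from the fitted preprocessor.

```python
# Extract standardized coefficients from the best logistic regression pipeline.
import pandas as pd

fitted = search.best_estimator_                                   # preprocessing + SMOTE + model
feature_names = fitted.named_steps["preprocess"].get_feature_names_out()
coefs = pd.Series(fitted.named_steps["model"].coef_[0], index=feature_names)
print(coefs.sort_values())   # positive values increase predicted risk; negative values are protective
```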

3.5. SHAP Summary Plot: Explainability and Interaction

To further enhance interpretability beyond linear assumptions, we employed SHAP (SHapley Additive exPlanations)—a game-theory-based approach that quantifies the marginal impact of each feature on individual predictions [55]-[57].

Figure 4 presents the SHAP summary plot:

  • The horizontal axis indicates the magnitude and direction of impact on prediction probability.

  • Each dot represents a patient; its color shows the feature’s value (e.g., red = high, blue = low).

  • Features like anticholinergic burden and age consistently pushed predictions toward higher risk when values were elevated.

  • The spread of SHAP values for comorbidity and depression suggests potential interaction effects, where context (e.g., co-occurring vascular disease) may modulate predictive strength.

This plot not only validates the regression coefficients but also uncovers nonlinearities and heterogeneous effects—critical for personalized medicine applications. Notably, SHAP interaction effects suggested that the impact of comorbidity count on risk was magnified in patients also diagnosed with vascular disease. Similarly, depression diagnosis showed greater predictive weight in patients on high anticholinergic load, highlighting potential synergy between psychiatric comorbidities and pharmacologic risk.

Figure 4. SHAP Summary Plot illustrating feature contributions across the dataset. Larger values indicate stronger influence on model output.
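
A sketch of how such a summary plot can be produced for the linear model is shown below, reusing the fitted pipeline and feature names from the previous snippet. The study does not specify which SHAP explainer was used, so a LinearExplainer is assumed here.

```python
# SHAP values for the logistic regression model on the preprocessed test features.
import shap

X_test_enc = fitted.named_steps["preprocess"].transform(X_test)
explainer = shap.LinearExplainer(fitted.named_steps["model"], X_test_enc)
shap_values = explainer.shap_values(X_test_enc)
shap.summary_plot(shap_values, X_test_enc, feature_names=feature_names)
```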

4. Discussion

This study provides compelling evidence that anticholinergic burden represents a clinically significant and quantifiable risk factor for cognitive decline in elderly patients receiving long-term neuroleptic therapy. Through the application of machine learning and interpretable AI methods, we were able to identify anticholinergic load, vascular comorbidities, and affective disorders (e.g., depression) as prominent contributors to cognitive vulnerability in this high-risk population [58] [59]. One of the most striking findings was the superior performance of logistic regression compared to more complex models such as Random Forest and XGBoost. This suggests that the underlying relationship between the selected features and cognitive decline may be predominantly linear, particularly when data are well-pre-processed and feature distributions are normalized. Moreover, the interpretability of logistic regression provides a critical advantage in clinical decision-making environments where transparency and explainability are paramount. The integration of SHAP (SHapley Additive Explanations) further strengthened the model’s clinical relevance by offering individualized and global explanations for prediction outcomes. SHAP visualizations revealed consistent directional effects for key predictors such as anticholinergic burden and age, while also highlighting nuanced interactions between variables like comorbidity and neuroleptic class. These insights align with existing literature on the pharmacological and pathophysiological mechanisms of cognitive decline, reinforcing the biological plausibility of our model outputs [60]-[62].

Despite the promising results, several limitations warrant careful consideration:

1) Synthetic Dataset: While constructed to mimic real-world clinical distributions, the use of simulated data limits ecological validity. The absence of true biological variability, temporal trends, and unmeasured confounders restricts generalizability. The choice to use synthetic data was also guided by strict privacy regulations and the absence of ethically accessible real-world datasets with complete neurocognitive labels. Nonetheless, the dataset was grounded in epidemiological and clinical distributions drawn from prior research. Validation against available cohort statistics from population studies (e.g., MMSE distributions, comorbidity prevalence) ensured external plausibility.

2) Class Imbalance: Despite the use of SMOTE, the model demonstrated modest sensitivity (recall) toward the minority class (i.e., patients experiencing cognitive decline). This limitation underscores the difficulty of detecting rare but clinically significant outcomes in imbalanced datasets—a challenge that is amplified in real-world settings where misclassification may delay critical interventions.

3) Feature Set Scope: The current analysis relies on a predefined set of variables, which, although clinically grounded, may omit important factors such as genetic predisposition, medication adherence, or neuroimaging biomarkers.

To enhance clinical applicability, future work should validate the proposed pipeline using real-world EHR data enriched with longitudinal follow-up. Additionally, incorporating temporal modelling, drug-specific pharmacodynamic profiles, and multi-modal data (e.g., imaging, lab tests, cognitive scores over time) could further improve model robustness and clinical utility. Exploration of cost-sensitive learning and ensemble techniques tailored for rare events may also help to improve minority class performance in practical deployments.

5. Conclusion

This study highlights the potential of machine learning (ML)—particularly interpretable models such as logistic regression—to serve as powerful tools for risk stratification of cognitive decline in elderly patients undergoing long-term neuroleptic therapy. By leveraging clinically meaningful features such as anticholinergic burden, vascular and psychiatric comorbidities, and medication complexity, the proposed framework provides a data-driven method for identifying individuals at elevated risk of neurocognitive deterioration. A key contribution of this work lies in its commitment to transparency and explainability, which are essential for the integration of artificial intelligence into real-world clinical practice [63] [64]. The use of interpretable coefficients and SHAP-based explanations allowed for clear, intuitive understanding of the relationships between input features and model predictions. This not only enhances trust among clinicians but also enables tailored patient counselling and personalized treatment adjustments based on model insights. The findings reinforce a growing body of literature that links cumulative anticholinergic exposure with adverse cognitive outcomes and demonstrate how these effects can be captured and quantified using robust analytical techniques. Despite the synthetic nature of the data, the model effectively replicates known clinical patterns and offers a scalable approach for broader healthcare implementation. Future work should focus on validating this framework using real-world electronic health record (EHR) data, which would allow for the incorporation of additional variables such as medication dosing, treatment adherence, and longitudinal cognitive assessments. Expanding the model to include drug-specific pharmacodynamics, temporal progression analysis, and genetic or imaging biomarkers could further enhance its predictive accuracy and clinical impact [65]-[67]. This study underscores the feasibility and value of explainable machine learning in supporting early intervention, optimizing pharmacotherapy, and safeguarding cognitive health in vulnerable aging populations.

Conflicts of Interest

The authors declare no conflicts of interest.


References

[1] Byerly, M.J., Weber, M.T., Brooks, D.L., Snow, L.R., Worley, M.A. and Lescouflair, E. (2001) Antipsychotic Medications and the Elderly. Drugs & Aging, 18, 45-61.
https://doi.org/10.2165/00002512-200118010-00004
[2] McShane, R., Keene, J., Gedling, K., Fairburn, C., Jacoby, R. and Hope, T. (1997) Do Neuroleptic Drugs Hasten Cognitive Decline in Dementia? Prospective Study with Necropsy Follow Up. British Medical Journal, 314, 266-266.
https://doi.org/10.1136/bmj.314.7076.266
[3] Jeste, D.V., Blazer, D., Casey, D., Meeks, T., Salzman, C., Schneider, L., et al. (2007) ACNP White Paper: Update on Use of Antipsychotic Drugs in Elderly Persons with Dementia. Neuropsychopharmacology, 33, 957-970.
https://doi.org/10.1038/sj.npp.1301492
[4] Liperoti, R., Sganga, F., Landi, F., Topinkova, E., Denkinger, M.D., van der Roest, H.G., et al. (2017) Antipsychotic Drug Interactions and Mortality among Nursing Home Residents with Cognitive Impairment. The Journal of Clinical Psychiatry, 78, e76-e82.
https://doi.org/10.4088/jcp.15m10303
[5] Marcinkowska, M., Śniecikowska, J., Fajkis, N., Paśko, P., Franczyk, W. and Kołaczkowski, M. (2020) Management of Dementia-Related Psychosis, Agitation and Aggression: A Review of the Pharmacology and Clinical Effects of Potential Drug Candidates. CNS Drugs, 34, 243-268.
https://doi.org/10.1007/s40263-020-00707-7
[6] Sharma, B., Das, S., Mazumder, A., Rautela, D.S., Tyagi, P.K. and Khurana, N. (2024) The Role of Neurotransmitter Receptors in Antipsychotic Medication Efficacy for Alzheimer’s-Related Psychosis. The Egyptian Journal of Neurology, Psychiatry and Neurosurgery, 60, Article No. 75.
https://doi.org/10.1186/s41983-024-00848-2
[7] Bernardo, C.G., Singh, V. and Thompson, P.M. (2008) Safety and Efficacy of Psychopharmacological Agents Used to Treat the Psychiatric Sequelae of Common Neurological Disorders. Expert Opinion on Drug Safety, 7, 435-445.
https://doi.org/10.1517/14740338.7.4.435
[8] Gareri, P., De Fazio, P., Manfredi, V.G.L. and De Sarro, G. (2014) Use and Safety of Antipsychotics in Behavioral Disorders in Elderly People with Dementia. Journal of Clinical Psychopharmacology, 34, 109-123.
https://doi.org/10.1097/jcp.0b013e3182a6096e
[9] Klinkenberg, I., Sambeth, A. and Blokland, A. (2011) Acetylcholine and Attention. Behavioural Brain Research, 221, 430-442.
https://doi.org/10.1016/j.bbr.2010.11.033
[10] Wallace, T.L. and Bertrand, D. (2013) Importance of the Nicotinic Acetylcholine Receptor System in the Prefrontal Cortex. Biochemical Pharmacology, 85, 1713-1720.
https://doi.org/10.1016/j.bcp.2013.04.001
[11] Potter, A.S., Newhouse, P.A. and Bucci, D.J. (2006) Central Nicotinic Cholinergic Systems: A Role in the Cognitive Dysfunction in Attention-Deficit/Hyperactivity Disorder? Behavioural Brain Research, 175, 201-211.
https://doi.org/10.1016/j.bbr.2006.09.015
[12] Floresco, S.B. and Jentsch, J.D. (2010) Pharmacological Enhancement of Memory and Executive Functioning in Laboratory Animals. Neuropsychopharmacology, 36, 227-250.
https://doi.org/10.1038/npp.2010.158
[13] Collamati, A., Martone, A.M., Poscia, A., Brandi, V., Celi, M., Marzetti, E., et al. (2015) Anticholinergic Drugs and Negative Outcomes in the Older Population: From Biological Plausibility to Clinical Evidence. Aging Clinical and Experimental Research, 28, 25-35.
https://doi.org/10.1007/s40520-015-0359-7
[14] Attoh-Mensah, E., Loggia, G., Schumann-Bard, P., Morello, R., Descatoire, P., Marcelli, C., et al. (2020) Adverse Effects of Anticholinergic Drugs on Cognition and Mobility: Cutoff for Impairment in a Cross-Sectional Study in Young–Old and Old–Old Adults. Drugs & Aging, 37, 301-310.
https://doi.org/10.1007/s40266-019-00743-z
[15] Cardwell, K., Hughes, C.M. and Ryan, C. (2015) The Association between Anticholinergic Medication Burden and Health Related Outcomes in the ‘Oldest Old’: A Systematic Review of the Literature. Drugs & Aging, 32, 835-848.
https://doi.org/10.1007/s40266-015-0310-9
[16] Green, A.R., Reifler, L.M., Bayliss, E.A., Weffald, L.A. and Boyd, C.M. (2019) Drugs Contributing to Anticholinergic Burden and Risk of Fall or Fall-Related Injury among Older Adults with Mild Cognitive Impairment, Dementia and Multiple Chronic Conditions: A Retrospective Cohort Study. Drugs & Aging, 36, 289-297.
https://doi.org/10.1007/s40266-018-00630-z
[17] Carrière, I., Fourrier-Reglat, A., Dartigues, J., Rouaud, O., Pasquier, F., Ritchie, K., et al. (2009) Drugs with Anticholinergic Properties, Cognitive Decline, and Dementia in an Elderly General Population. Archives of Internal Medicine, 169, 1317-1324.
https://doi.org/10.1001/archinternmed.2009.229
[18] Mehta, R.S., Kochar, B.D., Kennelty, K., Ernst, M.E. and Chan, A.T. (2021) Emerging Approaches to Polypharmacy among Older Adults. Nature Aging, 1, 347-356.
https://doi.org/10.1038/s43587-021-00045-3
[19] Mair, A., Wilson, M. and Dreischulte, T. (2020) Addressing the Challenge of Polypharmacy. Annual Review of Pharmacology and Toxicology, 60, 661-681.
https://doi.org/10.1146/annurev-pharmtox-010919-023508
[20] Alhozim, B.M.A., Almutairi, E.T., Albutyan, Z.Y., Alzahrani, N.A., Alonizy, M.M., Albutyan, L.Y., et al. (2024) The Impact of Polypharmacy on Drug Efficacy and Safety in Geriatric Populations. Egyptian Journal of Chemistry, 67, 1533-1540.
https://doi.org/10.21608/ejchem.2024.337875.10834
[21] Edelman, E.J., Gordon, K.S., Glover, J., McNicholl, I.R., Fiellin, D.A. and Justice, A.C. (2013) The Next Therapeutic Challenge in HIV: Polypharmacy. Drugs & Aging, 30, 613-628.
https://doi.org/10.1007/s40266-013-0093-9
[22] Keine, D., Zelek, M., Walker, J.Q. and Sabbagh, M.N. (2019) Polypharmacy in an Elderly Population: Enhancing Medication Management through the Use of Clinical Decision Support Software Platforms. Neurology and Therapy, 8, 79-94.
https://doi.org/10.1007/s40120-019-0131-6
[23] Foluke, E. (2024) Machine Learning for Chronic Kidney Disease Progression Modelling: Leveraging Data Science to Optimize Patient Management. World Journal of Advanced Research and Reviews, 24, 453-475.
https://doi.org/10.30574/wjarr.2024.24.3.3730
[24] Levy, J.J., Lima, J.F., Miller, M.W., Freed, G.L., O'Malley, A.J. and Emeny, R.T. (2022) Machine Learning Approaches for Hospital Acquired Pressure Injuries: A Retrospective Study of Electronic Medical Records. Frontiers in Medical Technology, 4, Article 926667.
https://doi.org/10.3389/fmedt.2022.926667
[25] de Filippis, R. and Al Foysal, A. (2025) A Machine Learning Approach to Predicting Treatment Outcomes in Bipolar Depression with OCD Comorbidity. Open Access Library Journal, 12, 1-20.
https://doi.org/10.4236/oalib.1112894
[26] Alaa, A.M. and van der Schaar, M. (2017) Deep Multi-Task Gaussian Processes for Survival Analysis with Competing Risks. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, 4-9 December 2017, 2326-2334.
[27] Patra, S.S., Harshvardhan, G.M., Gourisaria, M.K., Mohanty, J.R. and Choudhury, S. (2021) Emerging Healthcare Problems in High-Dimensional Data and Dimension Reduction. In: Lecture Notes on Data Engineering and Communications Technologies, Springer, 25-49.
https://doi.org/10.1007/978-981-16-0538-3_2
[28] Dinov, I.D. (2016) Methodological Challenges and Analytic Opportunities for Modeling and Interpreting Big Healthcare Data. GigaScience, 5, 1-15.
https://doi.org/10.1186/s13742-016-0117-6
[29] Wilson, A. and Anwar, M.R. (2024) The Future of Adaptive Machine Learning Algorithms in High-Dimensional Data Processing. International Transactions on Artificial Intelligence, 3, 97-107.
https://doi.org/10.33050/italic.v3i1.656
[30] Bühlmann, P. and Van De Geer, S. (2011) Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Science & Business Media.
[31] Salerno, S. and Li, Y. (2023) High-dimensional Survival Analysis: Methods and Applications. Annual Review of Statistics and Its Application, 10, 25-49.
https://doi.org/10.1146/annurev-statistics-032921-022127
[32] Dang, M., Xiang, H., Wang, Y., Li, F. and Nguyen, T.N. (2022) Explainable Artificial Intelligence: A Comprehensive Review. Artificial Intelligence Review, 55, 3503-3568.
[33] Hassija, V., Chamola, V., Mahapatra, A., Singal, A., Goel, D., Huang, K., et al. (2023) Interpreting Black-Box Models: A Review on Explainable Artificial Intelligence. Cognitive Computation, 16, 45-74.
https://doi.org/10.1007/s12559-023-10179-8
[34] Machlev, R., Heistrene, L., Perl, M., Levy, K.Y., Belikov, J., Mannor, S., et al. (2022) Explainable Artificial Intelligence (XAI) Techniques for Energy and Power Systems: Review, Challenges and Opportunities. Energy and AI, 9, Article 100169.
https://doi.org/10.1016/j.egyai.2022.100169
[35] Adadi, A. and Berrada, M. (2018) Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138-52160.
https://doi.org/10.1109/access.2018.2870052
[36] Raghunathan, T.E. (2021) Synthetic Data. Annual Review of Statistics and Its Application, 8, 129-140.
https://doi.org/10.1146/annurev-statistics-040720-031848
[37] de Melo, C.M., Torralba, A., Guibas, L., DiCarlo, J., Chellappa, R. and Hodgins, J. (2022) Next-Generation Deep Learning Based on Simulators and Synthetic Data. Trends in Cognitive Sciences, 26, 174-187.
https://doi.org/10.1016/j.tics.2021.11.008
[38] Smith, D.M., Clarke, G.P. and Harland, K. (2009) Improving the Synthetic Data Generation Process in Spatial Microsimulation Models. Environment and Planning A: Economy and Space, 41, 1251-1268.
https://doi.org/10.1068/a4147
[39] Nicolaie, M.A., Füssenich, K., Ameling, C. and Boshuizen, H.C. (2023) Constructing Synthetic Populations in the Age of Big Data. Population Health Metrics, 21, Article No. 19.
https://doi.org/10.1186/s12963-023-00319-5
[40] Kopec, J.A., Finès, P., Manuel, D.G., Buckeridge, D.L., Flanagan, W.M., Oderkirk, J., et al. (2010) Validation of Population-Based Disease Simulation Models: A Review of Concepts and Methods. BMC Public Health, 10, Article No. 710.
https://doi.org/10.1186/1471-2458-10-710
[41] Kingston, A., Comas-Herrera, A. and Jagger, C. (2018) Forecasting the Care Needs of the Older Population in England over the Next 20 Years: Estimates from the Population Ageing and Care Simulation (PACSim) Modelling Study. The Lancet Public Health, 3, e447-e455.
https://doi.org/10.1016/s2468-2667(18)30118-x
[42] Groves-Kirkby, N., Wakeman, E., Patel, S., Hinch, R., Poot, T., Pearson, J., et al. (2023) Large-Scale Calibration and Simulation of COVID-19 Epidemiologic Scenarios to Support Healthcare Planning. Epidemics, 42, Article 100662.
https://doi.org/10.1016/j.epidem.2022.100662
[43] Badr, H.S., Zaitchik, B.F., Kerr, G.H., Nguyen, N.H., Chen, Y., Hinson, P., et al. (2023) Unified Real-Time Environmental-Epidemiological Data for Multiscale Modeling of the COVID-19 Pandemic. Scientific Data, 10, Article No. 367.
https://doi.org/10.1038/s41597-023-02276-y
[44] Maringe, C., Benitez Majano, S., Exarchakou, A., Smith, M., Rachet, B., Belot, A., et al. (2020) Reflection on Modern Methods: Trial Emulation in the Presence of Immortal-Time Bias. Assessing the Benefit of Major Surgery for Elderly Lung Cancer Patients Using Observational Data. International Journal of Epidemiology, 49, 1719-1729.
https://doi.org/10.1093/ije/dyaa057
[45] Lee, J.Y. and Styczynski, M.P. (2018) NS-kNN: A Modified K-Nearest Neighbors Approach for Imputing Metabolomics Data. Metabolomics, 14, Article No. 153.
https://doi.org/10.1007/s11306-018-1451-8
[46] De Silva, H. and Perera, A.S. (2017) Evolutionary K-Nearest Neighbor Imputation Algorithm for Gene Expression Data. International Journal on Advances in ICT for Emerging Regions, 10, 11-18.
https://doi.org/10.4038/icter.v10i1.7183
[47] Keerin, P. and Boongoen, T. (2022) Estimation of Missing Values in Astronomical Survey Data: An Improved Local Approach Using Cluster Directed Neighbor Selection. Information Processing & Management, 59, Article 102881.
https://doi.org/10.1016/j.ipm.2022.102881
[48] Das, C., Bose, S., Chattopadhyay, M. and Chattopadhyay, S. (2016) A Novel Distance-Based Iterative Sequential KNN Algorithm for Estimation of Missing Values in Microarray Gene Expression Data. International Journal of Bioinformatics Research and Applications, 12, Article 312.
https://doi.org/10.1504/ijbra.2016.080719
[49] Luengo, J., García, S. and Herrera, F. (2011) On the Choice of the Best Imputation Methods for Missing Values Considering Three Groups of Classification Methods. Knowledge and Information Systems, 32, 77-108.
https://doi.org/10.1007/s10115-011-0424-2
[50] Lakshminarayan, K., Harp, S.A. and Samad, T. (1999) Imputation of Missing Data in Industrial Databases. Applied Intelligence, 11, 259-275.
https://doi.org/10.1023/a:1008334909089
[51] Aljuaid, T. and Sasi, S. (2016) Proper Imputation Techniques for Missing Values in Data Sets. 2016 International Conference on Data Science and Engineering, Cochin, 23-25 August 2016, 1-5.
https://doi.org/10.1109/icdse.2016.7823957
[52] Huisman, M. (2009) Imputation of Missing Network Data: Some Simple Procedures. Journal of Social Structure, 10, 1-29.
[53] Aubaidan, B.H., Kadir, R.A. and Ijab, M.T. (2024) A Comparative Analysis of Smote and CSSF Techniques for Diabetes Classification Using Imbalanced Data. Journal of Computer Science, 20, 1146-1165.
https://doi.org/10.3844/jcssp.2024.1146.1165
[54] Veerla, S., Devadasan, A.V., Masum, M., Chowdhury, M. and Shahriar, H. (2024) E-SMOTE: Entropy Based Minority Oversampling for Heart Failure and AIDS Clinical Trails Analysis. 2024 IEEE 48th Annual Computers, Software, and Applications Conference, Osaka, 2-4 July 2024, 1841-1846.
https://doi.org/10.1109/compsac61105.2024.00291
[55] Gayane, G. (2024) Explainable Artificial Intelligence: Methods and Evaluation. PhD Dissertation, Old Dominion University.
[56] Muralidhara, C.K.B. (2024) Interpretability of Classification & Regression Ensemble Models.
[57] Tsai, C.P., Yeh, C.-K. and Ravikumar, P. (2023) Faith-Shap: The Faithful Shapley Interaction Index. Journal of Machine Learning Research, 24, 1-42.
[58] Alsaleh, M.M., Allery, F., Choi, J.W., Hama, T., McQuillin, A., Wu, H., et al. (2023) Prediction of Disease Comorbidity Using Explainable Artificial Intelligence and Machine Learning Techniques: A Systematic Review. International Journal of Medical Informatics, 175, Article 105088.
https://doi.org/10.1016/j.ijmedinf.2023.105088
[59] Mohanty, S.D., Lekan, D., McCoy, T.P., Jenkins, M. and Manda, P. (2022) Machine Learning for Predicting Readmission Risk among the Frail: Explainable AI for Healthcare. Patterns, 3, Article 100395.
https://doi.org/10.1016/j.patter.2021.100395
[60] Bloomingdale, P., Karelina, T., Ramakrishnan, V., Bakshi, S., Véronneau-Veilleux, F., Moye, M., et al. (2022) Hallmarks of Neurodegenerative Disease: A Systems Pharmacology Perspective. Pharmacometrics & Systems Pharmacology, 11, 1399-1429.
https://doi.org/10.1002/psp4.12852
[61] Mostafavi, S., Gaiteri, C., Sullivan, S.E., White, C.C., Tasaki, S., Xu, J., et al. (2018) A Molecular Network of the Aging Human Brain Provides Insights into the Pathology and Cognitive Decline of Alzheimer’s Disease. Nature Neuroscience, 21, 811-819.
https://doi.org/10.1038/s41593-018-0154-9
[62] Geerts, H. (2025) Quantitative Systems Pharmacology Development and Application in Neuroscience. In: Handbook of Experimental Pharmacology, Springer, 1-50.
https://doi.org/10.1007/164_2024_739
[63] Karalis, V.D. (2024) The Integration of Artificial Intelligence into Clinical Practice. Applied Biosciences, 3, 14-44.
https://doi.org/10.3390/applbiosci3010002
[64] Albahri, A.S., Duhaim, A.M., Fadhel, M.A., Alnoor, A., Baqer, N.S., Alzubaidi, L., et al. (2023) A Systematic Review of Trustworthy and Explainable Artificial Intelligence in Healthcare: Assessment of Quality, Bias Risk, and Data Fusion. Information Fusion, 96, 156-191.
https://doi.org/10.1016/j.inffus.2023.03.008
[65] de Lange, E.C.M., van den Brink, W., Yamamoto, Y., de Witte, W.E.A. and Wong, Y.C. (2017) Novel CNS Drug Discovery and Development Approach: Model-Based Integration to Predict Neuro-Pharmacokinetics and Pharmacodynamics. Expert Opinion on Drug Discovery, 12, 1207-1218.
https://doi.org/10.1080/17460441.2017.1380623
[66] Vamathevan, J., Clark, D., Czodrowski, P., Dunham, I., Ferran, E., Lee, G., et al. (2019) Applications of Machine Learning in Drug Discovery and Development. Nature Reviews Drug Discovery, 18, 463-477.
https://doi.org/10.1038/s41573-019-0024-5
[67] de Vries, E.G.E., Kist de Ruijter, L., Lub-de Hooge, M.N., Dierckx, R.A., Elias, S.G. and Oosting, S.F. (2018) Integrating Molecular Nuclear Imaging in Clinical Research to Improve Anticancer Therapy. Nature Reviews Clinical Oncology, 16, 241-255.
https://doi.org/10.1038/s41571-018-0123-y

Copyright © 2025 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.