TITLE:
Explainable Credit Intelligence: A Unified SHAP-Based Framework for Interpretable Risk Scoring across Corporate and Retail Lending Domains
AUTHORS:
Omoshola Owolabi
KEYWORDS:
Explainable AI, Credit Risk Assessment, Financial Technology, Credit Intelligence, Machine Learning Interpretability, Risk Management, Algorithmic Transparency
JOURNAL NAME:
Journal of Data Analysis and Information Processing, Vol. 13, No. 4, November 13, 2025
ABSTRACT: This study proposes a dual-architecture Explainable Artificial Intelligence (XAI) framework designed to unify risk scoring methodologies across corporate and retail lending domains. The framework leverages wavelet-based decomposition to extract multi-resolution features from corporate cash flow time series, while employing Bidirectional Long Short-Term Memory (Bi-LSTM) autoencoders to generate latent representations of retail transaction behaviors. These heterogeneous representations are integrated via a novel interpretability mechanism, CrossSHAP, which enables cross-domain attribution analysis and consistent explanation of model outputs. The proposed system is further distinguished by its alignment with regulatory standards, incorporating automated mappings to Basel III Pillar 3 disclosures and Equal Credit Opportunity Act (ECOA) adverse action codes to support regulatory transparency and compliance. To facilitate model validation and fairness assessments, the framework also incorporates a synthetic data generation module that preserves high-order financial dependencies and inter-variable dynamics. Comprehensive evaluation following the SAFE ML paradigm demonstrates robust performance across the dimensions of safety, accountability, fairness, and ethics. The proposed architecture contributes to the advancement of interpretable machine learning in financial risk modeling by enabling robust, transparent, and regulation-aware credit decisioning across diverse borrower segments.
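
Illustrative sketch (not taken from the paper's implementation): the pipeline summarized in the abstract can be approximated with off-the-shelf tools. The snippet below derives wavelet-band energy features from synthetic corporate cash-flow series with PyWavelets, substitutes randomly generated vectors for the Bi-LSTM retail embeddings, fuses both into a single gradient-boosted scorer, and computes standard TreeSHAP attributions over the fused feature space. The data, the energy-based feature choice, and the use of plain SHAP in place of the paper's CrossSHAP mechanism are all assumptions made for illustration only.

# Hypothetical sketch: wavelet features (corporate) + latent vectors (retail)
# fused into one scorer, explained with standard SHAP as a stand-in for CrossSHAP.
import numpy as np
import pywt
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def wavelet_features(cash_flow, wavelet="db4", level=3):
    """Multi-resolution summary of a corporate cash-flow series:
    energy of each wavelet coefficient band (illustrative choice)."""
    coeffs = pywt.wavedec(cash_flow, wavelet=wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Toy data: 200 corporate cash-flow series of length 64, plus 8-dimensional
# retail "latent" vectors standing in for Bi-LSTM autoencoder embeddings
# (the autoencoder itself is omitted from this sketch).
cash_flows = rng.normal(size=(200, 64))
retail_latents = rng.normal(size=(200, 8))
default_label = rng.integers(0, 2, size=200)

corp_feats = np.vstack([wavelet_features(cf) for cf in cash_flows])
X = np.hstack([corp_feats, retail_latents])  # fused corporate + retail feature space
feature_names = (
    [f"corp_wavelet_band_{i}" for i in range(corp_feats.shape[1])]
    + [f"retail_latent_{j}" for j in range(retail_latents.shape[1])]
)

model = GradientBoostingClassifier().fit(X, default_label)

# Standard TreeSHAP attributions over the fused features; the paper's CrossSHAP
# mechanism additionally relates attributions across the two domains, which is
# not reproduced here.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name:>24s}: {value:+.4f}")

A design note on the sketch: fusing the two domains into one feature matrix before scoring is what lets a single attribution pass assign credit to both corporate wavelet bands and retail latent dimensions on a common scale, which is the property the abstract attributes to CrossSHAP.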