TITLE:
Combating Deepfake Threats Using X-FACTS Explainable CNN Framework for Enhanced Detection and Cybersecurity Resilience
AUTHORS:
Ugoaghalam Uche James, Hamed Salam Olarinoye, Ihuoma Remita Uchenna, Chima Nwankwo Idika, Obinna Jeff Ngene, Onuh Matthew Ijiga, Kelvin Itemuagbor
KEYWORDS:
Deepfake Detection, Explainable AI (XAI), Convolutional Neural Networks (CNN), Cybersecurity Resilience, Facial Artifact Analysis
JOURNAL NAME:
Advances in Artificial Intelligence and Robotics Research, Vol. 1, No. 1, August 25, 2025
ABSTRACT: The advancement of deepfake technologies, which leverage sophisticated artificial intelligence methods, poses substantial cybersecurity risks, including misinformation dissemination, identity manipulation, and political propaganda. To address these challenges, this research introduces a novel Convolutional Neural Network (CNN)-based deep learning model named X-FACTS (eXplainable Facial Artifact and Consistency Tracking System). The proposed model incorporates explainable AI (XAI), adversarial training, and frequency-domain analysis to enhance deepfake video detection, and is evaluated in comparative analyses against established deepfake detection approaches, including SHAP-based, LIME-based, Grad-CAM-based, and Multi-Stream Frequency-based models. The X-FACTS algorithm consistently demonstrated superior performance, achieving higher accuracy (92.3%), precision (0.94), recall (0.92), F1-score (0.91), and specificity (0.89). The results indicate that integrating a CNN architecture with explainability frameworks significantly improves the identification of artificially generated content. The study concludes that robust, transparent, and explainable deep learning approaches such as X-FACTS are essential to combat emerging AI-driven misinformation threats effectively, and it emphasizes the need for interdisciplinary cooperation to build resilient digital media forensic tools.
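The frequency-domain analysis mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's X-FACTS implementation; the function name `high_freq_energy_ratio` and the `cutoff` parameter are hypothetical illustrations of the general idea that synthesized imagery often exhibits atypical high-frequency spectral energy, which a 2D FFT can expose:

```python
import numpy as np

def high_freq_energy_ratio(image: np.ndarray, cutoff: float = 0.25) -> float:
    """Toy frequency-domain artifact score (illustrative only).

    Returns the fraction of the image's spectral energy lying outside a
    low-frequency disc of radius cutoff * min(H, W), centered at DC.
    A higher ratio means more high-frequency content, which some
    detection methods treat as a cue for synthetic imagery.
    """
    # Power spectrum with the DC component shifted to the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    radius = cutoff * min(h, w)
    high = spectrum[dist > radius].sum()
    return float(high / spectrum.sum())

rng = np.random.default_rng(0)
# Cumulative sums of noise produce a smooth, low-frequency-dominated image.
smooth = rng.random((64, 64)).cumsum(axis=0).cumsum(axis=1)
# Raw uniform noise is broadband, standing in for high-frequency artifacts.
noisy = rng.random((64, 64))
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))  # → True
```

In a full detector such a spectral score would be one input stream among several (spatial CNN features, facial-consistency cues), not a standalone classifier.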