TITLE:
The Double-Edged Sword of AI in Cybersecurity: Organizational Risks, Defensive Strategies, and Governance Implications
AUTHORS:
Nonye Peter Awurum
KEYWORDS:
Artificial Intelligence, Machine Learning, Organizational Cybersecurity, Adversarial AI, Explainable AI, Cyber Defense, Governance, Risk Management
JOURNAL NAME:
Open Access Library Journal, Vol. 12, No. 11, November 7, 2025
ABSTRACT: In recent times, the world has experienced the impact of artificial intelligence (AI) and machine learning (ML) on the digital ecosystem, and organizations now face both unprecedented opportunities and complex risks. While AI-driven tools strengthen cybersecurity defenses through anomaly detection, predictive analytics, and automated incident response, the same technologies are being weaponized by cybercriminals (black hat hackers) to conduct more sophisticated and evasive attacks. This duality, with AI and ML functioning both as powerful defensive tools and as sophisticated offensive weapons, has created a “double-edged sword” in organizational cybersecurity, requiring leaders to balance innovation with resilience in order to thrive and to ensure business continuity. The researcher adopted a qualitative multiple-case design to examine the dual role of AI and ML in organizational cybersecurity, with a focus on US-based technology firms. Data for this study were gathered from interviews with cybersecurity professionals, organizational documents, and secondary sources, and then analyzed thematically to uncover prevailing patterns, risks, and defense strategies. The findings reveal four dominant themes: 1) the rise of offensive AI, including polymorphic malware and AI-driven phishing; 2) organizational investments in AI-powered defense frameworks; 3) the essential role of human factors, such as employee awareness and executive decision-making; and 4) governance and regulatory challenges in managing AI adoption. The researcher emphasizes both the transformative benefits of AI-enabled defense and the growing dangers of adversarial AI. Practical implications include the need for explainable AI (XAI) in decision-making, integration of AI with established frameworks such as the NIST CSF and MITRE ATT&CK, and stronger cross-sector collaboration to manage ethical and governance concerns. This article contributes to the scholarly understanding of AI’s double-edged impact and provides actionable strategies for decision-makers tasked with securing digital infrastructure in an evolving threat landscape.