TITLE:
Large Language Models (LLMs) for Software Security Analysis
AUTHORS:
Brian Mata, Vijay Madisetti
KEYWORDS:
Large Language Models, Software Security
JOURNAL NAME:
Journal of Information Security, Vol. 16, No. 2, April 29, 2025
ABSTRACT: Security vulnerabilities are a widespread and costly aspect of software engineering. Although tools exist to detect these vulnerabilities, non-machine learning techniques are often rigid and unable to detect many types of vulnerabilities, while machine learning techniques often struggle with large codebases. Recent work has aimed to combine traditional static analysis with machine learning. Our work enhances this by equipping LLM-based agents with classic static analysis tools, leveraging the strengths of both methods while addressing their inherent weaknesses. We achieved a false detection rate of 0.5696, significantly improving over the previous state-of-the-art LLM-enabled technique, IRIS, which has a false detection rate of 0.8482. Furthermore, using Claude Sonnet 3.5, our technique produces an F1 score of 0.1281, which is an improvement over the standard CodeQL suite and approaches IRIS’s score of 0.1770.
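The false detection rate and F1 score cited above can be read against their standard definitions over true/false positives and false negatives. The sketch below assumes those standard definitions (the paper may define its metrics slightly differently), and the counts used are illustrative, not taken from the paper's data:

```python
# Standard detection metrics (assumed definitions; illustrative only).

def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

def f1_score(tp: int, fp: int, fn: int) -> float:
    # Harmonic mean of precision and recall.
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def false_detection_rate(tp: int, fp: int) -> float:
    # Fraction of reported detections that are false positives,
    # i.e. 1 - precision.
    return fp / (tp + fp)

# Hypothetical counts for illustration (not the paper's results):
tp, fp, fn = 20, 80, 150
print(round(false_detection_rate(tp, fp), 4))  # → 0.8
print(round(f1_score(tp, fp, fn), 4))          # → 0.1481
```

Note that a low false detection rate with a modest F1 score, as reported here, indicates the technique trades some recall for substantially fewer false positives, which matters in practice since triaging false alarms is a major cost of static analysis tools.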