Leveraging Large Language Models for Security-Focused Code Reviews
This study investigates the potential application of Large Language Models (LLMs) in enhancing software security through automated vulnerability detection during the code review process.
sans-Leveraging-Large-Language-Models_McQuade (PDF, 0.30MB)
26 Mar 2025Related Content
Leveraging Generative AI for Password Cracking Efficiency Under Resource Constraints
Research PaperThe purpose of this research is to investigate whether generative AI can alleviate the hardware and financial burdens of password cracking (password recovery) while maintaining or even improving cracking success rates.
- 20 Mar 2026
Detecting AI Pickling
Research PaperThis study examines whether static analysis is a dependable "certification gate" for ingesting third-party, pickle-based AI model artifacts from open-source model hubs into a trusted internal registry.
- 12 Mar 2026
How Many LLMs Does it Take to Classify a Suspicious Email?
Research PaperThis study examines the accuracy, reliability, and operational behavior of three widely available LLMs using a dataset of 2000 human-written emails containing both legitimate and suspicious messages.
- 12 Mar 2026
Autonomous Threat Emulation and Detection Using Agentic AI
Research PaperTraditional threat emulation frameworks struggle to capture the dynamic and adaptive behaviours of modern Advanced Persistent Threats (APTs), leaving defenders reliant on static tests that quickly become obsolete.
- 10 Mar 2026
Evaluating Configurations for Reducing Problematic Emotional Engagement in Enterprise LLM Deployments: Implications for Insider Threat Risk
Research PaperThe risks of Large Language Models (LLMs) include triggering psychological drivers associated with malicious insider threat behavior. This study utilized AWS Bedrock to demonstrate that specific system-level configurations and guardrails can effectively mitigate these risks by reducing problematic human-AI engagement.
- 2 Mar 2026
Inside the Five Most Dangerous New Attack Techniques
Research PaperThis e-book represents the next evolution of that effort. Here, we take the five key topics presented from the keynote stage and expand them into four full-length chapters.
- 8 Dec 2025
- Heather Barnhart, Rob T. Lee, Joshua Wright, Tim Conway
No-Cost Detection of Endpoint Hard Drive Removal
Research PaperThis paper analyzes low-cost detection methods, using existing hard drive counters from Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) and the Windows Registry, for their fidelity in detecting hard drive removal.
- 19 Nov 2025
Automating Generative AI Guidelines: Reducing Prompt Injection Risk with 'Shift-Left' MITRE ATLAS Mitigation Testing
Research PaperAutomated testing during the build stage of the AI engineering life cycle can evaluate the effectiveness of generative AI guidelines against prompt injection attacks.
- 7 Nov 2025
Can Your Security Stack Handle AI? An Empirical Assessment of Enterprise Controls Versus Generative AI Risks
Research PaperEnterprise security teams face a critical dilemma. Executives want AI productivity gains, but it remains uncertain if existing security controls can handle the risks.
- 6 Nov 2025
Evaluating Large Language Models for Automated Threat Modeling: A Comparative Analysis
Research PaperThis study investigates the use of Large Language Models (LLMs) as an assistant to conduct threat models of systems or applications.
- 6 Nov 2025
Modernizing OT Security: How Frenos Uses Digital Twin Technology, AI and Threat Emulation to Transform Security Posture and Compliance
Research PaperThis paper explores how Frenos aligns with important concepts like SANS 5 ICS Critical Controls and supports regulatory objectives, while focusing on mitigating real-world exposures in your environment.
- 28 Oct 2025
- Jason Dely, Tim Conway
Fixing What You Broke: Can AI Be Used to Thwart AI-Generated Malware?
Research PaperSecurity professionals are starting to rethink their approach to access control and monitoring for...
- 3 Sep 2025
Trust But Verify: Evaluating the Accuracy of LLMs in Normalizing Threat Data Feeds
Research PaperThis paper examines whether Large Language Models (LLMs) can be reliably applied to the normalization of Indicators of Compromise (IOCs) into Structured Threat Information Expression (STIX) format.
- 16 Jul 2025
Do AI Coding Assistants Make Bad Coders Worse? A Security Evaluation of GitHub Copilot
Research PaperAs AI coding assistants become increasingly integral to software development, the security of their generated outputs is under greater scrutiny.
- 11 Jul 2025
AI-Driven Insecurity: Assessing Security Gaps in AI Generated IT Guidance
Research PaperThe increasing reliance on AI-generated technical guidance for IT system configuration introduces significant security risks. This study assesses these risks through a case study: setting up an Apache web server on a Rocky Linux system using instructions from seven AI models.
- 13 May 2025
SIEM Detection Logic Conversion with LLMs
Research PaperThis research explores how Large Language Models (LLMs) and automation scripts can expedite the translation of detection logic between SIEMs, converting detections in minutes instead of hours.
- 2 May 2025
MITRE ATT&CK Labeling of Cyber Threat Intelligence via LLM
Research PaperThis paper explores the effectiveness of various online and locally hosted LLMs in classifying an...
- 7 Jan 2025
Revolutionizing Cybersecurity: Implementing Large Language Models as Dynamic SOAR Tools
Research PaperThis research explores the potential of Large Language Models (LLMs), explicitly using ChatGPT...
- 5 Dec 2024
Leveraging Generative Artificial Intelligence for Memory Analysis
Research PaperThe increasing sophistication of malware poses significant challenges for traditional memory...
- 5 Dec 2024
Machine Learning: Preventing Network Abnormalities
Research PaperThe Department of Defense (DoD) developed and published multiple zero trust documents describing the...
- 30 Aug 2024
