The Evolution of the Digital Predator: Using AI to Evade Security Controls
Since the advent of the computer, there has been a never-ending game of cat and mouse between those seeking to harm and those seeking to protect the end user.
SANS_The_Evolution_of_the_Digital_Predator_Using_AI_to_Evade_Security_Controls (PDF, 0.82 MB)
20 Dec 2023

Related Content
Post-Exploitation: C2 Framework Effectiveness Against Advanced Audit Logging
Research Paper
This research paper examines the effectiveness of a sample of open-source Command-and-Control (C2) frameworks in evading advanced audit logging during post-exploitation.
- 20 Mar 2026
Leveraging Generative AI for Password Cracking Efficiency Under Resource Constraints
Research Paper
The purpose of this research is to investigate whether generative AI can alleviate the hardware and financial burdens of password cracking (password recovery) while maintaining or even improving cracking success rates.
- 20 Mar 2026
Detecting AI Pickling
Research Paper
This study examines whether static analysis is a dependable "certification gate" for ingesting third-party, pickle-based AI model artifacts from open-source model hubs into a trusted internal registry.
- 12 Mar 2026
How Many LLMs Does it Take to Classify a Suspicious Email?
Research Paper
This study examines the accuracy, reliability, and operational behavior of three widely available LLMs using a dataset of 2,000 human-written emails containing both legitimate and suspicious messages.
- 12 Mar 2026
Autonomous Threat Emulation and Detection Using Agentic AI
Research Paper
Traditional threat emulation frameworks struggle to capture the dynamic and adaptive behaviours of modern Advanced Persistent Threats (APTs), leaving defenders reliant on static tests that quickly become obsolete.
- 10 Mar 2026
Evaluating Configurations for Reducing Problematic Emotional Engagement in Enterprise LLM Deployments: Implications for Insider Threat Risk
Research Paper
The risks of Large Language Models (LLMs) include triggering psychological drivers associated with malicious insider threat behavior. This study utilized AWS Bedrock to demonstrate that specific system-level configurations and guardrails can effectively mitigate these risks by reducing problematic human-AI engagement.
- 2 Mar 2026
Inside the Five Most Dangerous New Attack Techniques
Research Paper
This e-book represents the next evolution of that effort. Here, we take the five key topics presented from the keynote stage and expand them into four full-length chapters.
- 8 Dec 2025
- Heather Barnhart, Rob T. Lee, Joshua Wright, Tim Conway
Enhancing Security Operations with Google Threat Intelligence
Research Paper
This product review examines how Google Threat Intelligence's extensive data sources, real-time insights, and investigative capabilities can elevate SecOps workflows and strengthen an organization's defensive posture.
- 24 Nov 2025
- Dave Shackleford
No-Cost Detection of Endpoint Hard Drive Removal
Research Paper
This paper analyzes low-cost detection methods, using existing hard drive counters from Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) and the Windows Registry, for their fidelity in detecting hard drive removal.
- 19 Nov 2025
Automating Generative AI Guidelines: Reducing Prompt Injection Risk with 'Shift-Left' MITRE ATLAS Mitigation Testing
Research Paper
Automated testing during the build stage of the AI engineering life cycle can evaluate the effectiveness of generative AI guidelines against prompt injection attacks.
- 7 Nov 2025
Can Your Security Stack Handle AI? An Empirical Assessment of Enterprise Controls Versus Generative AI Risks
Research Paper
Enterprise security teams face a critical dilemma. Executives want AI productivity gains, but it remains uncertain if existing security controls can handle the risks.
- 6 Nov 2025
Evaluating Large Language Models for Automated Threat Modeling: A Comparative Analysis
Research Paper
This study investigates the use of Large Language Models (LLMs) as an assistant to conduct threat models of systems or applications.
- 6 Nov 2025
Modernizing OT Security: How Frenos Uses Digital Twin Technology, AI and Threat Emulation to Transform Security Posture and Compliance
Research Paper
This paper explores how Frenos aligns with important concepts like the SANS 5 ICS Critical Controls and supports regulatory objectives, while focusing on mitigating real-world exposures in your environment.
- 28 Oct 2025
- Jason Dely, Tim Conway
Interrogators: Attack Surface Mapping in an Agentic World
Research Paper
This research introduces the concept of AI agent interrogators and the open-source project Agent Interrogator, an opaque-box interrogation framework designed to map the attack surface of agentic systems.
- 23 Oct 2025
Fixing What You Broke: Can AI Be Used to Thwart AI-Generated Malware?
Research Paper
Security professionals are starting to rethink their approach to access control and monitoring for...
- 3 Sep 2025
The Mimic Octopus: Weaponizing File Corruption and Recoverability to Bypass Antivirus and Email Filtering
Research Paper
This paper investigates a novel tactic in phishing operations where threat actors intentionally corrupt document and archive files, such as DOCX, DOCM, PDF, and ZIP, to evade antivirus (AV) and email filtering systems.
- 3 Sep 2025
Trust But Verify: Evaluating the Accuracy of LLMs in Normalizing Threat Data Feeds
Research Paper
This paper examines whether Large Language Models (LLMs) can be reliably applied to the normalization of Indicators of Compromise (IOCs) into Structured Threat Information Expression (STIX) format.
- 16 Jul 2025
Do AI Coding Assistants Make Bad Coders Worse? A Security Evaluation of GitHub Copilot
Research Paper
As AI coding assistants become increasingly integral to software development, the security of their generated outputs is under greater scrutiny.
- 11 Jul 2025
AI-Driven Insecurity: Assessing Security Gaps in AI Generated IT Guidance
Research Paper
The increasing reliance on AI-generated technical guidance for IT system configuration introduces significant security risks. This study assesses these risks through a case study: setting up an Apache web server on a Rocky Linux system using instructions from seven AI models.
- 13 May 2025
From Crash to Compromise: Unlocking the Potential of Windows Crash Dumps in Offensive Security
Research Paper
This research explores how offensive security practitioners can incorporate crash dump analysis into their workflows to extract sensitive data, such as plaintext credentials, encryption keys, and files, from memory.
- 9 May 2025
- SANS Institute
