Shining a Light on AI: Ensuring Vendor Transparency in Data Sourcing and Delivery

Amid the proliferation of AI solutions, this paper evaluates transparency, undisclosed system modifications, and data exfiltration risks in the privacy policies of vendors providing desktop applications, browser plug-ins, and browser-only AI solutions.

SANS_Shining_a_Light_on_AI_Ensuring_Vendor Transparency in Data Sourcing and Delivery - Publ (PDF, 0.93MB)

29 Jan 2024
By Brian P. Mohr
All papers are copyrighted. No re-posting of papers is permitted.

Related Content

Leveraging Generative AI for Password Cracking Efficiency Under Resource Constraints

Research Paper

The purpose of this research is to investigate whether generative AI can alleviate the hardware and financial burdens of password cracking (password recovery) while maintaining or even improving cracking success rates.

  • 20 Mar 2026

Detecting AI Pickling

Research Paper

This study examines whether static analysis is a dependable "certification gate" for ingesting third-party, pickle-based AI model artifacts from open-source model hubs into a trusted internal registry.

  • 12 Mar 2026

How Many LLMs Does it Take to Classify a Suspicious Email?

Research Paper

This study examines the accuracy, reliability, and operational behavior of three widely available LLMs using a dataset of 2000 human-written emails containing both legitimate and suspicious messages.

  • 12 Mar 2026

Autonomous Threat Emulation and Detection Using Agentic AI

Research Paper

Traditional threat emulation frameworks struggle to capture the dynamic and adaptive behaviours of modern Advanced Persistent Threats (APTs), leaving defenders reliant on static tests that quickly become obsolete.

  • 10 Mar 2026

Evaluating Configurations for Reducing Problematic Emotional Engagement in Enterprise LLM Deployments: Implications for Insider Threat Risk

Research Paper

The risks of Large Language Models (LLMs) include triggering psychological drivers associated with malicious insider threat behavior. This study utilized AWS Bedrock to demonstrate that specific system-level configurations and guardrails can effectively mitigate these risks by reducing problematic human-AI engagement.

  • 2 Mar 2026

No-Cost Detection of Endpoint Hard Drive Removal

Research Paper

This paper analyzes low-cost detection methods, using existing hard drive counters from Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) and the Windows Registry, for their fidelity in detecting hard drive removal.

  • 19 Nov 2025

Structural Vulnerability: Autodesk Revit Server WAN Exposure Versus Cost of Autodesk Construction Cloud

Research Paper

Autodesk Revit Server, a critical collaboration tool in the architecture, engineering, and construction (AEC) industry, was designed to operate within trusted networks.

  • 7 Nov 2025

Automating Generative AI Guidelines: Reducing Prompt Injection Risk with 'Shift-Left' MITRE ATLAS Mitigation Testing

Research Paper

Automated testing during the build stage of the AI engineering life cycle can evaluate the effectiveness of generative AI guidelines against prompt injection attacks.

  • 7 Nov 2025

Can Your Security Stack Handle AI? An Empirical Assessment of Enterprise Controls Versus Generative AI Risks

Research Paper

Enterprise security teams face a critical dilemma: executives want AI productivity gains, but it remains uncertain whether existing security controls can handle the risks.

  • 6 Nov 2025

Evaluating Large Language Models for Automated Threat Modeling: A Comparative Analysis

Research Paper

This study investigates the use of Large Language Models (LLMs) as an assistant to conduct threat models of systems or applications.

  • 6 Nov 2025

Privacy Protections: Are Stronger Laws Changing What We Reveal?

Research Paper

As U.S. states enact privacy laws aimed at giving consumers more control over their personal data, little is known about whether privacy legislation influences individuals’ willingness to disclose their identity on public platforms.

  • 26 Sep 2025

Forensic Investigation of Bluetooth-Based Credit Card Skimmers

Research Paper

Hidden Bluetooth Low Energy (BLE) credit card skimmers are a growing source of credit card fraud. Criminals can build practical, inexpensive systems on top of modules such as the HM-19 to collect stolen data and transmit it covertly over wireless channels.

  • 3 Sep 2025

Fixing What You Broke: Can AI Be Used to Thwart AI-Generated Malware?

Research Paper

Security professionals are starting to rethink their approach to access control and monitoring for...

  • 3 Sep 2025

Trust But Verify: Evaluating the Accuracy of LLMs in Normalizing Threat Data Feeds

Research Paper

This paper examines whether Large Language Models (LLMs) can be reliably applied to the normalization of Indicators of Compromise (IOCs) into Structured Threat Information Expression (STIX) format.

  • 16 Jul 2025

Do AI Coding Assistants Make Bad Coders Worse? A Security Evaluation of GitHub Copilot

Research Paper

As AI coding assistants become increasingly integral to software development, the security of their generated outputs is under greater scrutiny.

  • 11 Jul 2025

AI-Driven Insecurity: Assessing Security Gaps in AI Generated IT Guidance

Research Paper

The increasing reliance on AI-generated technical guidance for IT system configuration introduces significant security risks. This study assesses these risks through a case study: setting up an Apache web server on a Rocky Linux system using instructions from seven AI models.

  • 13 May 2025

SIEM Detection Logic Conversion with LLMs

Research Paper

This research explores how Large Language Models (LLMs) and automation scripts can expedite the translation of detection logic between SIEMs, converting detections in minutes instead of hours.

  • 2 May 2025

A Pebble In the Ocean: Maximizing Log Fidelity In Container Environments

Research Paper

Log fidelity is crucial for Incident Response Teams to investigate and contain cyber incidents but...

  • 17 Apr 2025

Leveraging Large Language Models for Security-Focused Code Reviews

Research Paper

This study investigates the potential application of Large Language Models (LLMs) in enhancing...

  • 26 Mar 2025

Unveiling the Dependency on Network Telemetry: Optimizing Lateral Movement Detection

Research Paper

This study investigates the dependency on network and endpoint telemetry for identifying lateral...

  • 17 Jan 2025