Trust But Verify: Evaluating the Accuracy of LLMs in Normalizing Threat Data Feeds
This paper examines whether Large Language Models (LLMs) can reliably normalize Indicators of Compromise (IOCs) into the Structured Threat Information Expression (STIX) format.
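To make the normalization task concrete, the sketch below shows what the target output of such a pipeline looks like: a raw IOC mapped to a minimal STIX 2.1 Indicator object. It uses only the Python standard library; the IOC-type mapping and the subset of STIX fields shown are illustrative assumptions, not the evaluation pipeline described in this paper.

```python
import json
import uuid
from datetime import datetime, timezone

def ioc_to_stix_indicator(ioc_value: str, ioc_type: str) -> dict:
    """Map a raw IOC string to a minimal STIX 2.1 Indicator object.

    Illustrative subset of IOC types; a real normalizer would cover
    many more observable types and validate the input value.
    """
    # STIX patterning syntax for a few common observable types.
    patterns = {
        "ipv4": f"[ipv4-addr:value = '{ioc_value}']",
        "domain": f"[domain-name:value = '{ioc_value}']",
        "sha256": f"[file:hashes.'SHA-256' = '{ioc_value}']",
    }
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "valid_from": now,
        "pattern": patterns[ioc_type],
        "pattern_type": "stix",
    }

indicator = ioc_to_stix_indicator("198.51.100.7", "ipv4")
print(json.dumps(indicator, indent=2))
```

Evaluating an LLM on this task then amounts to comparing its generated objects against such reference structures, field by field.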