Cybersecurity Research Papers

Under the guidance and review of our world-class instructors, SANS Technology Institute master's degree candidates conduct research that is relevant, has real-world impact, and often contributes cutting-edge advancements to the field of cybersecurity knowledge. Here are some highlights of their recent findings.

EDR Evasion: Stranger things in a payload
By Christopher Watson
July 28, 2021

Cloud Forensics Triage Framework (CFTF)
By Michael Beck
July 28, 2021

  • Digital media forensic investigations come in multiple forms, spanning single assets - a thumb drive, laptop, mobile phone, or a single email server - up to large-scale corporate incident response actions. In corporate network investigations, analysts can become overwhelmed by the volume of internal hosts of interest that must be forensically triaged and analyzed. The pressure to produce evidence to support or refute a case remains the same: analysts need to deliver evidence as quickly as possible while maintaining proper evidence handling procedures. Endpoint Detection and Response (EDR) tools do an excellent job of identifying these systems and providing a platform to collect data, but the next step - preparing and analyzing these hosts - is time-consuming. This is where a Cloud Forensics Triage Framework (CFTF) can leverage cloud resources to set up a scalable, automated forensic triage framework that benefits digital media forensic investigators. The research explores combining traditional forensic media collection processes with modern cloud technologies to determine whether reducing the time it takes to process media improves the overall mean time to deliver results.

Content Security Policy Bypass: Exploiting Misconfigurations
By James Casteel
July 15, 2021

  • Content Security Policy (CSP) is designed to help mitigate content injection attacks such as XSS. While it can be helpful as a part of a defense-in-depth strategy, misconfigurations may be bypassed, especially when used as a sole defensive mechanism. Content Security Policy configurations can be very complex, leaving gaps in coverage when utilizing older or larger web applications. Bypassing Content Security Policy misconfigurations can often be trivial in a complex application. This research analyzes how CSP works as well as bypass techniques and methodologies to help exploit policy misconfigurations.
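As an illustration of the kind of misconfiguration such research examines, the sketch below flags a few commonly bypassable CSP settings. This is an assumption-laden simplification, not the paper's methodology: the `audit_csp` helper and its risky-source list are illustrative, and a real audit would cover every directive and fallback rule.

```python
# Minimal sketch: flag CSP settings that common bypass techniques target.
# Parsing is simplified; directive semantics follow the CSP spec loosely.

RISKY_SOURCES = {"'unsafe-inline'", "'unsafe-eval'", "*", "data:"}

def parse_csp(header: str) -> dict:
    """Split a Content-Security-Policy header into {directive: [sources]}."""
    policy = {}
    for directive in filter(None, (d.strip() for d in header.split(";"))):
        name, *sources = directive.split()
        policy[name.lower()] = sources
    return policy

def audit_csp(header: str) -> list:
    """Return human-readable findings for obviously bypassable settings."""
    policy = parse_csp(header)
    findings = []
    for name, sources in policy.items():
        for risky in RISKY_SOURCES & set(sources):
            findings.append(f"{name} allows {risky}")
    # Without object-src (or a default-src fallback), plugin-based
    # injection can remain possible even with a strict script-src.
    if "object-src" not in policy and "default-src" not in policy:
        findings.append("no object-src or default-src fallback")
    return findings
```

For example, `audit_csp("default-src 'self'; script-src 'self' 'unsafe-inline'")` reports the inline-script weakness that makes many XSS payloads viable despite a policy being present.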

Information Security Starts with the Employees
By Simone Genna
July 8, 2021

  • Organizations continue to spend exorbitant budgets to combat the issue of insider threat with one source estimating it at $270B/year by 2026 (Forbes, 2020). By comparison, the cost to put a man on the moon, possibly the greatest accomplishment in the history of mankind, was $283B (adjusted for inflation) and that was spread across thirteen years from 1960 to 1973. The cybersecurity industry’s approach to insiders has reached a tipping point where the methodology and framework have become unscalable, inefficient, and ineffective. The only strategy appears to be doubling down on buying more technical solutions.

IT Service Management and Infosec: Collaborate for Mutual Success
By Kevin Geil
June 30, 2021

  • Collaboration between information security and IT is critical to the success of both teams. Information security frameworks and IT service management methodologies share a foundation in asset management, configuration management, and change management. This research describes the nexus between information security and IT service management by mapping ITIL version 4 management practices to the CIS Critical Security Controls. It shows that in many cases, information security controls and IT service management practices can be implemented and audited using the same steps.

CIS CSC Controls vs. Ransomware: An Evaluation
By Dylan Malloy
May 19, 2021

  • Cybercriminals continue to develop and enhance both new and existing ransomware variants, exploiting vulnerabilities to compromise computer systems and wreak havoc on individuals and organizations. Ransomware, while ever-changing, typically relies on a lack of controls that would allow it to be promptly stopped or eradicated; conversely, many controls set out to reduce the overall impact of ransomware, if not stop it entirely. Organizations often try to protect themselves from ransomware by investing money in their security stack: anti-virus, Endpoint Detection and Response, and Host Intrusion Prevention Systems. However, these tools will not be nearly as effective without the proper controls to align their functions. Implementing the CIS Critical Security Controls can significantly reduce the impact of ransomware, or even potentially stop it in its tracks, meaning minimal disruptions to operations.

Staying Invisible: Analyzing Private Browsing and Anti-forensics on Mac OS X
By Rick Schroeder
May 6, 2021

  • The increasing desire to protect personal information has resulted in enhanced privacy features in web browsers. Private browsing modes combined with the growing popularity of disk cleaning tools present a problem for forensic analysts. The increase in privacy features results in a reduction of forensic evidence on the suspect system. This added complexity makes it difficult for an investigator to determine which websites were browsed by the suspect. When the primary sources of forensic evidence are tampered with, it is necessary to identify secondary sources. In Windows-based investigations, secondary evidence is often discovered within hibernation files, operating system artifacts, or error logs. Digital forensic analysts require similar files in macOS. They need to understand how and when logs are written. Identifying and understanding secondary sources of evidence is essential for an analyst to support the details of their case.

ExcavationPack: A Framework for Processing Data Dumps
By TJ Nicholls
May 6, 2021

  • Data dumped online from breaches is rich with information but can be challenging to process. The data is often unstructured and littered with different data types. This research presents a framework using Docker containers to process unstructured data. The container-focused approach enables flexible data processing strategies, horizontal scaling of resources, evaluation of the efficacy of those strategies, and future growth. Security professionals utilizing this framework will be able to identify points of interest in data dumps.
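A core step in processing unstructured dumps is deciding what each line is so it can be routed to the right handler. The sketch below is an assumed illustration of that idea, not ExcavationPack's implementation; the type labels and regexes are simplified examples.

```python
import re

# Sketch: classify lines from an unstructured dump so each data type
# can be routed to a dedicated processing stage (e.g., per-type container).
PATTERNS = {
    "email": re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$"),
    "md5": re.compile(r"^[0-9a-fA-F]{32}$"),
    "sha1": re.compile(r"^[0-9a-fA-F]{40}$"),
    "ipv4": re.compile(r"^(\d{1,3}\.){3}\d{1,3}$"),
}

def classify_line(line: str) -> str:
    """Return the first matching data type, or 'unknown'."""
    line = line.strip()
    for label, pattern in PATTERNS.items():
        if pattern.match(line):
            return label
    return "unknown"
```

In a containerized pipeline, a classifier like this would sit at the front, with each label mapped to a queue consumed by a type-specific worker container.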

GPS for Authentication: Is the Juice Worth the Squeeze?
By Adam Baker
May 6, 2021

  • For decades, location has been used as a validating factor in authentication. However, this has almost exclusively reflected IP address-based geolocation, a far less precise data point than a GPS coordinate. This paper will compare the precision of IP address location data to that of GPS coordinates, to determine if the increased available precision of GPS coordinates provides sufficient enhancement in value to justify expanding the use of GPS coordinates for authentication.
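The precision gap the paper studies can be made concrete with a geofence check. The sketch below is illustrative only: the error radii are assumptions (city-level IP geolocation is often off by tens of kilometers, consumer GPS by tens of meters), and the function names are not from the paper.

```python
from math import radians, sin, cos, asin, sqrt

# Assumed, illustrative uncertainty radii for the two location sources.
IP_GEO_RADIUS_KM = 50.0   # typical city-level IP geolocation error
GPS_RADIUS_KM = 0.05      # ~50 m consumer GPS error

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def within_policy(lat, lon, home_lat, home_lon, radius_km):
    """Location factor for authentication: is the point inside the fence?"""
    return haversine_km(lat, lon, home_lat, home_lon) <= radius_km
```

With GPS-level radii, a fence can distinguish a login from across town; with IP-level radii, the same check can only distinguish roughly city-sized regions, which is the trade-off the paper weighs.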

Vulnerability Management Blueprint for the Clinical Environment
By Adi Sitnica
April 14, 2021

  • The industry-standard vulnerability management process is largely inapplicable within clinical settings. Unique medical industry-specific devices and other complexities and limitations, such as vendor-owned and managed systems and regulated and other non-standard hardware, limit the general effectiveness of the process. This document explores a standard clinical footprint and provides guidance (or a 'blueprint') for further developing and maturing the vulnerability management operational model for clinical settings, with the primary goal of risk reduction within the confines of a clinical environment.

A Multi-leveled Approach for Detection of Coercive Malicious Documents Employing Optical Character Recognition
By Josiah Smith
April 8, 2021

  • Authors of malicious documents often include a graphical asset used to lure the potential victim to "enable editing" and to "enable content" to activate the macro's embedded logic. While these graphical lures vary in theme, language, and content, they commonly have similar coercive text. Using Optical Character Recognition to produce text files of the images provides the ability to anchor the images' contents. While attackers have been known to intentionally manipulate images to bypass OCR-based detection, some additional techniques can surface the textual contents. Optical Character Recognition can be utilized to track, pivot, and cluster malicious campaigns, identify new TTPs, and possibly provide attribution against adversaries.
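Once OCR has turned a lure image into text, detection reduces to scanning that text for coercive wording. The sketch below illustrates that last step under stated assumptions: the phrase list and threshold are made up for illustration, and real OCR output (e.g., from Tesseract) would be noisier.

```python
# Sketch: score OCR-extracted text from a document's lure image for the
# coercive phrases commonly used to get victims to enable macros.
COERCIVE_PHRASES = (
    "enable editing",
    "enable content",
    "enable macros",
    "protected document",
)

def coercion_score(ocr_text: str) -> int:
    """Count distinct coercive phrases present in the OCR output."""
    text = ocr_text.lower()
    return sum(phrase in text for phrase in COERCIVE_PHRASES)

def is_suspicious(ocr_text: str, threshold: int = 2) -> bool:
    """Flag documents whose lure text crosses an (assumed) threshold."""
    return coercion_score(ocr_text) >= threshold
```

Beyond flagging, the extracted text can serve as a pivot key: identical or near-identical lure text across samples is one way to cluster a campaign, as the abstract suggests.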

Malware Detection in Encrypted TLS Traffic Through Machine Learning
By Bryan Scarbrough
March 10, 2021

  • The proliferation of TLS across the Internet leads to a safer environment for the end user but a more obscure setting for the network defender. This research demonstrates what can be learned using Machine Learning analysis of TLS traffic without decryption. It applies a novel approach to TLS analysis by analyzing data available in the unencrypted portion of the handshake combined with Open-source Intelligence (OSINT) data about Internet Protocol (IP) addresses and domain names. The metadata is then analyzed using three different machine learning algorithms: Support Vector Machine (SVM), One-Class SVM (OC-SVM), and an Autoencoder Neural Network. This research also addresses the imbalanced data distribution between malicious and benign traffic with the OC-SVM and the Autoencoder Neural Network. Finally, this research demonstrates that when using the correct header data, the SVM and OC-SVM classify malware with a more than 99% F2 score, and the Autoencoder achieves approximately 95% F2.
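The abstract reports results as F2 scores. For readers unfamiliar with the metric, the sketch below computes the general F-beta score; beta=2 weights recall more heavily than precision, which suits malware detection, where a missed detection usually costs more than a false alarm. This is the standard formula, not code from the paper.

```python
def f_beta(tp: int, fp: int, fn: int, beta: float = 2.0) -> float:
    """F-beta score from confusion-matrix counts.

    beta > 1 favors recall over precision; beta = 2 gives the F2 score
    cited in the abstract.
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

Note the asymmetry: for the same number of total errors, F2 penalizes false negatives more than false positives, so a detector tuned for F2 leans toward catching more malware at the cost of extra alerts.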

Remote Workforce Impact on Threat Defenses
By Sean Goodwin
March 10, 2021

  • As organizations embrace remote work, the defensive security posture needs to be re-examined to effectively address threats while facing new or different constraints and tools. This paper investigates the prevention and detection control effectiveness against the known adversary Tactics, Techniques, and Procedures (TTPs) documented within the MITRE ATT&CK (R) taxonomy in a remote working (work from home, WFH) environment.

Preventing Windows 10 SMHNR DNS Leakage
By Robert Upchurch
March 3, 2021

  • Microsoft enables Smart Multi-Homed Name Resolution (SMHNR) by default, sending name lookups out of all the connected interfaces for all configured name resolution protocols: DNS, LLMNR, and NetBIOS over TCP/IP (NetBT). Research on the effect that SMHNR has on DNS behavior showed that several users were concerned with DNS leakage ("DNS Leaks," 2017). DNS leakage occurs when unauthorized parties can observe, intercept, and possibly tamper with the name lookups or the lookup responses. Users were also frustrated by operational issues, such as attempting to resolve a private network hostname and receiving no response, a slow response, or an incorrect response while connected to a VPN ("Windows 10", 2015). This frustration led to users attempting to disable SMHNR ("Turn Off," 2021), but it did not always resolve the issue. The process to disable SMHNR varied based on the edition of Windows used, so the goal was to investigate the effect of SMHNR on DNS behavior and pursue an edition-agnostic, native operating system method to mitigate that effect. Testing revealed that Name Resolution Policy Table (NRPT) rules provided a simple, scalable, and agile mechanism for controlling DNS client behavior that was effective across the multiple editions of Windows and worked irrespective of whether SMHNR was on or off.

Improving Incident Response Through Simplified Lessons Learned Data Capture
By Andrew Baze
February 17, 2021

  • The Lessons Learned portion of the cybersecurity incident response process is often neglected, resulting in unfortunate missed opportunities that could help teams mature, identify important trends, and improve their security. Common incident handling frameworks and compliance regimes describe time-consuming and relatively complex processes designed to capture these valuable lessons. While an extensive and resource-heavy process may be necessary in some cases, it is often difficult for incident response teams to dedicate sufficient time to capture this lesson data at the end of an incident. Dedicating time is even more difficult when the team is simultaneously handling other incidents. This paper addresses the planning and implementation of a simplified approach to capturing Lessons Learned data at any time, as opposed to at the conclusion of an incident. This approach includes a tagging schema and demonstrates how identification of lesson type, sub-type, and associated work items can provide valuable data to further an organization's original Lessons Learned goals.
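A tagging schema like the one the abstract describes can be very lightweight. The sketch below is a hypothetical format invented for illustration - `type/sub-type#work-item`, e.g. `detection/logging-gap#TICKET-123` - not the paper's actual schema; the field names and separators are assumptions.

```python
# Sketch of a minimal lessons-learned tag parser and trend summary.
# Tag format (assumed for illustration): "type/sub-type#work-item".

def parse_lesson_tag(tag: str) -> dict:
    """Split a lesson tag into its type, sub-type, and optional work item."""
    tag, _, work_item = tag.partition("#")
    lesson_type, _, sub_type = tag.partition("/")
    return {
        "type": lesson_type,
        "sub_type": sub_type or None,
        "work_item": work_item or None,
    }

def summarize(tags):
    """Count lessons per type - the kind of trend data the paper aims to surface."""
    counts = {}
    for tag in tags:
        t = parse_lesson_tag(tag)["type"]
        counts[t] = counts.get(t, 0) + 1
    return counts
```

Because each tag can be recorded in seconds at any point during an incident, the data capture no longer has to wait for a formal post-incident review.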

Collection and Analysis of Serial-Based Traffic in Critical Infrastructure Control Systems
By Jonathan Baeckel
February 11, 2021

  • There is a blind spot the size of a 27-ton, 2.25-megawatt maritime diesel generator in the world's critical infrastructure control system (CICS) landscape. CICSs carry a much larger ratio of non-routable traffic, such as serial-based Fieldbus communications, than their IT-based brethren, which almost exclusively rely on TCP/IP-based traffic. This traffic tells field devices to take actions and reports back process status to operators, engineers, and automated portions of the process. As vital as it is to the process, this specialized traffic is routinely ignored by Operational Technology (OT) architects and analysts charged with defending this type of system. They tend to favor a TCP/IP-only approach to traffic collection and analysis that is more geared toward an IT-only environment. This paper analyzes Stuxnet to determine the effect that serial communication monitoring and analysis may have on the situational awareness of such an event. It will pose several questions. Could the attack have been detected without the availability of known Indicators of Compromise (IoC)? Would the attack have been detected sooner? Would there have been no effect at all? This information may help organizations pursue a risk-based approach to architecting a CICS traffic collection and analysis system.

How Sweet It Is: A Comparative Analysis of Remote Desktop Protocol Honeypots
By Lauri Marc Ahlman
January 28, 2021

  • Remote Desktop Protocol (RDP) and other remote administrative services are consistently targeted by attackers seeking to gain access to protected systems. Honeypots are a valuable tool for network defenders to learn about attacker tools and techniques. This paper proposes an architecture for an RDP honeypot running on a Linux host. The proposed solution includes a capability to replay RDP sessions and observe attacker activity and keystrokes. Further, this paper presents a comparative analysis between this proposed solution and an RDP honeypot using the open-source project PyRDP (Gonzalez, 2020) which is represented as a Windows environment.

Network Segmentation of Users on Multi-User Servers and Networks
By Ryan Cox
January 20, 2021

  • In High Performance Computing (HPC) environments, hundreds of users can be logged in and running batch jobs simultaneously on clusters of servers in a multi-user environment. Security controls may be in place for much of the overall HPC environment, but user network communication is rarely included in those controls. Some users run software that must listen on arbitrary network ports, exposing user software to attacks by others. This creates the possibility of account compromise by fellow users who have access to those same servers and networks. A solution was developed to transparently segregate users from each other both locally and over the network. The result is easy to install and administer.

CTI, CTI, CTI: Applying better terminology to threat intelligence objects
By Adam Greer
January 13, 2021

  • Increased awareness of the need for actionable cyber-threat intelligence (CTI) has created a boom in marketing that has flooded industry publications, news, blogs, and marketing material with the singular term applied to an increasingly diverse set of technologies and practices. In 2015, Dave Shackleford and Stephen Northcutt published findings of a survey sponsored by some of the largest names in cyber-threat intelligence at the time in order to address the widespread confusion around what precisely cyber-threat intelligence is and how it is generated, delivered, and consumed. In this research, they note that "... a shortage of standards and interoperability around feeds, context, and detection may become more problematic as more organizations add more sources of CTI..." (Shackleford, 2015). However, IT security teams have matured drastically since then, and most research has been applied to automation and standards for specific sub-domains, such as dissemination. This paper analyzes the current CTI environment and uses a defined methodology to develop a taxonomy for the domain that clarifies the application of CTI to security programs and serves as a foundation to further domain research.

Tracing the Tracer: Analysis of a Mobile Contact Tracing Application
By Anthony Wallace
January 4, 2021

  • The pandemic has led to the rapid development of applications designed to take advantage of our hyper-connected world. The Ehteraz application was developed, deployed, and mandated in the nation of Qatar. Government regulation required citizens to register with the app to enter businesses such as malls and grocery stores which forced rapid adoption among the populace. Many citizens are concerned about the range of permissions the app requires to function. Unpacking the application and finding a method of dissecting network traffic was complicated by measures developers took to prevent miscreant-in-the-middle attacks and analysis. Sharing the journey of decrypting the traffic in this application may prove useful to future engineers reversing and bypassing protections to perform analysis on mobile app traffic. Initial analysis has confirmed the application sends only location and Bluetooth data to centralized servers owned by the Ministry of Interior of the State of Qatar.

Evaluating Open-Source HIDS with Persistence Tactic of MITRE Att&ck
By Jon Chandler
January 4, 2021

  • Small companies with limited budgets need to understand if open-source tools can provide adequate security coverage. The MITRE ATT&CK framework provides an excellent source to evaluate endpoint security tool effectiveness. A MITRE research paper provides the following insight into the value of ATT&CK, “The techniques in the ATT&CK model describe the actions adversaries take to achieve their tactical objectives” (Strom, et al., 2019). This paper examines two open-source endpoint tools, OSSEC and Wazuh, against the MITRE ATT&CK framework. This analysis will determine each endpoint tool’s ability to detect a select number of the MITRE ATT&CK framework persistence techniques. For each technique reviewed, this paper will analyze the degree to which it can be accurately identified by the evaluated tools. MITRE also conducts evaluations, but on proprietary tools. The results of the open-source endpoint tools analyzed here can be compared to the MITRE ATT&CK Evaluations conducted on the proprietary endpoint toolsets. The MITRE ATT&CK framework is a valuable methodology that allows a company to compare endpoint tools from a security risk and product evaluation perspective.

Developing a JavaScript Deobfuscator in .NET
By Roberto Nardella
January 4, 2021

  • JavaScript, a core technology of the World Wide Web, became notorious within the cyber security community early in its life, not only for well-known security problems such as Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF), but also for its flexibility as a vehicle for implementing the first stage of a malware attack.

Ubuntu Artifacts Generated by the Gnome Desktop Environment
By Brian Nishida
December 16, 2020

  • This research identifies Gnome Desktop Environment (GDE) artifacts and demonstrates their utility in Linux forensic examinations. The classic Linux forensic examination is tailored to computer intrusions of victim servers because the enterprise's critical Linux systems are typically web servers, mail servers, and database servers. However, the emphasis on intrusions and servers has two shortcomings. First, in addition to network intrusions, digital forensic labs examine specimens from various investigations: e.g., child exploitation, homicide, and financial crimes, to name a few. Second, the majority of Linux users run GUI-based desktop versions rather than command-line server versions. In these cases, the GDE may be used to install applications, run applications, open files, join Wi-Fi networks, and upload files. These point-and-click actions have been overlooked in the classic Linux examination; therefore, they will be explored in this research. Lastly, the importance of these GDE artifacts will be demonstrated in three practical scenarios.

Automating Google Workspace Incident Response
By Megan Roddie
December 16, 2020

  • Incident responders require a toolset and resources that allow them to efficiently investigate malicious activity. In the case of Google Workspace, there are an increasing number of subscribers, but resources to assist in the analysis of security incidents are lacking. The goal of this research is to develop a tool that expands on Google’s default administrative capabilities with the intent of providing value to incident responders. Through providing both additional context and purposeful views, incident responders can more quickly identify malicious activity and respond accordingly.

Detecting System Log Loss Through One-Way Communication Channels
By Jason Leverton
December 16, 2020

  • Organizations are consolidating log collecting, monitoring, and incident response activities. There are many reasons an organization could find itself in this situation, whether they are attempting their first deployment of security architecture or they are shifting to a SaaS Cybersecurity product. These data collection points may not always be located within the same trust boundary, or even within the same organization. They may also be communicating through highly restrictive gateways. These collection points could gather information from multiple networks, all with different classifications, security postures, or network owners. There are incidents when communication flowing from one organization to another may have restrictions on two-way communication and rely entirely on a one-way communication channel. The lack of a two-way connection presents a challenge when continuous monitoring is required. Most host-based agents and log transfer mechanisms rely solely on established connections (TCP). This paper examines the transfer of logs through a one-way communication channel. It aims to detect and measure the amount of log loss on the channel and intuit the time, size, and volume of log messages lost. The goal is not to provide error correction but instead to introduce error detection.
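One way to detect loss over a one-way channel, consistent with the paper's goal of error detection (not correction), is to embed a monotonic sequence number in each record and have the receiver report gaps. The sketch below is an assumed illustration of that approach, not the paper's implementation.

```python
# Sketch: gap detection for log records sent over a one-way channel.
# Assumes the sender numbers records consecutively; the receiver can
# then report how many records were lost and where.

def detect_gaps(received_seqs):
    """Given sequence numbers in arrival order, return (lost_count, gaps).

    Each gap is a (first_missing, last_missing) tuple, bounded by the
    lowest and highest sequence numbers actually seen.
    """
    ordered = sorted(set(received_seqs))
    gaps = []
    for prev, cur in zip(ordered, ordered[1:]):
        if cur - prev > 1:
            gaps.append((prev + 1, cur - 1))
    lost = sum(hi - lo + 1 for lo, hi in gaps)
    return lost, gaps
```

Because the gap positions are known, the receiver can also intuit *when* loss occurred and roughly how much volume was lost, even though the missing records themselves cannot be recovered without a return channel.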

Detecting and Preventing the Top AWS Database Security Risks
By Gavin Grisamore
December 9, 2020

  • Engineers regularly perform risky actions while deploying and operating databases on cloud services like AWS. Engineers are often focused on delivering value to customers and less on the security of the cloud infrastructure. Security teams are increasingly concerned with identifying these cloud-native risks and putting mitigations in place to secure their critical data and limit exposure without inhibiting development workflows or velocity. This paper examines several common AWS database security risks and addresses how to implement detection and prevention controls to mitigate the risks.
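Two of the most common risks in this space are publicly accessible database instances and unencrypted storage. The sketch below shows detection logic for both, operating on a simplified subset of the RDS `DescribeDBInstances` response shape; in practice the data would come from boto3 or an AWS Config rule, and the policy checks here are illustrative, not the paper's full list.

```python
# Sketch: flag risky RDS-style database instances from inventory data.
# Input is assumed to be a list of dicts with (a subset of) the fields
# AWS returns for DB instances.

def find_risky_databases(instances):
    """Return {db_identifier: [findings]} for instances violating policy."""
    findings = {}
    for db in instances:
        issues = []
        if db.get("PubliclyAccessible"):
            issues.append("publicly accessible")
        if not db.get("StorageEncrypted"):
            issues.append("storage not encrypted")
        if issues:
            findings[db["DBInstanceIdentifier"]] = issues
    return findings
```

Run against a periodic inventory export, a check like this gives a security team the detection half; the prevention half would be guardrails such as service control policies or infrastructure-as-code linting.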

Mitigating Attacks on a Supercomputer with KRSI
By Billy Wilson
December 9, 2020

  • Kernel Runtime Security Instrumentation (KRSI) provides a new form of mandatory access control, starting in the 5.7 Linux kernel. It allows systems administrators to write modular programs that inject errors into unwanted systems operations. This research deploys KRSI on eight compute nodes in a high-performance computing (HPC) environment to determine whether KRSI can successfully thwart attacks on a supercomputer without degrading performance. Five programs are written to demonstrate KRSI’s ability to target unwanted behavior related to filesystem permissions, process execution, network events, and signals. System performance and KRSI functionality are measured using various benchmarks and an adversary emulation script. The adversary emulation activities are logged and mitigated with minimal performance loss, but very extreme loads from stress testing tools can overload a ring buffer and cause logs to drop.

Is it Ever Really Gone? The Impact of Private Browsing and Anti-Forensic Tools
By Rick Schroeder
December 9, 2020

  • Digital forensics analysts are tasked with identifying which websites a user visited. Several factors determine the level of difficulty this poses for the forensic analyst. Network-based security tools, such as web content filters, provide a quick and easy look at a user’s browsing history. When network-based tools aren’t available, forensic analysts rely on artifacts that reside on the hard drive to paint the picture of user activity and answer questions involving browsing history. These artifacts can be deleted or tampered with, removing key pieces of evidence from the system. Although this adds a layer of complexity to the investigation, it does not end the investigation. Analysts should employ multiple methods to recover evidence. Information from web browsing sessions is often written to more than one location. Knowing where to find that data and how to interpret it will add value and credibility to an investigation. Digital forensic analysts need to think outside the box and perform in-depth analysis to complete an investigation involving a private browsing mode.

Reverse Engineering Virtual Machine File System 6 (VMFS 6)
By Michael Smith
November 19, 2020

  • Virtual Machine File System (VMFS) 6 is a proprietary file system. The file system’s proprietary nature means that many forensic applications are unable to parse the file system. There is a lack of support because proprietary file systems do not have to follow an accepted standard and can make modifications that break forensic tools with any release. This instability means that maintaining parsers for these file systems can become costly very quickly. This vacuum of support for proprietary file systems has created an opportunity for open-source utilities to grow in ways that support parsing these file systems. Skilled forensic examiners scour the open-source community and publicly available research for parsers and digital artifacts analyses when they encounter file systems or files unsupported by large forensic applications. The goal of this research is two-fold. First, to increase the understanding of VMFS 6 with its myriad digital artifacts. Second, to conclusively determine the recoverability of a deleted file.

Continuous Monitoring Effectiveness Against Detecting Insider Threat
By Steven Austin
November 19, 2020

  • More organizations are implementing some form of Continuous Monitoring, yet there is an increase in insider threat incidents. The number of insider threat incidents has increased by 47% in two years, from 3,200 in 2018 to 4,716 in 2020 (Epstein, 2020). This data shows insider threat is an ongoing problem for organizations despite efforts to implement Continuous Monitoring. The results of this research provide organizations with evidence of Continuous Monitoring effectiveness against detecting malicious insider attack techniques.

Defeat the Dread of Adopting DMARC: Protect Domains from Unauthorized Email
By Tim Lansing
November 11, 2020

  • Many large organizations do not implement Domain-based Message Authentication, Reporting, and Conformance (DMARC) (Frenkel, 2017), and system administrators at small to medium businesses struggle to understand DMARC and how to use it to protect domains that send and do not send emails. When fully implemented, DMARC is a barrier discouraging criminals from conducting spoofing attacks against a domain (Kerner, 2018). DMARC reports on what servers are sending the domain’s email. This research examines how to simplify the process of configuring and monitoring the Sender Policy Framework (SPF), Domain Keys Identified Mail (DKIM), and DMARC to save individuals and businesses time, and allow them to better protect themselves and their domains.
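Much of the monitoring work the abstract describes starts with reading a domain's published DMARC policy. The sketch below parses a DMARC TXT record into its tags; it handles only the basic tags and is a simplification for illustration - a full parser would follow RFC 7489.

```python
# Sketch: parse a DMARC TXT record ("v=DMARC1; p=...; rua=...") into
# a tag dict, and classify the enforcement level.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC record into its tag=value pairs."""
    tags = {}
    for part in filter(None, (p.strip() for p in record.split(";"))):
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

def enforcement_level(record: str) -> str:
    """Return the policy tag: 'none' (monitor only), 'quarantine', or 'reject'."""
    return parse_dmarc(record).get("p", "none")
```

A typical rollout moves from `p=none` (collect aggregate reports via the `rua` address) through `p=quarantine` to `p=reject`, tightening enforcement only once the reports show all legitimate senders pass SPF or DKIM alignment.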

Learning from Learning: Detecting Account Takeovers by Identifying Forgetful Users
By Sean McElroy
November 11, 2020

  • Measuring a user’s increasing familiarity with a web application over time can reveal outliers in use that indicate account takeover fraud. Credential stuffing attacks are increasing in frequency, allowing threat actors to use data breaches from one source to perpetuate another. While multi-factor authentication remains a crucial preventative measure to protect against credential stuffing, the availability of credential data sets with contact information and the correlation with demographic data can allow threat actors to overcome it through interactive social engineering. Concurrently, alternative defense mechanisms such as network source profiling and device fingerprinting lose effectiveness as privacy-protecting technologies reduce the observable variability between legitimate and fraudulent user sessions. This paper explores the potential of clickstream data containing logs of users’ navigation through a web application as an alternative defense to detecting account takeover activity for digital banking platforms. By identifying when users are exhibiting learning behaviors, the detection of such behaviors for established users may provide an indicator of compromise.
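The intuition is that an established user navigates quickly, while someone learning the application (possibly an account thief) does not. The sketch below is a crude heuristic invented for illustration, not the paper's model: it flags a session whose mean time-per-page regresses far beyond the user's recent baseline.

```python
from statistics import mean, pstdev

# Sketch (assumed heuristic): a returning user's time-per-page shrinks
# with familiarity; a session that regresses to "novice" pace may
# indicate someone else at the keyboard.

def is_learning_again(history, session, z_threshold=2.0):
    """history: mean seconds-per-page of prior sessions, oldest first.
    session: mean seconds-per-page of the session under review.
    Returns True if the session is a slow outlier vs. the recent baseline."""
    baseline = history[-5:]                 # recent sessions only
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:
        return session > mu * 1.5           # fallback for flat baselines
    return (session - mu) / sigma > z_threshold
```

A production system would use richer clickstream features (navigation paths, dwell time per page type) and a trained model rather than a single z-score, but the shape of the signal is the same.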

Architecture and Configuration for Hardened SSH Keys
By Scott Ross
November 11, 2020

  • The Secure Shell (SSH) protocol is a tool often used to administer Unix-like computers, transfer files, and forward ports securely and remotely. Security can be quite robust for SSH when implemented correctly, and yet it is also user-friendly for developers familiar with Unix. Asymmetric SSH keys used by the protocol have allowed operations engineers and developers to authenticate to remote machines – supporting increased automation and orchestration across DevOps environments. While the private keys should be password protected, they are often not. The fast pace of DevOps and the focus on delivery has led to many companies not controlling their authentication credentials or understanding the risk they create. Private key files can become scattered around the environment, presenting a tempting target for threat actor exploitation to pivot across a network or access cloud services. This paper will evaluate a simple solution for protecting private keys by storing them on an external cryptographic device (Yubikey) and automating key management/SSH configuration (Ansible). This potential solution will be compared to local key storage and prevalent ad-hoc key management against conventional SSH attack techniques in the MITRE ATT&CK matrix.

Fear of the Unknown: A Meta-Analysis of Insecure Object Deserialization Vulnerabilities
By Karim Lalji
October 28, 2020

  • Deserialization vulnerabilities have gained significant traction in the past few years, resulting in this category of weakness taking eighth place on the OWASP Top 10. Despite the severity, deserialization vulnerabilities tend to be among the less popular application exploits discussed (Bekerman, 2020) and frequently misunderstood by security consultants and penetration testers without a development background. This knowledge discrepancy leaves adversaries with an advantage and security professionals with a disadvantage. This research aims to demonstrate exploitation techniques using insecure deserialization on multiple platforms, including Java, .NET, PHP, and Android, to obtain a meta-analysis of exploitation techniques and defensive strategies.

Verifying Universal Windows Platform (UWP) Signatures at Scale
By Joal Mendonsa
October 28, 2020

  • Enterprise security teams often use native Windows tools, like PowerShell, to check signatures and quickly establish whether a binary is known-good or unknown and worthy of further investigation. Unfortunately, a new and growing class of applications – Universal Windows Platform (UWP) applications – incorrectly appear to be unsigned when checked using traditional methods. This paper will demonstrate a way to efficiently validate UWP applications in a networked environment, strictly using Microsoft tools, and without placing additional binaries on remote systems.

Open-Source Endpoint Detection and Response with CIS Benchmarks, Osquery, Elastic Stack, and TheHive
By Christopher Hurless
October 23, 2020

  • There is a wealth of open-source tools available for information security. A characterization of the various open-source products will provide a means of fortifying endpoints and auditing those fortifications with an Endpoint Detection and Response (EDR) solution. High-quality security practice does not require expensive products, but it does need to meet several automation requirements to be effective. With this in mind, building robust, automated EDR capability using open-source, community-driven tools that automate and standardize security responses is not only possible but practical. Having a set of predefined control settings on an endpoint goes beyond malware detection. It sets the stage to ensure that an organization’s endpoints are fortified from an attack before it happens. By implementing the Center for Internet Security (CIS) Desktop Benchmarks, organizations have a means of strengthening endpoints against attack. Adding Osquery gives them a tool for knowing when a machine has fallen out of a fortified state. Following the loss of fortification is the need to investigate the cause and return the device to its intended state, which can be done using Elastic Stack and TheHive.

Prescriptive Model for Software Supply Chain Assurance in Private Cloud Environments
By Robert Wood
October 14, 2020

  • As companies embrace Continuous Integration/Continuous Deployment (CI/CD) environments, automated controls are critical for safeguarding the Software Development Life Cycle (SDLC). The ability to vet and whitelist container images before installation is vitally important to ensuring the security of corporate networks. Google Cloud offers the Container Registry in combination with Binary Authorization to understand the container footprint in the environment and provide a mechanism for enforcing policies. Grafeas and Kritis are open-source alternatives. This paper evaluates Grafeas and Kritis and provides specific recommendations for using these tools or equivalents in private cloud environments.

The All-Seeing Eye of Sauron: A PowerShell tool for data collection and threat hunting
By Timothy Hoffman
October 14, 2020

  • The cost of a data breach directly relates to the time it takes to detect, contain, and eradicate it. According to a study by the Ponemon Institute, the average time to identify a breach in 2019 was 206 days (Ponemon Institute, 2019). Reducing this timeframe is paramount to reducing the overall timeline of removing a breach, and the costs associated with it. With ever-evolving adversaries creating new ways of compromising organizations, preventive security measures are essential, but not enough. Organizations should not assume they will be compromised, but instead that they already have been. Finding and removing these already existing breaches can be difficult. To find existing breaches, organizations need to conduct threat hunting, which seeks to uncover the presence of an attacker in an environment not previously discovered by existing detection technologies (Gunter & Seitz, 2018). This paper looks at the PowerShell tool, Eye of Sauron, which can be used for threat hunting by identifying indicators of compromise (IOCs), as well as anomaly detection using data stacking in a Windows environment. Its capability to detect the presence of IOCs is tested in two scenarios, first in a simulated attack, and second after the introduction of malware.
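The data-stacking technique the abstract mentions can be illustrated in a few lines: collect the same attribute (here, process image paths) from many hosts and flag values that appear on only a few of them. This is a minimal Python sketch of the general idea, not code from the Eye of Sauron tool itself (which is written in PowerShell); the host names and paths are hypothetical.

```python
def stack_outliers(records, threshold=1):
    """Data stacking: count how many distinct hosts report each value
    and flag values seen on few hosts as anomalies worth a closer look."""
    hosts_per_value = {}
    for host, value in records:
        hosts_per_value.setdefault(value, set()).add(host)
    return [v for v, hosts in hosts_per_value.items() if len(hosts) <= threshold]

# Hypothetical inventory of (host, process path) pairs gathered remotely.
inventory = [
    ("PC-01", r"C:\Windows\System32\svchost.exe"),
    ("PC-02", r"C:\Windows\System32\svchost.exe"),
    ("PC-03", r"C:\Windows\System32\svchost.exe"),
    ("PC-03", r"C:\Users\Public\svch0st.exe"),  # present on a single host
]
rare = stack_outliers(inventory)
```

Common, legitimate binaries appear everywhere and sink to the bottom of the stack; a typo-squatted executable on one host floats to the top.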

No Strings on Me: Linux and Ransomware
By Richard Horne
October 7, 2020

  • Ransomware poses an ever-increasing threat to businesses and organizations as it continues to evolve and change. Many organizations are forced to pay for solutions to this growing problem with expensive and out-of-date signature-based solutions. As the possibility looms for ransomware to impact all operating systems and businesses alike, organizations will need to focus on early detections and warnings to stay ahead of its spread. This paper aims to examine the probability of detecting ransomware throughout its lifecycle within Linux environments. In conjunction with detections, the ultimate goal of the ideas presented is to provide security teams with a more reliable and cost-effective method to detect, react to, and neutralize Linux ransomware variants.

Shall We Play a Game?: Analyzing the Security of Cloud Gaming Services
By Adam Knepprath
October 7, 2020

  • The adoption of cloud gaming services is quickly growing. Like many services that are eager to go to market, cloud gaming services lack strong security measures. This paper provides an analysis of three cloud gaming service providers’ privacy policies, out of the box security, and mitigations end-users should consider.

The Poisoned Postman: Detecting Manipulation of Compliance Features in a Microsoft Exchange Online Environment
By Rebel Powell
September 30, 2020

  • Modern attack techniques frequently target valuable information stored on enterprise communications systems, including those hosted in cloud environments. Adversaries often look for ways to abuse tools and features in such systems to avoid introducing malicious software, which could alert defenders to their presence (Crowdstrike, 2020). While on-premises detection strategies have evolved to address this threat, cloud-based detection has not yet matched the adoption pace of cloud-based services (MITRE, 2020). This research examines how adversaries can perform feature attacks on organizations that use Microsoft Office 365's Exchange Online by exploring recent advanced persistent threat tactics in Exchange on-premises environments and applying variations of them to Exchange Online's Compliance and Discovery features. It also analyzes detection strategies and mitigations that businesses can apply to their systems to prevent such attacks.

Mitigating Risk with the CSA 12 Critical Risks for Serverless Applications
By Mishka McCowan
September 30, 2020

  • Since its introduction in 2014, serverless technology has seen significant adoption in businesses of all sizes. This paper will examine a subset of the 12 Most Critical Risks for Serverless Applications from the Cloud Security Alliance and the efficacy of their recommendations in stopping attacks. It will demonstrate practical attacks, measure the effectiveness of the Cloud Security Alliance recommendations in preventing them, and discuss how the recommendations can be applied more broadly.

Fight or Flight: Moving Small and Medium Businesses into the Cloud During a Major Incident
By Drew Hjelm
September 30, 2020

  • Incident responders often aid small and medium businesses (SMB) during crippling cyberattacks that cause outages of critical systems. Most SMBs lack sufficient capacity to monitor and protect their on-premises IT infrastructure. Many of these SMBs are already using cloud platforms in a limited fashion. These organizations can use more cloud services to improve security visibility against future attacks and possibly speed up recovery time. This research examines the feasibility thereof and discusses the challenges that organizations may face with rapid cloud migration, including software compatibility and insurance requirements.

Security Network Auditing: Can Zero-Trust Be Achieved?
By Carl Garrett
September 23, 2020

  • Since 2010, government and business organizations have begun to adopt the Zero-Trust framework. Although the concept is a decade old, organizations are still in the early stages of its implementation. Given that tablets and mobile phones have become an integral part of business operations, all organizations will eventually integrate Zero-Trust into their environments. Many third-party vendors market Zero-Trust tools, though each provides only one or two pieces of "true" Zero-Trust. When designing a Zero-Trust security auditing framework, professionals must use a layered, defense-in-depth approach. They must also understand the principle of Least Common Mechanism, because complicated information technology systems are challenging to control. In traditional perimeter networks, users must authenticate to an entire organizational network, whereas perimeter-less Zero-Trust networks are segmented; users log on to a Zero-Trust network one segment at a time. This technology eliminates the need for virtual private networks (VPN), providing faster access. Additionally, most organizations state that they audit their systems; however, this project focuses on auditing Zero-Trust devices, applications, data, and network traffic, not continuous logging. When implementing the Zero-Trust framework, organizations will learn how to plan and audit for adequate security.

Replacing WINS in an Open Environment with Policy Managed DNS Servers
By Mark Lucas
September 21, 2020

  • In some environments, Windows workstations require placement on the open internet. In order to protect the read-write domain controllers, administrators locate them in a protected enclave behind a firewall, and read-only domain controllers authenticate workstations during day-to-day operations. While this is strong protection for the read-write domain controllers, the configuration breaks the standard dynamic DNS registration of Windows workstations with the read-write domain controller. In our environment, we have maintained WINS servers linked to Windows DNS via the WINS lookup function to continue finding workstations by name. The TechNet page on WINS (Davies, 2011) was last updated almost nine years ago, and Microsoft has been actively encouraging the abandonment of WINS (Ross & Mcillece, 2020). This paper explores using Windows DNS Policies to replace WINS with Dynamic DNS and policy-controlled responses to queries. Utilizing source IP addresses, DNS policies can regulate the provided answers. The operability of DNS Policies and their applicability to this solution are evaluated in depth.

Zeek Log Reconnaissance with Network Graphs Using Maltego Casefile
By Ricky Tan
September 21, 2020

  • Cyber defenders face a relentless barrage of network telemetry, in terms of volume, velocity, and variety. One of the most prolific types of telemetry is Zeek (formerly known as Bro) logs. Many “needle-in-a-haystack” approaches to threat discovery that rely on log examination are resource-intensive and unsuitable for time-sensitive engagements. This reality creates unique difficulties for teams with few personnel, skills, and tools. Such challenges can make it difficult for analysts to conduct effective incident response, threat hunting, and continuous monitoring of a network. This paper showcases an alternative to traditional investigative methods by using network graphs. Leveraging a freely available, commercial-off-the-shelf tool called Maltego Casefile, analysts can visualize key relationships between various Zeek log fields to quickly gain insight into network traffic. This research will explore variations of the network graph technique on multiple packet capture (PCAP) datasets containing known-malicious activity.
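The relationships a graph tool visualizes come straight out of Zeek's tab-separated logs. As a rough illustration, the sketch below reduces a (hypothetical, abbreviated) conn.log excerpt to unique source-to-destination pairs, the kind of entity list an analyst could import into Casefile; the field names match Zeek's conn.log, but the sample data is invented.

```python
# A tiny, hypothetical excerpt of a tab-separated Zeek conn.log body
# (all '#' header lines omitted except the #fields line).
ZEEK_SAMPLE = """#fields\tts\tid.orig_h\tid.resp_h\tid.resp_p\tproto
1588000001.1\t10.0.0.5\t203.0.113.9\t443\ttcp
1588000002.2\t10.0.0.5\t203.0.113.9\t443\ttcp
1588000003.3\t10.0.0.7\t198.51.100.4\t53\tudp
"""

def zeek_edges(log_text):
    """Reduce Zeek conn.log rows to unique (source, destination) pairs --
    the graph edges a tool such as Maltego Casefile can visualize."""
    fields = []
    edges = set()
    for line in log_text.splitlines():
        if line.startswith("#fields"):
            fields = line.split("\t")[1:]   # drop the '#fields' token
            continue
        if line.startswith("#") or not line.strip():
            continue
        row = dict(zip(fields, line.split("\t")))
        edges.add((row["id.orig_h"], row["id.resp_h"]))
    return sorted(edges)
```

Deduplicating at this stage matters: thousands of repeated flows collapse into a handful of edges, which is what makes the graph readable.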

Industrial Traffic Collection: Understanding the Implications of Deploying Visibility Without Impacting Production
By Daniel Behrens
September 21, 2020

  • Due to the critical nature of industrial environments and the lifetime of deployed assets, many organizations do not have complete knowledge of what assets are operating in the environment and what communications are involved. With the continuous move to IP-based communications for controls equipment, cybersecurity continues to increase in importance and is a priority for many executives. Industrial controls are unique because they interface with the real world, which has implications for human safety and the ability of an organization to maintain operations. Unfortunately, the criticality of these devices and the lack of robust network functions on many of them often require the use of passive solutions to gather information. This paper will focus on outlining the potential impact of collecting network traffic, discussing the functions available on networking equipment to enable it, identifying possible deployment architectures and the pros and cons of each, and explaining a methodology to calculate the potential impacts.

Fashion Industry (Securely) 4.0ward
By Shawna Turner
September 9, 2020

  • The fashion market segment is going through a significant technological upgrade. The need to meet modern consumer expectations and desires requires wholesale changes in the way the fashion ecosystem has historically shared information and manufactured products. Fashion cannot use existing security guidance due to the consumer expectations that a fashion product provides a unified physical experience. The addition of significant new technology increases the risk of intellectual property loss. The fashion industry requires a list of minimum-security controls that address the entire ecosystem of fashion from the fashion houses to the supply chain to the factory floor to address information security concerns. This paper begins the process of developing a minimum viable list of controls by combining controls from the Purdue model with recommended controls from the Verizon 2019 Data Breach Investigation Report (DBIR). The paper focuses on proposed controls for the fashion sector; however, they apply to any manufacturing pivoting to Industry 4.0.

Detection of Malicious Documents Utilizing XMP Identifiers
By Josiah Smith
August 27, 2020

  • Modern digital documents are often composed of multiple other documents and images. Malware authors often produce malicious documents while reutilizing graphical assets or other components that can be uniquely identified with the Adobe Extensible Metadata Platform (XMP). XMP IDs define a standard for mapping asset relationships and can be utilized to track, pivot, and cluster malicious campaigns, identify new TTPs, and possibly provide attribution against adversaries.
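XMP identifiers are plain XML attributes embedded in the document's metadata stream, so extracting them for clustering requires little more than a pattern match over the raw bytes. The sketch below is a minimal Python illustration of that pivot point, not the paper's tooling; the sample fragment and ID values are hypothetical, though the `xmpMM:DocumentID`/`xmpMM:InstanceID` attribute names are standard XMP Media Management fields.

```python
import re

# Match xmpMM:DocumentID / xmpMM:InstanceID attributes in raw bytes.
XMP_ID = re.compile(rb'xmpMM:(DocumentID|InstanceID)\s*=\s*"([^"]+)"')

def extract_xmp_ids(blob):
    """Pull XMP DocumentID/InstanceID values out of a raw document.
    Matching IDs across samples lets analysts cluster documents that
    reuse the same graphical assets or templates."""
    return {name.decode(): value.decode() for name, value in XMP_ID.findall(blob)}

# Hypothetical fragment of an XMP packet embedded in a PDF/OOXML file.
sample = (b'<rdf:Description xmpMM:DocumentID="xmp.did:ABC123" '
          b'xmpMM:InstanceID="xmp.iid:DEF456"/>')
```

Two maldocs sharing a `DocumentID` but differing in `InstanceID` suggest edits of a common template, which is exactly the kind of relationship that supports campaign clustering.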

Risk Management with Automated Feature Analysis of Software Components
By Steven Launius
August 27, 2020

  • Organizations developing software need pragmatic risk management practices to prevent malicious code from contaminating their software. Traditional security tools for Static Code Analysis identify vulnerabilities, not the presence of backdoors exhibiting unintended actions. Application Inspector is a Microsoft tool released to the open source community that identifies risky features and characteristics of source code libraries. This research will evaluate the accuracy of feature detection in the Application Inspector tool and construct a risk model for automating decisions based on feature analysis of source code.

You've Had the Power All Along: Process Forensics With Native Tools
By Trevor McAfee
August 27, 2020

  • Many organizations are interested in standing up threat response teams but are unable, or unwilling, to provide funding or approval for third-party tools. This lack of support requires threat response teams to utilize built-in, OS-specific tools, to investigate suspicious processes and files. These tools can provide a significant amount of useful information when scrutinizing a suspicious process or file. However, these tools and their output are often unwieldy. A lack of cohesiveness requires running multiple similar commands to gather all the data for an investigation, and then manually combining and correlating that data. This paper examines the data of interest during an incident response and the native Microsoft Windows tools used to obtain it. This paper also discusses how to use PowerShell to automate the collection and compilation of this important data.

Benefits and Adoption Rate of TLS 1.3
By Ben Weber
July 28, 2020

  • The cybersecurity industry is often reluctant to adopt new technologies due to perceived complications, assumed dependencies, and unclear information about the benefits. Digital communication protections are not exempt from this phenomenon and are often overlooked when maintaining a secure environment. Adopting new technologies is essential to utilize recent advancements in speed, security, and other newly available features. RFC 8446, better known as TLS 1.3, was released in August of 2018 and included enhancements to the speed and security of a TLS session. Older versions of TLS that still exist, however, fall short when compared to TLS 1.3. This paper provides data testing the speed and security of TLS 1.3 compared to TLS 1.2 across major TLS libraries and a point-in-time measurement of TLS 1.3 adoption across the top 500 websites in the business, retail, technology, and news sectors.
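A quick way to test a single server's TLS 1.3 support, in the spirit of the adoption measurement described above, is to build a client context pinned to TLS 1.3 and attempt a handshake. This is a generic Python stdlib sketch under my own assumptions, not the paper's measurement harness:

```python
import ssl

def tls13_only_context():
    """Build a client context that will only negotiate TLS 1.3.
    Wrapping a socket with it fails the handshake against servers
    capped at TLS 1.2 and succeeds against servers supporting 1.3
    (with one fewer handshake round trip than TLS 1.2)."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    ctx.maximum_version = ssl.TLSVersion.TLSv1_3
    return ctx
```

Running this probe across a site list, with handshake timing, would yield both of the paper's measurements: adoption (does the pinned handshake succeed?) and speed (how long does it take relative to a TLS 1.2 handshake?).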

ATT&CK-Based Live Response for GCP CentOS Instances
By Allen Cox
July 22, 2020

  • As organizations increasingly invest in cloud service providers to host data, applications, and services, incident responders must detect and respond to malicious activity across several major platforms. With nearly one-third of the cloud infrastructure market share, Amazon Web Services (AWS) dominates the information security scientific literature. However, of the other major cloud providers, Google Cloud Platform (GCP) experienced the most significant annual growth in 2019 (Canalys, 2020), and as a result, defenders can expect to respond more frequently to incidents in GCP. This research examines the data sources available to responders on GCP CentOS compute instances and within the cloud platform. Using MITRE ATT&CK to identify attacker tactics and Red Canary’s Atomic Red Team to generate test data, this research proposes a live response script to collect the essential data that responders will need to identify the discussed tactics.

Examining Sysmon's Effectiveness as an EDR Solution
By Christian Vrescak
July 17, 2020

  • In today’s cyber threat landscape, investigators and incident responders are often outmatched against their adversaries due to a lack of endpoint visibility. This deficiency leads to false negatives leaving defenders and organizations at the mercy of attackers. To solve this problem, Endpoint Detection & Response (EDR) tools were created to provide endpoint visibility and arm defenders against their attackers (CrowdStrike, 2019). While these tools are a difference-maker for defenders, the cost of commercial offerings can put them out of reach for many organizations (Infocyte, 2020). Microsoft Sysinternals Sysmon, a free EDR tool, collects detailed information about system activity, including process creations, network connections, file creations, and much more (Russinovich, M. & Garnier, T., 2020). This paper examines the effectiveness of Sysmon as a free EDR tool in providing sufficient visibility into Windows endpoint activity to detect and forensicate attacker techniques such as those listed in MITRE’s ATT&CK knowledge base.

Methods to Employ Zeek in Detecting MITRE ATT&CK Techniques
By Michael McPhee
July 15, 2020

  • MITRE ATT&CK techniques and their respective detections, while a significant step forward in democratizing threat intelligence, are predominantly focused on endpoint visibility through direct management or via agents. Some detection approaches leverage network sensors (e.g., Zeek) like BZAR (Fernandez, Wunder, Azoff, & Tylabs) in network-based detection of ATT&CK techniques. However, many of these earlier solutions focus on Microsoft Windows-specific protocols. They do not provide broad coverage of less-sophisticated endpoints, industrial systems, or infrastructure devices themselves (such as routers, switches, wireless devices). This paper will explore the feasibility of network-based detections using combinations of CLI utilities and Zeek IDS to augment or replace endpoint-focused detections and extend ATT&CK's utility to the rest of the network.

Improving Analyst Efficiency in Office365 Business Email Compromise Investigation Scenarios Through the Implementation of Open Source Tools
By Aaron Elyard
June 25, 2020

  • Working within Microsoft’s browser-based O365 Graphical User Interface (GUI) can be challenging for DFIR practitioners when time is of the essence. PowerShell-based cmdlets are often preferred due to their flexibility, speed, and efficiency compared to a browser-based approach. However, in his professional career, the author has observed that more junior analysts may not feel comfortable using command line tools. Additionally, they may not have devoted the appropriate time to learning the various options needed to obtain the data they need for their investigations. This paper explores a tool the author created to bridge the gap between the browser-based GUI and raw PowerShell. It examines the impact of the use of such a tool on the analyst’s efficiency, measured in the number of interactive actions an analyst must take.

Natural Language Processing for the Security Analyst
By Daniel Severance
June 24, 2020

  • Data science is an emerging multidisciplinary field that offers multiple benefits to information security. Within this field, there is an inherent ability to do anomaly detection at scale. Recently there have been increased efforts in applied data science in the field of information security and assurance; however, there can be a high barrier to entry due to the mathematics required. Nonetheless, topics such as natural language processing can be and have been integrated into security toolsets successfully. These computational linguistic methods can effectively be used to empower analysis techniques. This paper examines the viability of applying these language techniques in security anomaly detection and the ability to integrate with existing security tools.

Real-Time Honeypot Forensic Investigation on a German Organized Crime Network
By Karim Lalji
June 23, 2020

  • German police raided a military-grade NATO bunker in the fall of 2019, believed to have been associated with a dark web hosting operation supporting a variety of cybercrimes. The organized crime group has gone by the aliases of CyberBunker, ZYZtm, and Calibour (Dannewitz, 2019). While most of the group's assets were seized during the initial raid, the IP address space remained and was later sold to Legaco Networks. Before being shut down, Legaco Networks temporarily redirected the traffic to the SANS Internet Storm Center honeypots for examination. The intention behind this examination was to identify malicious traffic patterns or evidence of illegal activity to assist the information security community in understanding the techniques of a known adversary. Analysis of the network traffic revealed substantial residual botnet activity, phishing sites, ad networks, pornography, and evidence of potential Denial of Service (DoS) attacks. The investigation uncovered a possible instance of Gaudox Malware, IRC botnets, and a wide variety of reconnaissance activities related to Mirai variant IoT exploits. A survey of the network activity has been provided with an emphasis on potential botnet activity and Command and Control (C&C) communication.

Securing the Soft Underbelly of a Supercomputer with BPF Probes
By Billy Wilson
June 18, 2020

  • High-performance computing (HPC) sites have a mission to help researchers obtain results as quickly as possible, but research contracts often require security controls that degrade performance. One standard solution is to secure a set of login nodes that mediate access to an enclave of lightly monitored compute nodes, referred to as “the soft underbelly of a supercomputer” by one DoD representative (National, 2016). Recent advances in the BPF subsystem, a Linux tracing technology, have provided a new means to monitor compute nodes with minimal performance degradation. Well-crafted BPF traces can detect malicious activity on an HPC cluster without slowing down systems or the researchers that depend on them. In this paper, a series of low-profile attacks are conducted against a compute cluster under heavy computational load, and BPF probes are attached to detect the attacks. The probes successfully log all attacks, and performance loss is less than one percent for all benchmarks save for one inconclusive set.

Recognizing Suspicious Network Connections with Python
By Gregory Melton
June 17, 2020

  • Endpoint protection solutions tend to focus on system indicators and known malicious code to defend both enterprise and Small Office-Home Office (SOHO) users. In the absence of a Security Operations Center (SOC) or paid antivirus services, there are few proactive defense options for hobbyists and SOHO owners. A significant problem is how advanced persistent threat (APT) actors’ Tactics, Techniques, and Procedures (TTPs) have changed over the years; it is common for advanced actors to exploit poorly defended subcontractors and seemingly less relevant targets. This brings the Small Office-Home Office into the picture as a pivotal defense point against advanced attackers. This research focuses on attackers using shell, terminal, or Remote Access Tool (RAT) connections to SOHO endpoints. It seeks to block interactive connections with system-level network logging and blacklist automation. This method will recognize malicious connections and automatically block them in near real-time.
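The core loop such a tool needs is small: compare observed remote endpoints against a blocklist and, on a hit, emit a firewall rule. The Python sketch below illustrates that loop under my own assumptions (the blocklisted range and connection tuples are hypothetical, and the `iptables` command is only constructed, not executed); the paper's actual implementation may differ.

```python
import ipaddress

BLOCKLIST = [ipaddress.ip_network("203.0.113.0/24")]  # hypothetical C2 range

def flag_connections(conns):
    """Return remote endpoints that fall inside a blocklisted range.
    `conns` is an iterable of (remote_ip, remote_port) pairs, e.g.
    parsed from netstat/ss output or a connection-logging hook."""
    hits = []
    for ip, port in conns:
        addr = ipaddress.ip_address(ip)
        if any(addr in net for net in BLOCKLIST):
            hits.append((ip, port))
    return hits

def block_command(ip):
    """Build (but do not run) a firewall rule for a flagged address."""
    return ["iptables", "-A", "INPUT", "-s", ip, "-j", "DROP"]
```

In a near-real-time deployment, `flag_connections` would run on each batch of new connection events and feed `block_command` output to the host firewall.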

Answering the Unanswerable Question: How Secure Are We?
By Jason Bohreer
June 3, 2020

  • Business environments consist of invisible or ill-defined risk factors which create challenges with prioritization for business owners, systems owners, and IT/Security teams in their goal to improve their security position. The security of the environment relies upon the appropriate people understanding and addressing the risks. However, they typically do not have the relevant understanding, and therefore, the capability to act, due to the complexities of the defense-in-depth strategies. Security professionals have a good understanding of the relationships between the various controls and have numerous tools to consolidate logs and network traffic. However, while many of these tools are “best-of-breed” and operate within their information silos, they lack native methods to populate external systems to aggregate the findings in a risk-based approach which business stakeholders require to make decisions. By designing a framework to collect and measure different aspects of security, this research explores how to remove the operational fog that obscures our vision of our environments. With layers of fog removed, the improved clarity allows us to make quantitative assessments of our security by examining how security controls relate to one another.

QUIC & The Dead: Which of the Most Common IDS/IPS Tools Can Best Identify QUIC Traffic?
By Lehlan Decker
May 20, 2020

  • The QUIC protocol created by Google for use in their popular browser Chrome has begun to be adopted by other browsers. Some organizations have a robust strategy to handle TLS with HTTP2. However, QUIC (HTTP/2 over UDP) lacks visibility via crucial information security tools such as Wireshark, Zeek, Suricata, and Snort. Lack of visibility is due to both its use of TLS 1.3 for encryption and UDP for communication. The defender is at a disadvantage as selective blocking of QUIC isn’t always possible. Moreover, some QUIC traffic may be legitimate, and so outright blocking of endpoints that use QUIC is likely to cause more issues than it solves. To complicate matters further, QUIC has begun to appear in Command and Control (C2) frameworks like Merlin as an additional means of hiding traffic.

Quantifying Threat Actor Assessments
By Andy Piazza
May 20, 2020

  • The cyber threat landscape is a complex mix of adversaries, vulnerabilities, and emerging capabilities. Within this environment, Chief Information Security Officers (CISOs) must prioritize resources and projects to maximize their defenses against the most significant threats. The challenge, though, lies in assessing threats to an organization in a meaningful way. By assessing threat actors’ intent to target a specific organization for certain attack types, information security leaders can determine which malicious actors are most likely to target their enterprise. The assessment of the threat actors’ documented capabilities for those specific attack types allows leaders to wade through the fear, uncertainty, and doubt (FUD) of vendor marketing and nation-state saber-rattling to prioritize capabilities for defensive posturing. This paper introduces the Threat Box, a Cartesian coordinate system, which portrays threat actors’ intent and capabilities as an executive communication tool for information security leaders to depict the prioritization of threat actors.
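Because the Threat Box is a Cartesian plot of intent against capability, the prioritization it communicates reduces to quadrant placement. The toy Python sketch below shows that reduction; the 0-10 scale, midpoint, and quadrant labels are my own illustrative assumptions, not the paper's terminology.

```python
def threat_box_quadrant(intent, capability, midpoint=5.0):
    """Place an actor on an intent (x) by capability (y) grid and
    name the quadrant -- a toy version of the Threat Box idea.
    Scores and labels here are illustrative only."""
    high_intent = intent >= midpoint
    high_capability = capability >= midpoint
    if high_intent and high_capability:
        return "priority threat"
    if high_intent:
        return "monitor capability growth"
    if high_capability:
        return "monitor intent shift"
    return "low priority"
```

Plotting each assessed actor this way gives leadership a single picture of which adversaries combine the will and the means to matter.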

Ebb and Flow: Network Flow Logging as a Staple of Public Cloud Visibility or a Waning Imperative?
By Dennis Taggart
May 18, 2020

  • The basic tenets of information security remain relatively unchanged even while specific examples of security-related tools, processes, and procedures may shift in popularity over time. Deciding what to prioritize and recommend as a security professional can be challenging, but the most straightforward cases are those justified by the quantitative reduction of risk. In this search for quantitative risk reduction, it is worthwhile for security professionals to consider that the methods used to fulfill basic security needs in one environment may not provide the same benefit in another. The 2019 version of the Cloud Security Alliance's Top Threats to Cloud Computing document warns of critical security issues facing public cloud consumers (Cloud Security Alliance, 2019, p.40). The CSA also acknowledges their work concentrates less on some of the more traditional security threats like “vulnerabilities and malware”, while calling for further research (Cloud Security Alliance, 2019, p.40). This whitepaper answers that call for additional research while occupying a space parallel, but perhaps not identical, to classical security views. The research takes a slightly less traditional approach by not taking the value of flow logging, or its costs in the cloud, for granted. It further asserts that given limited resources, there may be more directly valuable logging sources available. This paper establishes a quantitative methodology for judging the effectiveness of flow and non-flow logging as applied in a public cloud environment. It exercises this methodology by simulating top cloud computing threats and examining the capabilities of each.

Efficacy of UNIX HIDS
By Janusz Pazgier
May 15, 2020

  • There has been an increase in UNIX-based adversarial activity, as enterprises and users shift towards the platform (WatchGuard, 2017). The focus of this paper is to demonstrate the effectiveness of three separately installed host-based intrusion detection systems (HIDS): OSSEC, Samhain, and Auditd, and their ability to detect specific MITRE ATT&CK tactics. Custom scripts implement the ATT&CK tactics of privilege escalation, persistence, and data exfiltration. The goal is to inform security professionals about the pros and cons of implementing each of these HIDS.

Dealing with DoH: Methods to Increase DNS Visibility as DoH Gains Traction
By Scott Fether
May 6, 2020

  • Microsoft is planning to implement DNS over HTTPS (DoH) in the native Windows DNS Client (Jensen, Pashov, & Montenegro, 2019). Firefox and Chrome have already implemented this protocol in their browsers. Because of DoH’s encrypted nature and use of port 443, security analysts will need to adjust their log collection and analysis techniques. Much of the literature available regarding DoH suggests either preventing the use of DoH (Hjelm, 2019, p. 20) or utilizing SSL/TLS proxies to inspect the queries (Middlehurst, 2018). Firefox can generate host logs on DoH resolution, which includes unencrypted queries and answers. This research will explore various inspection and logging techniques that will identify the most effective approach to analyzing DoH.
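To see why DoH evades port-53 monitoring, it helps to look at what actually goes over the wire: per RFC 8484, the client base64url-encodes an ordinary DNS wire-format query and sends it inside a normal HTTPS request. The stdlib sketch below builds that GET parameter for an A-record query; the resolver URL in the comment is one public example, and this is an illustration of the encoding, not a full client.

```python
import base64
import struct

def doh_query_param(hostname):
    """Encode a DNS A query for `hostname` in DNS wire format and
    base64url-encode it (padding stripped) as the `dns` GET
    parameter defined by RFC 8484."""
    # Header: ID=0, flags=0x0100 (recursion desired), QDCOUNT=1.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split("."))
    question = qname + b"\x00" + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    wire = header + question
    return base64.urlsafe_b64encode(wire).rstrip(b"=").decode()

# The query then travels as, e.g.:
#   https://cloudflare-dns.com/dns-query?dns=<param>
# -- indistinguishable from other HTTPS traffic on port 443.
```

Everything a port-53 sensor would normally parse is inside the TLS tunnel, which is why the paper turns to host logs and proxy inspection instead.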

Creating an Active Defense PowerShell Framework to Improve Security Hygiene and Posture
By Kyle Snihur
April 28, 2020

  • Security professionals are inundated with alerts, and analysts are suffering alert fatigue with little actionable intelligence (Miliard, 2019). Poor prioritization and a lack of resources put enterprises at risk (Wilson, 2015). In Windows domains, PowerShell can be used to continuously aggregate data and provide actionable reports and alerts for security professionals. This paper explores the viability of creating an Active Defense PowerShell framework for small to medium-sized organizations to improve security hygiene and posture. The benefits include actionable alerts and emails that security professionals can quickly address. The aggregated data can also be used to identify and prioritize holes in an organization's security posture.

Mission Implausible: Defeating Plausible Deniability with Digital Forensics
By Michael Smith
April 2, 2020

  • The goal of plausible deniability is to hide potentially sensitive information while maintaining the appearance of compliance. In simple terms, it is granting someone access to a safe while keeping the items of real value successfully hidden in a false bottom. Encryption platforms such as VeraCrypt and TrueCrypt achieve this goal in the digital realm using nested encryption. This nesting typically takes one of two forms: a deniable file system or a deniable operating system (OS). The deniable file system uses the interior of an encrypted container to mask its presence, akin to the false bottom in the safe analogy. The deniable operating system uses an encrypted bootable partition to mask the presence of a second OS, much like a safe that reveals a different compartment depending on how the key turns in the lock. The use of encryption to create plausible deniability presents a significant threat to the success of law enforcement and digital forensic professionals. Performing registry analysis and digital forensics is the metaphorical equivalent of using a magnifying glass to look for clues of the false bottom or key-based compartment inside the safe. When forensics succeeds in revealing clues of a deniable file system, it effectively defeats the case for plausible deniability. The goal of this research is to explore the digital forensic equivalents of such clues.

Tracking Penetration Test Activities
By Joshua Arey
April 2, 2020

  • Most penetration testers (“pentesters”) are required to track their actions during a penetration test event but rarely do so in enough detail to recreate all of their activities accurately. Instead, pentesters often only track activities that lead to findings disclosed in the final penetration testing (“pentest”) report. Tracking testing activities can be challenging and often gets disregarded when it slows down a pentest engagement. Fortunately, most pentest systems have automatic logging mechanisms that can be leveraged to help track pentest activities. However, many logging capabilities do not sufficiently record the network traffic generated by the attacking system, and network monitoring tools do not record what actions triggered the sending of packets. Customizing system logging configurations and incorporating system monitoring tools such as auditd can help automatically track testing activities on Linux-based pentest systems. This additional logging allows tracking in enough detail for an auditor to accurately determine what actions a pentester took against the pentest targets.

Preventing Living off the Land Attacks
By David Brown
March 5, 2020

  • Increasingly, attackers are relying on trusted Microsoft programs to carry out attacks against individuals and organizations (Symantec, 2017). The software typically comes installed by default in Windows and is often required for the essential functionality of the operating system. These types of attacks are called “living off the land,” and they can be challenging to detect and prevent. This paper examines the viability of using Microsoft AppLocker to thwart living off the land attacks without impacting the legitimate operating system and administrative use of the underlying Microsoft programs.

Incident Response in a Zero Trust World
By Heath Lawson
February 27, 2020

  • Zero Trust Networks is a security model that enables organizations to provide continuously verified access to assets, and it is becoming more common as organizations adopt cloud resources (Rose, Borchert, Mitchell, & Connelly, 2019). This model enables organizations to achieve much tighter control over access to their resources by using a variety of signals that provide greater insight when validating access requests. As this approach is increasingly adopted, incident responders must understand how Zero Trust Networks can enhance their existing processes. This paper compares incident response capabilities in Zero Trust Networks with those of traditional perimeter-centric models and offers guidance for incident responders tasked with managing incidents under this new paradigm.

Vulnerabilities on the Wire: Mitigations for Insecure ICS Device Communication
By Michael Hoffman
February 12, 2020

  • Modbus TCP and other legacy ICS protocols ported over from serial communications are still widely used in many ICS verticals. Due to the extended operational life of ICS components, these protocols will remain in use for many years to come. Insecure ICS protocols allow attackers to manipulate PLC code and logic values, potentially disrupting critical system operations. These protocols are susceptible to replay attacks and unauthenticated command execution (Bodungen, Singer, Shbeeb, Hilt, & Wilhoit, 2017). This paper examines the viability of deploying PLC configuration modifications, programming best practices, and network security controls to demonstrate that it is possible to increase the difficulty for attackers to abuse ICS devices maliciously and to mitigate the effects of attacks based on insecure ICS protocols. Student kits provided in the SANS ICS515 and ICS612 courses form the backdrop for testing and evaluating ICS protocols and device configurations.
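To make the unauthenticated-command-execution risk concrete: a Modbus/TCP write request carries no credentials at all, so any host that can reach a PLC's TCP port 502 can construct a valid one. A minimal sketch of the framing (the coil address is an arbitrary illustration):

```python
import struct

# Minimal sketch of why unauthenticated ICS protocols are risky: this
# builds a valid Modbus/TCP "write single coil" (function code 0x05)
# request with nothing but the standard framing. The target coil address
# is an illustrative assumption.

def modbus_write_coil(transaction_id, unit_id, coil_addr, on):
    value = 0xFF00 if on else 0x0000                     # per spec: FF00 = ON
    pdu = struct.pack(">BHH", 0x05, coil_addr, value)    # FC 5 + address + value
    mbap = struct.pack(">HHHB", transaction_id, 0x0000,  # transaction id, protocol id
                       len(pdu) + 1, unit_id)            # length counts unit id + PDU
    return mbap + pdu

frame = modbus_write_coil(transaction_id=1, unit_id=1, coil_addr=10, on=True)
print(frame.hex())
```

Sending these 12 bytes to port 502 of an unprotected device would toggle the coil; the mitigations the paper evaluates (PLC configuration, programming practices, network controls) all aim to keep such frames from reaching the device or taking effect.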

Defending Infrastructure as Code in GitHub Enterprise
By Dane Stuckey
January 21, 2020

  • As infrastructure workloads have changed, cloud workflows have been adopted, and elastic provisioning and de-provisioning have become standard, manual and semi-automated infrastructure management workflows have proven insufficient. One of the most widely implemented solutions to these problems has been the adoption of declarative infrastructure as code, a philosophy and set of tools which use machine-readable files that declare the desired state of infrastructure. Unfortunately, infrastructure as code has introduced new attack surfaces and techniques that traditional network defense controls may not adequately cover or account for. This paper examines a common deployment of infrastructure as code via GitHub Enterprise and HashiCorp Terraform, explores an attack scenario, examines attacker tradecraft within the context of the MITRE ATT&CK framework, and makes recommendations for defensive controls and intrusion detection techniques.

Lateral traffic movement in Virtual Private Clouds
By Andy Huang
January 3, 2020

  • Cloud vendors have introduced virtual private cloud (VPC) structures to bring the benefits of the private cloud into the public cloud. These structures provide vertical segmentation and isolation for the application projects implemented within them. However, the security context must be considered as applications communicate with one another between VPCs using technologies such as peering and private links. Applications are usually highly dependent on each other for data and functionality, leading to cross-connections between VPC structures. The implications of different connection setups need to be vetted to ensure that access is not overly permissive, which could enable lateral movement of traffic.

Defense in Depth: Can Geolocation Help Prevent Tax Fraud?
By Jon Glas
January 3, 2020

  • Accountants and tax filing businesses use complex software to automate the preparation and electronic filing of tax returns. Cybercriminals harvest identities, breach networks, and impersonate legitimate users to leverage tax software to defraud the government, the affected businesses, and citizens of over $1 billion annually (McTigue, 2018). The IRS and tax software companies have partnered to implement controls focused on authentication, authorization, and detection to identify fraudulent tax returns before they are processed. These controls successfully prevent upwards of $10 billion of fraudulent filings a year (McTigue, 2018), but they focus on an analysis of the ‘who’ and ‘what’ components of tax returns. This paper uses geolocation tools to examine the ‘where’ component of tax returns by analyzing legitimate and fraudulent electronic filing data for trends and patterns. The goal of this paper is to determine whether geolocation technologies can serve as an additional layer of controls supporting a defense-in-depth approach to fraud prevention.
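The ‘where’ analysis can be sketched as a great-circle distance check between the filing's geolocated origin and the taxpayer's address. The coordinates and the 500 km threshold below are illustrative assumptions, not values from the paper:

```python
import math

# Illustrative sketch of a geolocation control: compute the great-circle
# (haversine) distance between the IP-geolocated filing origin and the
# taxpayer's address, and flag filings beyond a threshold. Coordinates
# and the 500 km cutoff are assumptions for illustration only.

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_suspicious(filing_coord, taxpayer_coord, threshold_km=500):
    return haversine_km(*filing_coord, *taxpayer_coord) > threshold_km

# A filing originating near the taxpayer vs. one from the other side of the world:
print(is_suspicious((40.71, -74.01), (40.73, -73.99)))  # nearby filing
print(is_suspicious((55.75, 37.62), (40.73, -73.99)))   # distant filing
```

A fixed distance threshold is crude (travelers and VPN users file legitimately from far away), which is why the paper looks for trends and patterns rather than a single cutoff.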

Defense in Depth for a Small Office/Home Office
By Gregory Melton
December 18, 2019

  • Much attention is given to enterprise security with expensive solutions and teams of both IT and security personnel, but the home office may only ever be proactively defended by a single amateur or hobbyist. Large scale corporate solutions may deal with Advanced Persistent Threats (APTs) and corporate espionage, but there are far fewer solutions to home office threats. This paper focuses on best practices for a home network running minimal servers to protect from casual browsing and careless home users. This research intends to demonstrate meaningful defense of endpoints in a local network by drastically reducing potential communication to C2 nodes and data exfiltration with proper filtering and minimal extra hardware.

Building an Audit Engine to Detect, Record, and Validate Internal Employees' Need for Accessing Customer Data
By Jekeon Jack Cha
December 11, 2019

  • When using Software-as-a-Service (SaaS) products, customers are asked to store and entrust a large volume of personal data to SaaS companies. Unfortunately, consumers are living in a world of numerous data breaches and significant public privacy violations. As a result, customers are rightfully skeptical of the privacy policies that businesses provide and are looking for service providers who can distinguish their commitment to customer data privacy. This paper examines the viability of building an accurate audit engine to detect, record, and validate internal employees’ reasons for accessing a particular customer’s data. In doing so, businesses can gain clear visibility into their current processes and access patterns to meet the rising privacy demand of their customers.

Looking for Linux: WSL Key Evidence
By Amanda Draeger
December 11, 2019

  • Microsoft released Windows Subsystem for Linux (WSL) in 2016 to much fanfare, but little research into the security implications of installing this feature followed. This lack of research, and lack of documentation, is a problem for the administrators who want to take advantage of its feature set while monitoring their systems for unusual behavior. Native Windows logging can provide visibility into WSL’s behavior, but there has been no research on which logs can provide this visibility, and what exact information they can provide. This paper examines how to monitor a Windows 10 system with WSL installed for common indicators of malicious activity.

Detecting Malicious Authentication Events in SaaS Applications Using Anomaly Detection
By Gavin Grisamore
December 11, 2019

  • SaaS applications have been exploding in popularity due to their ease of deployment, use, and maintenance. Security teams are struggling to keep pace with the growing list of applications used in their environment as well as with the process of tracking the data these applications hold. Attackers have been taking advantage of these visibility gaps and regularly target SaaS applications. Using log data from the applications themselves, security teams can apply anomaly detection techniques to find and respond to such attacks. Anomaly detection condenses large amounts of data into a short list of outlier events, allowing security teams to identify and remediate a data breach more quickly. The detection techniques presented can help security teams respond to or prevent the next data breach.

Catch Me If You Can: Detecting Server-Side Request Forgery Attacks on Amazon Web Services
By Sean McElroy
November 27, 2019

  • Cloud infrastructure offers significant benefits to organizations capable of leveraging rich application programming interfaces (APIs) to automate environments at scale. However, unauthorized access to management APIs can enable threat actors to compromise the security of large amounts of sensitive data very quickly. Practitioners have documented techniques for gaining access through Server-Side Request Forgery (SSRF) vulnerabilities that exploit management APIs within cloud providers. However, mature organizations have failed to detect some of the most significant breaches, sometimes for months after a security incident. Cloud services adoption is increasing, and firms need effective methods of detecting SSRF attempts to identify threats and mitigate vulnerabilities. This paper examines a variety of tools and techniques to detect SSRF activity within an Amazon Web Services (AWS) environment that can be used to monitor for real-time SSRF exploit attempts against the AWS API. The research findings outline the efficacy of four different strategies to answer the question of whether security professionals can leverage additional vendor-provided and open-source tools to detect SSRF attacks.
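One of the lower-effort detection strategies in this space is scanning application logs for requests that reference the instance metadata service, since SSRF exploitation of AWS typically begins by coaxing the application into fetching IMDS credentials. A minimal sketch, assuming a simplified access-log format:

```python
import re

# Minimal detection sketch: SSRF against AWS often starts with a request
# that tricks the app into fetching http://169.254.169.254/..., so that
# address (or a metadata path) appearing inside a request URL is a strong
# signal. The log lines below are illustrative assumptions.

IMDS_PATTERN = re.compile(r"169\.254\.169\.254|/latest/meta-data/", re.IGNORECASE)

def suspicious_requests(log_lines):
    return [line for line in log_lines if IMDS_PATTERN.search(line)]

access_log = [
    'GET /render?url=http://169.254.169.254/latest/meta-data/iam/ HTTP/1.1',
    'GET /render?url=https://example.com/logo.png HTTP/1.1',
]
print(suspicious_requests(access_log))
```

This string match is only one of the strategies the paper's research compares; attackers can evade it with alternate IP encodings, which is why layered, vendor-provided detection also matters.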

Securing the Supply Chain - A Hybrid Approach to Effective SCRM Policies and Procedures
By Daniel Carbonaro
November 7, 2019

  • Organizations’ supply chains are growing increasingly interdependent and complex, resulting in an ever-increasing attack surface that must be defended. Current supply chain security frameworks offer effective guidance to help organizations protect their supply chains from attack. However, they are limited in scope and impact and can be extremely complex for organizations to adopt effectively. To complicate matters further, even identifying the scope of a supply chain can be a difficult endeavor. This paper gives context to the security challenges facing the ICT supply chain and proposes a hybrid framework that any business, regardless of size or function, can follow when attempting to mitigate threats both to and from within its supply chain.

Guarding the Modern Castle: Providing Visibility into the BACnet Protocol
By Aaron Heller
October 30, 2019

  • Building automation devices are used to monitor and control HVAC, security, fire, lighting, and other similar functions in a building or across a campus. Over 60% of the global market for building automation relies on the BACnet protocol to enable communication between field devices (BSRIA, 2018). There are few open-source network intrusion detection or prevention systems (NIDS/NIPS) capable of interpreting and monitoring the BACnet protocol (Hurd & McCarty, 2017). This blind spot presents a significant security risk. The maloperation of building automation systems can cause physical damage and financial losses, and can allow an attacker to pivot from a building automation network into other networks (Balent & Gordy, 2013). A BACnet/IP protocol analyzer was created for an open-source NIDS/NIPS called Zeek to help minimize this network security blind spot. The analyzer was tested with publicly available BACnet capture files, including some with protocol anomalies. The new analyzer and test cases provide network defenders with a tool to implement a BACnet/IP capable NIDS/NIPS as well as insight into how to defend the modern-day “castles” that rely on the Building Automation and Control network protocol.

An AWS Network Monitoring Comparison
By Nichole Dugan
October 30, 2019

  • AWS recently released network traffic mirroring in its environment. Because this is a relatively new feature, users of the service have historically relied on tools such as Security Onion in a host-based model, forwarding network traffic for analysis. It may not be apparent to an organization which option works best for it, so both the traffic mirroring and host-based options should be analyzed to determine the benefits and drawbacks of each method. This paper compares the two types of network monitoring available in the AWS environment, traffic mirroring and host-based, to determine which method is more cost-effective and, through testing, which method generates more alerts.

Challenges in Effective DNS Query Monitoring
By Caleb Baker
October 23, 2019

  • Domain Name System (DNS) queries are a fundamental function of modern computer networks. Capturing the contents of DNS queries and analyzing the logged data is a recommended practice for gaining insight into activity on a network and monitoring for unusual behavior. Multiple solutions and approaches are available for monitoring DNS queries, and some add the capability to redirect queries identified as malicious, stopping an attack. This paper investigates the effectiveness of solutions that monitor DNS queries to detect and block queries identified as potential indicators of compromise. The performance of each tool is evaluated against a sample of real-world threats that utilize DNS queries. As the prevalence of DNS query monitoring increases, attackers will need to bypass monitoring by obfuscating their DNS queries. Accordingly, this paper also assesses each tool's ability to detect DNS query obfuscation techniques.
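One obfuscation-related heuristic such tools often implement is character-entropy scoring, since DNS tunnels and algorithmically generated domains tend to produce high-entropy labels. A minimal sketch (the length and entropy thresholds are illustrative, not tuned values):

```python
import math
from collections import Counter

# Sketch of one obfuscation heuristic DNS monitoring tools can apply:
# score the first label of a query name by Shannon entropy. Tunneled or
# algorithmically generated names tend to score high. The 12-character
# and 3.5-bit thresholds are illustrative assumptions.

def shannon_entropy(label):
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_obfuscated(qname, threshold=3.5):
    label = qname.split(".")[0]
    return len(label) > 12 and shannon_entropy(label) > threshold

print(looks_obfuscated("www.example.com"))
print(looks_obfuscated("a8f3kz0qw7xv1bmc9t.tunnel.example.net"))
```

Entropy alone produces false positives on CDN and cloud hostnames, so in practice it is combined with query volume, label length, and domain reputation.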

BITS Forensics
By Roberto Nardella
October 14, 2019

  • The “Background Intelligent Transfer Service” (BITS) is a technology developed by Microsoft to manage file uploads and downloads, to and from HTTP servers and SMB shares, in a more controlled and load-balanced way. If the user who started a download logs off the computer, or if a network connection is lost, BITS resumes the download automatically. This capability to survive reboots makes it an ideal tool for attackers to drop malicious files onto a compromised Windows workstation, especially considering that Windows systems do not have tools like “wget” or “curl” installed by default, and that web browsers (especially in corporate environments) may have filters and plugins preventing the download of malicious files. In recent years, BITS has increasingly been used not only to place malicious files on targets but also to exfiltrate data from compromised computers. This paper shows how BITS can be used for malicious purposes and examines the traces its usage leaves in network traffic, on the hard disk, and in RAM. The research also compares the possible findings that can surface from each type of examination (network traffic, hard disk, and RAM) and highlights the limitations of each analysis type.

Pass-the-Hash in Windows 10
By Lukasz Cyra
September 27, 2019

  • Attackers have used the Pass-the-Hash (PtH) attack for over two decades. Its effectiveness has led to several changes to the design of Windows. Those changes influenced the feasibility of the attack and the effectiveness of the tools used to execute it. At the same time, novel PtH attack strategies appeared. All this has led to confusion about what is still feasible and what configurations of Windows are vulnerable. This paper examines various methods of hash extraction and execution of the PtH attack. It identifies the prerequisites for the attack and suggests hardening options. Testing in Windows 10 v1903 supports the findings. Ultimately, this paper shows the level of risk posed by PtH to environments using the latest version of Windows 10.

Exploring Osquery, Fleet, and Elastic Stack as an Open-source solution to Endpoint Detection and Response
By Christopher Hurless
September 10, 2019

  • Endpoint Detection and Response (EDR) capabilities are rapidly evolving as a method of identifying threats to an organization's computing environment. Global research and advisory company, Gartner defines EDR as: "Solutions that record and store endpoint-system-level behaviors, use various data analytics techniques to detect suspicious system behavior, provide contextual information, block malicious activity, and provide remediation suggestions to restore affected systems" (Gartner, 2019). This paper explores the feasibility and difficulty of using open-source tools as a practical alternative to commercial EDR solutions. A business with sufficiently mature Incident Response (IR) processes might find that building an EDR solution “in house” with open-source tools provides both the knowledge and the technical capability to detect and investigate security incidents. The required skill level to begin using and gaining value from these tools is relatively low and can be acquired during the build process through problem deconstruction and solution engineering.

A New Needle and Haystack: Detecting DNS over HTTPS Usage
By Drew Hjelm
September 10, 2019

  • Encrypted DNS technologies such as DNS over HTTPS (DoH) give users new means to protect their privacy while using the Internet. Organizations will face new obstacles in monitoring traffic on their networks as users attempt to use encrypted DNS. The paper presents several tests for detecting encrypted DNS using endpoint tools and network traffic monitoring. The goal of this research is to present several controls that organizations can implement to prevent the use of encrypted DNS on enterprise networks.

Changing the DevOps Culture One Security Scan at a Time
By Jon-Michael Lacek
August 28, 2019

  • Information Security has always been considered a roadblock in project management and execution. This mentality is even further solidified when discussing Information Security from a DevOps perspective. A fundamental principle of the DevOps lifecycle is a development and operations approach to delivering a product that supports automation and continuous delivery. When an Information Technology (IT) Security team has to manually obtain application code and scan it for vulnerabilities each time a DevOps team wants to perform a release, the goals of DevOps can be significantly impacted. This frequently leads to IT Security teams and their tools being left out of the release management lifecycle. The research presented in this paper demonstrates that available pipeline plugins do not introduce significant delays into the release process and are able to identify all of the vulnerabilities detected by traditional application scanning tools. DevOps is driving organizations to produce and release code faster than ever before, which means IT Security teams need to find a way to insert themselves into this practice.

Container-Based Networks: Lowering the TCO of the Modern Cyber Range
By Bryan Scarbrough
August 26, 2019

  • The rapid pace and ever-changing environment of cybersecurity make it difficult for companies to find qualified individuals, and for those same individuals to receive the training and experience they need to succeed. Some are fortunate enough to use cyber ranges for training and proficiency testing, but access is often limited to company employees. Limited access to cyber ranges precludes outsiders or newcomers from learning the skills necessary to meet the ever-growing demand for cybersecurity professionals. There have been several open-source initiatives, such as Japan's Cybersecurity Training and Operation Network Environment (CyTrONE) and the University of Rhode Island's Open Cyber Challenge Platform (OCCP), but they require significant hardware to support. The average security professional needs a cyber range environment that replicates real-world Internet topologies, networks, and services but operates on affordable equipment.

Cyber Protectionism: Global Policies are Adversely Impacting Cybersecurity
By Erik Avery
August 21, 2019

  • Cyber protectionist policies are adversely impacting global cybersecurity despite their intent to mitigate threats to national security. These policies threaten the information security community by generating effects that increase the risk to the very networks they are intended to protect. International product bans, data-flow restrictions, and increased Internet-enabled crime are notable results of protectionist policies, all of which may be countered by identifying protectionist climates and the threats that follow. Analysis of historical evidence facilitates a metrics-based comparison between protectionist climate and cybersecurity threat, forming the Cyber Protectionist Risk Matrix, a risk framework that establishes a new cybersecurity industry standard.

ATT&CKing Threat Management: A Structured Methodology for Cyber Threat Analysis
By Andy Piazza
July 29, 2019

  • Risk management is a principal focus for most information security programs. Executives rely on their IT security staff to provide timely and accurate information regarding the threats and vulnerabilities within the enterprise so that they can effectively manage the risks facing their organizations. Threat intelligence teams provide analysis that supports executive decision-makers at the strategic and operational levels. This analysis aids decision makers in their commission to balance risk management with resource management. By leveraging the MITRE Adversarial Tactics Techniques & Common Knowledge (ATT&CK) framework as a quantitative data model, analysts can bridge the gap between strategic, operational, and tactical intelligence while advising their leadership on how to prioritize computer network defense, incident response, and threat hunting efforts to maximize resources while addressing priority threats.

Attackers Inside the Walls: Detecting Malicious Activity
By Sean Goodwin
July 2, 2019

  • Small and medium-sized businesses (SMBs) do not always have the budget for an advanced intrusion detection system (IDS) technology. Open-source software can fill this gap, but these free solutions may not provide full coverage for known attacks, especially once the attacker is inside the perimeter. This paper investigates the IDS capabilities of a stand-alone Security Onion device when combined with built-in event logging in a small Windows environment to detect malicious actors on the internal network.

Building Cloud-Based Automated Response Systems
By Mishka McCowan
July 2, 2019

  • When moving to public cloud infrastructures such as Amazon Web Services (AWS), organizations gain access to tools and services that enable automated responses to specific threats. This paper explores the advantages and disadvantages of using native AWS services to build an automated response system. It examines the elements that organizations should consider, including developing the proper skills and systems required for the long-term viability of such a system.

Defending with Graphs: Create a Graph Data Map to Visualize Pivot Paths
By Brianne Fahey
June 26, 2019

  • Preparations made during the Identify Function of the NIST Cybersecurity Framework can often pay dividends once an event response is warranted. Knowing what log data is available improves incident response readiness, and providing a visual layout of those sources enables responders to pivot rapidly across relevant elements. Thinking in graphs is a multi-dimensional approach that improves upon defense that relies on one-dimensional lists and two-dimensional link analyses. This paper proposes a methodology to survey the relationships among available data elements and apply a graph database schema to create a visual map. This graph data map can be used by analysts to query relationships and determine paths through the available data sources. A graph data map also allows for the consideration of log sources typically found in a SIEM alongside other data sources, such as an asset management database, application whitelist, or HR information, which may be particularly useful for event context and for reviewing potential insider threats. The templates and techniques described in this paper are available on GitHub for immediate use and further testing.
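The pivot-path idea can be illustrated even without a graph database: model data sources as nodes, shared fields as edges, and run a breadth-first search for the shortest pivot path. The sources and shared fields below are illustrative assumptions; the paper's templates target an actual graph database, which this plain-dict version only mimics:

```python
from collections import deque

# Minimal sketch of a graph data map: nodes are log/data sources, edges
# are fields an analyst can pivot on between two sources. BFS yields the
# shortest pivot path from one source to another. The sources and shared
# fields are illustrative assumptions.

PIVOTS = {
    ("proxy_logs", "dhcp_logs"): "src_ip",
    ("dhcp_logs", "asset_db"): "hostname",
    ("asset_db", "hr_info"): "owner",
}

def build_adjacency(pivots):
    adj = {}
    for a, b in pivots:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def pivot_path(adj, start, goal):
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no pivot path exists

adj = build_adjacency(PIVOTS)
print(pivot_path(adj, "proxy_logs", "hr_info"))
```

An analyst asking "can I get from a proxy hit to an employee record?" gets back the ordered list of sources to query, which is the practical payoff of mapping the relationships ahead of an incident.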

Automating Response to Phish Reporting
By Geoffrey Parker
June 12, 2019

  • Phish reporting buttons have become "easy" buttons: they are used universally to report spam, real phishing attacks, and even legitimate emails. Phish reporting buttons automate the reporting process for users; however, they have become a catch-all for disposing of unwanted messages and are now overwhelming response teams and overflowing help desk ticket queues. The excessive reporting makes it difficult to manage timely responses to real phishing attacks, and response times to incorrectly reported false positives, spam, and legitimate messages are also significant factors. Vendors sold phish alert buttons with phishing simulation systems, which then became part of more in-depth training systems and later threat management systems. Because of this organic growth, many companies implemented a phish reporting system without realizing they needed an automation system to manage the resulting influx of tickets. Triage systems can automate a high percentage of these phish alerts, freeing incident response teams to deal with genuine threats to the enterprise on a prioritized basis.

Mobile A/V: Is it worth it?
By Nicholas Dorris
June 5, 2019

  • Since the mid-2010s, mobile devices such as smartphones and tablets have become ubiquitous, with users employing these gadgets for a wide range of applications. While this pervasive adoption offers numerous advantages, attackers have leveraged device owners' languid attitude toward securing their gadgets. The diversity of mobile devices exposes them to a variety of security threats, and the industry lacks a comprehensive solution to protect them. In a bid to secure their assets and informational resources, individuals and corporations have turned to commercial mobile antivirus software. Most security providers offer mobile versions of their PC antivirus applications, which are primarily based on conventional signature-based detection techniques. Although the signature-based strategy can be valuable in identifying and mitigating profiled malware, it is not as effective in detecting unknown, new, or evolving threats, as it lacks adequate information and signatures for these infections. Mobile attackers have stayed ahead via obfuscation and transformation methods that bypass detection techniques. This paper seeks to ascertain whether current mobile antivirus solutions are effective, and which default Android settings assist in preventing or mitigating various malware and their consequences.

Finding Secrets in Source Code the DevOps Way
By Phillip Marlow
June 5, 2019

  • Secrets, such as private keys or API tokens, are regularly leaked by developers in source code repositories. In 2016, researchers found over 1500 Slack API tokens in public GitHub repositories belonging to major companies (Detectify Labs, 2016). Moreover, a single leak can lead to widespread effects in dependent projects (JS Foundation, 2018) or direct monetary costs (Mogull, 2014). Existing tools for detecting these leaks are designed for either prevention or detection during full penetration-test-style scans. This paper presents a way to reduce detection time by integrating incremental secrets scanning into a continuous integration pipeline.
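The incremental approach can be sketched by scanning only the added lines of a diff against a small set of high-signal token patterns, so each CI run stays fast. The two patterns below cover well-known token formats; a production deployment would use a maintained rule set:

```python
import re

# Sketch of incremental secrets scanning for a CI pipeline: run a small
# set of high-signal regexes over only the lines ADDED in a diff. The two
# patterns (AWS access key id, Slack token prefix) are examples; a real
# deployment would use a maintained rule set.

PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "slack_token": re.compile(r"\bxox[baprs]-[0-9A-Za-z-]{10,}\b"),
}

def scan_added_lines(diff_text):
    findings = []
    for line in diff_text.splitlines():
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only newly added lines, skipping the file header
        for name, pat in PATTERNS.items():
            if pat.search(line):
                findings.append((name, line[1:].strip()))
    return findings

diff = """\
+++ b/config.py
+AWS_KEY = "AKIAABCDEFGHIJKLMNOP"
 unchanged line
+greeting = "hello"
"""
print(scan_added_lines(diff))
```

Scanning the diff rather than the whole repository is what makes the check cheap enough to run on every push, which is the paper's core point about detection time.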

DICE and MUD Protocols for Securing IoT Devices
By Muhammed Ayar
June 5, 2019

  • The exponential growth of Internet of Things (IoT) devices on communication networks is creating a security challenge that threatens the entire Internet community. Attackers operating networks of IoT devices can target any site on the Internet and bring it down using denial of service attacks. As exemplified by the DDoS attacks that took down portions of the Internet in the past few years (such as the attacks on Dyn and KrebsOnSecurity (Hallman, Bryan, Palavicini Jr, Divita, Romero-Mariona, 2017)), IoT users need to take drastic steps to secure these devices. This research will discuss the steps in attempting to secure IoT devices using DICE and MUD.

Digging for Gold: Examining DNS Logs on Windows Clients
By Amanda Draeger
May 22, 2019

  • Investigators can examine Domain Name Service (DNS) queries to find potentially compromised hosts by searching for queries that are unusual or to known malicious domains. Once the investigator identifies the compromised host, they must then locate the process that is generating the DNS queries. The problem is that Windows hosts do not log DNS client transactions by default, and there is little documentation on the structure of those logs. This paper examines how to configure several modern versions of Windows to log DNS client transactions to determine the originating process for any given DNS query. These configurations allow investigators to determine more quickly not only which host is compromised but also which process is malicious.

Overcoming the Compliance Challenges of Biometrics
By David Todd
May 22, 2019

  • Due to increased regulations designed to protect sensitive data such as personally identifiable information (PII) and protected health information (PHI), hospitals and other industries requiring improved data protections are starting to adopt biometrics. However, adoption has been slow within many of the industries that have suffered most of the breaches over the last several years. One reason adoption has been slow is that companies hesitate to implement biometrics across their organization without first understanding the vast complexities of the various state-by-state privacy regulations. By adopting a common biometrics compliance framework, this research will show how organizations can implement biometric solutions that comply with the overall spirit of the different state privacy and biometric regulations, enabling those companies to improve global data protections.

Runtime Application Self-Protection (RASP), Investigation of the Effectiveness of a RASP Solution in Protecting Known Vulnerable Target Applications
By Alexander Fry
April 30, 2019

  • Year after year, attackers target application-level vulnerabilities. To address these vulnerabilities, application security teams have increasingly focused on shifting left - identifying and fixing vulnerabilities earlier in the software development life cycle. However, at the same time, development and operations teams have been accelerating the pace of software release, moving towards continuous delivery. As software is released more frequently, gaps remain in test coverage leading to the introduction of vulnerabilities in production. To prevent these vulnerabilities from being exploited, it is necessary that applications become self-defending. RASP is a means to quickly make both new and legacy applications self-defending. However, because most applications are custom-coded and therefore unique, RASP is not one-size-fits-all - it must be trialed to ensure that it meets performance and attack protection goals. In addition, RASP integrates with critical applications, whose stakeholders typically span the entire organization. To convince these varied stakeholders, it is necessary to both prove the benefits and show that RASP does not adversely affect application performance or stability. This paper helps organizations that may be evaluating a RASP solution by outlining activities that measure the effectiveness and performance of a RASP solution against a given application portfolio.

Security Considerations for Voice over Wi-Fi (VoWiFi) Systems
By Joel Chapman
April 30, 2019

  • As the world pivots from Public Switched Telephony Networks (PSTN) to Voice over Internet Protocol (VoIP)-based telephony architectures, users are employing VoIP-based solutions in more situations. Mobile devices have become a ubiquitous part of a person's identity in the developed world. In the United States in 2017, there were an estimated 224.3 million smartphone users, representing about 68% of the total population. The ability to route telephone call traffic over Wi-Fi networks will continue to expand the coverage area of mobile devices, especially into urban areas where high-density construction has previously caused high signal attenuation. Estimates show that by 2020, Wi-Fi-based calling will make up 53% of mobile IP voice service usage (roughly 9 trillion minutes per year) (Xie, 2018). In contrast to the more traditional VoIP solutions, however, the standards for carrier-based Voice over Wi-Fi (VoWiFi) are often proprietary and have not been well-publicized or vetted. This paper examines the vulnerabilities of VoWiFi calling, assesses what common and less well-known attacks are able to exploit those vulnerabilities, and then proposes technological or procedural security protocols to harden telephony systems against adversary exploitation.

Security Monitoring of Windows Containers
By Peter Di Giorgio
March 27, 2019

  • The information technology community has utilized container technology since the LXC project began in 2008 (Hildred, 2015). Containers are a form of virtualization that package application code and its dependencies together. Containers share the operating system kernel but maintain isolated processes. Until recently, it was not possible for the Windows operating system to share its kernel. As such, developers were long unable to package many Windows-specific applications into containers. However, after ten years of waiting, Microsoft finally delivered Windows containers in 2018. Today, container security best practices focus on container integrity and container host security. The industry is just beginning to consider techniques to monitor Windows containers. This research focuses on the possibility of using known techniques and open source tools to extract Windows event logs, processes, services, and registry data from containers to observe attacks.

Gaining Endpoint Log Visibility in ICS Environments
By Michael Hoffman
March 11, 2019

  • Security event logging is a baseline IT security practice and is referenced in Industrial Control System (ICS) security standards and best practices. Although many techniques and tools are available in the IT realm to gather event logs and provide visibility to SOC analysts, limited resources discuss this topic specifically within the context of the ICS industry. As many in the ICS community struggle with gaining logging visibility in their environments and understanding collection methodologies, further logging implementation guidance is needed to address this concern. Logging methods used in ICS, such as WMI, Syslog, and Windows Event Forwarding (WEF), are common to the IT industry. This paper examines WEF in the context of Windows ICS environments to determine whether WEF is better suited to ICS environments than pull-based WMI collection regarding bandwidth, security, and deployment considerations. The comparison between the two logging methods is made in an ICS lab representing automation equipment commonly found in energy facilities.

PowerShell Security: Is it Enough?
By Timothy Hoffman
February 20, 2019

  • PowerShell is a core component of any modern Microsoft Windows environment and is used daily by administrators around the world. However, it has also become an “attacker’s tool of choice when conducting fileless malware attacks” (O’Connor, 2017). According to a study by Symantec, the number of prevented PowerShell attacks increased by over 600% between the last half of 2017 and the first half of 2018 (Wueest, 2018). This is a staggering number of prevented attacks, but the more concerning problem is the unknown number of undetected attacks that occurred during this time. Modern attackers often prefer to “live off the land,” using native tools already in an environment to avoid detection; PowerShell is a prime example of this. These statistics suggest that current PowerShell security may not be effective enough, or that organizations are improperly implementing it. This paper investigates the effectiveness of PowerShell security, analyzing the success of security features like execution policies, language modes, and Windows Defender, as well as the vulnerabilities introduced by leaving PowerShell 2.0 enabled in an environment. Multiple attack campaigns will be conducted against these security features while implemented individually and collectively to validate their effectiveness in preventing PowerShell from being used maliciously.

Cyber Threats to the Bioengineering Supply Chain
By Scott Nawrocki
February 12, 2019

  • Biotechnology and pharmaceutical companies rely on the sequencing of DNA to conduct research, develop new drug therapies, solve environmental challenges and study emerging infectious diseases. Synthetic biology combines biology and computer engineering disciplines to read, synthetically write and store DNA sequences utilizing bioinformatics applications. Bioengineers begin with a computerized genetic model and turn that model into a living cell (Smolke, 2011). Genetic editing is making headlines as there are rumors that a genetically modified human, immune to HIV, was born in China. As the soil on our farms becomes depleted of nitrogen, genetic research is focusing on applications as a means to reintroduce nitrogen into the ground. Reliance on oil and its attendant pollution have paved the way for research into biofuels. Genomic research advances have outpaced the security of these applications and technologies, which leaves them vulnerable to attack (Ney, 2017). As information security professionals, we must keep pace with these advances. This research will demonstrate the stages of a network-based attack, recommend Critical Security Controls countermeasures and introduce the concept of a Bioengineering Systems Kill Chain.

PyFunnels: Data Normalization for InfoSec Workflows
By TJ Nicholls
February 1, 2019

  • Information security professionals cannot afford delays in their workflow due to the challenge of integrating data. For example, when data is gathered using multiple tools, the varying output formats must be normalized before automation can occur. This research details a Python library that normalizes output from industry-standard tools and acts as a consolidation point for that functionality. Information security professionals should collaborate using a centralized resource that facilitates easy access to output data, allowing them to skip extraneous tasks and move straight to applying the data.
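The kind of normalization described above can be sketched in a few lines: take findings emitted by different tools in different shapes and funnel them into one common schema. The field names and tool formats below are illustrative assumptions, not PyFunnels' actual API:

```python
def normalize_tool_a(record):
    """Tool A emits 'ip:port' strings, e.g. '10.0.0.5:443' (hypothetical format)."""
    ip, port = record.rsplit(":", 1)
    return {"host": ip, "port": int(port)}

def normalize_tool_b(record):
    """Tool B emits dicts with its own field names (hypothetical format)."""
    return {"host": record["addr"], "port": record["dst_port"]}

def funnel(results_a, results_b):
    """Consolidate both tools' output into one normalized, de-duplicated list."""
    normalized = [normalize_tool_a(r) for r in results_a]
    normalized += [normalize_tool_b(r) for r in results_b]
    seen, merged = set(), []
    for item in normalized:
        key = (item["host"], item["port"])
        if key not in seen:       # drop duplicates reported by both tools
            seen.add(key)
            merged.append(item)
    return merged
```

Once every tool's output is reduced to the same schema, downstream automation only has to be written once.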

Onion-Zeek-RITA: Improving Network Visibility and Detecting C2 Activity
By Dallas Haselhorst
January 4, 2019

  • The information security industry is predicted to exceed 100 billion dollars in the next few years. Despite the dollars invested, breaches continue to dominate the headlines, and all attempts to keep the enemies at the gates have ultimately failed. Meanwhile, attacker dwell times on compromised systems and networks remain absurdly high. Traditional defenses fall short in detecting post-compromise activity even when properly configured and monitored. Prevention must remain a top priority, but every security plan must also include hunting for threats after the initial compromise. High price tags often accompany quality solutions, yet tools such as Security Onion, Zeek (Bro), and RITA require little more than time and skill. With these freely available tools, organizations can effectively detect advanced threats, including real-world command and control frameworks.
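One post-compromise signal RITA looks for is beaconing: connections from a host to a C2 server at suspiciously regular intervals. The core idea can be sketched as a regularity score over connection timestamps (the scoring formula and threshold here are illustrative assumptions, far cruder than RITA's actual analysis):

```python
from statistics import mean, pstdev

def beacon_score(timestamps):
    """Score connection-time regularity between 0 (random) and 1 (metronomic).

    Regular intervals (low deviation relative to the mean interval) suggest
    automated check-ins rather than human-driven browsing.
    """
    if len(timestamps) < 3:
        return 0.0  # too few connections to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(intervals)
    if avg == 0:
        return 0.0
    return max(0.0, 1.0 - pstdev(intervals) / avg)

# A host checking in roughly every 60 seconds scores near 1.0;
# human-driven, bursty traffic scores near 0.
regular = [0, 60, 120, 181, 240]
irregular = [0, 5, 300, 310, 900]
```

In practice the timestamps would come from Zeek's `conn.log`, aggregated per source/destination pair, with high-scoring pairs surfaced for analyst review.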

Don't Knock Bro
By Brian Nafziger
December 12, 2018

  • Today's defenders often focus detections on host-level tools and techniques, thereby requiring host logging setup and management. However, network-level techniques may provide an alternative without host changes. The Bro Network Security Monitor (NSM) tool allows today's defenders to focus detection techniques at the network level. An old method for controlling a concealed backdoor on a system using a defined sequence of packets to various ports is known as port-knocking. Unsurprisingly, old methods still offer value, and malware authors, defenders, and attackers still use port-knocking. Current port-knocking detection relies on traffic data mining techniques that exist only in academic writing, without any applicable tools. Since Bro is a network-level tool, it should be possible to adapt these data mining techniques to detect port-knocking within Bro. This research will document the process of creating and confirming a port-knocking network-level detection with Bro that will provide an immediate and accessible detection technique for organizations.
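The detection logic being adapted can be illustrated outside Bro as well. A sketch in Python of the core idea (flag a source that touches several distinct ports on one destination within a short window; the threshold and window are illustrative assumptions, and a real Bro script would express this over connection events in Bro's own scripting language):

```python
def detect_knockers(events, window=10.0, min_ports=3):
    """events: (timestamp, src, dst, dst_port) tuples, sorted by timestamp.

    Returns sources that touched >= min_ports distinct ports on the same
    destination within `window` seconds -- a possible knock sequence.
    """
    suspects = set()
    history = {}  # (src, dst) -> list of (timestamp, port)
    for ts, src, dst, port in events:
        key = (src, dst)
        hits = history.setdefault(key, [])
        hits.append((ts, port))
        # keep only hits inside the sliding time window
        hits[:] = [(t, p) for t, p in hits if ts - t <= window]
        if len({p for _, p in hits}) >= min_ports:
            suspects.add(src)
    return suspects
```

A production rule would also need to suppress legitimate multi-port clients (scanners, vulnerability assessment tools) to keep the false-positive rate manageable.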

A Swipe and a Tap: Does Marketing Easier 2FA Increase Adoption?
By Preston Ackerman
November 19, 2018

  • Data breaches and Internet-enabled fraud remain a costly and troubling issue for businesses and home end-users alike. Two-factor authentication (2FA) has long held promise as one of the most viable solutions that enables ordinary users to implement extraordinary protection. A security industry push for widespread 2FA availability has resulted in the service being offered free of charge on most major platforms; however, user adoption remains low. A previous study (Ackerman, 2017) indicated that awareness videos can influence user behavior by providing a clear message which outlines personal risks, offers a mitigation strategy, and demonstrates the ease of implementing the mitigating measure. Building on that previous work, this study, focused on younger millennials between 21 and 26 years of age, seeks to reveal additional insights by designing experiments around the following key questions: 1) Does including a real-time implementation demonstration increase user adoption? 2) Does marketing the convenient push notification form of 2FA, rather than the popular SMS text method, increase user adoption? To address these questions, a two-phase study exposed groups of users to different video messages advocating use of 2FA. Each phase of the survey collected data measuring self-efficacy, fear, response costs and efficacy, perceived threat vulnerability and severity, and behavioral intent. The second phase also collected survey data regarding actual 2FA adoption. The insights derived from subsequent analysis could be applicable not just to increasing 2FA adoption but to security awareness programs more generally.

Microsoft DNS Logs Parsing and Analysis: Establishing a Standard Toolset and Methodology for Incident Responders
By Shelly Giesbrecht
November 2, 2018

  • Microsoft DNS request and response event logs are frequently ignored by incident responders within an investigation due to a historical reputation of being hard to parse and analyze. The fundamental importance of DNS to networking and the functioning of the Internet suggests this oversight could lead to a lack of crucial contextual information in an investigative timeline. This paper seeks to define a best practice for parsing, exporting and analyzing Microsoft DNS Debug and Analytical logs through the comparison of existing tool combinations to DNSplice, a purpose-built utility coded during the development of this paper. Findings suggest that DNSplice is superior to other toolsets tested where time to completion is a critical factor in the investigative process. Further research is required to determine if the findings are still valid on larger datasets or different analysis hardware.

Tearing up Smart Contract Botnets
By Jonathan Sweeny
October 22, 2018

  • The distributed resiliency of smart contracts on private blockchains is enticing to bot herders as a method of maintaining a capable communications channel with the members of a botnet. This research explores the weaknesses that are inherent to this approach of botnet management. These weaknesses, when targeted properly by law enforcement or malware researchers, could limit the capabilities and effectiveness of the botnet. Depending on the weakness targeted, the results vary from partial takedown to total dismantlement of the botnet.

To Block or not to Block? Impact and Analysis of Actively Blocking Shodan Scans
By Andre Shori
October 22, 2018

  • This paper details an experiment constructed to evaluate the effectiveness of blocking Shodan search engine scans in reducing overall attack traffic volumes. Shodan is considered to be part of an attacker’s toolset, and there is a persistent perception that blocking Shodan scans will reduce an organization’s attack surface. An attempt was made to determine what effect, if any, such a block would have by comparing attacker traffic before and after implementing a block on Shodan scans, and by determining the complexity of performing such a block. The analysis here may provide defenders and managers with useful data when deciding whether or not to devote resources to blocking Shodan or other similar internet-connected device search engines.
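Blocking Shodan typically means dropping traffic from its known scanner ranges at the perimeter. A minimal sketch of the matching logic (the networks listed are placeholders, not Shodan's actual ranges, which change over time):

```python
import ipaddress

# Placeholder ranges standing in for a published scanner blocklist.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def should_block(src_ip):
    """Return True if the source address falls in any blocked range."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)
```

In practice the same logic is usually pushed into firewall rules or an IP set rather than application code, with the list refreshed as scanner infrastructure changes; keeping it current is part of the maintenance cost the paper weighs.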

Generating Anomalies Improves Return on Investment: A Case Study for Implementing Honeytokens
By Wes Earnest
October 11, 2018

  • Putting the right information security architecture into practice within an organization can be a daunting challenge. Many organizations have implemented a Security Information and Event Management (SIEM) to comply with the logging requirements of various security standards, only to find that it does not meet their information security expectations. According to a recent survey, more than half of respondents say they are not satisfied with their organization's SIEM. The following case study deconstructs these logging requirements and the assumptions that lead to a typical SIEM implementation, and discusses an alternative approach focused on improving the organization’s return on investment, decreasing security risk, and decreasing mean time to detection of a potential security breach.

Testing Web Application Security Scanners against a Web 2.0 Vulnerable Web Application
By Edmund Foster
October 11, 2018

  • Web application security scanners are used to perform proactive security testing of web applications. Their effectiveness is far from certain, and few studies have tested them against modern 'Web 2.0' technologies, which present significant challenges to scanners. In this study, three web application security scanners are tested in 'point-and-shoot' mode against a Web 2.0 vulnerable web application with AJAX and HTML use cases. Significant variations in performance were observed, and almost three-quarters of vulnerabilities went undetected. The web application security scanners did not identify Stored XSS, OS Command, Remote File Inclusion, and Integer Overflow vulnerabilities. This study supports the recommendation to combine multiple web application security scanners and use them in conjunction with a specific scanning strategy.

All-Seeing Eye or Blind Man? Understanding the Linux Kernel Auditing System
By David Kennel
September 21, 2018

  • The Linux kernel auditing system provides powerful capabilities for monitoring system activity. While the auditing system is well documented, the manual pages, user guides, and much of the published writing on the audit system fail to provide guidance on the types of attacker-related activities that are, and are not, likely to be logged by the auditing system. This paper uses simulated attacks and analyzes logged artifacts for the Linux kernel auditing system in its default state and when configured using the Controlled Access Protection Profile (CAPP) and the Defense Information Systems Agency’s (DISA) Security Technical Implementation Guide (STIG) auditing rules. This analysis provides a clearer understanding of the capabilities and limitations of the Linux audit system in detecting various types of attacker activity and helps to guide defenders on how to best utilize the Linux auditing system.
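The artifacts analyzed in work like this are audit records such as `type=SYSCALL` lines. A small sketch of extracting the fields an analyst typically cares about from one record (the sample line is illustrative; real records carry many more key=value pairs):

```python
import re

def parse_audit_record(line):
    """Parse key=value pairs from a Linux audit log record into a dict."""
    fields = {}
    for key, value in re.findall(r'(\w+)=("[^"]*"|\S+)', line):
        fields[key] = value.strip('"')  # drop quoting around comm=/exe= values
    return fields

# Illustrative SYSCALL record: an execve (syscall 59 on x86_64) of netcat.
sample = ('type=SYSCALL msg=audit(1537500000.123:456): syscall=59 '
          'success=yes uid=1000 comm="nc" exe="/usr/bin/nc"')
```

A parser like this is only useful for fields the kernel actually emitted, which is exactly the gap the paper probes: which attacker activities produce records under the default, CAPP, and STIG rule sets, and which leave no artifact at all.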

Which YARA Rules Rule: Basic or Advanced?
By Chris Culling
August 10, 2018

  • YARA rules, if used effectively, can be a powerful tool in the fight against malware. However, it appears that the majority of individuals who use YARA write only the most basic of rules, instead of taking advantage of YARA’s full functionality. Basic YARA rules, which focus primarily on identifying malware signatures via detection of predetermined strings within the target file, folder, or process, can be evaded as malware variants are created. Advanced YARA rules, on the other hand, which often include signatures as well, also focus on the malware’s behavior and characteristics, such as size and file type. While it is not uncommon for strings within malware to change, it is much rarer that its primary behavior will. After analyzing multiple samples of two different malware strains within the same family, it became clear that using both basic and advanced YARA rules is the most effective way for users and analysts to implement this powerful tool. As there are a large number of advanced capabilities contained within YARA, this paper will focus on easy-to-use, advanced features, including YARA's Portable Executable (PE) module, to highlight some of the more powerful aspects of YARA. While it takes more time and effort to learn and utilize advanced YARA rules, in the long run, this method is a worthwhile investment towards a safer networking environment.
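The distinction drawn above can be mimicked in a few lines: a basic rule keys only on a fixed string, while an advanced rule also constrains behaviorally stable traits such as file size and file type. A Python sketch of the two conditions (YARA itself would express these with a `strings` section and the `pe` module; the byte patterns and size limits here are illustrative assumptions):

```python
MALICIOUS_STRING = b"evil_c2_domain.example"  # hypothetical signature string

def basic_rule(data):
    """Basic rule: match one known string only. Trivially evaded by editing it."""
    return MALICIOUS_STRING in data

def advanced_rule(data):
    """Advanced rule: the string match OR a combination of stabler traits --
    PE magic, a plausible size range, and a behavior-linked API name."""
    is_pe = data[:2] == b"MZ"                       # DOS/PE header magic
    plausible_size = 1024 <= len(data) <= 2_000_000
    behavior_marker = b"CreateRemoteThread" in data  # common injection API
    return basic_rule(data) or (is_pe and plausible_size and behavior_marker)
```

A variant whose author rotated the C2 string slips past the basic condition but is still caught by the behavioral one, which is the paper's argument for layering both rule styles.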

Times Change and Your Training Data Should Too: The Effect of Training Data Recency on Twitter Classifiers
By Ryan O'Grady
July 11, 2018

  • Sophisticated adversaries are moving their botnet command and control infrastructure to social media microblogging sites such as Twitter. As security practitioners work to identify new methods for detecting and disrupting such botnets, including machine-learning approaches, we must better understand what effect training data recency has on classifier performance. This research investigates the performance of several binary classifiers and their ability to distinguish between non-verified and verified tweets as the offset between the age of the training data and test data changed. Classifiers were trained on three feature sets: tweet-only features, user-only features, and all features. Key findings show that classifiers perform best at +0 offset, feature importance changes over time, and more features are not necessarily better. Classifiers using user-only features performed best, with a mean Matthews correlation coefficient of 0.95 ± 0.04 at +0 offset, 0.58 ± 0.43 at -8 offset, and 0.51 ± 0.21 at +8 offset. The R2 values are 0.90, 0.34, and 0.26, respectively. Thus, the classifiers tested with +0 offset accounted for 56% to 64% more variance than those tested with −8 and +8 offset. These results suggest that classifier performance is sensitive to the recency of the training data relative to the test data. Further research is needed to replicate this experiment with botnet vs. non-botnet tweets to determine if similar classifier performance is possible and the degree to which performance is sensitive to training data recency.
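The Matthews correlation coefficient reported above summarizes a binary confusion matrix in a single value between -1 and +1, where +1 is perfect prediction, 0 is no better than chance, and -1 is total disagreement. A quick sketch of the computation:

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # degenerate matrix (an all-one-class prediction)
    return (tp * tn - fp * fn) / denom
```

Unlike raw accuracy, MCC stays honest on imbalanced classes, which matters when genuine botnet tweets are a small fraction of the stream.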

Extracting Timely Sign-in Data from Office 365 Logs
By Mark Lucas
May 22, 2018

  • Office 365 is quickly becoming a repository of valuable organizational information, including data that falls under multiple privacy laws. Timely detection of a compromised account and stopping the bad guy before data is exfiltrated, destroyed, or the account used for nefarious purposes is the difference between an incident and a compromise. Microsoft provides audit logging and alerting tools that can help system administrators find these incidents. An examination of the efficacy and efficiency of these tools, including their shortcomings and advantages, provides insight into how best to use them to protect individual accounts and the organization as a whole.

Evaluation of Comprehensive Taxonomies for Information Technology Threats
By Steven Launius
March 26, 2018

  • Categorization of all information technology threats can improve communication of risk for an organization’s decision-makers who must determine the investment strategy of security controls. While there are several comprehensive taxonomies for grouping threats, there is an opportunity to establish the foundational terminology and perspective for communicating threats across the organization. This is important because confusion about information technology threats poses a direct risk of damaging an organization’s operational longevity. In order for leadership to allocate security resources to counteract prevalent threats in a timely manner, they must understand those threats quickly. A study that investigates techniques for categorizing information technology threats for nontechnical decision-makers, through a qualitative review of grouping methods in published threat taxonomies, could remedy the situation.

Pick a Tool, the Right Tool: Developing a Practical Typology for Selecting Digital Forensics Tools
By J. Richard “Rick” Kiper, Ph.D.
March 16, 2018

  • One of the most common challenges for a digital forensic examiner is tool selection. In recent years, examiners have enjoyed a significant expansion of the digital forensic toolbox – in both commercial and open source software. However, the increase of digital forensics tools did not come with a corresponding organizational structure for the toolbox. As a result, examiners must conduct their own research and experiment with tools to find one appropriate for a particular task. This study collects input from forty-six practicing digital forensic examiners to develop a Digital Forensics Tools Typology, an organized collection of tool characteristics that can be used as selection criteria in a simple search engine. In addition, a novel method is proposed for depicting quantifiable digital forensic tool characteristics.

PCAP Next Generation: Is Your Sniffer Up to Snuff?
By Scott D. Fether
March 16, 2018

  • The PCAP file format is widely used for packet capture within the network and security industry, but it is not the only standard. The PCAP Next Generation (PCAPng) Capture File Format is a refreshing improvement that adds extensibility, portability, and the ability to merge and append data to a wire trace. While Wireshark has led the way in supporting the new format, other tools have been slow to follow. With advantages such as the ability to capture from multiple interfaces, improved time resolution, and the ability to add per-packet comments, support for the PCAPng format should be developing more quickly than it has. This paper describes the new standard, displays methods to take advantage of new features, introduces scripting that can make the format useable, and makes the argument that migration to PCAPng is necessary.
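The extensibility PCAPng adds starts with its block structure: every file opens with a Section Header Block (SHB) whose byte-order magic tells readers how to interpret the rest of the section. A short sketch of writing and re-reading a minimal little-endian SHB with Python's struct module (options omitted for brevity):

```python
import struct

SHB_TYPE = 0x0A0D0D0A        # Section Header Block type code
BYTE_ORDER_MAGIC = 0x1A2B3C4D

def write_shb():
    """Build a minimal little-endian Section Header Block (no options)."""
    # byte-order magic, version 1.0, section length -1 (unknown/append-friendly)
    body = struct.pack("<IHHq", BYTE_ORDER_MAGIC, 1, 0, -1)
    total_len = 4 + 4 + len(body) + 4   # type + length + body + trailing length
    return struct.pack("<II", SHB_TYPE, total_len) + body + struct.pack("<I", total_len)

def read_shb(data):
    """Parse the fields back out; returns (version_major, version_minor)."""
    block_type, total_len, magic, major, minor, _ = struct.unpack_from("<IIIHHq", data)
    assert block_type == SHB_TYPE and magic == BYTE_ORDER_MAGIC
    return major, minor
```

The block total length appearing both before and after the body is what makes appending and backward traversal cheap, and the section length of -1 is what lets a capture tool keep appending blocks without rewriting the header.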

Bug Bounty Programs: Enterprise Implementation
By Jason Pubal
January 17, 2018

  • Bug bounty programs are incentivized, results-focused programs that encourage security researchers to report security issues to the sponsoring organization. These programs create a cooperative relationship between security researchers and organizations that allows the researchers to receive rewards for identifying application vulnerabilities. Bug bounty programs have gone from obscurity to being embraced as a best practice in just a few years: application security maturity models have added bug bounty programs, and there are standards for vulnerability disclosure best practices. By leveraging a global community of researchers available 24 hours a day, 7 days a week, information security teams can continuously deliver application security assessments, keeping pace with agile development and continuous integration deployments while complementing existing controls such as penetration testing and source code reviews.

Container Intrusions: Assessing the Efficacy of Intrusion Detection and Analysis Methods for Linux Container Environments
By Alfredo Hickman
January 13, 2018

  • The unique and intrinsic methods by which Linux application containers are created, deployed, networked, and operated do not lend themselves well to the conventional application of methods for conducting intrusion detection and analysis in traditional physical and virtual machine networks. While similarities exist in some of the methods used to perform intrusion detection and analysis in conventional networks as compared to container networks, the effectiveness of the two has not been thoroughly measured and assessed; this presents a gap in application container security knowledge. By researching the efficacy of these methods as implemented in container networks compared to traditional networks, this research will provide empirical evidence to identify the gap and provide data useful for identifying and developing new and more effective methods to secure application container networks.

Looking Under the Rock: Deployment Strategies for TLS Decryption
By Chris Farrell
January 13, 2018

  • Attackers can freely exfiltrate confidential information all while under the guise of ordinary web traffic. A remedy for businesses concerned about these risks is to decrypt the communication to inspect the traffic, then block it if it presents a risk to the organization. However, these solutions can be challenging to implement. Existing infrastructure, privacy and legal concerns, latency, and differing monitoring tool requirements are a few of the obstacles facing organizations wishing to monitor encrypted traffic. TLS decryption projects can be successful with proper scope definition, an understanding of the architectural challenges presented by decryption, and the options available for overcoming those obstacles.

Digital Forensic Analysis of Amazon Linux EC2 Instances
By Ken Hartman
January 13, 2018

  • Companies continue to shift business-critical workloads to cloud services such as Amazon Web Services Elastic Cloud Computing (EC2). With demand for skilled security engineers at an all-time high, many organizations do not have the capability to do an adequate forensic analysis to determine the root cause of an intrusion or to identify indicators of compromise. To help organizations improve their incident response capability, this paper presents specific tactics for the forensic analysis of Amazon Linux that align with the SANS Finding Malware Step by Step process for Microsoft Windows.

BYOD Security Implementation for Small Organizations
By Raphael Simmons
December 15, 2017

  • The rapid growth of the mobile industry has caused a shift in the way organizations work across all industry sectors. Bring your own device (BYOD) is a current industry trend that allows employees to use their personal devices, such as laptops, tablets, and mobile phones, to connect to the internal network. The number of external devices that can now connect to an organization that implements a BYOD policy has led to a proliferation of security risks. The National Institute of Standards and Technology lists these high-level threats and vulnerabilities of mobile devices: lack of physical security controls, use of untrusted mobile devices, use of untrusted networks, use of untrusted applications, interaction with other systems, use of untrusted content, and use of location services. A well-implemented Mobile Device Management (MDM) tool combined with network access controls can be used to mitigate the risks associated with a BYOD policy.

Who's in the Zone? A Qualitative Proof-of-Concept for Improving Remote Access Least-Privilege in ICS-SCADA Environments
By Kevin Altman
December 4, 2017

  • Remote access control in many ICS-SCADA environments is of limited effectiveness, leading to excessive privilege for staff whose responsibilities are bounded by region, site, or device. The inability to implement more restrictive least-privilege access controls may result in unacceptable residual risk from internal and external threats. Security vendors and ICS cybersecurity practitioners have recognized this issue and provide options to address these concerns, such as inline security appliances, network authentication, and user-network-based access control. Each of these solutions reduces privileges but has tradeoffs. This paper evaluates network-based access control combined with security zones and its benefits for existing ICS-SCADA environments. A Proof-of-Concept (PoC) evaluates a promising option that is not widely known or deployed in ICS-SCADA.

Hacking Humans: The Evolving Paradigm with Virtual Reality
By Andrew Andrasik
November 22, 2017

  • Virtual reality (VR) systems are evolving from high-end gaming and military applications to being used in day-to-day business operations and daily life. Cybersecurity professionals must begin now to prepare proactive threat analysis and incident handling plans that cover information systems and users. Previous compromises illustrate the devastating effects malware can have on the confidentiality, integrity, and availability of information systems. These disastrous consequences may be transferred directly to the user given his or her perception of events. Even in the early stages, VR represents a new paradigm within the information age. Today, users view information systems through a monitor that acts as a window into a virtual environment. Within VR, a user may become completely immersed while absorbing information from all five senses. VR represents a dichotomy that adds a potential human component to an information system compromise. This research project examines offensive tactics, techniques, and procedures, then extrapolates them to a compromised VR system and its user to illustrate the hazards associated with VR.

Leverage Risk Focused Teams to Strengthen Resilience against Cyber Risks
By Dave Bishop
November 17, 2017

  • Information security, risk management, audit and business continuity teams must continue to evolve and mature to combat the growing cyber risks impacting business operations. Each team has standards and frameworks, but they often don't speak the same language or understand how each group intersects in protecting the organization. This research identifies opportunities to reduce resource duplication and integrate information security and risk-focused teams to strengthen the organization's resilience against cyber risks.

The State of Honeypots: Understanding the Use of Honey Technologies Today
By Andrea Dominguez
November 17, 2017

  • The aim of this study is to fill in the gaps in data on the real-world use of honey technologies. The goal has also been to better understand information security professionals' views and attitudes towards them. While there is a wealth of academic research in cutting-edge honey technologies, there is a dearth of data related to the practical use of these technologies outside of research laboratories. The data for this research was collected via a survey which was distributed to information security professionals. This research paper includes details on the design of the survey, its distribution, analysis of the results, insights, lessons learned and two appendices: the survey in its entirety and a summary of the data collected.

Exploring the Effectiveness of Approaches to Discovering and Acquiring Virtualized Servers on ESXi
By Scott Perry
November 17, 2017

  • As businesses continue to move to virtualized environments, investigators need updated techniques to acquire virtualized servers. These virtualized servers contain a plethora of relevant data and may hold proprietary software and databases that are nearly impossible to recreate. Before an acquisition, investigators sometimes rely on the host administrators to provide them with network topologies and server information. This paper will demonstrate tools and techniques to conduct server and network discovery in a virtualized environment and how to leverage the software used by administrators to acquire virtual machines hosted on vSphere and ESXi.

Tackling the Unique Digital Forensic Challenges for Law Enforcement in the Jurisdiction of the Ninth U.S. Circuit Court
By John Garris
November 17, 2017

  • The creation of a restrictive digital evidence search protocol by the U.S. Ninth Circuit Court of Appeals - the most stringent in the United States - triggered intense legal debate and caused significant turmoil regarding digital forensics procedures and practices in law enforcement operations. Understanding the Court's legal reasoning and the U.S. Department of Justice's counter-arguments regarding this protocol is critical in appreciating how the tension between privacy concerns and the challenges to law enforcement stand at the center of this unique Information Age issue. By focusing on the Court's core assumption that the seizure and search of electronically stored information are inherently overly intrusive, digital forensics practitioners have a worthy target to focus their efforts in the advancement of digital forensics processes, procedures, techniques, and tool-sets. This paper provides an overview of various proposals, developments, and possible approaches to help address the privacy concerns central to the Court's decision, while potentially improving the overall effectiveness and efficiency of digital forensic operations in law enforcement.

Can the "Gorilla" Deliver? Assessing the Security of Google's New "Thread" Internet of Things (IoT) Protocol
By Kenneth Strayer
October 6, 2017

  • Security incidents associated with Internet of Things (IoT) devices have recently gained high visibility, such as the Mirai botnet that exploited vulnerabilities in remote cameras and home routers. Currently, no industry standard exists to provide the right combination of security and ease-of-use in a low-power, low-bandwidth environment. In 2014, the Thread Group, Inc. released the new Thread networking protocol. Google's Nest Labs recently open-sourced their implementation of Thread in an attempt to become a market standard for the home automation environment. The Thread Group claims that Thread provides improved security for IoT devices. But in what way is this claim true, and how does Thread help address the most significant security risks associated with IoT devices? This paper assesses the new IEEE 802.15.4 "Thread" protocol for IoT devices to determine its potential contributions in mitigating the OWASP Top 10 IoT Security Concerns. It provides developers and security professionals a better understanding of what risks Thread addresses and what challenges remain.

Hardening BYOD: Implementing Critical Security Control 3 in a Bring Your Own Device (BYOD) Architecture
By Christopher Jarko
September 22, 2017

  • The increasing prevalence of Bring Your Own Device (BYOD) architecture poses many challenges to information security professionals. These include, but are not limited to: the risk of loss or theft, unauthorized access to sensitive corporate data, and lack of standardization and control. This last challenge can be particularly troublesome for an enterprise trying to implement the Center for Internet Security (CIS) Critical Security Controls for Effective Cyber Defense (CSCs). CSC 3, Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations and Servers, calls for hardened operating systems and applications. Even in traditional enterprise environments, this requires a certain amount of effort, but it is much more difficult in a BYOD architecture where computer hardware and software are unique to each employee and company control of that hardware and software is constrained. Still, it is possible to implement CSC 3 in a BYOD environment. This paper will examine options for managing a standard, secure Windows 10 laptop as part of a BYOD program, and will also discuss the policies, standards, and guidelines necessary to ensure the implementation of this Critical Security Control is as seamless as possible.

Botnet Resiliency via Private Blockchains
By Jonny Sweeny
September 22, 2017

  • Criminals operating botnets are persistently in an arms race with network security engineers and law enforcement agencies to make botnets more resilient. Innovative features constantly increase the resiliency of botnets but cannot mitigate all the weaknesses exploited by researchers. Blockchain technology includes features which could improve the resiliency of botnet communications. A trusted, distributed, resilient, fully-functioning command and control communication channel can be achieved using the combined features of private blockchains and smart contracts.
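The tamper-evidence property that makes blockchains attractive for command-and-control resiliency can be illustrated with a toy hash-chained command channel. This is a minimal sketch of the general idea only, not the paper's design; commands and structure are invented:

```python
import hashlib
import json

def add_block(chain, command):
    """Append a command block linked to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev, "command": command}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({"prev": prev, "command": command, "hash": digest})
    return chain

def verify(chain):
    """A bot can detect any tampered, reordered, or removed block."""
    prev = "0" * 64
    for block in chain:
        body = {"prev": prev, "command": block["command"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True
```

Because each block commits to its predecessor's hash, a defender who alters or injects a command invalidates every later block, which is the resiliency property the abstract describes.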

OSSIM: CIS Critical Security Controls Assessment in a Windows Environment
By Kevin Geil
September 22, 2017

  • Use of a Security Information and Event Management (SIEM) or log management platform is a recommendation common to several of the “CIS Critical Security Controls For Effective Cyber Defense” (2016). Because the CIS Critical Security Controls (CSC) focus on automation, measurement and continuous improvement of control application, a SIEM is a valuable tool. AlienVault's Open Source SIEM (OSSIM) is free and capable, making it a popular choice for administrators seeking experience with SIEM. While there is a great deal of documentation on OSSIM, specific information that focuses on exactly what events to examine, and then how to report findings, is not readily accessible. This paper uses a demo environment to provide specific examples and instructions for using OSSIM to assess a CIS Critical Security Controls implementation in a common environment: a Windows Active Directory domain. The 20 Critical Security Controls can be mapped to other controls in most compliance frameworks and guidelines; therefore, the techniques in this document should be applicable across a wide variety of control implementations.

Trust No One: A Gap Analysis of Moving IP-Based Network Perimeters to A Zero Trust Network Architecture
By John Becker
September 22, 2017

  • Traditional IP-based access controls (e.g., firewall rules based on source and destination addresses) have defined the network perimeter for decades. Threats have evolved to evade and bypass these IP restrictions using techniques such as spear phishing, malware, credential theft, and lateral movement. As these threats evolve, so have the demands from end users for increased accessibility. Remote employees require secure access to internal resources. Cloud services have moved the perimeter outside of the enterprise network. The DevOps movement has emphasized speed and agility over up-front network designs. This paper identifies gaps to implementation for organizations in the discovery phase of migrating to identity-based access controls as described by leading cloud companies.

A Spicy Approach to WebSockets: Enhancing Bro's WebSockets Network Analysis by Generating a Custom Protocol Parser with Spicy
By Jennifer Gates
September 22, 2017

  • Although the Request for Comments (RFC) defining WebSockets was released in 2011, there has been little focus on using the Bro Intrusion Detection System (IDS) to analyze WebSockets traffic. However, there has been progress in exploiting the WebSockets protocol. The ability to customize and expand Bro’s capabilities to analyze new protocols is one of its chief benefits. The developers of Bro are also working on a new framework called Spicy that allows security professionals to generate new protocol parsers. This paper focuses on the development of Spicy and Bro scripts that allow visibility into WebSockets traffic. The research conducted compared the data that can be logged with existing Bro protocol analyzers to data that can be logged after writing a WebSockets protocol analyzer in Spicy. The research shows increased effectiveness in detecting malicious WebSockets traffic using Bro when the traffic is parsed with a Spicy script. Writing Bro logging scripts tailored to a particular WebSockets application further increases their effectiveness.
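For readers unfamiliar with the wire format such a parser must handle, the RFC 6455 framing rules can be sketched in a few lines of Python. This is an illustration of the frame layout only, not the Spicy grammar the paper develops:

```python
import struct

OPCODES = {0x0: "continuation", 0x1: "text", 0x2: "binary",
           0x8: "close", 0x9: "ping", 0xA: "pong"}

def parse_frame(data):
    """Parse a single RFC 6455 WebSocket frame into its fields."""
    fin = bool(data[0] & 0x80)          # final-fragment flag
    opcode = data[0] & 0x0F
    masked = bool(data[1] & 0x80)       # client-to-server frames are masked
    length = data[1] & 0x7F
    offset = 2
    if length == 126:                   # 16-bit extended payload length
        length = struct.unpack(">H", data[2:4])[0]
        offset = 4
    elif length == 127:                 # 64-bit extended payload length
        length = struct.unpack(">Q", data[2:10])[0]
        offset = 10
    if masked:                          # 4-byte masking key precedes payload
        key = data[offset:offset + 4]
        offset += 4
        payload = bytes(b ^ key[i % 4]
                        for i, b in enumerate(data[offset:offset + length]))
    else:
        payload = data[offset:offset + length]
    return {"fin": fin, "opcode": OPCODES.get(opcode, hex(opcode)),
            "masked": masked, "payload": payload}
```

The masking step is why generic payload inspection misses WebSockets content: client frames must be XOR-unmasked before any signature matching can apply, which is precisely the visibility a dedicated protocol analyzer restores.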

Does Network Micro-segmentation Provide Additional Security?
By Steve Jaworski
September 15, 2017

  • Network segmentation takes a large group of hosts and creates smaller groups that can communicate with each other without traversing a security control. Each smaller group of hosts has defined security controls, and groups are independent of each other. Network micro-segmentation goes further by configuring controls around individual hosts. The goal of network micro-segmentation is to provide more granular security and reduce an attacker's ability to easily compromise an entire network. If an attacker is successful in compromising a host, he or she is limited to only the network segment on which the host resides. If the host resides in a micro-segment, then the attacker is restricted to only that host. This paper will discuss what network segmentation and micro-segmentation are, where they apply, and whether the additional layer of security justifies the added complexity.

HL7 Data Interfaces in Medical Environments: Attacking and Defending the Achilles' Heel of Healthcare
By Dallas Haselhorst
September 12, 2017

  • On any given day, a hospital operating room can be chaotic. The atmosphere can make one’s head spin with split-second decisions. In the same hospital environment, medical data also whizzes around, albeit virtually. Beyond the headlines involving medical device insecurities and hospital breaches, healthcare communication standards are equally insecure. This fundamental design flaw places patient data at risk in nearly every hospital worldwide. Without protections in place, a hospital visit today could become a patient’s worst nightmare tomorrow. Could an attacker collect the data and sell it to the highest bidder for credit card or tax fraud? Or perhaps they have far more malicious plans, such as causing bodily harm? Regardless of their intentions, healthcare data is under attack and it is highly vulnerable. This research focuses on attacking and defending HL7, the unencrypted and unverified data standard used in healthcare for nearly all system-to-system communications.

HL7 Data Interfaces in Medical Environments: Understanding the Fundamental Flaw in Healthcare
By Dallas Haselhorst
September 12, 2017

  • Ask healthcare IT professionals where the sensitive data resides and most will inevitably direct attention to a hardened server or database with large amounts of protected health information (PHI). The respondent might even know details about data storage, backup plans, etc. Asked the same question, a penetration tester or security expert may provide a similar answer before discussing database or operating system vulnerabilities. Fortunately, there is likely nothing wrong with the data at that point in its lifetime. It potentially sits on a fully encrypted disk protected by usernames and passwords, and it might have audit-level tracking enabled. The server may also have some level of segmentation from non-critical servers or access restrictions based on source IP addresses. But how did those bits and bytes of healthcare data get to that hardened server? Typically, in a way no one would ever expect... 100% unencrypted and unverified. HL7 is the fundamentally flawed, insecure standard used throughout healthcare for nearly all system-to-system communications. This research examines the HL7 standard, potential attacks on the standard, and why medical records require better protection than current efforts provide.
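The pipe-delimited structure that makes HL7 v2 trivial to read off the wire can be shown in a few lines of Python. The parser below is a simplification of the standard, and the message content is invented sample data:

```python
def parse_hl7(message):
    """Split a pipe-delimited HL7 v2 message into (segment, fields) pairs.
    Segments are separated by carriage returns, fields by '|'."""
    segments = []
    for raw in message.strip().split("\r"):
        fields = raw.split("|")
        segments.append((fields[0], fields[1:]))
    return segments

# Invented ADT (admit/discharge/transfer) message — not real patient data.
msg = ("MSH|^~\\&|LAB|HOSP|EMR|HOSP|202101011200||ADT^A01|123|P|2.3\r"
       "PID|1||555-44-3333||DOE^JOHN")
segments = parse_hl7(msg)
```

Because the message transits the network in exactly this cleartext form, anyone on the path can recover the patient identifier and name with the same handful of string splits, which is the fundamental flaw the paper examines.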

When a picture is worth a thousand products: Image protection in a digital age
By Shawna Turner
September 12, 2017

  • Today, a lack of fashion industry specific information security controls and legal protection puts fashion industry companies at significant risk of Intellectual Property theft and counterfeiting. This risk is only growing as traditional methods of manufacturing are rapidly evolving toward digital models of design and mass production, using Industrial Control System (ICS) approaches for mass production. As mass production moves to digital manufacturing, the effect of losing new product 2D and 3D imagery, as well as the speed and lack of traceability around those losses, could significantly impact corporate bottom lines and risk profiles.

A Technical Approach at Securing SaaS using Cloud Access Security Brokers
By Luciana Obregon
September 6, 2017

  • The adoption of cloud services allows organizations to become more agile in the way they conduct business, providing scalable, reliable, and highly available services or solutions for their employees and customers. Cloud adoption significantly reduces total cost of ownership (TCO) and minimizes hardware footprint in data centers. This paradigm shift has left security professionals securing abstract environments for which conventional security products are no longer effective. The goal of this paper is to analyze a set of cloud security controls and security deployment models for SaaS applications that are purely technical in nature while developing practical applications of such controls to solve real-world problems facing most organizations. The paper will also provide an overview of the threats targeting SaaS, present use cases for SaaS security controls, test cases to assess effectiveness, and reference architectures to visually represent the implementation of cloud security controls.

Packet Capture on AWS
By Teri Radichel
August 14, 2017

  • Companies using AWS (Amazon Web Services) will find that traditional means of full packet capture using span ports are not possible. As defined in the AWS Service Level Agreement, Amazon runs certain aspects of the cloud platform and does not give customers access to physical networking hardware. Although access to physical network equipment is limited, packet capture is still possible on AWS but needs to be architected in a different way. Instead of using span ports, security professionals can leverage the software that runs on top of the cloud platform. The tools and services provided by AWS may facilitate more automated, cost-effective, scalable packet capture solutions for some companies when compared to traditional data center approaches.

Complement a Vulnerability Management Program with PowerShell
By Colm Kennedy
August 10, 2017

  • A vulnerability management program is a critical task that all organizations should be running. Part of this program involves patching systems regularly and keeping installed software up to date. Once a vulnerability program is in place, organizations need to remediate discovered vulnerabilities quickly. Occasionally, some discovered vulnerabilities are false positives, and the problem with false positives is that manually vetting them is time-consuming. Tools are available that assist in showing what patches may be missing, such as SCCM, but they can be rather costly. For organizations concerned that these types of programs hurt their budgets, there are free options available. PowerShell is free software that, if utilized, can complement an organization's vulnerability management program by assisting in scanning for unpatched systems. This paper presents a PowerShell script that provides administrators with further insight into which systems are unpatched and streamlines investigations of possible false positives, at no additional cost.
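At its core, the check such a script performs — installed patches versus required patches — reduces to a set difference. The paper itself works in PowerShell; this Python sketch shows only the comparison logic, and the KB numbers are arbitrary examples:

```python
def find_missing_patches(installed, required):
    """Return the required patch identifiers not present on the host."""
    return set(required) - set(installed)

# Hypothetical inventory, e.g. as harvested from Get-HotFix output.
installed_kbs = {"KB4019472", "KB4022715"}
required_kbs = {"KB4019472", "KB4022715", "KB4025338"}

missing = find_missing_patches(installed_kbs, required_kbs)
```

Running the same comparison per host against scanner findings is what lets an administrator quickly separate genuinely unpatched systems from false positives.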

Forensicating Docker with ELK
By Stefan Winkel
July 17, 2017

  • Docker has made an immense impact on how software is developed and deployed in today's information technology environments. The quick and broad adoption of Docker as part of the DevOps movement has not come without cost: the rate at which vulnerabilities are introduced in the development cycle has increased many times over. While efforts like Docker Notary and Security Testing as a Service are trying to catch up and mitigate some of these risks, Docker container escapes through Linux kernel exploits, like the widespread Dirty COW privilege escalation exploit of late 2016, can be disastrous in cloud and other production environments. Organizations increasingly find themselves needing to forensicate Docker setups as part of incident investigations. Centralized event logging of Docker containers is becoming crucial to successful incident response. This paper explores how to use the Elastic Stack (Elasticsearch, Logstash, and Kibana) as part of incident investigations of Docker images. It will describe the effectiveness of ELK through a forensic investigation of a Docker container escape that used Dirty COW.

Using Docker to Create Multi-Container Environments for Research and Sharing Lateral Movement
By Shaun McCullough
July 3, 2017

  • Docker, a program for running applications in containers, can be used to create multi-container infrastructures that mimic a more sophisticated network for research in penetration techniques. This paper will demonstrate how Docker can be used by information security researchers to build and share complex environments for recreation by anyone. The scenarios in this paper recreate previous research done in SSH tunneling, pivoting, and other lateral movement operations. By using Docker to build sharable and reusable test infrastructure, information security researchers can help readers recreate the research in their own environments, enhancing learning with a more immersive and hands-on research project.
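A multi-container lab of the kind described can be declared in a short Compose file. The fragment below is a hypothetical example, not one of the paper's scenarios: it places a target on an internal network reachable only through a pivot host, so lateral movement is required to reach it. Image choices, service names, and network names are all illustrative:

```yaml
# Hypothetical three-container lateral-movement lab.
# The attacker can reach the pivot, but the target sits on a
# second network that only the pivot is attached to.
version: "3"
services:
  attacker:
    image: kalilinux/kali-rolling
    networks: [dmz]
  pivot:
    image: ubuntu:16.04
    networks: [dmz, internal]
  target:
    image: ubuntu:16.04
    networks: [internal]
networks:
  dmz:
  internal: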

No Safe Harbor: Collecting and Storing European Personal Information in the U.S.
By Alyssa Robinson
April 24, 2017

  • When the European Court of Justice nullified the Safe Harbor Framework in October of 2015, it left more than 4,000 companies in legal limbo regarding their transfer of personal data for millions of European customers (Nakashima, 2015). The acceptance of the Privacy Shield Framework in July of 2016 expands the options for U.S. companies that need to transfer EU personal data to the US but does little to ameliorate the upheaval caused by the Safe Harbor annulment. This paper covers the history of data privacy negotiations between Europe and the United States, providing an understanding of how the current compromises were reached and what threats they may face. It outlines the available mechanisms for data transfer, including Binding Corporate Rules, Standard Contractual Clauses, and the Privacy Shield Framework and compares their requirements, advantages, and risks. With this information, US organizations considering storing or processing European personal data can choose the transfer mechanism best suited to their situation.

Identifying Vulnerable Network Protocols with PowerShell
By David Fletcher
April 6, 2017

  • Microsoft Windows PowerShell has led to several exploit frameworks such as PowerSploit, PowerView, and PowerShell Empire. However, few of these frameworks investigate network traffic for exploitative potential. Analyzing a small amount of network traffic can lead to the discovery of possible network-based attack vectors such as Virtual Router Redundancy Protocol (VRRP), Dynamic Trunking Protocol (DTP), Link-Local Multicast Name Resolution (LLMNR), and PXE boot attacks, to name a few. How does one gather and analyze this traffic when Windows does not include an integrated packet analysis tool? Microsoft Windows PowerShell includes several network analysis and network traffic related capabilities. This paper will explore the use of these capabilities with the goal of building a PowerShell reconnaissance module which will capture, analyze, and identify commonly misconfigured protocols without the need to install a third-party tool within a Microsoft Windows environment.
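The triage step such a module performs can be sketched as matching captured packet metadata against well-known risky destinations. This is a simplified Python illustration, not the paper's PowerShell module; the multicast addresses shown are the standard destinations for LLMNR and VRRP:

```python
# Packet summaries as (dst_ip, protocol, dst_port) tuples.
RISKY_DESTINATIONS = {
    ("224.0.0.252", "udp", 5355): "LLMNR",  # spoofable name resolution
    ("224.0.0.18", "vrrp", None): "VRRP",   # often-unauthenticated router failover
}

def flag_packets(packets):
    """Return the names of risky protocols observed in captured traffic."""
    hits = []
    for dst_ip, proto, port in packets:
        label = RISKY_DESTINATIONS.get((dst_ip, proto, port))
        if label:
            hits.append(label)
    return hits
```

In practice the module's value lies exactly here: the mere presence of these destinations in passively captured traffic indicates a protocol an attacker could abuse, without any active scanning.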

Securing the Home IoT Network
By Manuel Leos Rivas
April 5, 2017

  • The Internet of Things (IoT) has proven its ability to cause massive service disruption because of the lack of security in many devices. The vulnerabilities that allow those denial-of-service attacks are often caused by poor or absent security practices when developing or installing the products. The common home network is not designed to protect against the design errors in IoT devices that expose the privacy of the users. The affordable price of single-board computers (SBC), their small power requirements, and their customization capabilities can help improve the protection of the home IoT network. An SBC can also add powerful features such as auditing, inspection, authentication, and authorization to improve control over who and what can have access. A properly configured home-control gateway reduces some common risks associated with IoT, such as vendor-embedded backdoors and default credentials. Having an open source trusted device with a configuration shared and audited by many experts can reduce many of the bugs and misconfigurations introduced by vendor security program deficiencies.

Auto-Nuke It from Orbit: A Framework for Critical Security Control Automation
By Jeremiah Hainly
March 15, 2017

  • Over 83% of security teams report that the use of automation in security needs to increase within the next three years (Algosec, 2016). With automation becoming a reality for a growing number of companies, there will also be an increased demand for open-sourced scripts to get started. This paper will provide a framework for prioritizing and developing security automation and will demonstrate this process by creating a script to automate a common information security response procedure - the reimaging of an infected endpoint. The primary function of the script will be to access the application program interface (API) of various enterprise software solutions to speed up the manual tasks involved in performing a reimage.
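The orchestration pattern described — chaining several product APIs into one reimage workflow — can be sketched as an ordered plan of API calls. Every endpoint below is a hypothetical placeholder, not a real product API, and the sequencing is an assumption about a typical workflow:

```python
def build_reimage_plan(hostname):
    """Assemble, in order, the API calls a reimage workflow would make:
    isolate the host, open a tracking ticket, then trigger the reimage.
    All URLs are invented placeholders for illustration."""
    return [
        {"action": "isolate",
         "url": f"https://edr.example.com/api/hosts/{hostname}/isolate"},
        {"action": "ticket",
         "url": "https://itsm.example.com/api/incidents",
         "body": {"host": hostname, "task": "reimage"}},
        {"action": "deploy",
         "url": f"https://sccm.example.com/api/hosts/{hostname}/reimage"},
    ]
```

Separating plan construction from execution, as here, also makes the automation auditable: a responder can review exactly which calls will fire before the script touches any system.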

Cloud Security Monitoring
By Balaji Balakrishnan
March 13, 2017

  • This paper discusses how to apply security log monitoring capabilities to Amazon Web Services (AWS) Infrastructure as a Service (IaaS) cloud environments. It provides an overview of AWS CloudTrail and CloudWatch Logs, which can be stored and mined for suspicious events. Security teams implementing AWS solutions will benefit from applying security monitoring techniques to prevent unauthorized access and data loss. Splunk is used to ingest all AWS CloudTrail and CloudWatch Logs, and machine learning models are used to identify suspicious activities in the AWS cloud infrastructure. The audience for this paper is security teams trying to implement AWS security monitoring.
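A minimal version of the detection logic — flagging CloudTrail records by event name or outcome — might look like the sketch below. The watchlist and sample field layout are illustrative; a real deployment would express this as Splunk searches or the machine learning models the paper describes:

```python
def flag_events(events):
    """Flag CloudTrail records that disable logging or show a failed
    console login. The watchlist here is illustrative, not exhaustive."""
    alerts = []
    for event in events:
        name = event.get("eventName")
        if name in {"DeleteTrail", "StopLogging"}:
            # Tampering with the audit trail itself is a classic red flag.
            alerts.append((name, event.get("userIdentity", {}).get("userName")))
        elif (name == "ConsoleLogin"
              and event.get("responseElements", {}).get("ConsoleLogin") == "Failure"):
            alerts.append((name, event.get("sourceIPAddress")))
    return alerts
```

Even this naive rule set illustrates the paper's premise: CloudTrail records carry enough context (actor, outcome, source address) to turn raw API logs into actionable alerts.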

In-Depth Look at Tuckman's Ladder and Subsequent Works as a Tool for Managing a Project Team
By Aron Warren
March 1, 2017

  • Bruce Tuckman's 1965 research on modeling group development, titled "Developmental Sequence in Small Groups," laid out a framework consisting of four stages a group will transition between while members interact with each other: forming, storming, norming, and performing. This paper will describe in detail the original Tuckman model as well as derivative research in group development models. Traditional and virtual team environments will both be addressed to assist IT project managers in understanding how a team evolves over time with a goal of achieving a successful project outcome.

Medical Data Sharing: Establishing Trust in Health Information Exchange
By Barbara Filkins
March 1, 2017

  • Health information exchange (HIE) "allows doctors, nurses, pharmacists, other health care providers and patients to appropriately access and securely share a patient's vital medical information electronically--improving the speed, quality, safety and cost of patient care" (, 2014). The greatest gain in the use of HIE is the ability to achieve interoperability across providers that, except for the care of a given patient, are unrelated. But, by its very nature, HIE also raises concern around the protection and integrity of shared, sensitive data. Trust is a major barrier to interoperability.

Tor Browser Artifacts in Windows 10
By Aron Warren
February 24, 2017

  • The Tor network is a popular, encrypted, worldwide, anonymizing virtual network in existence since 2002 and is used by all facets of society such as privacy advocates, journalists, governments, and criminals. This paper will provide a forensic analysis of the Tor Browser version 5 client on a Windows 10 host for an individual or group interested in remnants left by the software. This paper will utilize various free and commercial tools to provide a detailed analysis of filesystem artifacts as well as a comparison between pre- and post-connection to the Tor network using memory analysis.

OS X as a Forensic Platform
By David M. Martin
February 22, 2017

  • The Apple Macintosh and its OS X operating system have seen increasing adoption by technical professionals, including digital forensic analysts. Forensic software support for OS X remains less mature than that of Windows or Linux. While many Linux forensic tools will work on OS X, instructions for how to configure the tool in OS X are often missing or confusing. OS X also lacks an integrated package management system for command line tools. Python, which serves as the basis for many open-source forensic tools, can be difficult to maintain and easy to misconfigure on OS X. Due to these challenges, many OS X users choose to run their forensic tools from Windows or Linux virtual machines. While this can be an effective and expedient solution, those users miss out on much of the power of the Macintosh platform. This research will examine the process of configuring a native OS X forensic environment with many open-source forensic tools, including Bulk Extractor, Plaso, Rekall, Sleuthkit, Volatility, and Yara. This process includes choosing the correct hardware and software, configuring it properly, and overcoming some of the unique challenges of the OS X environment. A series of performance tests will help determine the optimal hardware and software configuration and examine the performance impact of virtualization options.

Indicators of Compromise TeslaCrypt Malware
By Kevin Kelly
February 16, 2017

  • Malware has become a growing concern in a society of interconnected devices and real-time communications. This paper will show how to analyze live ransomware samples and how malware behaves locally, over time, and within the network. Analyzing live ransomware gives a unique three-dimensional perspective, visually locating crucial signatures and behaviors efficiently. In lieu of reverse engineering or parsing the malware executable’s infrastructure, live analysis provides a simpler method to root out indicators. Ransomware touches just about every file and many of the registry keys; analysis can be done, but it needs to be focused. The analysis of malware capabilities from different datasets, including process monitoring, flow data, registry key changes, and network traffic, will yield indicators of compromise. These indicators will be collected using various open source tools such as the Sysinternals suite, Fiddler, Wireshark, and Snort, to name a few, and used to produce defensive countermeasures against unwanted advanced adversary activity on a network. A virtual appliance platform simulating a production Windows 8 OS will be created, infected, and processed to collect indicators that can be used to secure enterprise systems. Different tools will leverage datasets to gather indicators, view malware on multiple layers, contain compromised hosts, and prevent future infections.
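Once indicators of compromise are collected, applying them is largely a matching exercise against observed activity. The sketch below shows the shape of that step; the domains and hash are invented placeholders, not real TeslaCrypt indicators:

```python
import hashlib

# Illustrative indicator sets — NOT real TeslaCrypt IOCs.
IOC_DOMAINS = {"payment-unlock.example", "c2.example.net"}
IOC_SHA256 = {
    # sha256 of the literal bytes b"test", used here purely as a stand-in.
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def match_iocs(observed_domains, file_bytes):
    """Return (indicator_type, value) pairs for any IOC hits."""
    hits = [("domain", d) for d in observed_domains if d in IOC_DOMAINS]
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in IOC_SHA256:
        hits.append(("sha256", digest))
    return hits
```

Feeding the same indicator sets into Snort rules or a proxy blocklist is how such hits become the defensive countermeasures the abstract describes.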

Impediments to Adoption of Two-factor Authentication by Home End-Users
By Preston Ackerman
February 10, 2017

  • Cyber criminals have proven to be both capable and motivated to profit from compromised personal information. The FBI has reported that victims have suffered over $3 billion in losses through compromise of email accounts alone (IC3 2016). One security measure which has been demonstrated to be effective against many of these attacks is two-factor authentication (2FA). The FBI, the Department of Homeland Security US Computer Emergency Readiness Team (US-CERT), and the internationally recognized security training and awareness organization, the SANS Institute, all strongly recommend the use of two-factor authentication. Nevertheless, adoption rates of 2FA are low.

Dissect the Phish to Hunt Infections
By Seth Polley
February 3, 2017

  • Internal defense is a perilous problem facing many organizations today. Sole reliance on external defenses is all too common, leaving the internal organization largely unprotected. Even when internal defense is actually considered, how many think beyond fallible antivirus (AV) or immature data loss prevention (DLP) solutions? Considering the rise of phishing emails and other social engineering campaigns, there is a significantly increased risk that an organization’s current external and internal defenses will fail to prevent compromises. How would a cyber security team detect an attacker establishing a foothold within the center of the organization, or undetectable malware being downloaded internally, if a user were to fall for a phishing attempt?

Forensication Education: Towards a Digital Forensics Instructional Framework
By J. Richard “Rick” Kiper
February 3, 2017

  • The field of digital forensics is a diverse and fast-paced branch of cyber investigations. Unfortunately, common efforts to train individuals in this area have been inconsistent and ineffective, as curriculum managers attempt to plug in off-the-shelf courses without an overall educational strategy. The aim of this study is to identify the most effective instructional design features for a future entry-level digital forensics course. To achieve this goal, an expert panel of digital forensics professionals was assembled to identify and prioritize the features, which included general learning outcomes, specific learning goals, instructional delivery formats, instructor characteristics, and assessment strategies. Data was collected from participants using validated group consensus methods such as Delphi and cumulative voting. The product of this effort was the Digital Forensics Framework for Instruction Design (DFFID), a comprehensive digital forensics instructional framework meant to guide the development of future digital forensics curricula.

Superfish and TLS: A Case Study of Betrayed Trust and Legal Liability
By Sandra Dunn
January 24, 2017

  • Superfish, the adware bundled with Lenovo consumer laptops from 2014-2015, intentionally broke TLS, exposed users' personal data to compromise and theft, and altered search result ads in users' browsers, severely damaging Lenovo's brand reputation. There have been other high-profile cases of intentionally modifying and breaking TLS through questionable and deceptive practices, but few generated as much attention or provide such a clear example of a chain of missteps between Lenovo, Superfish, and their customers. A case study of the Superfish mishap exposes the danger, risk, legal liability, and potential government investigation facing organizations that deploy TLS certificates and keys in ways that break or weaken the security design and put private data or people at risk. The Superfish case further demonstrates the importance of a company's disclosure transparency to avoid accusations of deceptive practices if breaking TLS is required to protect users or an organization's data.

Minimizing Legal Risk When Using Cybersecurity Scanning Tools
By John Dittmer
January 19, 2017

  • When cybersecurity professionals use scanning tools on the networks and devices of organizations, there can be legal risks that need to be managed by individuals and enterprises. Often, scanning tools are used to measure compliance with cybersecurity policies and laws, so they must be used with due care. There are protocols that should be followed to ensure proper use of the scanning tools to prevent interference with normal network or system operations and to ensure the accuracy of the scanning results. Several challenges will be examined in depth, such as measuring scanner accuracy, proper methods of obtaining written consent for scanning, and how to set up a scanning session for optimum examination of systems or networks. This paper will provide cybersecurity professionals and managers with a better understanding of how and when to use the scanning tools while minimizing the legal risk to themselves and their enterprises.

Data Breach Impact Estimation
By Paul Hershberger
January 3, 2017

  • Internal and external auditors spend a significant amount of time planning their audit processes to align their efforts with the needs of the audited organization. The initial phase of that audit cycle is the risk assessment. Establishing a firm understanding of the likelihood and impact of risk guides the audit function and aligns its work with the risks the organization faces. The challenge many auditors and security professionals face is effectively quantifying the potential impact of a data breach on their organization. This paper compares the data breach cost research of the Ponemon Institute and the RAND Corporation, measuring both models against breach costs reported by publicly traded companies under Securities and Exchange Commission (SEC) reporting requirements. The comparisons will show that the RAND Corporation's approach provides organizations with a more accurate and flexible model for estimating the potential cost of data breaches, covering both the direct cost of investigating and remediating a breach and the indirect financial impact associated with regulatory and legal action. Additionally, the comparison indicates that breach-related impacts to revenue and stock valuation are only realized in the short term.

Real-World Case Study: The Overloaded Security Professional's Guide to Prioritizing Critical Security Controls
By Phillip Bosco
December 27, 2016

  • Using a real-world case study of a recently compromised company as a framework, we will step inside the aftermath of an actual breach and determine how the practical implementation of Critical Security Controls (CSC) may have prevented the compromise entirely while providing greater visibility inside the attack as it occurred. The breached company's information security "team" consisted of a single over-worked individual, who found it arduous to identify which critical controls he should focus his limited time implementing. Lastly, we will delve into real-world examples, using previously unpublished research, that serve as practical approaches for teams with limited resources to prioritize and schedule which CSCs will provide the largest impact towards reducing the company's overall risk. Ideally, the observations and approaches identified in this research paper will assist security professionals who may be in similar circumstances.

Finding Bad with Splunk
By David Brown
December 16, 2016

  • There is such a deluge of information that it can be hard for information security teams to know where to focus their time and energy. This paper will recommend common Linux and Windows tools to scan networks and systems, store results to local filesystems, analyze results, and pass any new data to Splunk. Splunk will then help security teams home in on what has changed within the networks and systems by alerting the security teams to any differences between old baselines and new scans. In addition, security teams may not even be paying attention to controls, like whitelisting blocks, that successfully prevent malicious activities. Monitoring failed application execution attempts can give security teams and administrators early warnings that someone may be trying to subvert a system. This paper will guide the security professional on setting up alerts to detect security events of interest like failed application executions due to whitelisting. To solve these problems, the paper will discuss the first five Critical Security Controls and explain what malicious behaviors can be uncovered as a result of alerting. As the paper progresses through the controls, the security professional is shown how to set up baseline analysis, how to configure the systems to pass the proper data to Splunk, and how to configure Splunk to alert on events of interest. The paper does not revolve around how to implement technical controls like whitelisting, but rather how to effectively monitor the controls once they have been implemented.
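As a rough illustration of the baseline-versus-new-scan comparison described above (shown in plain Python rather than Splunk's search language; all hosts and ports are invented for the example):

```python
# Hypothetical sketch: compare a stored baseline of open host:port pairs
# against the latest scan and report what appeared or disappeared --
# the same kind of difference a Splunk alert would surface.
def diff_scans(baseline, current):
    """Return (appeared, disappeared) host:port pairs between two scans."""
    base, curr = set(baseline), set(current)
    return sorted(curr - base), sorted(base - curr)

baseline = ["10.0.0.5:22", "10.0.0.5:80", "10.0.0.9:443"]
current  = ["10.0.0.5:22", "10.0.0.5:80", "10.0.0.9:443", "10.0.0.9:3389"]

appeared, disappeared = diff_scans(baseline, current)
# A newly listening port (here RDP on 10.0.0.9) is exactly the kind of
# baseline deviation worth alerting on.
```

In practice the scan results would be indexed into Splunk and the set difference expressed as a scheduled search, but the underlying logic is this simple comparison.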

Continuous Monitoring: Build A World Class Monitoring System for Enterprise, Small Office, or Home
By Austin Taylor
December 15, 2016

  • For organizations who wish to prevent data breaches, incident prevention is ideal, but detection of an attempted or successful breach is a must. This paper outlines guidance for network visibility, threat intelligence implementation and methods to reduce analyst alert fatigue. Additionally, this document includes a workflow for Security Operations Centers (SOC) to efficiently process events of interest, thereby increasing the likelihood of detecting a breach. Methods include Intrusion Detection System (IDS) setup with tips on efficient data collection, sensor placement, and identification of critical infrastructure, along with network and metric visualization. These recommendations are useful for enterprises, small offices, or homes that wish to implement threat intelligence and network analysis.

Detecting Malicious SMB Activity Using Bro
By Richie Cyrus
December 13, 2016

  • Attackers utilize the Server Message Block (SMB) protocol to blend in with network activity, often carrying out their objectives undetected. Post-compromise, attackers use file shares to move laterally, looking for sensitive or confidential data to exfiltrate out of the network. Traditional methods for detecting such activity call for storing and analyzing large volumes of Windows event logs, or deploying a signature-based intrusion detection solution. For some organizations, processing and storing large amounts of Windows events may not be feasible. Pattern-based intrusion detection solutions can be bypassed by malicious entities, potentially failing to detect malicious activity. Bro Network Security Monitor (Bro) provides an alternative solution allowing for rapid detection through custom scripts and log data. This paper introduces methods to detect malicious SMB activity using Bro.
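Bro's own detection logic is written in its scripting language; as a language-neutral sketch of the underlying idea, the following Python pass over (hypothetical) SMB connection records flags workstation-to-workstation SMB, which rarely has a legitimate reason to occur. The subnet and records are illustrative assumptions:

```python
import ipaddress

CLIENT_NET = ipaddress.ip_network("10.1.0.0/16")  # assumed workstation range

def client_to_client_smb(records):
    """Flag SMB flows where both endpoints sit in the workstation subnet."""
    return [r for r in records
            if ipaddress.ip_address(r["orig"]) in CLIENT_NET
            and ipaddress.ip_address(r["resp"]) in CLIENT_NET]

records = [
    {"orig": "10.1.4.20", "resp": "10.2.0.5"},   # workstation -> file server: normal
    {"orig": "10.1.4.20", "resp": "10.1.7.33"},  # workstation -> workstation: suspect
]
suspicious = client_to_client_smb(records)
```

A Bro script would apply the same test inline as connections are observed, rather than after the fact over logs.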

Active Defense via a Labyrinth of Deception
By Nathaniel Quist
December 5, 2016

  • A network baseline allows for the identification of malicious activity in real time. However, a baseline requires that every listed action is known and accounted for, a nearly impossible task in any production environment due to an ever-changing application footprint, system and application updates, changing project requirements, and, not least of all, unpredictable user behaviors. Each obstacle presents a significant challenge in the development and maintenance of an accurate and false-positive-free network baseline. To surmount these hurdles, network architects need to design a network free from continuous change, including changing company requirements, untested system or application updates, and the presence of unpredictable users. Creating a static, never-changing environment is the goal. However, this completely removes the functionality of a production network. Or does it? Within this paper, I will detail how this type of static environment, referred to as the Labyrinth, can be placed in front of a production environment and provide real-time defensive measures against hostile and dispersed attacks from both human actors and automated machines. I expect to prove the Labyrinth is capable of detecting changes in its environment in real time. It will provide a listing of dynamic defensive capabilities, such as identifying attacking IP addresses, rogue process start commands, modifications to registry values, and alterations in system memory, and it will record an attacker's tactics, techniques, and procedures. At the same time, the Labyrinth will add these values to a block list, protecting the production network lying behind it. Successful accomplishment of these goals will prove the viability and sustainability of the Labyrinth in defending network environments (Revelle, 2011).
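The detect-then-block loop the abstract describes can be sketched as follows. Because the Labyrinth is static by design, any observed change is hostile by definition, so its source goes straight onto the block list. The event structure and event types here are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of the Labyrinth's dynamic-defense loop: inside a
# never-changing decoy environment, every change event is malicious,
# so the originating IP is immediately block-listed.
blocklist = set()

def handle_event(event, blocklist):
    """Add the source of any change inside the Labyrinth to the block list."""
    if event["type"] in {"process_start", "registry_change", "file_write"}:
        blocklist.add(event["src_ip"])
    return blocklist

events = [
    {"type": "registry_change", "src_ip": "203.0.113.7"},  # attacker activity
    {"type": "heartbeat",       "src_ip": "10.0.0.1"},     # benign monitoring
]
for ev in events:
    handle_event(ev, blocklist)
```

The resulting block list would then be pushed to the perimeter protecting the real production network behind the Labyrinth.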

Next Generation of Privacy in Europe and the Impact on Information Security: Complying with the GDPR
By Edward Yuwono
December 5, 2016

  • Human rights have a strong place within Europe; part of this is the fundamental right to privacy. Over the years, individual privacy has been strengthened through various European directives. With the evolution of privacy continuing in Europe through the release of the General Data Protection Regulation (GDPR), how will the latest iteration of European Union (EU) regulation affect organisations, and what will information security leaders need to do to meet this change? This paper will explore the evolution of privacy in Europe, the objectives and changes this iteration of EU privacy regulation will bring, what challenges organisations will experience, and how information security could be leveraged to satisfy the regulation.

A Checklist for Audit of Docker Containers
By Alyssa Robinson
November 22, 2016

  • Docker and other container technologies are increasingly popular methods for deploying applications in DevOps environments, due to advantages in portability, efficiency in resource sharing and speed of deployment. The very properties that make Docker containers useful, however, can pose challenges for audit, and the security capabilities and best practices are changing rapidly. As adoption of this technology grows, it is, therefore, necessary to create a standardized checklist for audit of Dockerized environments based on the latest tools and recommendations.

Security Assurance of Docker Containers
By Stefan Winkle
November 22, 2016

  • With recent movements like DevOps and the shift towards application security as a service, the IT industry is in the middle of a set of substantial changes in how software is developed and deployed. In the infrastructure space, we see the uptake of lightweight container technology, while application architectures are moving towards distributed microservices. There has been a recent explosion in popularity of package managers and distributors like OneGet, NPM, RubyGems, and PyPI. More and more software development becomes dependent on small, reusable components developed by many different developers and often distributed by infrastructures outside our control. In the midst of this all, we often find application containers like Docker, LXC, and Rocket used to compartmentalize software components. The Notary project, recently introduced in Docker, is built upon the assumption that the software distribution pipeline can no longer be trusted. Notary attempts to protect against attacks on the software distribution pipeline by associating trust and separation of duties with Docker containers. In this paper, we explore the Notary service and take a look at security testing of Docker containers.

Implementing Full Packet Capture
By Matt Koch
November 7, 2016

  • Full Packet Capture (FPC) provides a network defender an after-the-fact investigative capability that other security tools cannot provide. Uses include capturing malware samples, network exploits and determining if data exfiltration has occurred. Full packet captures are a valuable troubleshooting tool for operations and security teams alike. Successful implementation requires an understanding of organization-specific requirements, capacity planning, and delivery of unaltered network traffic to the packet capture system.
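Capacity planning for full packet capture comes down to simple arithmetic over link speed, average utilization, and retention period. A rough sketch (the example figures are illustrative, not from the paper):

```python
def fpc_storage_tb(link_mbps, avg_utilization, retention_days):
    """Approximate disk needed (decimal TB) to retain full packet capture.

    link_mbps: link speed in megabits per second
    avg_utilization: average fraction of the link in use (0.0 - 1.0)
    retention_days: how long captures must be kept
    """
    bytes_per_day = link_mbps * 1_000_000 / 8 * avg_utilization * 86_400
    return bytes_per_day * retention_days / 1e12

# e.g. a 1 Gbps link at 30% average utilization retained for 7 days:
needed = fpc_storage_tb(1000, 0.30, 7)  # roughly 22.7 TB
```

Real deployments would add headroom for traffic bursts and capture-file overhead, which is why understanding organization-specific requirements matters before sizing hardware.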

Intrusion Detection Through Relationship Analysis
By Patrick Neise
October 24, 2016

  • With the average time to detection of a network intrusion in enterprise networks assessed to be 6-8 months, network defenders require additional tools and techniques to shorten detection time. Perimeter, endpoint, and network traffic detection methods today are mainly focused on detecting individual incidents while security incident and event management (SIEM) products are then used to correlate the isolated events. Although proven to be able to detect network intrusions, these methods can be resource intensive in both time and personnel. Through the use of network flows and graph database technologies, analysts can rapidly gain insight into which hosts are communicating with each other and identify abnormal behavior such as a single client machine communicating with other clients via Server Message Block (SMB). Combining the power of tools such as Bro, a network analysis framework, and neo4j, a native graph database that is built to examine data and its relationships, rapid detection of anomalous behavior within the network becomes possible. This paper will identify the tools and techniques necessary to extract relevant network information, create the data model within a graph database, and query the resulting data to identify potential malicious activity.
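As a toy stand-in for the Bro-to-neo4j pipeline described above, the flow data can be modeled as a graph in memory and queried for one-to-many SMB fan-out, a pattern typical of lateral movement. Hosts, flows, and the threshold below are invented for illustration:

```python
from collections import defaultdict

def smb_fanout(flows, min_peers=3):
    """Return hosts with SMB (port 445) edges to at least min_peers others."""
    peers = defaultdict(set)
    for src, dst, port in flows:
        if port == 445:
            peers[src].add(dst)
    return {host: sorted(p) for host, p in peers.items() if len(p) >= min_peers}

flows = [
    ("10.1.0.8", "10.1.0.9",  445),  # one client fanning out over SMB...
    ("10.1.0.8", "10.1.0.10", 445),
    ("10.1.0.8", "10.1.0.11", 445),
    ("10.1.0.4", "10.2.0.5",  445),  # ...versus a single client-to-server flow
]
spreading = smb_fanout(flows)
```

In neo4j the same question becomes a short Cypher pattern match over connection relationships, which is what makes graph databases attractive for this kind of anomaly hunting.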

Building a Home Network Configured to Collect Artifacts for Supporting Network Forensic Incident Response
By Gordon Fraser
September 21, 2016

  • A commonly accepted Incident Response process includes six phases: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. Preparation is key. It sets the foundation for a successful incident response. The incident responder does not want to be trying to figure out where to collect the information necessary to quickly assess the situation and respond appropriately to the incident. Nor does the incident responder want to hope that the information he needs is available at the level of detail necessary to most effectively analyze the situation so he can make informed decisions on the best course of action. This paper identifies artifacts that are important to support network forensics during incident response and discusses an architecture and implementation for a home lab to support their collection. It then validates the architecture using an incident scenario.

Using Vagrant to Build a Manageable and Sharable Intrusion Detection Lab
By Shaun McCullough
September 20, 2016

  • This paper investigates how the Vagrant software application can be used by Information Security (InfoSec) professionals looking to provide their audience with an infrastructure environment to accompany their research. InfoSec professionals conducting research or publishing write-ups can provide opportunities for their audience to replicate or walk through the research themselves in their own environment. Vagrant is a popular DevOps tool for providing portable and repeatable production environments for application developers, and may solve the needs of the InfoSec professional. This paper will investigate how Vagrant works, the pros and cons of the technology, and how it is typically used. The paper describes how to build or repurpose three environments, highlighting different features of Vagrant. Finally, the paper will discuss lessons learned.

Know Thy Network - Cisco Firepower and Critical Security Controls 1 & 2
By Ryan Firth
September 19, 2016

  • Previously known as the SANS Top 20, the Critical Security Controls are based on real-world attack and security breach data from around the world, and are objectively the most effective technical controls against known cyber-attacks. Due to competing priorities and demands, however, organizations may not have the expertise to figure out how to implement and operationalize the Critical Security Controls in their environments. This paper will help bridge that gap for security and network teams using Cisco Firepower.

Windows Installed Software Inventory
By Jonathan Risto
September 7, 2016

  • The 20 Critical Controls provide a guideline for the controls that need to be placed in our networks to manage and secure our systems. The second control states there should be a software inventory that contains the names and versions of the products for all devices within the infrastructure. The challenge for a large number of organizations is the ability to have accurate information available with minimal impact on tight IT budgets. This paper will discuss the Microsoft Windows command line tools that will gather this information, and provide example scripts that can be run by the reader.
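The paper's scripts rely on built-in Windows commands such as `wmic product get Name,Version`. As a hedged sketch of the post-processing side, the following Python parses that command's output (the sample text below is invented, and real `wmic` output uses fixed-width columns that may need sturdier parsing):

```python
# Illustrative sample of "wmic product get Name,Version" style output.
SAMPLE = """\
Name                          Version
7-Zip 19.00 (x64)             19.00
Microsoft Edge                96.0.1054.62
"""

def parse_inventory(text):
    """Turn name/version listing into (name, version) tuples."""
    inventory = []
    for line in text.splitlines()[1:]:      # skip the header row
        if not line.strip():
            continue
        name, _, version = line.rstrip().rpartition(" ")
        inventory.append((name.strip(), version))
    return inventory

software = parse_inventory(SAMPLE)
```

Collected this way across all devices, the tuples form exactly the name-and-version inventory that the second Critical Control calls for, at no licensing cost.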

In but not Out: Protecting Confidentiality during Penetration Testing
By Andrew Andrasik
August 22, 2016

  • Penetration testing is imperative for organizations committed to security. However, independent penetration testers are rarely greeted with open arms when initiating an assessment. As firms implement the Critical Security Controls or the Risk Management Framework, independent penetration testing will likely become standard practice as opposed to supplemental exercises. Ethical hacking is a common tactic to view a company's network from an attacker's perspective, but inviting external personnel into a network may increase risk. Penetration testers strive to gain superuser privileges wherever possible and utilize thousands of open-source tools and scripts, many of which do not originate from validated sources.

Introduction to Rundeck for Secure Script Executions
By John Becker
August 11, 2016

  • Many organizations today support physical, virtual, and cloud-based systems across a wide range of operating systems. Providing least-privilege access to systems can be a complex mesh of sudoers files, profiles, policies, and firewall rules. While configuration management tools such as Puppet or Chef help ensure consistency, they do not inherently simplify the process for users or administrators. Additionally, current DevOps teams are pushing changes faster than ever. Keeping pace with new services and applications often forces sysadmins to use more general access rules, exposing broader access than necessary. Rundeck is a web-based orchestration platform with powerful ACLs and ssh-based connectivity to a wide range of operating systems and devices. Rundeck's simple user interface is coupled with DevOps-friendly REST APIs and YAML or XML configuration files. Using Rundeck for server access improves security while keeping pace with rapidly changing environments.

Legal Aspects of Privacy and Security: A Case Study of Apple versus FBI Arguments
By Muzamil Riffat
June 3, 2016

  • The debate over privacy versus security has been going on for some time now. The matter is complicated by the fact that privacy is a subjective concept, shaped by factors such as cultural norms and geographical location. In a paradoxical situation, rapid advancements in technology are fast making technology both the guardian and the invader of privacy. Governments and organizations around the globe are using technology to achieve their objectives in the name of security and convenience. The sporadic fights between proponents of privacy and proponents of security eventually found an avenue of expression: the U.S. court system. In February 2016, the FBI obtained a court order requiring Apple to modify the security features of an iPhone to enable the law enforcement agency to access the contents of the device. Apple, backed by other leading technology firms, vehemently opposed the idea and intended to file a legal appeal against the court order. Before both parties could present their arguments in court, the FBI dropped the case, claiming it was able to access the contents of the device without Apple's assistance. Using FBI versus Apple as a case study, this paper discusses different legal aspects of the positions of both parties. With the pervasiveness of advanced technology, it can be reasonably anticipated that such requests by law enforcement and government agencies will become more frequent. The paper presents the privacy concerns that should be taken into consideration regarding all such requests.

Under The Ocean of the Internet - The Deep Web
By Brett Hawkins
May 27, 2016

  • The Internet was a revolutionary invention, and its use continues to evolve. People around the world use the Internet every day for things such as social media, shopping, email, reading news, and much more. However, this makes up only a very small piece of the Internet; the rest is an area called the Deep Web.

Securing Jenkins CI Systems
By Allen Jeng
April 8, 2016

  • With over 100,000 active installations worldwide, Jenkins became the top choice for continuous integration and automation. A survey conducted by Cloudbees during the 2012 Jenkins Users Conference concluded that 83 percent of the respondents consider Jenkins to be mission critical. The November 2015 remotely exploitable Java deserialization vulnerability stresses the need to lock down and monitor Jenkins systems. Exploitation of this weakness enables hackers to gain access to critical assets such as source code that Jenkins manages. Enabling password security is the general recommendation for securing Jenkins. Unfortunately, this necessary security measure can easily be defeated with a packet sniffer because passwords are transmitted over the wire as clear text. This paper will look at ways to secure Jenkins systems as well as the deployment of intrusion detection systems to monitor critical assets controlled by Jenkins CI systems.

Secure Network Design: Micro Segmentation
By Brandon Peterson
February 29, 2016

  • Hackers, once on to a network, often go undetected as they freely move from system to system looking for valuable information to steal. Credentials, intellectual property, and personal information are all at risk. It is generally accepted that the attacker has the upper hand and can eventually penetrate most networks. A secure network design that focuses on micro segmentation can slow the rate at which an attacker moves through a network and provide more opportunities for detecting that movement. Organizations that implement a secure network design will find that the added cost and complexity of micro segmentation is more than offset by a reduction in the number and severity of incidents. In fact, the effort extended in learning, classifying, and segmenting the network adds value and strengthens all of the organization’s controls.

Selling Your Information Security Strategy
By David Todd
February 18, 2016

  • It is the Chief Information Security Officer's (CISO) responsibility to identify the gaps between the most significant security threats and vulnerabilities and the organization's current state. The CISO should develop an information security strategy that aligns with the strategic goals of the organization and sell the gap mitigation strategy to executive management and the board of directors. Before embarking on this new adventure, clearly articulate what success looks like to your organization. What is the result you are driving to accomplish? Then develop a strategy to get you there. Take a play directly from the sales organization's playbook: know yourself, know your customer, and know the benefits from your customer's perspective. Following this simple strategy will help the CISO close the deal when selling the information security strategy.

Don't Always Judge a Packet by Its Cover
By Gabriel Sanchez
February 16, 2016

  • Distinguishing between friend and foe as millions of packets traverse a network at any given moment can be a very tedious and trying objective. Packets can carry viruses, malware, and botnet traffic, which necessitates detecting them fast. However, chasing every packet often becomes unmanageable and can lead to many dead ends. Traditional approaches to this problem rely on heuristics or signatures of known-bad traffic, which tend to be ineffective against the advanced attacker. Instead, this paper will go beyond the known bad and describe a general approach to homing in on packets of interest by using behavior analysis and profiling of a network. Behavior analysis and profiling of the packets that ordinarily traverse a network can shine light on the shadows in which enemies lurk to bypass traditional detection. This profiling is especially imperative since knowing the characteristics of your packets can reveal their true intentions.
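A minimal sketch of the profiling idea, under assumptions of my own (per-host hourly byte counts as the profiled feature and a simple z-score test; the paper's actual features and thresholds may differ):

```python
from statistics import mean, stdev

def is_anomalous(history, observed, z_threshold=3.0):
    """Flag an observation that deviates wildly from a learned baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) > z_threshold * sigma

# Typical hourly outbound volume for one host, in KB (illustrative data).
history = [980, 1020, 1010, 995, 1005, 990]

burst  = 9000   # a sudden large transfer: stands out against the profile
normal = 1000   # an ordinary hour: blends in
```

No signature of "known bad" is needed here; the host's own historical behavior is the reference, which is what lets profiling catch traffic that bypasses signature-based detection.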

Security Systems Engineering Approach in Evaluating Commercial and Open Source Software Products
By Jesus Abelarde
January 29, 2016

  • The use of commercial and free open source software (FOSS) is becoming more common in commercial, corporate, and government settings as they develop complex systems. This carries a set of risks until the system is retired or replaced. Unfortunately, during project development, the amount of security resources and time necessary to accommodate proper security evaluations is usually underestimated. Also, there is no widely used or standardized evaluation process that engineers and scientists can use as a guideline. As a result, the evaluation process usually ends up lacking, or varies widely from project to project and company to company. This paper provides a suggested evaluation process and a set of methodologies, along with associated costs and risks, that projects can use as a guideline when integrating commercial and FOSS products during the system development life cycle (SDLC).

Network Forensics and HTTP/2
By Stefan Winkel
January 18, 2016

  • Last May, a major new version of the HTTP protocol, HTTP/2, was published and finalized in RFC 7540. HTTP/2, based on the SPDY protocol primarily developed by Google, is a multiplexed, binary protocol for which TLS has become the de facto mandatory standard. Most modern web browsers (e.g., Chrome, Firefox, Edge) now support HTTP/2, and some Fortune 500 companies like Google, Facebook, and Twitter have already enabled HTTP/2 traffic to and from their servers. We have also seen a recent uptick in security breaches related to HTTP data compression (e.g., CRIME, BEAST), which is part of HTTP/2. From a network perspective there is currently limited support for analyzing HTTP/2 traffic. This paper will explore how best to analyze such traffic and discuss how the new version might change the future of network forensics.
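One concrete hook a network analyst does have: RFC 7540 defines a fixed client "connection preface" that opens every HTTP/2 connection. On cleartext (h2c) traffic a tool can spot it directly; over TLS the bytes are encrypted, and other signals (such as the ALPN extension in the TLS handshake) are needed instead. A minimal sketch, with invented stream contents:

```python
# The 24-byte HTTP/2 client connection preface defined in RFC 7540.
H2_PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def looks_like_http2(payload: bytes) -> bool:
    """True if a reassembled TCP stream opens with the HTTP/2 preface."""
    return payload.startswith(H2_PREFACE)

h2c_stream   = H2_PREFACE + b"\x00\x00\x12\x04\x00\x00\x00\x00\x00"  # then frames
http1_stream = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
```

This illustrates why HTTP/2 complicates network forensics: beyond the preface, everything is binary framing (and usually TLS), so the familiar line-oriented HTTP/1.x inspection no longer applies.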

Cybersecurity Inventory at Home
By Glen Roberts
January 7, 2016

  • Consumers need better home network security guidance for taking stock of the hardware and software applications installed on their network and devices. The primary sources of information security advice for the average person are TV, magazines, newspapers, websites and social media. Unfortunately, these sources typically repeat the same advice, provide limited guidance and miss key areas of security that should be taken into consideration when securing home networks. On the other hand, enterprises receive comprehensive, prioritized guidance such as the Critical Security Controls from The Center for Internet Security. Unfortunately, these controls were not designed with securing home networks in mind. The wide gap between consumer-media advice columns and highly professional corporate security controls needs to be bridged. This can be done by using the Critical Security Controls as a comprehensive foundation from which to craft an authoritative yet easy-to-understand set of home network security recommendations for individuals. The first step is distilling the guidance for inventorying hardware and software applications.

Infrastructure Security Architecture for Effective Security Monitoring
By Luciana Obregon
December 11, 2015

  • Many organizations struggle to architect and implement adequate network infrastructures to optimize network security monitoring. This challenge often leads to data loss with regard to monitored traffic and security events, increased cost in new hardware and technology needed to address monitoring gaps, and additional Information Security personnel to keep up with the overwhelming number of security alerts. Organizations spend a great deal of time, effort, and money deploying the latest and greatest tools without ever addressing the fundamental problem of adequate network security design. This paper provides a best-practice approach to designing and building scalable and repeatable infrastructure security architectures that optimize network security monitoring. It expands on four network security domains: network segmentation, intrusion detection and prevention, security event logging, and packet capture. The goal is a visual representation of an infrastructure security architecture that will allow stakeholders to understand how to architect their networks to address monitoring gaps and protect their organizations.
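The segmentation domain above comes down to a default-deny policy between zones. A minimal sketch of how such a policy can be modeled and evaluated (the zone names and allowed flows are hypothetical, not from the paper):

```python
# Hypothetical zone model for a segmented network: traffic is denied unless
# the (source zone, destination zone, service) tuple is explicitly allowed.
ALLOWED_FLOWS = {
    ("user", "dmz", "https"),          # users reach web front ends only
    ("dmz", "internal", "sql"),        # web tier reaches the database tier
    ("user", "siem", "syslog"),        # every zone forwards logs to the SIEM
    ("dmz", "siem", "syslog"),
    ("internal", "siem", "syslog"),
}

def is_permitted(src_zone: str, dst_zone: str, service: str) -> bool:
    """Default-deny check mirroring how segmentation ACLs are evaluated."""
    return (src_zone, dst_zone, service) in ALLOWED_FLOWS
```

Writing the policy down in this form also tells the monitoring team exactly which inter-zone boundaries need sensors: every tuple in the allow-list is a choke point worth instrumenting.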

Compliant but not Secure: Why PCI-Certified Companies Are Being Breached
By Christian Moldes
December 9, 2015

  • The Payment Card Industry published the Data Security Standard 11 years ago; however, criminals are still breaching companies and gaining access to cardholder data. The number of security breaches in the past two years has increased considerably, even among companies that assessors deemed compliant. In this paper, the author conducts a detailed analysis of why this is still occurring and proposes changes companies should adopt to avoid a security breach.

Web Application File Upload Vulnerabilities
By Matthew Koch
December 7, 2015

  • File upload can be a key feature of many web applications; without it, cloud backup services, photograph sharing, and other functions would not be possible. The same feature, however, gives attackers a direct path to place content on the server, so upload handling must be designed and validated carefully.
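A common class of upload vulnerability is trusting the client-supplied filename and content type. A minimal allow-list sketch (the extension list, magic bytes, and function name are illustrative assumptions, not the paper's code):

```python
from pathlib import PurePosixPath

# Hypothetical allow-list: extension -> magic-byte prefix the file must start with.
ALLOWED = {".png": b"\x89PNG\r\n\x1a\n",
           ".jpg": b"\xff\xd8\xff",
           ".pdf": b"%PDF-"}

def is_safe_upload(filename: str, content: bytes) -> bool:
    """Allow-list check for an upload handler: reject directory components,
    double extensions, unknown extensions, and files whose magic bytes do
    not match the claimed type."""
    name = PurePosixPath(filename).name      # strip any path-traversal components
    suffixes = PurePosixPath(name).suffixes
    if len(suffixes) != 1:                   # blocks "shell.php.png"-style tricks
        return False
    magic = ALLOWED.get(suffixes[0].lower())
    return magic is not None and content.startswith(magic)
```

Checking magic bytes against the claimed extension stops the classic trick of uploading a script renamed to an image extension, though a hardened handler would also re-encode images and store uploads outside the web root.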

There's No Going it Alone: Disrupting Well Organized Cyber Crime
By John Garris
November 23, 2015

  • The identification and eventual disruption of a sophisticated criminal enterprise, requiring on-the-fly problem solving and groundbreaking international collaboration, offers a model of how an international cooperative effort can succeed. The efforts that ultimately brought down Rove Digital, an Estonian-based criminal operation that compromised millions of computers, provide just such an example. The approach taken by law enforcement from several countries, coupled with the important roles played by security researchers, can be built upon to address burgeoning threats that can only be tackled cooperatively.

A Network Analysis of a Web Server Compromise
By Kiel Wadner
September 8, 2015

  • Through the analysis of a known scenario, the reader is given the opportunity to explore the compromise of a website. From the initial reconnaissance to gaining root access, each step is viewed at the network level. The benefit of a known scenario is that assumptions about the attacker's motives are avoided, allowing focus to remain on the technical details of the attack. Steps such as file extraction, timing analysis, and reverse engineering an encrypted C2 channel are covered.
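Of the steps listed, timing analysis is the most mechanical: automated C2 traffic tends to beacon at regular intervals, while human-driven traffic does not. A small sketch of scoring that regularity from packet timestamps (the function and threshold are illustrative, not from the paper):

```python
from statistics import mean, pstdev

def beacon_score(timestamps: list):
    """Given packet timestamps (seconds) for one host pair, return the
    coefficient of variation of the inter-packet intervals. Values near
    zero mean metronome-like beaconing; larger values suggest a human.
    Returns None when there are too few packets to judge."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 2 or mean(deltas) == 0:
        return None
    return pstdev(deltas) / mean(deltas)
```

In a real investigation the timestamps would come from the capture file, grouped per source/destination pair, and low-scoring pairs would be pulled out for deeper inspection such as the C2 reverse engineering the paper describes.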

Breaking the Ice: Gaining Initial Access
By Phillip Bosco
August 28, 2015

  • While companies are spending an increasing amount of resources on security equipment, attackers are still successful at finding ways to breach networks. The problem is compounded by misinformation within the security industry and by companies focusing on areas of security that yield unimpressive results. A company cannot properly defend against what it does not adequately understand, and many companies misunderstand both their own security defense systems and the attacks cyber criminals commonly use today. These misunderstandings result in attackers bypassing even the most seemingly robust security systems using the simplest methods. The author outlines the common misconceptions within the security industry that ultimately lead to insecure networks, including the misallocation of security budgets and the controversies over which methods are most effective at fending off an attacker. Common attack vectors and misconfigurations that are devastating, yet highly preventable, are also detailed.

Forensic Timeline Analysis using Wireshark
By David Fletcher
August 10, 2015

  • The objective of this paper is to demonstrate analysis of timeline evidence using the Wireshark protocol analyzer. To accomplish this, sample timelines will be generated using tools from The Sleuth Kit (TSK) as well as Log2Timeline. The sample timelines will then be converted into Packet Capture (PCAP) format. Once in this format, Wireshark's native analysis capabilities will be demonstrated in the context of forensic timeline analysis. The underlying hypothesis is that Wireshark can provide a suitable interface for enhancing an analyst's abilities. This is accomplished through built-in features such as analysis profiles, filtering, colorization, marking, and annotation.
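The pivotal step is the timeline-to-PCAP conversion: each timeline event becomes a synthetic packet whose capture time is the event's timestamp, so Wireshark's time-based tooling applies. A minimal sketch of writing such a file with the classic libpcap format (the linktype choice and payload layout are assumptions; the paper's own converter may differ):

```python
import struct

def write_timeline_pcap(path: str, events: list) -> None:
    """events: list of (unix_timestamp, description). Each event becomes one
    pcap record whose capture time is the timeline timestamp and whose raw
    payload is the description text."""
    with open(path, "wb") as f:
        # pcap global header: magic, version 2.4, tz offset, sigfigs,
        # snaplen, linktype (147 = DLT_USER0, a user-defined link layer).
        f.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 147))
        for ts, text in events:
            payload = text.encode("utf-8")
            sec, usec = int(ts), int((ts - int(ts)) * 1_000_000)
            # per-record header: ts_sec, ts_usec, captured len, original len
            f.write(struct.pack("<IIII", sec, usec, len(payload), len(payload)))
            f.write(payload)
```

Opened in Wireshark, each record then sorts, filters, and colorizes by its event time exactly as real packets would.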

Coding For Incident Response: Solving the Language Dilemma
By Shelly Giesbrecht
July 28, 2015

  • Incident responders are frequently faced with the reality of "doing more with less" due to budget or manpower deficits. The ability to write scripts from scratch, or to modify the code of others, to solve a problem or find data in a data "haystack" is a necessary skill in a responder's personal toolkit. The question for IR practitioners is which language will be the most useful in their work. In this paper, we examine several coding languages used in writing tools and scripts for incident response, including Perl, Python, C#, PowerShell, and Go. In addition, we discuss why one language may be more helpful than another depending on the use case, and look at examples of code in each language.
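As a flavor of the "haystack" scripting the paper discusses, here is a short sketch in one of the surveyed languages, Python, that sweeps a directory of logs for indicators of compromise (the indicator patterns and file naming are hypothetical examples, not the paper's):

```python
import re
from pathlib import Path

# Hypothetical indicators of compromise to hunt for in a log "haystack".
IOC_PATTERNS = [
    re.compile(r"198\.51\.100\.\d{1,3}"),     # suspect address block
    re.compile(r"[A-Za-z0-9+/]{60,}={0,2}"),  # suspiciously long base64 blobs
]

def scan_logs(root: str):
    """Walk a directory tree of text logs and yield (file, line_no, line)
    for every line matching any indicator."""
    for path in Path(root).rglob("*.log"):
        for no, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            if any(p.search(line) for p in IOC_PATTERNS):
                yield str(path), no, line
```

The same logic ports naturally to the other languages the paper compares; the trade-offs are mostly in deployment (PowerShell is preinstalled on Windows hosts, Go compiles to a single static binary, and so on).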

Accessing the inaccessible: Incident investigation in a world of embedded devices
By Eric Jodoin
June 24, 2015

  • There are currently an estimated 4.9 billion embedded systems distributed worldwide. By 2020, that number is expected to have grown to 25 billion. Embedded systems can be found virtually everywhere, ranging from consumer products such as Smart TVs, Blu-ray players, fridges, thermostats, smart phones, and many more household devices. They are also ubiquitous in businesses, where they are found in alarm systems, climate control systems, and most networking equipment such as routers, managed switches, IP cameras, and multi-function printers. Unfortunately, recent events have taught us that these devices can also be vulnerable to malware and hackers. Therefore, it is highly likely that one of these devices may become a key source of evidence in an incident investigation. This paper introduces the reader to embedded systems technology. Using a Blu-ray player embedded system as an example, it demonstrates the process to connect to and then access data through the serial console to collect evidence from an embedded system's non-volatile memory.

Honeytokens and honeypots for web ID and IH
By Rich Graves
May 14, 2015

  • Honeypots and honeytokens can be useful tools for examining follow-up to phishing attacks. In this exercise, we respond to phishing messages using valid email addresses that actually received the phish, paired with wrong passwords. We demonstrate using custom single sign-on code to redirect logins with those fake passwords, and any other logins from presumed attacker source IP addresses, to a dedicated phishing-victim web honeypot. Although the proof-of-concept described did not become a production deployment, it provided insight into current attacks.
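The redirect decision described above is a small piece of logic at the SSO front end. A minimal sketch, with hypothetical honeytoken credentials, IP addresses, and URLs standing in for the real deployment's values:

```python
# Hypothetical honeytoken credentials: the wrong passwords we replied with.
HONEYTOKENS = {("alice@example.edu", "Winter2015!")}
# Presumed attacker source addresses, grown as honeytokens are used.
ATTACKER_IPS = {"203.0.113.50"}

HONEYPOT_URL = "https://phish-victim.example.edu/login"

def login_target(username: str, password: str, source_ip: str,
                 real_sso_url: str) -> str:
    """Decide where the SSO front end should send this login attempt:
    honeytoken credentials or a flagged source IP go to the phishing-victim
    honeypot; everything else proceeds to the real SSO."""
    if (username, password) in HONEYTOKENS or source_ip in ATTACKER_IPS:
        ATTACKER_IPS.add(source_ip)  # remember the source for future attempts
        return HONEYPOT_URL
    return real_sso_url
```

Remembering the source address means that even a later login with stolen-but-valid credentials from the same attacker IP lands in the honeypot rather than the production system.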

Group Gold Papers

Endpoint Security through Device Configuration, Policy and Network Isolation
By Barbara Filkins & Jonathan Risto
July 15, 2016

  • Sensitive data leaked from endpoints unbeknownst to the user can be detrimental to both an organization and its workforce. The CIO of GIAC Enterprises, alarmed by reports from a newly installed, host-based firewall on his MacBook Pro, commissioned an investigation concerning the security of GIAC Enterprise endpoints.