Student White Papers

Under the guidance and review of our world-class instructors, SANS Technology Institute master's degree candidates conduct research that is relevant, has real-world impact, and often contributes cutting-edge advancements to the field of cybersecurity. Here are some highlights of their recent findings.


Building an Audit Engine to Detect, Record, and Validate Internal Employees' Need for Accessing Customer Data
By Jekeon Jack Cha
December 11, 2019

  • When using Software-as-a-Service (SaaS) products, customers are asked to entrust a large volume of personal data to SaaS companies. Unfortunately, consumers live in a world of numerous data breaches and significant public privacy violations. As a result, customers are rightfully skeptical of the privacy policies that businesses provide and are looking for service providers who can distinguish their commitment to customer data privacy. This paper examines the viability of building an accurate audit engine to detect, record, and validate internal employees' reasons for accessing a particular customer's data. In doing so, businesses can gain clear visibility into their current processes and access patterns to meet the rising privacy demands of their customers.
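
To make the idea concrete, here is a minimal Python sketch of the pattern the paper describes: a data-access wrapper that refuses to run without a stated business reason and emits a structured audit record. All names (audited_access, get_customer_profile) are hypothetical illustrations, not the author's implementation.

```python
import functools
import json
import logging
import time

# Hypothetical audit sink; a real engine would ship these records to
# durable, tamper-evident storage rather than a local logger.
audit_log = logging.getLogger("customer_data_audit")
logging.basicConfig(level=logging.INFO)

def audited_access(func):
    """Require and record a business reason for every customer-data read."""
    @functools.wraps(func)
    def wrapper(employee_id, customer_id, reason, *args, **kwargs):
        if not reason or not reason.strip():
            raise PermissionError("A documented reason is required to access customer data")
        record = {
            "ts": time.time(),
            "employee": employee_id,
            "customer": customer_id,
            "reason": reason,
            "operation": func.__name__,
        }
        audit_log.info(json.dumps(record))  # detect + record; validation happens downstream
        return func(employee_id, customer_id, reason, *args, **kwargs)
    return wrapper

@audited_access
def get_customer_profile(employee_id, customer_id, reason):
    return {"customer": customer_id}  # stand-in for a real data-layer call

get_customer_profile("emp-42", "cust-1001", "support ticket #5531")
```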

Looking for Linux: WSL Key Evidence
By Amanda Draeger
December 11, 2019

  • Microsoft released Windows Subsystem for Linux (WSL) in 2016 to much fanfare, but little research into the security implications of installing this feature followed. This lack of research, and lack of documentation, is a problem for the administrators who want to take advantage of its feature set while monitoring their systems for unusual behavior. Native Windows logging can provide visibility into WSL’s behavior, but there has been no research on which logs can provide this visibility, and what exact information they can provide. This paper examines how to monitor a Windows 10 system with WSL installed for common indicators of malicious activity.
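
As a rough illustration of the native-logging approach the paper describes, the sketch below shells out to the stock wevtutil utility to pull process-creation events (Security event ID 4688) and flags WSL-related binaries. The binary list and event count are assumptions, and command-line detail in 4688 events requires the relevant audit policy to be enabled.

```python
import subprocess

# Hypothetical watchlist of WSL-related executables.
WSL_BINARIES = ("wsl.exe", "bash.exe", "wslhost.exe")

# Query the 200 newest process-creation events as text (run elevated).
result = subprocess.run(
    ["wevtutil", "qe", "Security",
     "/q:*[System[(EventID=4688)]]", "/c:200", "/f:text"],
    capture_output=True, text=True, check=True,
)

# wevtutil's text output prefixes each record with "Event[n]:".
for block in result.stdout.split("Event["):
    if any(name in block.lower() for name in WSL_BINARIES):
        print("--- possible WSL activity ---")
        print(block.strip()[:400])
```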

Detecting Malicious Authentication Events in SaaS Applications Using Anomaly Detection
By Gavin Grisamore
December 11, 2019

  • SaaS applications have been exploding in popularity due to their ease of deployment, use, and maintenance. Security teams are struggling to keep pace with the growing list of applications used in their environment as well as with the process of tracking the data these applications hold. Attackers have been taking advantage of these visibility gaps and have targeted SaaS applications regularly. By using log data from the applications themselves, security teams can use anomaly detection techniques to find and respond to such attacks. Anomaly detection allows security teams to more quickly identify and remedy a data breach by condensing large amounts of data into a shortened list of events that are outliers. The detection techniques used can help security teams respond to or prevent the next data breach.
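
A minimal sketch of the general technique (not the author's specific method), using scikit-learn's IsolationForest on toy login features; the feature choices are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature matrix: one row per login event.
# Assumed columns: hour of day, failed attempts in the prior hour,
# approximate distance (km) from the user's usual login location.
rng = np.random.default_rng(7)
normal = np.column_stack([
    rng.normal(16, 2, 500),      # logins cluster around mid-afternoon
    rng.poisson(0.2, 500),       # occasional failed attempt
    rng.exponential(30, 500),    # usually close to home
])
suspicious = np.array([[3, 12, 8500.0]])  # 3 a.m., many failures, far away

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))        # -1 flags the event as an outlier
print(model.score_samples(suspicious))  # lower scores are more anomalous
```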

Catch Me If You Can: Detecting Server-Side Request Forgery Attacks on Amazon Web Services
By Sean McElroy
November 27, 2019

  • Cloud infrastructure offers significant benefits to organizations capable of leveraging rich application programming interfaces (APIs) to automate environments at scale. However, unauthorized access to management APIs can enable threat actors to compromise the security of large amounts of sensitive data very quickly. Practitioners have documented techniques for gaining access through Server-Side Request Forgery (SSRF) vulnerabilities that exploit management APIs within cloud providers. However, mature organizations have failed to detect some of the most significant breaches, sometimes for months after a security incident. Cloud services adoption is increasing, and firms need effective methods of detecting SSRF attempts to identify threats and mitigate vulnerabilities. This paper examines a variety of tools and techniques to detect SSRF activity within an Amazon Web Services (AWS) environment that can be used to monitor for real-time SSRF exploit attempts against the AWS API. The research findings outline the efficacy of four different strategies to answer the question of whether security professionals can leverage additional vendor-provided and open-source tools to detect SSRF attacks.
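
One of the simplest detection strategies in this space is log inspection for the EC2 instance metadata address, since SSRF exploits against the AWS API typically coerce a server into fetching http://169.254.169.254/. The sketch below is a hedged illustration of that idea, not one of the paper's four strategies specifically.

```python
import re
import sys

# The AWS instance metadata service lives at this link-local address;
# requests that smuggle it into URL parameters are a classic SSRF tell.
# The second alternative catches the URL-encoded form of "169."
METADATA_PATTERNS = re.compile(
    r"(169\.254\.169\.254|(?:%31%36%39%2e)|metadata\.)", re.IGNORECASE
)

def scan_access_log(path):
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if METADATA_PATTERNS.search(line):
                print(f"{path}:{lineno}: possible SSRF probe: {line.strip()[:200]}")

scan_access_log(sys.argv[1])  # e.g., an ALB or web-server access log
```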

Securing the Supply Chain - A Hybrid Approach to Effective SCRM Policies and Procedures
By Daniel Carbonaro
November 7, 2019

  • Organizations' supply chains are growing increasingly interdependent and complex, the result of which is an ever-increasing attack surface that must be defended. Current supply chain security frameworks offer effective guidance to help organizations defend their supply chains against attack. However, they are limited in their scope and impact and can be extremely complex for organizations to adopt effectively. To complicate matters further, even identifying the scope of an organization's supply chains can be a difficult endeavor. This paper seeks to give context to the challenges facing security within the ICT supply chain and proposes a hybrid framework that any business, regardless of size or function, can follow when attempting to mitigate threats both to and from within its supply chain.

Guarding the Modern Castle: Providing Visibility into the BACnet Protocol
By Aaron Heller
October 30, 2019

  • Building automation devices are used to monitor and control HVAC, security, fire, lighting, and other similar functions in a building or across a campus. Over 60% of the global market for building automation relies on the BACnet protocol to enable communication between field devices (BSRIA, 2018). There are few open-source network intrusion detection or prevention systems (NIDS/NIPS) capable of interpreting and monitoring the BACnet protocol (Hurd & McCarty, 2017). This blind spot presents a significant security risk. The maloperation of building automation systems can cause physical damage and financial losses, and can allow an attacker to pivot from a building automation network into other networks (Balent & Gordy, 2013). A BACnet/IP protocol analyzer was created for an open-source NIDS/NIPS called Zeek to help minimize this network security blind spot. The analyzer was tested with publicly available BACnet capture files, including some with protocol anomalies. The new analyzer and test cases provide network defenders with a tool to implement a BACnet/IP capable NIDS/NIPS as well as insight into how to defend the modern-day “castles” that rely on the Building Automation and Control network protocol.
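
Independent of Zeek, the BACnet/IP framing such an analyzer must parse is small enough to illustrate directly. This hedged Python sketch uses scapy to check BVLC framing (UDP port 47808, type byte 0x81, big-endian length field) in a capture file; the file name is hypothetical.

```python
from scapy.all import rdpcap, UDP  # pip install scapy

BVLC_FUNCTIONS = {  # subset of BACnet/IP BVLL function codes
    0x04: "Forwarded-NPDU",
    0x0A: "Original-Unicast-NPDU",
    0x0B: "Original-Broadcast-NPDU",
}

for pkt in rdpcap("bacnet_sample.pcap"):  # hypothetical capture file
    if UDP in pkt and 47808 in (pkt[UDP].sport, pkt[UDP].dport):
        payload = bytes(pkt[UDP].payload)
        if len(payload) < 4 or payload[0] != 0x81:  # 0x81 = BACnet/IP BVLC type
            print("anomaly: traffic on 47808 without a valid BVLC header")
            continue
        function = payload[1]
        length = int.from_bytes(payload[2:4], "big")
        if length != len(payload):
            print(f"anomaly: BVLC length {length} != payload length {len(payload)}")
        print(BVLC_FUNCTIONS.get(function, f"unknown function 0x{function:02x}"))
```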

An AWS Network Monitoring Comparison
By Nichole Dugan
October 30, 2019

  • AWS recently released network traffic mirroring in its environment. Because this is a relatively new feature, users of the service have historically relied on tools such as Security Onion in a host-based model, forwarding network traffic from each host for analysis. It may not be apparent to an organization which option works best for it, so both the traffic mirroring and host-based options should be analyzed to determine the benefits and drawbacks of each method. This paper compares the two types of network monitoring available in the AWS environment, traffic mirroring and host-based, to determine which method is more cost-effective and, through testing, which method generates more alerts.
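
For reference, setting up the mirroring side is a few boto3 calls once a mirror target and filter exist. A hedged sketch with placeholder resource IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical resource IDs; the mirror target and filter must already exist.
session = ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0123456789abcdef0",     # source ENI to mirror
    TrafficMirrorTargetId="tmt-0123456789abcdef0",  # e.g., the NLB fronting a sensor
    TrafficMirrorFilterId="tmf-0123456789abcdef0",
    SessionNumber=1,
    Description="Mirror to Security Onion sensor",
)
print(session["TrafficMirrorSession"]["TrafficMirrorSessionId"])
```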

Challenges in Effective DNS Query Monitoring
By Caleb Baker
October 23, 2019

  • Domain Name System (DNS) queries are a fundamental function of modern computer networks. Capturing the contents of DNS queries and analyzing the logged data is a recommended practice for gaining insight into activity on a network and monitoring for unusual behavior. Multiple solutions and approaches are available for monitoring DNS queries. Some methods add the capability to redirect queries identified as malicious, stopping an attack. This paper investigates the effectiveness of solutions that monitor DNS queries to detect and block queries identified as potential indicators of compromise. The performance of each tool is evaluated against a sample of real-world threats that utilize DNS queries. As the prevalence of DNS query monitoring increases, attackers will need to take steps to bypass monitoring by obfuscating DNS queries. Accordingly, this paper also assesses the capabilities of each tool to detect DNS query obfuscation techniques.
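
Detection tools in this space often score query names for tunneling-style obfuscation. A small illustrative heuristic (the thresholds are arbitrary assumptions, not from the paper):

```python
import math
from collections import Counter

def shannon_entropy(s):
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def looks_like_tunneling(qname, entropy_threshold=3.8, label_len_threshold=40):
    """Crude heuristic: DNS-tunneling payloads tend to be long, high-entropy labels."""
    labels = qname.rstrip(".").split(".")
    return any(
        len(label) > label_len_threshold or
        (len(label) > 12 and shannon_entropy(label) > entropy_threshold)
        for label in labels
    )

print(looks_like_tunneling("www.example.com"))                          # False
print(looks_like_tunneling("dGhpcyBpcyBleGZpbHRyYXRlZA.evil.example"))  # True (high-entropy label)
```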

BITS Forensics
By Roberto Nardella
October 14, 2019

  • The “Background Intelligent Transfer Service” (BITS) is a technology developed by Microsoft to manage file uploads and downloads, to and from HTTP servers and SMB shares, in a more controlled and load-balanced way. If the user who started a download logs out of the computer, or if a network connection is lost, BITS resumes the download automatically; this capability to survive reboots makes it an ideal tool for attackers to drop malicious files onto a targeted Windows workstation, especially considering that Windows machines do not have tools like “wget” or “curl” installed by default, and that web browsers (especially those in corporate environments) may have filters and plugins preventing the download of bad files. In recent years, BITS has been used increasingly not only as a means to place malicious files onto targets but also to exfiltrate data from compromised computers. This paper shows how BITS can be used for malicious purposes and examines the traces left by its usage in network traffic, on the hard disk, and in RAM. The purpose of this research is also to compare the findings that can surface from each type of examination (network traffic, hard disk, and RAM) and highlight the limitations of each analysis type.
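
For quick triage of the host-side traces the paper discusses, the deprecated-but-still-present bitsadmin utility can enumerate jobs. A hedged sketch that surfaces any URLs in the job listing:

```python
import subprocess

# bitsadmin.exe is deprecated but still ships with modern Windows; its
# job listing is one quick triage source alongside the BITS event log
# (Microsoft-Windows-Bits-Client/Operational) and the qmgr job database.
out = subprocess.run(
    ["bitsadmin", "/list", "/allusers", "/verbose"],  # run elevated
    capture_output=True, text=True,
).stdout

# Rough filter: remote URLs in the listing are the fastest tell of
# download jobs worth a closer look.
for line in out.splitlines():
    if "http" in line.lower():
        print(line.strip())
```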

Pass-the-Hash in Windows 10
By Lukasz Cyra
September 27, 2019

  • Attackers have used the Pass-the-Hash (PtH) attack for over two decades. Its effectiveness has led to several changes to the design of Windows. Those changes influenced the feasibility of the attack and the effectiveness of the tools used to execute it. At the same time, novel PtH attack strategies appeared. All this has led to confusion about what is still feasible and what configurations of Windows are vulnerable. This paper examines various methods of hash extraction and execution of the PtH attack. It identifies the prerequisites for the attack and suggests hardening options. Testing in Windows 10 v1903 supports the findings. Ultimately, this paper shows the level of risk posed by PtH to environments using the latest version of Windows 10.
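
For background, the hash being passed is the NT hash: MD4 over the UTF-16LE-encoded password. A tiny sketch, with the caveat that MD4 availability varies by OpenSSL build:

```python
import hashlib

def nt_hash(password: str) -> str:
    """NT hash = MD4 over the UTF-16LE password; this is what PtH replays."""
    # Note: OpenSSL 3 moves MD4 to its legacy provider, so
    # hashlib.new("md4") can raise on some systems.
    return hashlib.new("md4", password.encode("utf-16le")).hexdigest()

# The well-known NT hash of an empty password:
print(nt_hash(""))  # 31d6cfe0d16ae931b73c59d7e0c089c0
```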

Exploring Osquery, Fleet, and Elastic Stack as an Open-source solution to Endpoint Detection and Response
By Christopher Hurless
September 10, 2019

  • Endpoint Detection and Response (EDR) capabilities are rapidly evolving as a method of identifying threats to an organization's computing environment. The global research and advisory company Gartner defines EDR as "solutions that record and store endpoint-system-level behaviors, use various data analytics techniques to detect suspicious system behavior, provide contextual information, block malicious activity, and provide remediation suggestions to restore affected systems" (Gartner, 2019). This paper explores the feasibility and difficulty of using open-source tools as a practical alternative to commercial EDR solutions. A business with sufficiently mature Incident Response (IR) processes might find that building an EDR solution “in house” with open-source tools provides both the knowledge and the technical capability to detect and investigate security incidents. The required skill level to begin using and gaining value from these tools is relatively low and can be acquired during the build process through problem deconstruction and solution engineering.
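
As a flavor of the osquery side of such a stack, this sketch runs a single ad hoc query through osqueryi for processes whose backing binary has been deleted, a classic malware tell; in the Fleet/Elastic architecture the same SQL would run as a scheduled query instead.

```python
import json
import subprocess

# osqueryi must be installed; --json makes the output machine-readable.
QUERY = "SELECT name, path, pid FROM processes WHERE on_disk = 0;"  # binary deleted after launch

raw = subprocess.run(
    ["osqueryi", "--json", QUERY],
    capture_output=True, text=True, check=True,
).stdout

for row in json.loads(raw):
    print(f"suspicious: pid={row['pid']} name={row['name']} path={row['path']}")
```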

A New Needle and Haystack: Detecting DNS over HTTPS Usage
By Drew Hjelm
September 10, 2019

  • Encrypted DNS technologies such as DNS over HTTPS (DoH) give users new means to protect privacy while using the Internet, but organizations will face new obstacles in monitoring traffic on their networks as users adopt encrypted DNS. This paper presents several tests for detecting encrypted DNS usage with endpoint tools and network traffic monitoring. The goal of this research is to present several controls that organizations can implement to prevent the use of encrypted DNS on enterprise networks.
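
One simple network-side test is matching observed TLS server names against known DoH resolvers. A hedged sketch, assuming SNI telemetry exported to a CSV with src_ip and server_name columns; the resolver list is a small illustrative subset that needs ongoing curation.

```python
import csv

# Partial list of well-known DoH endpoints (illustrative, not exhaustive).
DOH_HOSTS = {"dns.google", "cloudflare-dns.com", "mozilla.cloudflare-dns.com",
             "dns.quad9.net", "doh.opendns.com"}

# Assumed input: CSV exported from TLS/SNI telemetry.
with open("tls_connections.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        sni = row["server_name"].lower().rstrip(".")
        if sni in DOH_HOSTS or sni.startswith("doh."):
            print(f"possible DoH client: {row['src_ip']} -> {sni}")
```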

Changing the DevOps Culture One Security Scan at a Time
By Jon-Michael Lacek
August 28, 2019

  • Information Security has always been considered a roadblock when it comes to project management and execution. This mentality is even further solidified when discussing Information Security from a DevOps perspective. A fundamental principle of a DevOps lifecycle is a development and operations approach to delivering a product that supports automation and continuous delivery. When an Information Technology (IT) Security team has to manually obtain the application code and scan it for vulnerabilities each time a DevOps team wants to perform a release, the goals of DevOps can be significantly impacted. This frequently leads to IT Security teams and their tools being left out of the release management lifecycle. The research presented in this paper will demonstrate that available pipeline plugins do not introduce significant delays into the release process and are able to identify all of the vulnerabilities detected by traditional application scanning tools. The art of DevOps is driving organizations to produce and release code at speeds faster than ever before, which means that IT Security teams need to figure out a way to insert themselves into this practice.

Container-Based Networks: Lowering the TCO of the Modern Cyber Range
By Bryan Scarbrough
August 26, 2019

  • The rapid pace and ever-changing environment of cybersecurity make it difficult for companies to find qualified individuals, and for those same individuals to receive the training and experience they need to succeed. Some are fortunate enough to use cyber ranges for training and proficiency testing, but access is often limited to company employees. Limited access to cyber ranges precludes outsiders or newcomers from learning the skills necessary to meet the ever-growing demand for cybersecurity professionals. There have been several open-source initiatives, such as Japan's Cybersecurity Training and Operation Network Environment (CyTrONE) and the University of Rhode Island's Open Cyber Challenge Platform (OCCP), but they require significant hardware to support. The average security professional needs a cyber range environment that replicates real-world Internet topologies, networks, and services, but operates on affordable equipment.

Cyber Protectionism: Global Policies are Adversely Impacting Cybersecurity
By Erik Avery
August 21, 2019

  • Cyber Protectionist policies are adversely impacting global cybersecurity despite their intent to mitigate threats to national security. These policies threaten the information security community by generating effects that increase the risk to the very networks they are intended to protect. International product bans, data-flow restrictions, and increased internet-enabled crime are notable results of protectionist policies, all of which may be countered by identifying protectionist climates and the threats that follow from them. Analysis of historical evidence supports a metrics-based comparison between protectionist climate and cybersecurity threat, which forms the Cyber Protectionist Risk Matrix, a risk framework proposed as a new standard for the cybersecurity industry.

ATT&CKing Threat Management: A Structured Methodology for Cyber Threat Analysis
By Andy Piazza
July 29, 2019

  • Risk management is a principal focus for most information security programs. Executives rely on their IT security staff to provide timely and accurate information regarding the threats and vulnerabilities within the enterprise so that they can effectively manage the risks facing their organizations. Threat intelligence teams provide analysis that supports executive decision-makers at the strategic and operational levels. This analysis aids decision-makers in their commission to balance risk management with resource management. By leveraging the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) framework as a quantitative data model, analysts can bridge the gap between strategic, operational, and tactical intelligence while advising their leadership on how to prioritize computer network defense, incident response, and threat hunting efforts to maximize resources while addressing priority threats.
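
Because ATT&CK is published as machine-readable STIX, the "quantitative data model" idea is easy to demonstrate: the sketch below tallies enterprise techniques per tactic straight from MITRE's public CTI repository. This is a simple illustration, not the paper's full methodology.

```python
import json
from collections import Counter
from urllib.request import urlopen

# Enterprise ATT&CK as a STIX 2 bundle, published in MITRE's CTI repo.
URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
       "enterprise-attack/enterprise-attack.json")

bundle = json.load(urlopen(URL))
per_tactic = Counter()
for obj in bundle["objects"]:
    if obj.get("type") == "attack-pattern" and not obj.get("revoked", False):
        for phase in obj.get("kill_chain_phases", []):
            per_tactic[phase["phase_name"]] += 1

for tactic, count in per_tactic.most_common():
    print(f"{tactic:25} {count}")
```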

Attackers Inside the Walls: Detecting Malicious Activity
By Sean Goodwin
July 2, 2019

  • Small and medium-sized businesses (SMBs) do not always have the budget for an advanced intrusion detection system (IDS) technology. Open-source software can fill this gap, but these free solutions may not provide full coverage for known attacks, especially once the attacker is inside the perimeter. This paper investigates the IDS capabilities of a stand-alone Security Onion device when combined with built-in event logging in a small Windows environment to detect malicious actors on the internal network.

Building Cloud-Based Automated Response Systems
By Mishka McCowan
July 2, 2019

  • When moving to public cloud infrastructures such as Amazon Web Services (AWS), organizations gain access to tools and services that enable automated responses to specific threats. This paper explores the advantages and disadvantages of using native AWS services to build an automated response system. It examines the elements that organizations should consider, including developing the proper skills and systems required for the long-term viability of such a system.
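
As one concrete example of such a response action, the hedged sketch below quarantines an EC2 instance by swapping its security groups; the IDs are placeholders, and the paper's actual playbooks may differ.

```python
import boto3

ec2 = boto3.client("ec2")

def quarantine_instance(instance_id, quarantine_sg):
    """Swap all security groups for a deny-all 'quarantine' group,
    preserving the instance for forensics instead of terminating it."""
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[quarantine_sg])
    # Tag it so responders can find it and automation won't re-enable it.
    ec2.create_tags(Resources=[instance_id],
                    Tags=[{"Key": "ir-status", "Value": "quarantined"}])

# Hypothetical IDs; typically invoked from Lambda via an EventBridge rule
# that matches GuardDuty findings.
quarantine_instance("i-0123456789abcdef0", "sg-0123456789abcdef0")
```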

Defending with Graphs: Create a Graph Data Map to Visualize Pivot Paths
By Brianne Fahey
June 26, 2019

  • Preparations made during the Identify Function of the NIST Cybersecurity Framework can often pay dividends once an event response is warranted. Knowing what log data is available improves incident response readiness and providing a visual layout of those sources enables responders to pivot rapidly across relevant elements. Thinking in graphs is a multi-dimensional approach that improves upon defense that relies on one-dimensional lists and two-dimensional link analyses. This paper proposes a methodology to survey available data element relationships and apply a graph database schema to create a visual map. This graph data map can be used by analysts to query relationships and determine paths through the available data sources. A graph data map also allows for the consideration of log sources typically found in a SIEM alongside other data sources like an asset management database, application whitelist, or HR information which may be particularly useful for event context and to review potential Insider Threats. The templates and techniques described in this paper are available in GitHub for immediate use and further testing.
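
A toy version of the idea using networkx: model log sources as nodes and shared pivot fields as edges, then let the graph answer "how do I get from a proxy hit to an employee record?" The source names and fields here are illustrative assumptions, not the paper's templates.

```python
import networkx as nx  # pip install networkx

# Nodes are data sources; an edge means the two sources share a field
# an analyst can pivot on (the shared field is stored on the edge).
g = nx.Graph()
g.add_edge("proxy_logs", "dhcp_logs", pivot="src_ip")
g.add_edge("dhcp_logs", "asset_db", pivot="mac_address")
g.add_edge("asset_db", "hr_system", pivot="employee_id")
g.add_edge("proxy_logs", "edr_telemetry", pivot="hostname")

# Which hops take an analyst from a proxy hit to an employee record?
path = nx.shortest_path(g, "proxy_logs", "hr_system")
for a, b in zip(path, path[1:]):
    print(f"{a} -> {b}  (pivot on {g.edges[a, b]['pivot']})")
```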

Automating Response to Phish Reporting
By Geoffrey Parker
June 12, 2019

  • Phish-reporting buttons have become an "easy button": users press them for spam, genuine phishing attacks, and legitimate emails alike. These buttons automate the reporting process for users; however, they have become a catch-all for disposing of unwanted messages and are now overwhelming response teams and overflowing help desk ticket queues. The excessive reporting makes it difficult to manage timely responses to real phishing attacks, and response times to false positives, spam, and incorrectly reported legitimate messages suffer as well. Vendors originally sold phish alert buttons with phishing simulation systems, which then became part of more in-depth training systems and later threat management systems. Because of this organic growth, many companies implemented a phish reporting system without realizing they needed an automation system to manage the resulting influx of tickets. Triage systems can automate a high percentage of these phish alerts, freeing incident response teams to deal with the genuine threats to the enterprise on a prioritized basis.

Mobile A/V: Is it worth it?
By Nicholas Dorris
June 5, 2019

  • Since the mid-2010s, mobile devices such as smartphones and tablets have become ubiquitous, with users employing these gadgets for a wide range of applications. While this pervasive adoption of mobile devices offers numerous advantages, attackers have exploited device owners' lax attitude toward securing their gadgets. The diversity of mobile devices exposes them to a variety of security threats, as the industry lacks a comprehensive solution to protect mobile devices. In a bid to secure their assets and informational resources, individuals and corporations have turned to commercial mobile antivirus software. Most security providers offer mobile versions of their PC antivirus applications, which are primarily based on conventional signature-based detection techniques. Although the signature-based strategy can be valuable in identifying and mitigating profiled malware, it is not as effective in detecting unknown, new, or evolving threats, as it lacks adequate information and signatures for these infections. Mobile attackers have stayed ahead via obfuscation and transformation methods that bypass detection techniques. This paper seeks to ascertain whether current mobile antivirus solutions are effective, and which default Android settings assist in preventing or mitigating various malware and their consequences.

Finding Secrets in Source Code the DevOps Way
By Phillip Marlow
June 5, 2019

  • Secrets, such as private keys or API tokens, are regularly leaked by developers in source code repositories. In 2016, researchers found over 1500 Slack API tokens in public GitHub repositories belonging to major companies (Detectify Labs, 2016). Moreover, a single leak can lead to widespread effects in dependent projects (JS Foundation, 2018) or direct monetary costs (Mogull, 2014). Existing tools for detecting these leaks are designed for either prevention or detection during full penetration-test-style scans. This paper presents a way to reduce detection time by integrating incremental secrets scanning into a continuous integration pipeline.
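
A minimal sketch of the incremental idea: examine only the staged diff and fail the pipeline stage if a high-signal pattern appears in added lines. The patterns and exit-code convention are illustrative, not the paper's tool.

```python
import re
import subprocess

# A few high-signal patterns; real scanners combine regexes with entropy checks.
PATTERNS = {
    "AWS access key ID":  re.compile(r"AKIA[0-9A-Z]{16}"),
    "Slack token":        re.compile(r"xox[baprs]-[0-9A-Za-z-]{10,}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

# Scan only what this commit adds: cheap enough to run on every push.
diff = subprocess.run(["git", "diff", "--cached", "-U0"],
                      capture_output=True, text=True).stdout

findings = [(name, line) for line in diff.splitlines()
            if line.startswith("+") and not line.startswith("+++")
            for name, rx in PATTERNS.items() if rx.search(line)]

for name, line in findings:
    print(f"BLOCKED ({name}): {line[:80]}")
raise SystemExit(1 if findings else 0)  # non-zero exit fails the CI stage
```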

DICE and MUD Protocols for Securing IoT Devices
By Muhammed Ayar
June 5, 2019

  • An exponential growth of Internet of Things (IoT) devices on communication networks is creating an increasing security challenge that is threatening the entire Internet community. Attackers operating networks of IoT devices can target any site on the Internet and bring it down using denial of service attacks. As exemplified in various DDoS attacks that took down portions of the Internet in the past few years (such as the attacks on Dyn and KrebsOnSecurity (Hallman, Bryan, Palavicini Jr, Divita, Romero- Mariona, 2017)), IoT users need to take drastic steps in securing them. This research will discuss the steps in attempting to secure IoT devices using DICE and MUD.

Digging for Gold: Examining DNS Logs on Windows Clients
By Amanda Draeger
May 22, 2019

  • Investigators can examine Domain Name System (DNS) queries to find potentially compromised hosts by searching for queries that are unusual or directed to known malicious domains. Once the investigator identifies the compromised host, they must then locate the process that is generating the DNS queries. The problem is that Windows hosts do not log DNS client transactions by default, and there is little documentation on the structure of those logs. This paper examines how to configure several modern versions of Windows to log DNS client transactions in order to determine the originating process for any given DNS query. These configurations allow investigators to determine more quickly not only which host is compromised, but also which process is malicious.
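
For the common case, the relevant channel is Microsoft-Windows-DNS-Client/Operational, which ships disabled. A hedged sketch that enables it and reads back recent events via wevtutil (run elevated):

```python
import subprocess

# The DNS client operational channel is disabled by default.
CHANNEL = "Microsoft-Windows-DNS-Client/Operational"

# Enable the channel (set-log), then confirm logging is active by
# pulling the 20 newest events, newest first, as text.
subprocess.run(["wevtutil", "sl", CHANNEL, "/e:true"], check=True)

out = subprocess.run(
    ["wevtutil", "qe", CHANNEL, "/c:20", "/rd:true", "/f:text"],
    capture_output=True, text=True, check=True,
).stdout
print(out[:1000])
```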

Overcoming the Compliance Challenges of Biometrics
By David Todd
May 22, 2019

  • Due to increased regulations designed to protect sensitive data such as personally identifiable information (PII) and protected health information (PHI), hospitals and other industries requiring improved data protections are starting to adopt biometrics. However, adoption has been slow within many of the industries that have suffered most of the breaches over the last several years. One reason adoption has been slow is that companies hesitate to implement biometrics across their organization without first understanding the vast complexities of the various state-by-state privacy regulations. By adopting a common biometrics compliance framework, this research will show how organizations can implement biometric solutions that comply with the overall spirit of the different state privacy and biometric regulations, enabling those companies to improve global data protections.

Runtime Application Self-Protection (RASP): An Investigation of the Effectiveness of a RASP Solution in Protecting Known Vulnerable Target Applications
By Alexander Fry
April 30, 2019

  • Year after year, attackers target application-level vulnerabilities. To address these vulnerabilities, application security teams have increasingly focused on shifting left - identifying and fixing vulnerabilities earlier in the software development life cycle. However, at the same time, development and operations teams have been accelerating the pace of software release, moving towards continuous delivery. As software is released more frequently, gaps remain in test coverage leading to the introduction of vulnerabilities in production. To prevent these vulnerabilities from being exploited, it is necessary that applications become self-defending. RASP is a means to quickly make both new and legacy applications self-defending. However, because most applications are custom-coded and therefore unique, RASP is not one-size-fits-all - it must be trialed to ensure that it meets performance and attack protection goals. In addition, RASP integrates with critical applications, whose stakeholders typically span the entire organization. To convince these varied stakeholders, it is necessary to both prove the benefits and show that RASP does not adversely affect application performance or stability. This paper helps organizations that may be evaluating a RASP solution by outlining activities that measure the effectiveness and performance of a RASP solution against a given application portfolio.

Security Considerations for Voice over Wi-Fi (VoWiFi) Systems
By Joel Chapman
April 30, 2019

  • As the world pivots from the Public Switched Telephone Network (PSTN) to Voice over Internet Protocol (VoIP)-based telephony architectures, users are employing VoIP-based solutions in more situations. Mobile devices have become a ubiquitous part of a person's identity in the developed world. In the United States in 2017, there were an estimated 224.3 million smartphone users, representing about 68% of the total population. The ability to route telephone call traffic over Wi-Fi networks will continue to expand the coverage area of mobile devices, especially into urban areas where high-density construction has previously caused high signal attenuation. Estimates show that by 2020, Wi-Fi-based calling will make up 53% of mobile IP voice service usage (roughly 9 trillion minutes per year) (Xie, 2018). In contrast to more traditional VoIP solutions, however, the standards for carrier-based Voice over Wi-Fi (VoWiFi) are often proprietary and have not been well publicized or vetted. This paper examines the vulnerabilities of VoWiFi calling, assesses which common and less well-known attacks can exploit those vulnerabilities, and then proposes technological and procedural security protocols to harden telephony systems against adversary exploitation.

Security Monitoring of Windows Containers
By Peter Di Giorgio
March 27, 2019

  • The information technology community has utilized container technology since the LXC project began in 2008 (Hildred, 2015). Containers are a form of virtualization that package application code and its dependencies together. Containers share the operating system kernel but maintain isolated processes. Until recently, it was not possible for the Windows operating system to share its kernel. As such, developers were long unable to package many Windows-specific applications into containers. However, after ten years of waiting, Microsoft finally delivered Windows containers in 2018. Today, container security best practices focus on container integrity and container host security. The industry is just beginning to consider techniques to monitor Windows containers. This research focuses on the possibility of using known techniques and open source tools to extract Windows event logs, processes, services, and registry data from containers to observe attacks.

Gaining Endpoint Log Visibility in ICS Environments
By Michael Hoffman
March 11, 2019

  • Security event logging is a basic IT security practice and is referenced in Industrial Control System (ICS) security standards and best practices. Although there are many techniques and tools available in the IT realm to gather event logs and provide visibility to SOC analysts, there are limited resources that discuss this topic specifically within the context of the ICS industry. As many in the ICS community struggle with gaining logging visibility in their environments and understanding collection methodologies, implementation guidance for logging is needed to address this concern. Logging methods used in ICS, such as WMI, Syslog, and Windows Event Forwarding (WEF), are common to the IT industry. This paper examines WEF in the context of Windows ICS environments to determine whether WEF is better suited to ICS environments than WMI-based log pulling with regard to bandwidth, security, and deployment considerations. The comparison between the two logging methods is made in an ICS lab representing automation equipment commonly found in energy facilities.

PowerShell Security: Is it Enough?
By Timothy Hoffman
February 20, 2019

  • PowerShell is a core component of any modern Microsoft Windows environment and is used daily by administrators around the world. However, it has also become an “attacker’s tool of choice when conducting fileless malware attacks” (O’Connor, 2017). According to a study by Symantec, the number of prevented PowerShell attacks increased by over 600% between the last half of 2017 and the first half of 2018 (Wueest, 2018). This is a staggering number of prevented attacks, but the more concerning problem is the unknown number of undetected attacks that occurred during this time. Modern attackers often prefer to “live off the land,” using native tools already in an environment to evade detection; PowerShell is a prime example of this. These statistics suggest that current PowerShell security may not be effective enough, or that organizations are implementing it improperly. This paper investigates the effectiveness of PowerShell security, analyzing the success of security features like execution policies, language modes, and Windows Defender, as well as the vulnerabilities introduced by leaving PowerShell 2.0 enabled in an environment. Multiple attack campaigns are conducted against these security features, implemented individually and collectively, to validate their effectiveness in preventing PowerShell from being used maliciously.
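
A quick way to audit several of the controls the paper tests is to query PowerShell itself. This hedged sketch (run elevated for the optional-feature check) reports language mode, execution policies, and whether the PowerShell 2.0 engine is still enabled:

```python
import subprocess

def ps(command):
    return subprocess.run(
        ["powershell.exe", "-NoProfile", "-Command", command],
        capture_output=True, text=True,
    ).stdout.strip()

# ConstrainedLanguage is the hardened mode; FullLanguage means scripts
# have the run of the system once an execution-policy bypass is found.
print("Language mode:", ps("$ExecutionContext.SessionState.LanguageMode"))
print(ps("Get-ExecutionPolicy -List | Out-String"))

# The PowerShell 2.0 engine lacks modern logging and AMSI support.
print(ps("Get-WindowsOptionalFeature -Online -FeatureName MicrosoftWindowsPowerShellV2 "
         "| Select-Object State | Out-String"))
```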

Cyber Threats to the Bioengineering Supply Chain
By Scott Nawrocki
February 12, 2019

  • Biotechnology and pharmaceutical companies rely on the sequencing of DNA to conduct research, develop new drug therapies, solve environmental challenges, and study emerging infectious diseases. Synthetic biology combines biology and computer engineering disciplines to read, synthetically write, and store DNA sequences utilizing bioinformatics applications. Bioengineers begin with a computerized genetic model and turn that model into a living cell (Smolke, 2011). Genetic editing is making headlines as there are rumors that a genetically modified human, immune to HIV, was born in China. As the soil on our farms becomes depleted of nitrogen, genetic research is focusing on ways to reintroduce nitrogen into the ground. Reliance on oil, and the pollution it causes, has paved the way for research into biofuels. Advances in genomic research have outpaced the security of its applications and technology, which leaves them vulnerable to attack (Ney, 2017). As information security professionals, we must keep pace with these advances. This research will demonstrate the stages of a network-based attack, recommend Critical Security Controls countermeasures, and introduce the concept of a Bioengineering Systems Kill Chain.

PyFunnels: Data Normalization for InfoSec Workflows
By TJ Nicholls
February 1, 2019

  • Information security professionals cannot afford delays in their workflow due to the challenge of integrating data. For example, when data is gathered using multiple tools, the varying output formats must be normalized before automation can occur. This research details a Python library to normalize output from industry-standard tools and act as a consolidation point for that functionality. Information security professionals should collaborate using a centralized resource that facilitates easy access to output data. Doing so bypasses extraneous tasks and jumps straight to the practical application of the data.
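
In the spirit of the library, here is a sketch of what normalization looks like for one tool: flattening nmap's -oX XML output into uniform per-port records. The field names are illustrative, not PyFunnels' actual schema.

```python
import json
import xml.etree.ElementTree as ET

def normalize_nmap(xml_path):
    """Flatten nmap XML (-oX output) into per-port records."""
    records = []
    for host in ET.parse(xml_path).getroot().iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            svc = port.find("service")
            records.append({
                "tool": "nmap",
                "host": addr,
                "port": int(port.get("portid")),
                "proto": port.get("protocol"),
                "state": port.find("state").get("state"),
                "service": svc.get("name") if svc is not None else None,
            })
    return records

print(json.dumps(normalize_nmap("scan.xml"), indent=2))  # hypothetical scan file
```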

Onion-Zeek-RITA: Improving Network Visibility and Detecting C2 Activity
By Dallas Haselhorst
January 4, 2019

  • The information security industry is predicted to exceed 100 billion dollars in the next few years. Despite the dollars invested, breaches continue to dominate the headlines, and all attempts to keep the enemies at the gates have ultimately failed. Meanwhile, attacker dwell times on compromised systems and networks remain absurdly high. Traditional defenses fall short in detecting post-compromise activity even when properly configured and monitored. Prevention must remain a top priority, but every security plan must also include hunting for threats after the initial compromise. High price tags often accompany quality solutions, yet tools such as Security Onion, Zeek (Bro), and RITA require little more than time and skill. With these freely available tools, organizations can effectively detect advanced threats, including real-world command and control frameworks.
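
RITA's core beaconing idea, near-regular connection intervals between a host pair, can be sketched in a few lines. This is a simplified illustration with assumed thresholds, not RITA's actual scoring:

```python
import statistics
from collections import defaultdict

# Assumes conn.log already parsed into (ts, src, dst) tuples; Zeek's
# conn.log is TSV with '#'-prefixed header lines and ts in column 1.
def find_beacons(connections, min_events=20, max_jitter=2.0):
    by_pair = defaultdict(list)
    for ts, src, dst in connections:
        by_pair[(src, dst)].append(ts)
    for (src, dst), times in by_pair.items():
        if len(times) < min_events:
            continue
        times.sort()
        deltas = [b - a for a, b in zip(times, times[1:])]
        if statistics.pstdev(deltas) < max_jitter:  # near-constant interval
            print(f"possible beacon: {src} -> {dst} "
                  f"every ~{statistics.mean(deltas):.1f}s over {len(times)} events")

# Toy data: a 60-second beacon with slight jitter.
find_beacons([(i * 60.0 + (i % 3) * 0.4, "10.0.0.5", "203.0.113.7") for i in range(30)])
```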

Don't Knock Bro
By Brian Nafziger
December 12, 2018

  • Today's defenders often focus detections on host-level tools and techniques, which require host logging to be set up and managed. Network-level techniques, however, may provide an alternative that requires no host changes. The Bro Network Security Monitor (NSM) allows today's defenders to focus detection techniques at the network level. Port-knocking is an old method for controlling a concealed backdoor on a system using a defined sequence of packets to various ports. Unsurprisingly, old methods still offer value, and malware authors, defenders, and attackers alike still use port-knocking. Current port-knocking detection relies on traffic data mining techniques that exist only in academic writing, without any applicable tools. Since Bro is a network-level tool, it should be possible to adapt these data mining techniques to detect port-knocking within Bro. This research documents the process of creating and confirming a network-level port-knocking detection with Bro, providing an immediate and accessible detection technique for organizations.
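
The detection logic itself is compact; here it is sketched in Python rather than Bro's scripting language: keep a sliding window of recent destination ports per source and alert when a watched sequence completes. The example sequence is arbitrary.

```python
from collections import defaultdict, deque

KNOCK_SEQUENCE = (7000, 8000, 9000)   # example sequence to watch for
WINDOW = len(KNOCK_SEQUENCE)

# Per-source sliding window of the most recent destination ports.
recent = defaultdict(lambda: deque(maxlen=WINDOW))

def observe(src_ip, dst_port):
    """Feed each inbound connection attempt; returns True on a full knock."""
    recent[src_ip].append(dst_port)
    if tuple(recent[src_ip]) == KNOCK_SEQUENCE:
        print(f"port-knock sequence completed by {src_ip}")
        return True
    return False

for port in (22, 7000, 8000, 9000):   # noise, then the knock
    observe("198.51.100.9", port)
```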

A Swipe and a Tap: Does Marketing Easier 2FA Increase Adoption?
By Preston Ackerman
November 19, 2018

  • Data breaches and Internet-enabled fraud remain a costly and troubling issue for businesses and home end-users alike. Two-factor authentication (2FA) has long held promise as one of the most viable solutions that enables ordinary users to implement extraordinary protection. A security industry push for widespread 2FA availability has resulted in the service being offered free of charge on most major platforms; however, user adoption remains low. A previous study (Ackerman, 2017) indicated that awareness videos can influence user behavior by providing a clear message which outlines personal risks, offers a mitigation strategy, and demonstrates the ease of implementing the mitigating measure. Building on that previous work, this study, focused on younger millennials between 21 and 26 years of age, seeks to reveal additional insights by designing experiments around the following key questions: 1) Does including a real-time implementation demonstration increase user adoption? 2) Does marketing the convenient push notification form of 2FA, rather than the popular SMS text method, increase user adoption? To address these questions, a two-phase study exposed groups of users to different video messages advocating use of 2FA. Each phase of the survey collected data measuring self-efficacy, fear, response costs and efficacy, perceived threat vulnerability and severity, and behavioral intent. The second phase also collected survey data regarding actual 2FA adoption. The insights derived from subsequent analysis could be applicable not just to increasing 2FA adoption but to security awareness programs more generally.

Microsoft DNS Logs Parsing and Analysis: Establishing a Standard Toolset and Methodology for Incident Responders
By Shelly Giesbrecht
November 2, 2018

  • Microsoft DNS request and response event logs are frequently ignored by incident responders within an investigation due to a historical reputation of being hard to parse and analyze. The fundamental importance of DNS to networking and the functioning of the Internet suggests this oversight could lead to a lack of crucial contextual information in an investigative timeline. This paper seeks to define a best practice for parsing, exporting and analyzing Microsoft DNS Debug and Analytical logs through the comparison of existing tool combinations to DNSplice, a purpose-built utility coded during the development of this paper. Findings suggest that DNSplice is superior to other toolsets tested where time to completion is a critical factor in the investigative process. Further research is required to determine if the findings are still valid on larger datasets or different analysis hardware.

Tearing up Smart Contract Botnets
By Jonathan Sweeny
October 22, 2018

  • The distributed resiliency of smart contracts on private blockchains is enticing to bot herders as a method of maintaining a capable communications channel with the members of a botnet. This research explores the weaknesses that are inherent to this approach of botnet management. These weaknesses, when targeted properly by law enforcement or malware researchers, could limit the capabilities and effectiveness of the botnet. Depending on the weakness targeted, the results vary from partial takedown to total dismantlement of the botnet.

To Block or not to Block? Impact and Analysis of Actively Blocking Shodan Scans
By Andre Shori
October 22, 2018

  • This paper details an experiment constructed to evaluate the effectiveness of blocking Shodan search engine scans in reducing overall attack traffic volumes. Shodan is considered part of an attacker's toolset, and there is a persistent perception that blocking Shodan scans will reduce an organization's attack surface. An attempt was made to determine what effect, if any, such a block would have by comparing attacker traffic before and after implementing a block on Shodan scans, and by determining the complexity of performing such a block. The analysis here may provide defenders and managers with useful data when deciding whether to devote resources to blocking Shodan or other similar internet-connected device search engines.

Generating Anomalies Improves Return on Investment: A Case Study for Implementing Honeytokens
By Wes Earnest
October 11, 2018

  • Putting the right information security architecture into practice within an organization can be a daunting challenge. Many organizations have implemented a Security Information and Event Management (SIEM) to comply with the logging requirements of various security standards, only to find that it does not meet their information security expectations. According to a recent survey, more than half of respondents say they are not satisfied with their organization's SIEM. The following case study deconstructs these logging requirements and the assumptions that lead to a typical SIEM implementation, and discusses an alternative approach focused on improving the organization’s return on investment, decreasing security risk, and decreasing mean time to detection of a potential security breach.
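
A honeytoken generates an anomaly by construction: any touch is interesting. As a minimal illustration (the path and port are arbitrary), here is a tiny HTTP listener that alerts when a decoy URL, planted in a document nobody should legitimately open, is fetched:

```python
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

# Any request to the canary path is an anomaly by construction: the URL
# only exists inside a decoy document, so legitimate users never fetch it.
CANARY_PATH = "/invoices/2018-q3-draft.xlsx"

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == CANARY_PATH:
            print(f"[ALERT {datetime.now(timezone.utc).isoformat()}] "
                  f"honeytoken opened from {self.client_address[0]} "
                  f"UA={self.headers.get('User-Agent')}")
        self.send_response(404)   # look unremarkable either way
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```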

Testing Web Application Security Scanners against a Web 2.0 Vulnerable Web Application
By Edmund Foster
October 11, 2018

  • Web application security scanners are used to perform proactive security testing of web applications. Their effectiveness is far from certain, and few studies have tested them against modern ‘Web 2.0' technologies which present significant challenges to scanners. In this study three web application security scanners are tested in 'point-and-shoot' mode against a Web 2.0 vulnerable web application with AJAX and HTML use cases. Significant variations in performance were observed and almost three-quarters of vulnerabilities went undetected. The web application security scanners did not identify Stored XSS, OS Command, Remote File Inclusion, and Integer Overflow vulnerabilities. This study supports the recommendation to combine multiple web application security scanners and use them in conjunction with a specific scanning strategy.

All-Seeing Eye or Blind Man? Understanding the Linux Kernel Auditing System
By David Kennel
September 21, 2018

  • The Linux kernel auditing system provides powerful capabilities for monitoring system activity. While the auditing system is well documented, the manual pages, user guides, and much of the published writing on the audit system fail to provide guidance on the types of attacker-related activities that are, and are not, likely to be logged. This paper uses simulated attacks and analyzes the logged artifacts for the Linux kernel auditing system in its default state and when configured with the Controlled Access Protection Profile (CAPP) and the Defense Information Systems Agency's (DISA) Security Technical Implementation Guide (STIG) auditing rules. This analysis provides a clearer understanding of the capabilities and limitations of the Linux audit system in detecting various types of attacker activity and helps guide defenders on how best to utilize it.
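
For readers unfamiliar with audit rule syntax, the sketch below loads a small rule set of the kind the CAPP/STIG profiles expand on: execve logging plus watches on credential files. The keys and paths are illustrative; run as root.

```python
import subprocess

# Minimal rule set approximating attacker-relevant coverage.
RULES = [
    # Record every program execution with a searchable key.
    ["auditctl", "-a", "always,exit", "-F", "arch=b64", "-S", "execve", "-k", "exec_log"],
    # Watch credential files for writes or attribute changes.
    ["auditctl", "-w", "/etc/passwd", "-p", "wa", "-k", "identity"],
    ["auditctl", "-w", "/etc/shadow", "-p", "wa", "-k", "identity"],
]

for rule in RULES:
    subprocess.run(rule, check=True)

# Matching events can then be retrieved with, e.g.:
#   ausearch -k exec_log --start recent
```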

Which YARA Rules Rule: Basic or Advanced?
By Chris Culling
August 10, 2018

  • YARA rules, if used effectively, can be a powerful tool in the fight against malware. However, it appears that the majority of individuals who use YARA write only the most basic rules, instead of taking advantage of YARA's full functionality. Basic YARA rules, which focus primarily on identifying malware signatures by detecting predetermined strings within a target file, folder, or process, can be evaded as malware variants are created. Advanced YARA rules, which often include signatures as well, also focus on the malware's behavior and characteristics, such as size and file type. While it is not uncommon for strings within malware to change, it is much rarer for its primary behavior to change. After analyzing multiple samples of two different malware strains within the same family, it became clear that using both basic and advanced YARA rules is the most effective way for users and analysts to implement this powerful tool. Because YARA contains a large number of advanced capabilities, this paper focuses on easy-to-use, advanced features, including YARA's Portable Executable (PE) module, to highlight some of the more powerful aspects of YARA. While it takes more time and effort to learn and utilize advanced YARA rules, in the long run this method is a worthwhile investment toward a safer networking environment.
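
An example of the advanced style the paper advocates, expressed through the yara-python bindings: a rule with no strings section at all, matching on PE structure and imports instead. The specific import and size cap are illustrative assumptions.

```python
import yara  # pip install yara-python

# Behavioral rule: no strings at all; matches small PEs that import a
# network-download API, surviving trivial string changes between variants.
RULE = r'''
import "pe"

rule small_downloader_pe
{
    condition:
        pe.is_pe and
        filesize < 500KB and
        pe.imports("wininet.dll", "InternetOpenUrlA")
}
'''

rules = yara.compile(source=RULE)
for match in rules.match("sample.bin"):   # hypothetical sample path
    print("matched rule:", match.rule)
```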

Times Change and Your Training Data Should Too: The Effect of Training Data Recency on Twitter Classifiers
By Ryan O'Grady
July 11, 2018

  • Sophisticated adversaries are moving their botnet command and control infrastructure to social media microblogging sites such as Twitter. As security practitioners work to identify new methods for detecting and disrupting such botnets, including machine-learning approaches, we must better understand what effect training data recency has on classifier performance. This research investigates the performance of several binary classifiers and their ability to distinguish between non-verified and verified tweets as the offset between the age of the training data and test data changed. Classifiers were trained on three feature sets: tweet-only features, user-only features, and all features. Key findings show that classifiers perform best at +0 offset, feature importance changes over time, and more features are not necessarily better. Classifiers using user-only features performed best, with a mean Matthews correlation coefficient of 0.95 ± 0.04 at +0 offset, 0.58 ± 0.43 at -8 offset, and 0.51 ± 0.21 at +8 offset. The R2 values are 0.90, 0.34, and 0.26, respectively. Thus, the classifiers tested with +0 offset accounted for 56% to 64% more variance than those tested with −8 and +8 offset. These results suggest that classifier performance is sensitive to the recency of the training data relative to the test data. Further research is needed to replicate this experiment with botnet vs. non-botnet tweets to determine if similar classifier performance is possible and the degree to which performance is sensitive to training data recency.
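
Here is a hedged sketch of the evaluation machinery, with synthetic features standing in for the study's tweet and user features: train a classifier, then score it with the Matthews correlation coefficient, the metric the study reports.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

# Toy stand-in for user-only features (account age, followers, tweet rate...).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# MCC is robust to class imbalance, which is why the study reports it.
print("MCC:", round(matthews_corrcoef(y_te, clf.predict(X_te)), 3))
```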

Extracting Timely Sign-in Data from Office 365 Logs
By Mark Lucas
May 22, 2018

  • Office 365 is quickly becoming a repository of valuable organizational information, including data that falls under multiple privacy laws. Timely detection of a compromised account, and stopping the bad guy before data is exfiltrated, destroyed, or the account used for nefarious purposes, is the difference between an incident and a compromise. Microsoft provides audit logging and alerting tools that can help system administrators find these incidents. An examination of these tools' efficacy and efficiency, along with their shortcomings and advantages, provides insight into how best to use them to protect individual accounts and the organization as a whole.

Evaluation of Comprehensive Taxonomies for Information Technology Threats
By Steven Launius
March 26, 2018

  • Categorization of all information technology threats can improve communication of risk for an organization's decision-makers, who must determine the investment strategy for security controls. While there are several comprehensive taxonomies for grouping threats, there is an opportunity to establish the foundational terminology and perspective for communicating threats across the organization. This is important because confusion about information technology threats poses a direct risk to an organization's operational longevity. For leadership to allocate security resources against prevalent threats in a timely manner, they must understand those threats quickly. A study that investigates techniques for categorizing information technology threats for nontechnical decision-makers, through a qualitative review of the grouping methods of published threat taxonomies, could remedy the situation.

Pick a Tool, the Right Tool: Developing a Practical Typology for Selecting Digital Forensics Tools
By J. Richard “Rick” Kiper, Ph.D.
March 16, 2018

  • One of the most common challenges for a digital forensic examiner is tool selection. In recent years, examiners have enjoyed a significant expansion of the digital forensic toolbox, in both commercial and open-source software. However, the increase in digital forensics tools did not come with a corresponding organizational structure for the toolbox. As a result, examiners must conduct their own research and experiment with tools to find one appropriate for a particular task. This study collects input from forty-six practicing digital forensic examiners to develop a Digital Forensics Tools Typology, an organized collection of tool characteristics that can be used as selection criteria in a simple search engine. In addition, a novel method is proposed for depicting quantifiable digital forensic tool characteristics.

PCAP Next Generation: Is Your Sniffer Up to Snuff?
By Scott D. Fether
March 16, 2018

  • The PCAP file format is widely used for packet capture within the network and security industry, but it is not the only standard. The PCAP Next Generation (PCAPng) Capture File Format is a refreshing improvement that adds extensibility, portability, and the ability to merge and append data to a wire trace. While Wireshark has led the way in supporting the new format, other tools have been slow to follow. With advantages such as the ability to capture from multiple interfaces, improved time resolution, and per-packet comments, support for the PCAPng format should be developing more quickly than it has. This paper describes the new standard, demonstrates methods to take advantage of its new features, introduces scripting that can make the format usable, and argues that migration to PCAPng is necessary.

Bug Bounty Programs: Enterprise Implementation
By Jason Pubal
January 17, 2018

  • Bug bounty programs are incentivized, results-focused programs that encourage security researchers to report security issues to the sponsoring organization. These programs create a cooperative relationship between security researchers and organizations that allows the researchers to receive rewards for identifying application vulnerabilities. Bug bounty programs have gone from obscurity to being embraced as a best practice in just a few years: application security maturity models have added bug bounty programs, and there are now standards for vulnerability disclosure best practices. By leveraging a global community of researchers available 24 hours a day, 7 days a week, information security teams can continuously deliver application security assessments, keeping pace with agile development and continuous integration deployments while complementing existing controls such as penetration testing and source code reviews.

Container Intrusions: Assessing the Efficacy of Intrusion Detection and Analysis Methods for Linux Container Environments
By Alfredo Hickman
January 13, 2018

  • The unique and intrinsic methods by which Linux application containers are created, deployed, networked, and operated do not lend themselves well to the conventional application of methods for conducting intrusion detection and analysis in traditional physical and virtual machine networks. While similarities exist between some of the methods used to perform intrusion detection and analysis in conventional networks and in container networks, the relative effectiveness of the two has not been thoroughly measured and assessed; this presents a gap in application container security knowledge. By researching the efficacy of these methods as implemented in container networks compared to traditional networks, this research provides empirical evidence to identify that gap, along with data useful for identifying and developing new and more effective methods to secure application container networks.

Looking Under the Rock: Deployment Strategies for TLS Decryption
By Chris Farrell
January 13, 2018

  • Attackers can freely exfiltrate confidential information all while under the guise of ordinary web traffic. A remedy for businesses concerned about these risks is to decrypt the communication to inspect the traffic, then block it if it presents a risk to the organization. However, these solutions can be challenging to implement. Existing infrastructure, privacy and legal concerns, latency, and differing monitoring tool requirements are a few of the obstacles facing organizations wishing to monitor encrypted traffic. TLS decryption projects can be successful with proper scope definition, an understanding of the architectural challenges presented by decryption, and the options available for overcoming those obstacles.

Digital Forensic Analysis of Amazon Linux EC2 Instances
By Ken Hartman
January 13, 2018

  • Companies continue to shift business-critical workloads to cloud services such as Amazon Web Services Elastic Cloud Computing (EC2). With demand for skilled security engineers at an all-time high, many organizations do not have the capability to do an adequate forensic analysis to determine the root cause of an intrusion or to identify indicators of compromise. To help organizations improve their incident response capability, this paper presents specific tactics for the forensic analysis of Amazon Linux that align with the SANS Finding Malware Step by Step process for Microsoft Windows.
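
A common first evidence-preservation step in EC2 is snapshotting the instance's EBS volumes, which can then be mounted read-only on a separate analysis host. A hedged boto3 sketch with placeholder IDs:

```python
import boto3

ec2 = boto3.client("ec2")
INSTANCE = "i-0123456789abcdef0"   # hypothetical compromised instance

# Snapshot every attached EBS volume; snapshots are block-level copies
# suitable for offline forensic analysis.
volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [INSTANCE]}]
)["Volumes"]

for vol in volumes:
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"IR evidence from {INSTANCE}",
        TagSpecifications=[{
            "ResourceType": "snapshot",
            "Tags": [{"Key": "case", "Value": "IR-2018-001"}],  # hypothetical case ID
        }],
    )
    print("created", snap["SnapshotId"], "from", vol["VolumeId"])
```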

BYOD Security Implementation for Small Organizations
By Raphael Simmons
December 15, 2017

  • The rapid growth of the mobile industry has caused a shift in the way organizations work across all industry sectors. Bring your own device (BYOD) is a current industry trend that allows employees to use their personal devices, such as laptops, tablets, and mobile phones, to connect to the internal network. The number of external devices that can connect to a company that implements a BYOD policy has allowed for a proliferation of security risks. The National Institute of Standards and Technology lists these high-level threats and vulnerabilities of mobile devices: lack of physical security controls, use of untrusted mobile devices, use of untrusted networks, use of untrusted applications, interaction with other systems, use of untrusted content, and use of location services. A well-implemented Mobile Device Management (MDM) tool, combined with network access controls, can mitigate the risks associated with a BYOD policy.

Who's in the Zone? A Qualitative Proof-of-Concept for Improving Remote Access Least-Privilege in ICS-SCADA Environments
By Kevin Altman
December 4, 2017

  • Remote access control in many ICS-SCADA environments is of limited effectiveness leading to excessive privilege for staff who have responsibilities bounded by region, site, or device. Inability to implement more restrictive least-privilege access controls may result in unacceptable residual risk from internal and external threats. Security vendors and ICS cybersecurity practitioners have recognized this issue and provide options to address these concerns, such as inline security appliances, network authentication, and user-network based access control. Each of these solutions reduces privileges but has tradeoffs. This paper evaluates network-based access control combined with security zones and its benefits for existing ICS-SCADA environments. A Proof-of-Concept (PoC) evaluates a promising option that is not widely known or deployed in ICS-SCADA.

Hacking Humans: The Evolving Paradigm with Virtual Reality
By Andrew Andrasik
November 22, 2017

  • Virtual reality (VR) systems are evolving from high-end gaming and military applications to being used in day-to-day business operations and daily life. Cyber security professionals must begin now to prepare proactive threat analysis and incident handling plans that cover information systems and users. Previous compromises illustrate the devastating effects malware can have on the confidentiality, integrity, and availability of information systems. These disastrous consequences may be transferred directly to the user given his or her perception of events. Even in the early stages, VR represents a new paradigm within the information age. Today, users view information systems through a monitor that acts as a window into a virtual environment. Within VR, a user may become completely immersed while absorbing information from all five senses. VR represents a dichotomy that adds a potential human component to an information system compromise. This research project examines offensive tactics, techniques, and procedures, then exploits and extrapolates them to a compromised VR system and the user to illustrate the hazards associated with VR.

Leverage Risk Focused Teams to Strengthen Resilience against Cyber Risks
By Dave Bishop
November 17, 2017

  • Information security, risk management, audit, and business continuity teams must continue to evolve and mature to combat the growing cyber risks impacting business operations. Each team has standards and frameworks, but they often don't speak the same language or understand how each group intersects in protecting the organization. This research identifies opportunities to reduce resource duplication and integrate information security and risk-focused teams to strengthen the organization's resilience against cyber risks.

The State of Honeypots: Understanding the Use of Honey Technologies Today
By Andrea Dominguez
November 17, 2017

  • The aim of this study is to fill in the gaps in data on the real-world use of honey technologies. The goal has also been to better understand information security professionals' views and attitudes towards them. While there is a wealth of academic research in cutting-edge honey technologies, there is a dearth of data related to the practical use of these technologies outside of research laboratories. The data for this research was collected via a survey distributed to information security professionals. This research paper includes details on the design of the survey, its distribution, analysis of the results, insights, lessons learned, and two appendices: the survey in its entirety and a summary of the data collected.

Exploring the Effectiveness of Approaches to Discovering and Acquiring Virtualized Servers on ESXi
By Scott Perry
November 17, 2017

  • As businesses continue to move to virtualized environments, investigators need updated techniques to acquire virtualized servers. These virtualized servers contain a plethora of relevant data and may hold proprietary software and databases that are nearly impossible to recreate. Before an acquisition, investigators sometimes rely on the host administrators to provide them with network topologies and server information. This paper will demonstrate tools and techniques to conduct server and network discovery in a virtualized environment and show how to leverage the software used by administrators to acquire virtual machines hosted on vSphere and ESXi.
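
As a hedged illustration of the discovery step, the following Python sketch lists the virtual machines visible to a vSphere/ESXi endpoint using the pyVmomi library; the hostname and credentials are placeholders, and certificate verification is disabled only for lab use.

```python
# Minimal VM discovery against vCenter/ESXi with pyVmomi (placeholder credentials).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="esxi.example.local", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    print(vm.name, vm.runtime.powerState)  # inventory before acquisition
Disconnect(si)
```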

Tackling the Unique Digital Forensic Challenges for Law Enforcement in the Jurisdiction of the Ninth U.S. Circuit Court
By John Garris
November 17, 2017

  • The creation of a restrictive digital evidence search protocol by the U.S. Ninth Circuit Court of Appeals - the most stringent in the United States - triggered intense legal debate and caused significant turmoil regarding digital forensics procedures and practices in law enforcement operations. Understanding the Court's legal reasoning and the U.S. Department of Justice's counter-arguments regarding this protocol is critical in appreciating how the tension between privacy concerns and the challenges to law enforcement stands at the center of this unique Information Age issue. By focusing on the Court's core assumption that the seizure and search of electronically stored information are inherently overly intrusive, digital forensics practitioners have a worthy target on which to focus their efforts in advancing digital forensics processes, procedures, techniques, and tool-sets. This paper provides an overview of various proposals, developments, and possible approaches to help address the privacy concerns central to the Court's decision, while potentially improving the overall effectiveness and efficiency of digital forensic operations in law enforcement.

Can the "Gorilla" Deliver? Assessing the Security of Google's New "Thread" Internet of Things (IoT) Protocol
By Kenneth Strayer
October 6, 2017

  • Security incidents associated with Internet of Things (IoT) devices have recently gained high visibility, such as the Mirai botnet that exploited vulnerabilities in remote cameras and home routers. Currently, no industry standard exists to provide the right combination of security and ease-of-use in a low-power, low-bandwidth environment. In 2014, the Thread Group, Inc. released the new Thread networking protocol. Google's Nest Labs recently open-sourced its implementation of Thread in an attempt to make it a market standard for the home automation environment. The Thread Group claims that Thread provides improved security for IoT devices. But in what way is this claim true, and how does Thread help address the most significant security risks associated with IoT devices? This paper assesses the new IEEE 802.15.4 "Thread" protocol for IoT devices to determine its potential contributions in mitigating the OWASP Top 10 IoT Security Concerns. It provides developers and security professionals with a better understanding of which risks Thread addresses and which challenges remain.

Hardening BYOD: Implementing Critical Security Control 3 in a Bring Your Own Device (BYOD) Architecture
By Christopher Jarko
September 22, 2017

  • The increasing prevalence of Bring Your Own Device (BYOD) architecture poses many challenges to information security professionals. These include, but are not limited to: the risk of loss or theft, unauthorized access to sensitive corporate data, and lack of standardization and control. This last challenge can be particularly troublesome for an enterprise trying to implement the Center for Internet Security (CIS) Critical Security Controls for Effective Cyber Defense (CSCs). CSC 3, Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations and Servers, calls for hardened operating systems and applications. Even in traditional enterprise environments, this requires a certain amount of effort, but it is much more difficult in a BYOD architecture, where computer hardware and software are unique to each employee and company control of that hardware and software is constrained. Still, it is possible to implement CSC 3 in a BYOD environment. This paper will examine options for managing a standard, secure Windows 10 laptop as part of a BYOD program, and will also discuss the policies, standards, and guidelines necessary to ensure the implementation of this Critical Security Control is as seamless as possible.

Botnet Resiliency via Private Blockchains
By Jonny Sweeny
September 22, 2017

  • Criminals operating botnets are persistently in an arms race with network security engineers and law enforcement agencies to make botnets more resilient. Innovative features constantly increase the resiliency of botnets but cannot mitigate all the weaknesses exploited by researchers. Blockchain technology includes features which could improve the resiliency of botnet communications. A trusted, distributed, resilient, fully-functioning command and control communication channel can be achieved using the combined features of private blockchains and smart contracts.

OSSIM: CIS Critical Security Controls Assessment in a Windows Environment
By Kevin Geil
September 22, 2017

  • Use of a Security Information and Event Management (SIEM) or log management platform is a recommendation common to several of the “CIS Critical Security Controls For Effective Cyber Defense” (2016). Because the CIS Critical Security Controls (CSC) focus on automation, measurement, and continuous improvement of control application, a SIEM is a valuable tool. AlienVault's Open Source SIEM (OSSIM) is free and capable, making it a popular choice for administrators seeking experience with SIEM. While there is a great deal of documentation on OSSIM, specific information that focuses on exactly which events to examine, and then how to report findings, is not readily accessible. This paper uses a demo environment to provide specific examples and instructions for using OSSIM to assess a CIS Critical Security Controls implementation in a common environment: a Windows Active Directory domain. The 20 Critical Security Controls can be mapped to other controls in most compliance frameworks and guidelines; therefore, the techniques in this document should be applicable across a wide variety of control implementations.

Trust No One: A Gap Analysis of Moving IP-Based Network Perimeters to a Zero Trust Network Architecture
By John Becker
September 22, 2017

  • Traditional IP-based access controls (e.g., firewall rules based on source and destination addresses) have defined the network perimeter for decades. Threats have evolved to evade and bypass these IP restrictions using techniques such as spear phishing, malware, credential theft, and lateral movement. As these threats have evolved, so have the demands from end users for increased accessibility. Remote employees require secure access to internal resources. Cloud services have moved the perimeter outside of the enterprise network. The DevOps movement has emphasized speed and agility over up-front network designs. This paper identifies implementation gaps for organizations in the discovery phase of migrating to identity-based access controls as described by leading cloud companies.

A Spicy Approach to WebSockets: Enhancing Bro's WebSockets Network Analysis by Generating a Custom Protocol Parser with Spicy
By Jennifer Gates
September 22, 2017

  • Although the Request for Comments (RFC) defining WebSockets was released in 2011, there has been little focus on using the Bro Intrusion Detection System (IDS) to analyze WebSockets traffic. However, there has been progress in exploiting the WebSockets protocol. The ability to customize and expand Bro’s capabilities to analyze new protocols is one of its chief benefits. The developers of Bro are also working on a new framework called Spicy that allows security professionals to generate new protocol parsers. This paper focuses on the development of Spicy and Bro scripts that allow visibility into WebSockets traffic. The research conducted compared the data that can be logged with existing Bro protocol analyzers to data that can be logged after writing a WebSockets protocol analyzer in Spicy. The research shows increased effectiveness in detecting malicious WebSockets traffic using Bro when the traffic is parsed with a Spicy script. Writing Bro logging scripts tailored to a particular WebSockets application further increases their effectiveness.
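
For readers unfamiliar with the protocol, the sketch below shows in plain Python what any WebSockets parser, Spicy-generated or otherwise, must decode: the fixed frame header defined in RFC 6455. It is background illustration only, not the paper's Spicy code.

```python
# Parse one WebSocket frame per RFC 6455 (illustrative; assumes a complete frame).
import struct

def parse_ws_frame(data: bytes):
    fin = bool(data[0] & 0x80)       # final-fragment flag
    opcode = data[0] & 0x0F          # 0x1 text, 0x2 binary, 0x8 close, ...
    masked = bool(data[1] & 0x80)    # client-to-server frames must be masked
    length = data[1] & 0x7F
    offset = 2
    if length == 126:                # 16-bit extended payload length
        length = struct.unpack(">H", data[2:4])[0]
        offset = 4
    elif length == 127:              # 64-bit extended payload length
        length = struct.unpack(">Q", data[2:10])[0]
        offset = 10
    key = data[offset:offset + 4] if masked else b""
    offset += 4 if masked else 0
    payload = data[offset:offset + length]
    if masked:                       # unmask by XOR with the rotating 4-byte key
        payload = bytes(b ^ key[i % 4] for i, b in enumerate(payload))
    return fin, opcode, payload
```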

Does Network Micro-segmentation Provide Additional Security?
By Steve Jaworski
September 15, 2017

  • Network segmentation takes a large group of hosts and creates smaller groups of hosts whose members can communicate with each other without traversing a security control. Each smaller group has defined security controls, and groups are independent of each other. Network micro-segmentation goes a step further by configuring controls around individual hosts. The goal of network micro-segmentation is to provide more granular security and reduce an attacker's ability to easily compromise an entire network. If an attacker successfully compromises a host, he or she is limited to only the network segment on which the host resides. If the host resides in a micro-segment, then the attacker is restricted to only that host. This paper will discuss what network segmentation and micro-segmentation are, where they apply, and whether the additional layer of security justifies the added complexity.

HL7 Data Interfaces in Medical Environments: Attacking and Defending the Achilles' Heel of Healthcare
By Dallas Haselhorst
September 12, 2017

  • On any given day, a hospital operating room can be chaotic. The atmosphere can make one’s head spin with split-second decisions. In the same hospital environment, medical data also whizzes around, albeit virtually. Beyond the headlines involving medical device insecurities and hospital breaches, healthcare communication standards are equally insecure. This fundamental design flaw places patient data at risk in nearly every hospital worldwide. Without protections in place, a hospital visit today could become a patient’s worst nightmare tomorrow. Could an attacker collect the data and sell it to the highest bidder for credit card or tax fraud? Or perhaps they have far more malicious plans, such as causing bodily harm? Regardless of their intentions, healthcare data is under attack, and it is highly vulnerable. This research focuses on attacking and defending HL7, the unencrypted and unverified data standard used in healthcare for nearly all system-to-system communications.

HL7 Data Interfaces in Medical Environments: Understanding the Fundamental Flaw in Healthcare
By Dallas Haselhorst
September 12, 2017

  • Ask healthcare IT professionals where the sensitive data resides, and most will inevitably direct attention to a hardened server or database with large amounts of protected health information (PHI). The respondent might even know details about data storage, backup plans, etc. Asked the same question, a penetration tester or security expert may provide a similar answer before discussing database or operating system vulnerabilities. Fortunately, there is likely nothing wrong with the data at that point in its lifetime. It potentially sits on a fully encrypted disk, protected by usernames and passwords, and it might have audit-level tracking enabled. The server may also have some level of segmentation from non-critical servers or access restrictions based on source IP addresses. But how did those bits and bytes of healthcare data get to that hardened server? Typically, in a way no one would ever expect... 100% unencrypted and unverified. HL7 is the fundamentally flawed, insecure standard used throughout healthcare for nearly all system-to-system communications. This research examines the HL7 standard, potential attacks on the standard, and why medical records require better protection than current efforts provide.
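
To see how little effort reading HL7 v2 takes, consider this self-contained Python fragment with a fabricated message: the standard is pipe-delimited plain text, so simple string splitting recovers patient identifiers.

```python
# HL7 v2 is unencrypted, pipe-delimited text; a passive observer needs only split().
# The message below is entirely fabricated.
raw = ("MSH|^~\\&|SENDAPP|HOSPITAL|RECVAPP|CLINIC|20170912120000||ADT^A01|MSG001|P|2.3\r"
       "PID|1||123456789||DOE^JANE||19800101|F")

for segment in raw.split("\r"):
    fields = segment.split("|")
    if fields[0] == "PID":                                    # patient identification segment
        print("Patient name:", fields[5].replace("^", " "))  # PID-5 -> "DOE JANE"
```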

When a picture is worth a thousand products: Image protection in a digital age
By Shawna Turner
September 12, 2017

  • Today, a lack of fashion-industry-specific information security controls and legal protection puts fashion industry companies at significant risk of intellectual property theft and counterfeiting. This risk is only growing as traditional methods of manufacturing rapidly evolve toward digital models of design and mass production, using Industrial Control System (ICS) approaches for mass production. As mass production moves to digital manufacturing, the loss of new-product 2D and 3D imagery, together with the speed and lack of traceability of those losses, could significantly impact corporate bottom lines and risk profiles.

A Technical Approach at Securing SaaS using Cloud Access Security Brokers
By Luciana Obregon
September 6, 2017

  • The adoption of cloud services allows organizations to become more agile in the way they conduct business, providing scalable, reliable, and highly available services or solutions for their employees and customers. Cloud adoption significantly reduces total cost of ownership (TCO) and minimizes hardware footprint in data centers. This paradigm shift has left security professionals securing abstract environments for which conventional security products are no longer effective. The goal of this paper is to analyze a set of cloud security controls and security deployment models for SaaS applications that are purely technical in nature while developing practical applications of such controls to solve real-world problems facing most organizations. The paper will also provide an overview of the threats targeting SaaS, present use cases for SaaS security controls, test cases to assess effectiveness, and reference architectures to visually represent the implementation of cloud security controls.

Packet Capture on AWS
By Teri Radichel
August 14, 2017

  • Companies using AWS (Amazon Web Services) will find that traditional means of full packet capture using span ports are not possible. As defined in the AWS Service Level Agreement, Amazon runs certain aspects of the cloud platform and does not give customers access to physical networking hardware. Although access to physical network equipment is limited, packet capture is still possible on AWS but needs to be architected in a different way. Instead of using span ports, security professionals can leverage the software that runs on top of the cloud platform. The tools and services provided by AWS may facilitate more automated, cost-effective, scalable packet capture solutions for some companies when compared to traditional data center approaches.
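
As one hedged example of capturing in software rather than on a span port, the sketch below uses scapy on an EC2 instance itself; the interface name and packet count are placeholders, and root privileges are assumed.

```python
# Host-based capture on an EC2 instance: no physical taps, so capture in software.
from scapy.all import sniff, wrpcap

packets = sniff(iface="eth0", count=1000)  # placeholder interface and count
wrpcap("capture.pcap", packets)            # standard pcap, analyzable with the usual tools
```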

Complement a Vulnerability Management Program with PowerShell
By Colm Kennedy
August 10, 2017

  • A vulnerability management program is a critical task that all organizations should be running. Part of this program involves the need to patch systems regularly and to keep installed software up to date. Once a vulnerability program is in place, organizations need to remediate discovered vulnerabilities quickly. Occasionally, some discovered vulnerabilities are false positives. The problem with false positives is that manually vetting them is time-consuming. Tools such as SCCM can assist in showing which patches may be missing, but they can be rather costly. For organizations concerned that these types of programs hurt their budgets, there are free options available. PowerShell is free software that, if utilized, can complement an organization's vulnerability management program by assisting in scanning for unpatched systems. This paper presents a PowerShell script that provides administrators with further insight into which systems are unpatched and streamlines investigations of possible false positives, at no additional cost.
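
The paper's script is PowerShell; purely as a rough illustration of the cross-checking idea, this Python sketch pulls the installed-hotfix list from a Windows host via the built-in wmic utility and tests whether a given KB is present (the KB number is a placeholder).

```python
# Cross-check a scanner finding against installed hotfixes (Windows host, wmic present).
import subprocess

out = subprocess.check_output(["wmic", "qfe", "get", "HotFixID,InstalledOn"], text=True)
installed = {line.split()[0] for line in out.splitlines()[1:] if line.strip()}
print("KB4012212 present?", "KB4012212" in installed)  # placeholder KB number
```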

Forensicating Docker with ELK
By Stefan Winkel
July 17, 2017

  • Docker has made an immense impact on how software is developed and deployed in today's information technology environments. The quick and broad adoption of Docker as part of the DevOps movement has not come without cost. The rate at which vulnerabilities are introduced in the development cycle has increased many times over. While efforts like Docker Notary and Security Testing as a Service are trying to catch up and mitigate some of these risks, Docker container escapes through Linux kernel exploits, like the widespread Dirty COW privilege escalation exploit in late 2016, can be disastrous in cloud and other production environments. Organizations find themselves more in need of forensicating Docker setups as part of incident investigations. Centralized event logging of Docker containers is becoming crucial to successful incident response. This paper explores how to use the Elastic stack (Elasticsearch, Logstash, and Kibana) as part of incident investigations of Docker images. It will describe the effectiveness of ELK in a forensic investigation of a Docker container escape through the use of Dirty COW.
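
A minimal sketch of the ingestion side, assuming the docker and elasticsearch Python packages (elasticsearch-py 8.x) and a local Elasticsearch node: ship each container's log lines into an index for timeline analysis.

```python
# Ship Docker container logs into Elasticsearch (assumes elasticsearch-py 8.x).
import docker
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
client = docker.from_env()
for container in client.containers.list():
    for line in container.logs(timestamps=True).splitlines():
        es.index(index="docker-logs",
                 document={"container": container.name,
                           "log": line.decode(errors="replace")})
```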

Using Docker to Create Multi-Container Environments for Research and Sharing Lateral Movement
By Shaun McCullough
July 3, 2017

  • Docker, a program for running applications in containers, can be used to create multi-container infrastructures that mimic a more sophisticated network for research in penetration techniques. This paper will demonstrate how Docker can be used by information security researchers to build and share complex environments for recreation by anyone. The scenarios in this paper recreate previous research done in SSH tunneling, pivoting, and other lateral movement operations. By using Docker to build sharable and reusable test infrastructure, information security researchers can help readers recreate the research in their own environments, enhancing learning with a more immersive and hands-on research project.
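
For flavor, here is a hedged sketch using the Docker SDK for Python that stands up a two-container lab on a private bridge network; the image names are illustrative and must be available locally or pullable.

```python
# Two-container pivoting lab on an isolated bridge network (illustrative images).
import docker

client = docker.from_env()
net = client.networks.create("lab-net", driver="bridge")
attacker = client.containers.run("kalilinux/kali-rolling", "sleep infinity",
                                 name="attacker", network="lab-net", detach=True)
target = client.containers.run("ubuntu:16.04", "sleep infinity",
                               name="target", network="lab-net", detach=True)
print(attacker.name, "and", target.name, "share network", net.name)
```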

No Safe Harbor: Collecting and Storing European Personal Information in the U.S.
By Alyssa Robinson
April 24, 2017

  • When the European Court of Justice nullified the Safe Harbor Framework in October of 2015, it left more than 4,000 companies in legal limbo regarding their transfer of personal data for millions of European customers (Nakashima, 2015). The acceptance of the Privacy Shield Framework in July of 2016 expands the options for U.S. companies that need to transfer EU personal data to the U.S. but does little to ameliorate the upheaval caused by the Safe Harbor annulment. This paper covers the history of data privacy negotiations between Europe and the United States, providing an understanding of how the current compromises were reached and what threats they may face. It outlines the available mechanisms for data transfer, including Binding Corporate Rules, Standard Contractual Clauses, and the Privacy Shield Framework, and compares their requirements, advantages, and risks. With this information, U.S. organizations considering storing or processing European personal data can choose the transfer mechanism best suited to their situation.

Identifying Vulnerable Network Protocols with PowerShell
By David Fletcher
April 6, 2017

  • Microsoft Windows PowerShell has led to several exploit frameworks, such as PowerSploit, PowerView, and PowerShell Empire. However, few of these frameworks investigate network traffic for exploitative potential. Analyzing a small amount of network traffic can lead to the discovery of possible network-based attack vectors such as Virtual Router Redundancy Protocol (VRRP), Dynamic Trunking Protocol (DTP), Link-Local Multicast Name Resolution (LLMNR), and PXE boot attacks, to name a few. How does one gather and analyze this traffic when Windows does not include an integrated packet analysis tool? Microsoft Windows PowerShell includes several network analysis and network traffic related capabilities. This paper will explore the use of these capabilities with the goal of building a PowerShell reconnaissance module that will capture, analyze, and identify commonly misconfigured protocols without the need to install a third-party tool within a Microsoft Windows environment.
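
The module itself is PowerShell; as a language-neutral illustration of what such a module watches for, this scapy sketch flags two of the protocols named above on the wire (LLMNR on UDP 5355, VRRP as IP protocol 112). Capture privileges are assumed.

```python
# Watch for commonly misconfigured protocols (illustrative; run with capture rights).
from scapy.all import sniff, IP, UDP

def flag(pkt):
    if UDP in pkt and pkt[UDP].dport == 5355:
        print("LLMNR seen:", pkt.summary())   # candidate for name-resolution poisoning
    elif IP in pkt and pkt[IP].proto == 112:
        print("VRRP seen:", pkt.summary())    # check for default or absent authentication

sniff(prn=flag, store=False, timeout=60)
```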

Securing the Home IoT Network
By Manuel Leos Rivas
April 5, 2017

  • The Internet of Things (IoT) has proven its ability to cause massive service disruption because of the lack of security in many devices. The vulnerabilities that allow those denial-of-service attacks are often caused by poor or nonexistent security practices when developing or installing the products. The common home network is not designed to protect against the design errors in IoT devices that expose the privacy of the users. The affordable price of single board computers (SBC), their small power requirements, and their customization capabilities can help improve the protection of the home IoT network. SBCs can also add powerful features such as auditing, inspection, authentication, and authorization to improve controls pertaining to who and what can have access. A properly configured home-control gateway reduces some common risks associated with IoT, such as vendor-embedded backdoors and default credentials. Having an open source trusted device with a configuration shared and audited by many experts can reduce many of the bugs and misconfigurations introduced by vendor security program deficiencies.

Auto-Nuke It from Orbit: A Framework for Critical Security Control Automation
By Jeremiah Hainly
March 15, 2017

  • Over 83% of security teams report that the use of automation in security needs to increase within the next three years (Algosec, 2016). With automation becoming a reality for a growing number of companies, there will also be an increased demand for open-source scripts to get started. This paper will provide a framework for prioritizing and developing security automation and will demonstrate this process by creating a script to automate a common information security response procedure: the reimaging of an infected endpoint. The primary function of the script will be to access the application program interface (API) of various enterprise software solutions to speed up the manual tasks involved in performing a reimage.
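
Purely as a shape-of-the-glue illustration (every URL, path, and token below is a hypothetical placeholder, not any vendor's real API), the reimage workflow described here reduces to a pair of authenticated REST calls:

```python
# Hypothetical orchestration glue: isolate an endpoint, then queue a reimage.
# All endpoints and the token are invented placeholders.
import requests

HEADERS = {"Authorization": "Bearer <token>"}

def reimage(hostname: str):
    requests.post("https://edr.example.local/api/isolate",      # hypothetical EDR API
                  json={"host": hostname}, headers=HEADERS, timeout=10)
    requests.post("https://deploy.example.local/api/reimage",   # hypothetical imaging API
                  json={"host": hostname}, headers=HEADERS, timeout=10)

reimage("workstation-042")
```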

Cloud Security Monitoring
By Balaji Balakrishnan
March 13, 2017

  • This paper discusses how to apply security log monitoring capabilities for Amazon Web Services (AWS) Infrastructure as a Service (IaaS) cloud environments. It provides an overview of AWS CloudTrail and CloudWatch Logs, which can be stored and mined for suspicious events. Security teams implementing AWS solutions will benefit from applying security monitoring techniques to prevent unauthorized access and data loss. Splunk is used to ingest all AWS CloudTrail and CloudWatch Logs, and machine learning models are used to identify suspicious activities in the AWS cloud infrastructure. The audience for this paper is security teams trying to implement AWS security monitoring.
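
Before logs ever reach a SIEM, CloudTrail itself can be queried; this minimal boto3 sketch (AWS credentials assumed to be configured) pulls recent console logins for quick triage.

```python
# Pull recent ConsoleLogin events from CloudTrail with boto3.
import boto3

ct = boto3.client("cloudtrail")
resp = ct.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=50)
for event in resp["Events"]:
    print(event["EventTime"], event.get("Username", "?"))
```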

In-Depth Look at Tuckman's Ladder and Subsequent Works as a Tool for Managing a Project Team
By Aron Warren
March 1, 2017

  • Bruce Tuckman's 1965 research on modeling group development, titled "Developmental Sequence in Small Groups," laid out a framework consisting of four stages a group will transition between while members interact with each other: forming, storming, norming, and performing. This paper will describe in detail the original Tuckman model as well as derivative research in group development models. Traditional and virtual team environments will both be addressed to assist IT project managers in understanding how a team evolves over time with a goal of achieving a successful project outcome.

Medical Data Sharing: Establishing Trust in Health Information Exchange
By Barbara Filkins
March 1, 2017

  • Health information exchange (HIE) "allows doctors, nurses, pharmacists, other health care providers and patients to appropriately access and securely share a patient's vital medical information electronically--improving the speed, quality, safety and cost of patient care" (HealthIT.gov, 2014). The greatest gain in the use of HIE is the ability to achieve interoperability across providers that, except for the care of a given patient, are unrelated. But, by its very nature, HIE also raises concern around the protection and integrity of shared, sensitive data. Trust is a major barrier to interoperability.

Tor Browser Artifacts in Windows 10
By Aron Warren
February 24, 2017

  • The Tor network is a popular, encrypted, worldwide, anonymizing virtual network in existence since 2002, used by all facets of society such as privacy advocates, journalists, governments, and criminals. This paper will provide a forensic analysis of the Tor Browser version 5 client on a Windows 10 host for an individual or group interested in remnants left by the software. This paper will utilize various free and commercial tools to provide a detailed analysis of filesystem artifacts, as well as a comparison between pre- and post-connection to the Tor network using memory analysis.

OS X as a Forensic Platform
By David M. Martin
February 22, 2017

  • The Apple Macintosh and its OS X operating system have seen increasing adoption by technical professionals, including digital forensic analysts. Forensic software support for OS X remains less mature than that of Windows or Linux. While many Linux forensic tools will work on OS X, instructions for how to configure the tool in OS X are often missing or confusing. OS X also lacks an integrated package management system for command line tools. Python, which serves as the basis for many open-source forensic tools, can be difficult to maintain and easy to misconfigure on OS X. Due to these challenges, many OS X users choose to run their forensic tools from Windows or Linux virtual machines. While this can be an effective and expedient solution, those users miss out on much of the power of the Macintosh platform. This research will examine the process of configuring a native OS X forensic environment that includes many open-source forensic tools, including Bulk Extractor, Plaso, Rekall, Sleuthkit, Volatility, and Yara. This process includes choosing the correct hardware and software, configuring it properly, and overcoming some of the unique challenges of the OS X environment. A series of performance tests will help determine the optimal hardware and software configuration and examine the performance impact of virtualization options.

Indicators of Compromise: TeslaCrypt Malware
By Kevin Kelly
February 16, 2017

  • Malware has become a growing concern in a society of interconnected devices and real-time communications. This paper will show how to analyze live ransomware samples: how malware behaves locally, over time, and within the network. Analyzing live ransomware gives a unique three-dimensional perspective, visually locating crucial signatures and behaviors efficiently. In lieu of reverse engineering or parsing the malware executable’s infrastructure, live analysis provides a simpler method to root out indicators. Ransomware touches just about every file and many of the registry keys. Analysis can be done, but it needs to be focused. The analysis of malware capabilities from different datasets, including process monitoring, flow data, registry key changes, and network traffic, will yield indicators of compromise. These indicators will be collected using various open source tools such as the Sysinternals suite, Fiddler, Wireshark, and Snort, to name a few. Malware indicators of compromise will be collected to produce defensive countermeasures against unwanted advanced adversary activity on a network. A virtual appliance platform with a simulated production Windows 8 OS will be created, infected, and processed to collect indicators to be used to secure enterprise systems. Different tools will leverage datasets to gather indicators, view malware on multiple layers, contain compromised hosts, and prevent future infections.

Impediments to Adoption of Two-factor Authentication by Home End-Users
By Preston Ackerman
February 10, 2017

  • Cyber criminals have proven to be both capable and motivated to profit from compromised personal information. The FBI has reported that victims have suffered over $3 billion in losses through compromise of email accounts alone (IC3 2016). One security measure which has been demonstrated to be effective against many of these attacks is two-factor authentication (2FA). The FBI, the Department of Homeland Security US Computer Emergency Readiness Team (US-CERT), and the internationally recognized security training and awareness organization, the SANS Institute, all strongly recommend the use of two-factor authentication. Nevertheless, adoption rates of 2FA are low.

Dissect the Phish to Hunt Infections
By Seth Polley
February 3, 2017

  • Internal defense is a perilous problem facing many organizations today. Sole reliance on external defenses is all too common, leaving the internal organization largely unprotected. When internal defense is actually considered, how many think beyond fallible antivirus (AV) or immature data loss prevention (DLP) solutions? Considering the rise of phishing emails and other social engineering campaigns, there is a significantly increased risk that an organization’s current external and internal defenses will fail to prevent compromises. How would a cyber security team detect an attacker establishing a foothold within the center of the organization, or undetectable malware being downloaded internally, if a user were to fall for a phishing attempt?

Forensication Education: Towards a Digital Forensics Instructional Framework
By J. Richard “Rick” Kiper
February 3, 2017

  • The field of digital forensics is a diverse and fast-paced branch of cyber investigations. Unfortunately, common efforts to train individuals in this area have been inconsistent and ineffective, as curriculum managers attempt to plug in off-the-shelf courses without an overall educational strategy. The aim of this study is to identify the most effective instructional design features for a future entry-level digital forensics course. To achieve this goal, an expert panel of digital forensics professionals was assembled to identify and prioritize the features, which included general learning outcomes, specific learning goals, instructional delivery formats, instructor characteristics, and assessment strategies. Data was collected from participants using validated group consensus methods such as Delphi and cumulative voting. The product of this effort was the Digital Forensics Framework for Instruction Design (DFFID), a comprehensive digital forensics instructional framework meant to guide the development of future digital forensics curricula.

Superfish and TLS: A Case Study of Betrayed Trust and Legal Liability
By Sandra Dunn
January 24, 2017

  • Superfish, the adware bundled with Lenovo consumer laptops from 2014-2015, intentionally broke TLS, exposed users' personal data to compromise and theft, altered search-result ads in users' browsers, and severely damaged Lenovo's brand reputation. There have been other high-profile cases of intentionally modifying and breaking TLS that used questionable and deceptive practices, but few that generated as much attention or provide such a clear example of a chain of missteps between Lenovo, Superfish, and their customers. A case study of the Superfish mishap exposes the danger, risk, legal liability, and potential government investigation facing organizations that deploy TLS certificates and keys in ways that break or weaken the security design and put private data or people at risk. The Superfish case further demonstrates the importance of a company's disclosure transparency to avoid accusations of deceptive practices if breaking TLS is required to protect users or an organization's data.
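
Interception of this kind is detectable from the client side; the stdlib-only sketch below prints the certificate issuer a client actually sees, where an unexpected issuer (such as an adware CA) is the tell. The target host is a placeholder.

```python
# Print the TLS issuer the client actually sees; an unexpected issuer suggests interception.
import socket
import ssl

ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        issuer = dict(item[0] for item in tls.getpeercert()["issuer"])
        print("Issuer:", issuer.get("organizationName"), issuer.get("commonName"))
```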

Minimizing Legal Risk When Using Cybersecurity Scanning Tools
By John Dittmer
January 19, 2017

  • When cybersecurity professionals use scanning tools on the networks and devices of organizations, there can be legal risks that need to be managed by individuals and enterprises. Often, scanning tools are used to measure compliance with cybersecurity policies and laws, so they must be used with due care. There are protocols that should be followed to ensure proper use of the scanning tools, to prevent interference with normal network or system operations, and to ensure the accuracy of the scanning results. Several challenges will be examined in depth, such as measuring scanner accuracy, proper methods of obtaining written consent for scanning, and how to set up a scanning session for optimum examination of systems or networks. This paper will provide cybersecurity professionals and managers with a better understanding of how and when to use scanning tools while minimizing the legal risk to themselves and their enterprises.

Data Breach Impact Estimation
By Paul Hershberger
January 3, 2017

  • Internal and external auditors spend a significant amount of time planning their audit processes to align their efforts with the needs of the audited organization. The initial phase of that audit cycle is the risk assessment. Establishing a firm understanding of the likelihood and impact of risk guides the audit function and aligns its work with the risks the organization faces. The challenge many auditors and security professionals face is effectively quantifying the potential impact of a data breach on their organization. This paper compares the data breach cost research of the Ponemon Institute and the RAND Corporation, testing both models against breach costs reported by publicly traded companies under Securities and Exchange Commission (SEC) reporting requirements. The comparisons show that the RAND Corporation's approach provides organizations with a more accurate and flexible model for estimating the potential cost of data breaches, both the direct cost of investigating and remediating a breach and the indirect financial impact associated with regulatory and legal action. Additionally, the comparison indicates that data breach-related impacts to revenue and stock valuation are only realized in the short term.

Real-World Case Study: The Overloaded Security Professional's Guide to Prioritizing Critical Security Controls
By Phillip Bosco
December 27, 2016

  • Using a real-world case study of a recently compromised company as a framework, we will step inside the aftermath of an actual breach and determine how the practical implementation of Critical Security Controls (CSC) may have prevented the compromise entirely while providing greater visibility inside the attack as it occurred. The breached company's information security "team" consisted of a single over-worked individual, who found it arduous to identify which critical controls he should focus his limited time on implementing. Lastly, we will delve into real-world examples, using previously unpublished research, that serve as practical approaches for teams with limited resources to prioritize and schedule which CSCs will provide the largest impact towards reducing the company's overall risk. Ideally, the observations and approaches identified in this research paper will assist security professionals who may be in similar circumstances.

Finding Bad with Splunk
By David Brown
December 16, 2016

  • There is such a deluge of information that it can be hard for information security teams to know where to focus their time and energy. This paper will recommend common Linux and Windows tools to scan networks and systems, store results to local filesystems, analyze results, and pass any new data to Splunk. Splunk will then help security teams home in on what has changed within the networks and systems by alerting the security teams to any differences between old baselines and new scans. In addition, security teams may not even be paying attention to controls, like whitelisting blocks, that successfully prevent malicious activities. Monitoring failed application execution attempts can give security teams and administrators early warning that someone may be trying to subvert a system. This paper will guide the security professional in setting up alerts to detect security events of interest, like failed application executions due to whitelisting. To address these problems, the paper will discuss the first five Critical Security Controls and explain what malicious behaviors can be uncovered as a result of alerting. As the paper progresses through the controls, the security professional is shown how to set up baseline analysis, how to configure the systems to pass the proper data to Splunk, and how to configure Splunk to alert on events of interest. The paper does not revolve around how to implement technical controls like whitelisting, but rather how to effectively monitor the controls once they have been implemented.
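
The baseline-versus-new-scan comparison at the heart of this workflow is simple; here is an illustrative Python sketch (filenames are placeholders) whose output is the kind of delta that gets forwarded to Splunk for alerting.

```python
# Diff yesterday's baseline against today's scan; only changes get shipped to Splunk.
old = set(open("baseline_ports.txt").read().split())   # placeholder filenames
new = set(open("todays_ports.txt").read().split())

for port in sorted(new - old):
    print("NEW open port:", port)
for port in sorted(old - new):
    print("CLOSED port:", port)
```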

Continuous Monitoring: Build A World Class Monitoring System for Enterprise, Small Office, or Home
By Austin Taylor
December 15, 2016

  • For organizations that wish to prevent data breaches, incident prevention is ideal, but detection of an attempted or successful breach is a must. This paper outlines guidance for network visibility, threat intelligence implementation, and methods to reduce analyst alert fatigue. Additionally, this document includes a workflow for Security Operations Centers (SOC) to efficiently process events of interest, thereby increasing the likelihood of detecting a breach. Methods include Intrusion Detection System (IDS) setup with tips on efficient data collection, sensor placement, and identification of critical infrastructure, along with network and metric visualization. These recommendations are useful for enterprises, small offices, or homes that wish to implement threat intelligence and network analysis.

Detecting Malicious SMB Activity Using Bro
By Richie Cyrus
December 13, 2016

  • Attackers utilize the Server Message Block (SMB) protocol to blend in with network activity, often carrying out their objectives undetected. Post-compromise, attackers use file shares to move laterally, looking for sensitive or confidential data to exfiltrate out of a network. Traditional methods for detecting such activity call for storing and analyzing large volumes of Windows event logs or deploying a signature-based intrusion detection solution. For some organizations, processing and storing large amounts of Windows events may not be feasible, and pattern-based intrusion detection solutions can be bypassed by malicious entities, potentially failing to detect malicious activity. Bro Network Security Monitor (Bro) provides an alternative solution allowing for rapid detection through custom scripts and log data. This paper introduces methods to detect malicious SMB activity using Bro.
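
The detection logic itself lives in Bro scripts; as a hedged illustration of one underlying heuristic, this Python sketch counts distinct SMB peers per client from a Bro conn.log, assuming the default tab-separated field order (id.orig_h in column 3, id.resp_h in column 5, id.resp_p in column 6).

```python
# Flag clients talking SMB to many peers (possible lateral movement) from conn.log.
from collections import defaultdict

peers = defaultdict(set)
with open("conn.log") as f:
    for line in f:
        if line.startswith("#"):
            continue                       # skip Bro header/metadata lines
        cols = line.rstrip("\n").split("\t")
        src, dst, dport = cols[2], cols[4], cols[5]
        if dport == "445":
            peers[src].add(dst)

for src, dsts in peers.items():
    if len(dsts) > 10:                     # threshold is illustrative
        print(f"{src} spoke SMB to {len(dsts)} hosts")
```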

Active Defense via a Labyrinth of Deception
By Nathaniel Quist
December 5, 2016

  • A network baseline allows for the identification of malicious activity in real time. However, a baseline requires that every listed action is known and accounted for, a nearly impossible task in any production environment due to an ever-changing application footprint, system and application updates, changing project requirements, and, not least of all, unpredictable user behaviors. Each obstacle presents a significant challenge in the development and maintenance of an accurate and false-positive-free network baseline. To surmount these hurdles, network architects need to design a network free from continuous change, including changing company requirements, untested system or application updates, and the presence of unpredictable users. Creating a static, never-changing environment is the goal. However, this completely removes the functionality of a production network. Or does it? Within this paper, I will detail how this type of static environment, referred to as the Labyrinth, can be placed in front of a production environment and provide real-time defensive measures against hostile and dispersed attacks from both human actors and automated machines. I expect to prove that the Labyrinth is capable of detecting changes in its environment in real time. It will provide a listing of dynamic defensive capabilities: identifying attacking IP addresses, rogue process start commands, modifications to registry values, and alterations in system memory, while recording an attacker's tactics, techniques, and procedures. At the same time, the Labyrinth will add these values to a block list, protecting the production network behind it. Successful accomplishment of these goals will prove the viability and sustainability of the Labyrinth in defending network environments (Revelle, 2011).

Next Generation of Privacy in Europe and the Impact on Information Security: Complying with the GDPR
By Edward Yuwono
December 5, 2016

  • Human rights have a strong place within Europe, and part of this is the fundamental right to privacy. Over the years, individual privacy has been strengthened through various European directives. With the evolution of privacy continuing in Europe through the release of the General Data Protection Regulation (GDPR), how will the latest iteration of European Union (EU) regulation affect organisations, and what will information security leaders need to do to meet this change? This paper will explore the evolution of privacy in Europe, the objectives and changes this iteration of EU privacy regulation will bring, what challenges organisations will experience, and how information security could be leveraged to satisfy the regulation.

A Checklist for Audit of Docker Containers
By Alyssa Robinson
November 22, 2016

  • Docker and other container technologies are increasingly popular methods for deploying applications in DevOps environments, due to advantages in portability, efficiency in resource sharing and speed of deployment. The very properties that make Docker containers useful, however, can pose challenges for audit, and the security capabilities and best practices are changing rapidly. As adoption of this technology grows, it is, therefore, necessary to create a standardized checklist for audit of Dockerized environments based on the latest tools and recommendations.

Security Assurance of Docker Containers
By Stefan Winkel
November 22, 2016

  • With recent movements like DevOps and the conversion towards application security as a service, the IT industry is in the middle of a set of substantial changes in how software is developed and deployed. In the infrastructure space, we see the uptake of lightweight container technology, while application technologies are moving towards distributed microservices. There has been a recent explosion in the popularity of package managers and distributors like OneGet, NPM, RubyGems, and PyPI. More and more software development becomes dependent on small, reusable components developed by many different developers and often distributed by infrastructures outside our control. In the midst of this all, we often find application containers like Docker, LXC, and Rocket compartmentalizing software components. The Notary project, recently introduced in Docker, is built upon the assumption that the software distribution pipeline can no longer be trusted. Notary attempts to protect against attacks on the software distribution pipeline by associating trust and separation of duties with Docker containers. In this paper, we explore the Notary service and take a look at security testing of Docker containers.

Implementing Full Packet Capture
By Matt Koch
November 7, 2016

  • Full Packet Capture (FPC) provides a network defender an after-the-fact investigative capability that other security tools cannot provide. Uses include capturing malware samples and network exploits, and determining whether data exfiltration has occurred. Full packet captures are a valuable troubleshooting tool for operations and security teams alike. Successful implementation requires an understanding of organization-specific requirements, capacity planning, and delivery of unaltered network traffic to the packet capture system.
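
Capacity planning is where most FPC projects live or die; a quick back-of-the-envelope calculation shows why. At a sustained 1 Gb/s, a full-line-rate capture consumes roughly 10.8 TB per day:

```python
# Back-of-the-envelope storage estimate for full packet capture.
link_gbps = 1.0
bytes_per_day = link_gbps * 1e9 / 8 * 86400      # bits/s -> bytes/s -> bytes/day
print(f"{bytes_per_day / 1e12:.1f} TB per day")  # ~10.8 TB/day at full line rate
```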

Intrusion Detection Through Relationship Analysis
By Patrick Neise
October 24, 2016

  • With the average time to detection of a network intrusion in enterprise networks assessed to be 6-8 months, network defenders require additional tools and techniques to shorten detection time. Perimeter, endpoint, and network traffic detection methods today are mainly focused on detecting individual incidents, while security information and event management (SIEM) products are then used to correlate the isolated events. Although proven to be able to detect network intrusions, these methods can be resource intensive in both time and personnel. Through the use of network flows and graph database technologies, analysts can rapidly gain insight into which hosts are communicating with each other and identify abnormal behavior, such as a single client machine communicating with other clients via Server Message Block (SMB). Combining the power of tools such as Bro, a network analysis framework, and neo4j, a native graph database built to examine data and its relationships, makes rapid detection of anomalous behavior within the network possible. This paper will identify the tools and techniques necessary to extract relevant network information, create the data model within a graph database, and query the resulting data to identify potential malicious activity.
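
As a hedged sketch of the query side, assuming flows have already been loaded into neo4j as (:Host)-[:CONNECTED {port}]->(:Host) (a data model invented here for illustration), the client-to-client SMB question becomes a short Cypher query via the Python driver:

```python
# Query client-to-client SMB edges with the neo4j Python driver (illustrative model).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))
query = """
MATCH (a:Host {role:'client'})-[c:CONNECTED {port:445}]->(b:Host {role:'client'})
RETURN a.ip AS src, b.ip AS dst, count(c) AS flows
"""
with driver.session() as session:
    for record in session.run(query):
        print(record["src"], "->", record["dst"], record["flows"])
driver.close()
```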

Building a Home Network Configured to Collect Artifacts for Supporting Network Forensic Incident Response
By Gordon Fraser
September 21, 2016

  • A commonly accepted incident response process includes six phases: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. Preparation is key; it sets the foundation for a successful incident response. The incident responder does not want to be trying to figure out where to collect the information necessary to quickly assess the situation and respond appropriately to the incident. Nor does the incident responder want to hope that the information he needs is available at the level of detail necessary to analyze the situation effectively and make informed decisions on the best course of action. This paper identifies artifacts that are important to support network forensics during incident response and discusses an architecture and implementation for a home lab to support their collection. It then validates the architecture using an incident scenario.

Using Vagrant to Build a Manageable and Sharable Intrusion Detection Lab
By Shaun McCullough
September 20, 2016

  • This paper investigates how the Vagrant software application can be used by Information Security (InfoSec) professionals looking to provide their audience with an infrastructure environment to accompany their research. InfoSec professionals conducting research or publishing write-ups can provide opportunities for their audience to replicate or walk through the research themselves in their own environment. Vagrant is a popular DevOps tool for providing portable and repeatable production environments for application developers, and may solve the needs of the InfoSec professional. This paper will investigate how Vagrant works, the pros and cons of the technology, and how it is typically used. The paper describes how to build or repurpose three environments, highlighting different features of Vagrant. Finally, the paper will discuss lessons learned.

Know Thy Network - Cisco Firepower and Critical Security Controls 1 & 2
By Ryan Firth
September 19, 2016

  • Previously known as the SANS Top 20, the Critical Security Controls are based on real-world attack and security breach data from around the world, and are objectively the most effective technical controls against known cyber-attacks. Due to competing priorities and demands, however, organizations may not have the expertise to figure out how to implement and operationalize the Critical Security Controls in their environments. This paper will help bridge that gap for security and network teams using Cisco Firepower.

Windows Installed Software Inventory
By Jonathan Risto
September 7, 2016

  • The 20 Critical Controls provide a guideline for the controls that need to be placed in our networks to manage and secure our systems. The second control states there should be a software inventory that contains the names and versions of the products for all devices within the infrastructure. The challenge for a large number of organizations is the ability to have accurate information available with minimal impact on tight IT budgets. This paper will discuss the Microsoft Windows command line tools that will gather this information, and provide example scripts that can be run by the reader.

In but not Out: Protecting Confidentiality during Penetration Testing
By Andrew Andrasik
August 22, 2016

  • Penetration testing is imperative for organizations committed to security. However, independent penetration testers are rarely greeted with open arms when initiating an assessment. As firms implement the Critical Security Controls or the Risk Management Framework, independent penetration testing will likely become standard practice as opposed to supplemental exercises. Ethical hacking is a common tactic to view a company's network from an attacker's perspective, but inviting external personnel into a network may increase risk. Penetration testers strive to gain superuser privileges wherever possible and utilize thousands of open-source tools and scripts, many of which do not originate from validated sources.

Introduction to Rundeck for Secure Script Executions
By John Becker
August 11, 2016

  • Many organizations today support physical, virtual, and cloud-based systems across a wide range of operating systems. Providing least-privilege access to systems can be a complex mesh of sudoers files, profiles, policies, and firewall rules. While configuration management tools such as Puppet or Chef help ensure consistency, they do not inherently simplify the process for users or administrators. Additionally, current DevOps teams are pushing changes faster than ever, and keeping pace with new services and applications often forces sysadmins to use more general access rules and thus expose broader access than necessary. Rundeck is a web-based orchestration platform with powerful ACLs and SSH-based connectivity to a wide range of operating systems and devices. Rundeck's simple user interface is coupled with DevOps-friendly REST APIs and YAML or XML configuration files. Using Rundeck for server access improves security while keeping pace with rapidly changing environments.

Legal Aspects of Privacy and Security: A Case-Study of Apple versus FBI Arguments
By Muzamil Riffat
June 3, 2016

  • The debate regarding privacy versus security has been going on for some time now. The matter is complicated by the fact that privacy is a subjective phenomenon, shaped by factors such as cultural norms and geographical location. In a paradoxical situation, rapid advancements in technology are fast making technology both the guardian and the invader of privacy. Governments and organizations around the globe are using technology to achieve their objectives in the name of security and convenience. The sporadic fights between the proponents of privacy and of security eventually found an avenue for expression: the U.S. court system. In February 2016, the FBI obtained a court order requiring Apple to modify the security features of an iPhone to enable the law enforcement agency to access the contents of the device. Apple, backed by other leading technology firms, vehemently opposed the idea and intended to file a legal appeal against the court order. Before both parties could present their arguments in court, the case was dropped by the FBI, which claimed it was able to access the contents of the device without Apple's assistance. Using FBI vs. Apple as a case study, this paper discusses the legal aspects of both parties' arguments. With the pervasiveness of advanced technology, it can be reasonably anticipated that such requests by law enforcement and government agencies will become more frequent. The paper presents the privacy concerns that should be taken into consideration regarding all such requests.

Under The Ocean of the Internet - The Deep Web
By Brett Hawkins
May 27, 2016

  • The Internet was a revolutionary invention, and its use continues to evolve. People around the world use the Internet every day for things such as social media, shopping, email, reading news, and much more. However, this makes up only a very small piece of the Internet; the rest is filled by an area called the Deep Web.

Securing Jenkins CI Systems
By Allen Jeng
April 8, 2016

  • With over 100,000 active installations worldwide, Jenkins has become the top choice for continuous integration and automation. A survey conducted by CloudBees during the 2012 Jenkins Users Conference concluded that 83 percent of respondents consider Jenkins to be mission critical. The November 2015 remotely exploitable Java deserialization vulnerability stresses the need to lock down and monitor Jenkins systems. Exploitation of this weakness enables hackers to gain access to critical assets, such as the source code that Jenkins manages. Enabling password security is the general recommendation for securing Jenkins. Unfortunately, this necessary security measure can easily be defeated with a packet sniffer because passwords are transmitted over the wire as clear text. This paper will look at ways to secure Jenkins systems, as well as the deployment of intrusion detection systems to monitor critical assets controlled by Jenkins CI systems.

Secure Network Design: Micro Segmentation
By Brandon Peterson
February 29, 2016

  • Hackers, once on to a network, often go undetected as they freely move from system to system looking for valuable information to steal. Credentials, intellectual property, and personal information are all at risk. It is generally accepted that the attacker has the upper hand and can eventually penetrate most networks. A secure network design that focuses on micro segmentation can slow the rate at which an attacker moves through a network and provide more opportunities for detecting that movement. Organizations that implement a secure network design will find that the added cost and complexity of micro segmentation is more than offset by a reduction in the number and severity of incidents. In fact, the effort extended in learning, classifying, and segmenting the network adds value and strengthens all of the organization’s controls.

Selling Your Information Security Strategy
By David Todd
February 18, 2016

  • It is the Chief Information Security Officer’s (CISO) responsibility to identify the gaps between the most significant security threats and vulnerabilities and the organization's current state. The CISO should develop an information security strategy that aligns with the strategic goals of the organization and sell the gap mitigation strategy to executive management and the board of directors. Before embarking on this new adventure, clearly articulate what success looks like to your organization. What is the result you are driving to accomplish? Then develop a strategy to get you there. Take a play directly from the sales organization’s playbook: know yourself; know your customer; and know the benefits from your customer’s perspective. Following this simple strategy will help the CISO close the deal when selling the information security strategy.

Don't Always Judge a Packet by Its Cover
By Gabriel Sanchez
February 16, 2016

  • Distinguishing friend from foe as millions of packets traverse a network at any given moment can be a tedious and trying objective. Packets can carry viruses, malware, and botnet traffic, which makes fast detection essential. However, chasing every packet quickly becomes unmanageable and often leads to dead ends. Traditional approaches to this problem rely on heuristics or signatures of known-bad traffic, which tend to be ineffective against advanced attackers. Instead, this paper goes beyond the known bad and describes a general approach to homing in on packets of interest by profiling the behavior of a network. Behavior analysis and profiling of the packets that ordinarily traverse a network can shine light into the shadows where adversaries lurk beyond the reach of traditional detection; knowing the normal characteristics of your packets can reveal the true intentions of the abnormal ones. A small profiling sketch follows below.
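
    As a hedged illustration of the profiling idea (not the author's method), this Python sketch learns a per-host traffic baseline from historical byte counts and flags hosts whose current volume deviates sharply from their own history. The host address and volumes are invented example data.

      # Minimal sketch: flag hosts whose traffic volume departs from baseline.
      from statistics import mean, stdev

      def flag_outliers(history, current, threshold=3.0):
          """Return hosts whose current volume exceeds baseline by > threshold sigma."""
          suspicious = []
          for host, observed in current.items():
              baseline = history.get(host, [])
              if len(baseline) < 10:      # not enough data to profile yet
                  continue
              mu, sigma = mean(baseline), stdev(baseline)
              if sigma > 0 and (observed - mu) / sigma > threshold:
                  suspicious.append(host)
          return suspicious

      # Hypothetical bytes-per-minute history for one host:
      history = {"10.0.0.5": [1200, 1100, 1300, 1250, 1150,
                              1280, 1220, 1190, 1260, 1240]}
      print(flag_outliers(history, {"10.0.0.5": 25000}))  # ['10.0.0.5']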

Security Systems Engineering Approach in Evaluating Commercial and Open Source Software Products
By Jesus Abelarde
January 29, 2016

  • The use of commercial and free open source software (FOSS) is becoming more common in commercial, corporate, and government settings as these organizations develop complex systems, and it carries a set of risks that persist until the system is retired or replaced. Unfortunately, during project development, the security resources and time necessary for proper security evaluations are usually underestimated. There is also no widely used or standardized evaluation process that engineers and scientists can follow as a guideline, so the evaluation process usually ends up lacking, or varies widely from project to project and company to company. This paper provides a suggested evaluation process and a set of methodologies, along with the associated costs and risks, that projects can use as a guideline when integrating commercial and FOSS products during the system development life cycle (SDLC).

Network Forensics and HTTP/2
By Stefan Winkel
January 18, 2016

  • Last May, a major new version of the HTTP protocol, HTTP/2, was published and finalized in RFC 7540. HTTP/2, based on the SPDY protocol primarily developed by Google, is a multiplexed, binary protocol for which TLS has become the de facto mandatory standard. Most modern web browsers (e.g., Chrome, Firefox, Edge) now support HTTP/2, and some Fortune 500 companies such as Google, Facebook, and Twitter have already enabled HTTP/2 traffic to and from their servers. We have also seen a recent uptick in attacks related to HTTP data compression (e.g., CRIME and BREACH), and header compression is part of HTTP/2. From a network perspective, there is currently limited support for analyzing HTTP/2 traffic. This paper explores how best to analyze such traffic and discusses how the new version might change the future of network forensics. A short frame-parsing sketch follows below.
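
    For orientation, here is a minimal Python sketch (my illustration, under stated assumptions) that recognizes the HTTP/2 client connection preface defined in RFC 7540 and decodes the fixed nine-octet header of the first frame that follows it, which is where any HTTP/2-aware capture analysis has to start. Only a handful of frame types are mapped here.

      # Minimal sketch: detect the RFC 7540 client preface, parse one frame header.
      import struct

      PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"  # RFC 7540, section 3.5

      FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS",
                     0x8: "WINDOW_UPDATE"}

      def parse_first_frame(stream):
          """If the byte stream opens with the HTTP/2 preface, decode the first frame."""
          if not stream.startswith(PREFACE):
              return None
          header = stream[len(PREFACE):len(PREFACE) + 9]
          if len(header) < 9:
              return None
          length = int.from_bytes(header[0:3], "big")   # 24-bit payload length
          ftype, flags = header[3], header[4]
          stream_id = struct.unpack(">I", header[5:9])[0] & 0x7FFFFFFF  # drop reserved bit
          return FRAME_TYPES.get(ftype, hex(ftype)), length, flags, stream_id

      # A client normally sends a SETTINGS frame right after the preface:
      sample = PREFACE + b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"
      print(parse_first_frame(sample))  # ('SETTINGS', 0, 0, 0)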

Cybersecurity Inventory at Home
By Glen Roberts
January 7, 2016

  • Consumers need better home network security guidance for taking stock of the hardware and software applications installed on their networks and devices. The primary sources of information security advice for the average person are TV, magazines, newspapers, websites, and social media. Unfortunately, these sources typically repeat the same advice, provide limited guidance, and miss key areas of security that should be taken into consideration when securing home networks. Enterprises, on the other hand, receive comprehensive, prioritized guidance such as the Critical Security Controls from The Center for Internet Security; unfortunately, these controls were not designed with home networks in mind. The wide gap between consumer-media advice columns and highly professional corporate security controls needs to be bridged. This can be done by using the Critical Security Controls as a comprehensive foundation from which to craft an authoritative yet easy-to-understand set of home network security recommendations for individuals. The first step is distilling the guidance for inventorying hardware and software applications; a brief inventory sketch follows below.
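
    As one hedged illustration of the inventory step (not the paper's prescribed method), the following Python sketch uses scapy to ARP-sweep a home subnet and list the devices that respond. The 192.168.1.0/24 range is an assumption; adjust it to the local network.

      # Minimal sketch: inventory live devices on a home LAN via an ARP sweep.
      # Requires scapy and packet-capture privileges.
      from scapy.all import ARP, Ether, srp

      def arp_inventory(cidr="192.168.1.0/24"):
          """Broadcast ARP who-has requests; return (IP, MAC) for responders."""
          answered, _ = srp(
              Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=cidr),
              timeout=2, verbose=False)
          return [(rcv.psrc, rcv.hwsrc) for _, rcv in answered]

      for ip, mac in arp_inventory():
          print(f"{ip:15} {mac}")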

Infrastructure Security Architecture for Effective Security Monitoring
By Luciana Obregon
December 11, 2015

  • Many organizations struggle to architect and implement adequate network infrastructures to optimize network security monitoring. This challenge often leads to data loss with regard to monitored traffic and security events, increased cost for the new hardware and technology needed to address monitoring gaps, and the need for additional information security personnel to keep up with the overwhelming number of security alerts. Organizations spend a great deal of time, effort, and money deploying the latest and greatest tools without ever addressing the fundamental problem of adequate network security design. This paper provides a best-practice approach to designing and building scalable and repeatable infrastructure security architectures to optimize network security monitoring. It expands on four network security domains: network segmentation, intrusion detection and prevention, security event logging, and packet capturing. The goal is a visual representation of an infrastructure security architecture that will allow stakeholders to understand how to architect their networks to address monitoring gaps and protect their organizations.

Compliant but not Secure: Why PCI-Certified Companies Are Being Breached
By Christian Moldes
December 9, 2015

  • The Payment Card Industry published the Data Security Standard 11 years ago; however, criminals are still breaching companies and gaining access to cardholder data. The number of security breaches in the past two years has increased considerably, even among companies that assessors deemed compliant. In this paper, the author conducts a detailed analysis of why this is still occurring and proposes changes that companies should adopt to avoid a security breach.

Web Application File Upload Vulnerabilities
By Matthew Koch
December 7, 2015

  • File upload can be a key feature of many web applications; without it, cloud backup services, photograph sharing, and other functions would not be possible. That same functionality, however, is a frequent source of vulnerabilities when uploads are not properly validated, as the sketch below illustrates.
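
    The following Python (Flask) sketch, my own illustration rather than the paper's code, shows the kind of server-side checks an upload handler can apply before trusting user content. The upload directory, allowed extensions, and size limit are assumptions.

      # Minimal sketch: defensive checks for a file-upload endpoint (Flask assumed).
      import os
      from flask import Flask, request, abort
      from werkzeug.utils import secure_filename

      app = Flask(__name__)
      UPLOAD_DIR = "/var/uploads"                # assumed path outside the webroot
      ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".pdf"}
      MAX_BYTES = 5 * 1024 * 1024                # assumed 5 MB cap

      @app.route("/upload", methods=["POST"])
      def upload():
          f = request.files.get("file")
          if f is None:
              abort(400)
          name = secure_filename(f.filename or "")  # strips path-traversal sequences
          ext = os.path.splitext(name)[1].lower()
          if not name or ext not in ALLOWED_EXTENSIONS:
              abort(415)                         # allow-list extensions, never deny-list
          data = f.read(MAX_BYTES + 1)
          if len(data) > MAX_BYTES:
              abort(413)                         # enforce the size limit server-side
          with open(os.path.join(UPLOAD_DIR, name), "wb") as out:
              out.write(data)
          return "stored", 201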

There's No Going it Alone: Disrupting Well Organized Cyber Crime
By John Garris
November 23, 2015

  • The identification and eventual disruption of a sophisticated criminal enterprise, requiring on-the-fly problem solving and groundbreaking international collaboration, offers a model of how an international cooperative effort can succeed. The effort that ultimately brought down Rove Digital, an Estonian-based criminal operation that compromised millions of computers, provides just such an example. The approach taken by law enforcement agencies from several countries, coupled with the important roles played by security researchers, can be built upon to address burgeoning threats that can only be tackled cooperatively.

A Network Analysis of a Web Server Compromise
By Kiel Wadner
September 8, 2015

  • Through the analysis of a known scenario, the reader is given the opportunity to explore the compromise of a website, viewing each step at the network level, from initial reconnaissance to the attacker gaining root access. The benefit of a known scenario is that assumptions about the attacker's motives are avoided, allowing the focus to remain on the technical details of the attack. Steps such as file extraction, timing analysis, and reverse engineering an encrypted C2 channel are covered; a short timing-analysis sketch follows below.
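
    As a hedged illustration of the timing-analysis step (my own, not the paper's), this Python sketch tests whether a connection's inter-arrival times are nearly constant: the low-jitter periodicity typical of an automated C2 beacon rather than a human-driven session. The timestamps and threshold are illustrative.

      # Minimal sketch: low-jitter periodic connections suggest C2 beaconing.
      from statistics import mean, pstdev

      def looks_like_beacon(timestamps, max_jitter_ratio=0.1):
          """True if inter-arrival times are nearly constant (low jitter)."""
          if len(timestamps) < 5:
              return False
          deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
          return pstdev(deltas) <= max_jitter_ratio * mean(deltas)

      # Connections every ~60 seconds with small jitter score as beacon-like:
      print(looks_like_beacon([0.0, 60.1, 119.9, 180.2, 240.0, 299.8]))  # True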

Breaking the Ice: Gaining Initial Access
By Phillip Bosco
August 28, 2015

  • While companies are spending an increasing amount of resources on security equipment, attackers are still successful at finding ways to breach networks. The problem is compounded by misinformation within the security industry and by companies placing their focus on areas of security that yield unimpressive results. A company cannot properly defend against what it does not adequately understand, and that often includes its own security defense systems and the attacks cyber criminals commonly use today. These misunderstandings allow attackers to bypass even the most seemingly robust security systems using the simplest methods. The author outlines the common misconceptions within the security industry that ultimately lead to insecure networks, including the misallocation of security budgets and the controversies over which methods are most effective at fending off an attacker. Common attack vectors and misconfigurations that are devastating, yet highly preventable, are also detailed.

Forensic Timeline Analysis using Wireshark
By David Fletcher
August 10, 2015

  • The objective of this paper is to demonstrate the analysis of timeline evidence using the Wireshark protocol analyzer. To accomplish this, sample timelines are generated using tools from The Sleuth Kit (TSK) as well as Log2Timeline, and then converted into Packet Capture (PCAP) format. Once the evidence is in this format, Wireshark's native analysis capabilities are demonstrated in the context of forensic timeline analysis. The underlying hypothesis is that Wireshark can provide a suitable interface for enhancing analysts' abilities, through the use of built-in features such as analysis profiles, filtering, colorization, marking, and annotation. A short conversion sketch follows below.
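
    To illustrate the conversion idea (my sketch, with an invented row layout, not the paper's converter), the following Python code wraps timeline rows in synthetic UDP packets with scapy, carrying each event's timestamp as the packet time so Wireshark can filter, sort, and colorize the events.

      # Minimal sketch: encode timeline events as packets for Wireshark analysis.
      from scapy.all import IP, UDP, Raw, wrpcap

      # Hypothetical (epoch time, MACB flags, description) timeline rows:
      events = [
          (1438387200.0, "MACB", r"C:\Windows\Temp\payload.exe created"),
          (1438387262.5, "..B.", r"C:\Users\victim\NTUSER.DAT key updated"),
      ]

      packets = []
      for ts, macb, description in events:
          pkt = (IP(src="10.0.0.1", dst="10.0.0.2")
                 / UDP(sport=1, dport=1)
                 / Raw(load=f"{macb} {description}".encode()))
          pkt.time = ts                  # the packet timestamp carries the event time
          packets.append(pkt)

      wrpcap("timeline.pcap", packets)   # open in Wireshark; filter and colorize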

Coding For Incident Response: Solving the Language Dilemma
By Shelly Giesbrecht
July 28, 2015

  • Incident responders are frequently faced with the reality of "doing more with less" due to budget or manpower deficits. The ability to write scripts from scratch, or to modify the code of others, to solve a problem or find data in a data "haystack" is a necessary skill in a responder's personal toolkit. The question for IR practitioners is which language they should learn that will be the most useful in their work. In this paper, we examine several coding languages used for writing incident response tools and scripts, including Perl, Python, C#, PowerShell, and Go. In addition, we discuss why one language may be more helpful than another depending on the use case, and look at examples of code for each language. A small Python example of the "haystack" task follows below.
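
    As one hedged example of the haystack-searching task the paper describes (written here in Python; the patterns are hypothetical indicators of compromise), this sketch walks a directory of logs and reports every line matching a known indicator.

      # Minimal sketch: sweep *.log files for indicator-of-compromise patterns.
      import re
      import sys
      from pathlib import Path

      IOC_PATTERNS = [
          re.compile(r"198\.51\.100\.\d{1,3}"),      # hypothetical attacker IP range
          re.compile(r"powershell.+-enc", re.I),     # encoded PowerShell launch
      ]

      def scan_logs(log_dir):
          """Yield (file, line number, line) for every line matching an IOC."""
          for path in Path(log_dir).rglob("*.log"):
              with open(path, errors="replace") as fh:
                  for lineno, line in enumerate(fh, 1):
                      if any(p.search(line) for p in IOC_PATTERNS):
                          yield path, lineno, line.rstrip()

      for path, lineno, line in scan_logs(sys.argv[1] if len(sys.argv) > 1 else "."):
          print(f"{path}:{lineno}: {line}")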

Accessing the inaccessible: Incident investigation in a world of embedded devices
By Eric Jodoin
June 24, 2015

  • There are currently an estimated 4.9 billion embedded systems distributed worldwide; by 2020, that number is expected to grow to 25 billion. Embedded systems can be found virtually everywhere, in consumer products such as smart TVs, Blu-ray players, fridges, thermostats, smart phones, and many other household devices. They are also ubiquitous in businesses, where they are found in alarm systems, climate control systems, and most networking equipment, such as routers, managed switches, IP cameras, and multi-function printers. Unfortunately, recent events have taught us that these devices can be vulnerable to malware and hackers, so it is highly likely that one of them may become a key source of evidence in an incident investigation. This paper introduces the reader to embedded systems technology. Using a Blu-ray player as an example, it demonstrates the process of connecting to, and then accessing data through, the serial console to collect evidence from an embedded system's non-volatile memory. A short capture sketch follows below.
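
    As a hedged sketch of the console-capture step (the device path, baud rate, and prompt behavior are assumptions that vary by device), this Python code uses the pyserial package to record a serial console session to an evidence file.

      # Minimal sketch: capture raw serial-console output for the case file.
      import serial  # the pyserial package

      PORT, BAUD = "/dev/ttyUSB0", 115200   # typical UART-to-USB adapter settings

      with serial.Serial(PORT, BAUD, timeout=5) as console, \
              open("console-capture.log", "wb") as evidence:
          console.write(b"\n")              # nudge the console to print a prompt
          while True:
              chunk = console.read(4096)    # returns b"" once the 5 s timeout passes
              if not chunk:
                  break
              evidence.write(chunk)         # preserve the raw bytes unmodified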

Honeytokens and honeypots for web ID and IH
By Rich Graves
May 14, 2015

  • Honeypots and honeytokens can be useful tools for examining the follow-up to phishing attacks. In this exercise, we responded to phishing messages using valid email addresses that had actually received the phish, paired with deliberately wrong passwords. We demonstrate using custom single sign-on code to redirect logins that present those fake passwords, as well as any other logins from presumed attacker source IP addresses, to a dedicated phishing-victim web honeypot. Although the proof-of-concept described did not become a production deployment, it provided insight into current attacks. A short routing sketch follows below.
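
    The following Python sketch (names and flow invented for illustration, not the paper's SSO code) captures the routing idea: logins presenting a seeded fake password, or arriving from a presumed attacker address, are silently forwarded to the honeypot instead of the real single sign-on.

      # Minimal sketch: route honeytoken logins to a decoy instead of real SSO.
      HONEYTOKEN_CREDS = {("alice@example.edu", "Spring2015!")}  # seeded fake creds
      ATTACKER_SOURCES = {"203.0.113.7"}                         # presumed attacker IPs
      HONEYPOT_URL = "https://honeypot.example.edu/login"

      def route_login(username, password, source_ip, sso_url):
          """Return the URL this login attempt should be forwarded to."""
          if (username, password) in HONEYTOKEN_CREDS or source_ip in ATTACKER_SOURCES:
              return HONEYPOT_URL       # the attacker sees a believable decoy
          return sso_url                # legitimate users reach the real SSO

      print(route_login("alice@example.edu", "Spring2015!", "198.51.100.9",
                        "https://sso.example.edu/login"))  # -> honeypot URL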

Group Gold Papers


Endpoint Security through Device Configuration, Policy and Network Isolation
By Barbara Filkins & Jonathan Risto
July 15, 2016

  • Sensitive data leaked from endpoints, unbeknownst to the user, can be detrimental to both an organization and its workforce. The CIO of GIAC Enterprises, alarmed by reports from a newly installed, host-based firewall on his MacBook Pro, commissioned an investigation into the security of GIAC Enterprises endpoints.