Student White Papers

STI master's program candidates conduct research that is relevant, has real-world impact, and often provides cutting-edge advancements to the field of cybersecurity, all under the guidance and review of our world-class instructors. Here are some highlights of their recent findings.

Don't Knock Bro
By Brian Nafziger
December 12, 2018

  • Today's defenders often focus detection on host-level tools and techniques, which requires host logging setup and management. However, network-level techniques may provide an alternative that requires no host changes, and the Bro Network Security Monitor (NSM) allows defenders to focus detection techniques at the network level. Port-knocking is an old method for controlling a concealed backdoor on a system using a defined sequence of packets to various ports. Unsurprisingly, old methods still offer value, and malware, defenders, and attackers alike still use port-knocking. Current port-knocking detection relies on traffic data-mining techniques that exist only in academic writing, with no applicable tools. Since Bro is a network-level tool, it should be possible to adapt these data-mining techniques to detect port-knocking within Bro. This research documents the process of creating and confirming a network-level port-knocking detection with Bro, providing an immediate and accessible detection technique for organizations.
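To make the port-knocking idea concrete, here is a minimal Python sketch of the kind of sequence detection the paper builds in Bro. The knock ports, time window, and event shape below are assumptions for illustration, not values from the paper:

```python
# Hypothetical knock sequence (illustrative, not from the paper): three
# connection attempts to these ports, in order, within a short window.
KNOCK_SEQUENCE = [7000, 8000, 9000]
WINDOW_SECONDS = 10.0

def detect_knocks(events, sequence=KNOCK_SEQUENCE, window=WINDOW_SECONDS):
    """events: iterable of (timestamp, src_ip, dst_port) connection attempts.
    Returns the list of source IPs that completed the knock sequence."""
    progress = {}  # src_ip -> (next index into sequence, first-knock time)
    knockers = []
    for ts, src, port in sorted(events):
        idx, start = progress.get(src, (0, ts))
        if idx > 0 and ts - start > window:
            idx, start = 0, ts  # window expired, start over
        if port == sequence[idx]:
            if idx == 0:
                start = ts
            idx += 1
            if idx == len(sequence):
                knockers.append(src)  # full sequence observed
                idx = 0
        else:
            idx = 0  # simplification: any wrong port resets the state
        progress[src] = (idx, start)
    return knockers
```

A Bro implementation would apply the same per-source state machine to live connection events rather than an offline list; the simplification noted in the comment (a wrong port fully resets state) is one of the details a production detector would need to handle more carefully.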

A Swipe and a Tap: Does Marketing Easier 2FA Increase Adoption?
By Preston Ackerman
November 19, 2018

  • Data breaches and Internet-enabled fraud remain a costly and troubling issue for businesses and home end-users alike. Two-factor authentication (2FA) has long held promise as one of the most viable solutions that enables ordinary users to implement extraordinary protection. A security industry push for widespread 2FA availability has resulted in the service being offered free of charge on most major platforms; however, user adoption remains low. A previous study (Ackerman, 2017) indicated that awareness videos can influence user behavior by providing a clear message which outlines personal risks, offers a mitigation strategy, and demonstrates the ease of implementing the mitigating measure. Building on that previous work, this study, focused on younger millennials between 21 and 26 years of age, seeks to reveal additional insights by designing experiments around the following key questions: 1) Does including a real-time implementation demonstration increase user adoption? 2) Does marketing the convenient push notification form of 2FA, rather than the popular SMS text method, increase user adoption? To address these questions, a two-phase study exposed groups of users to different video messages advocating use of 2FA. Each phase of the survey collected data measuring self-efficacy, fear, response costs and efficacy, perceived threat vulnerability and severity, and behavioral intent. The second phase also collected survey data regarding actual 2FA adoption. The insights derived from subsequent analysis could be applicable not just to increasing 2FA adoption but to security awareness programs more generally.

Microsoft DNS Logs Parsing and Analysis: Establishing a Standard Toolset and Methodology for Incident Responders
By Shelly Giesbrecht
November 2, 2018

  • Microsoft DNS request and response event logs are frequently ignored by incident responders within an investigation due to a historical reputation of being hard to parse and analyze. The fundamental importance of DNS to networking and the functioning of the Internet suggests this oversight could lead to a lack of crucial contextual information in an investigative timeline. This paper seeks to define a best practice for parsing, exporting and analyzing Microsoft DNS Debug and Analytical logs through the comparison of existing tool combinations to DNSplice, a purpose-built utility coded during the development of this paper. Findings suggest that DNSplice is superior to other toolsets tested where time to completion is a critical factor in the investigative process. Further research is required to determine if the findings are still valid on larger datasets or different analysis hardware.
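The "hard to parse" reputation comes from the free-text layout of the Debug log. As a hedged illustration only, the sketch below parses one plausible line shape; the actual format varies by Windows version and logging options, and the sample line here is invented for the example, not taken from the paper:

```python
import re

# One plausible Microsoft DNS debug-log line. The exact layout varies by
# Windows version; this sample is an assumption for illustration.
SAMPLE = ("2/20/2018 2:01:08 PM 0E5C PACKET  0000008EA36C0D80 UDP Rcv "
          "10.1.1.50    0002   Q [0001   D   NOQUERY] A      "
          "(8)intranet(7)example(3)com(0)")

LINE_RE = re.compile(
    r"^(?P<date>\S+) (?P<time>\S+ [AP]M) \S+ PACKET\s+\S+ "
    r"(?P<proto>UDP|TCP) (?P<dir>Snd|Rcv) (?P<ip>\S+)\s+\S+\s+"
    r"(?P<qr>Q|R) \[.*?\] (?P<qtype>\S+)\s+(?P<name>\(.*\))$")

def decode_name(raw):
    """Convert '(8)intranet(7)example(3)com(0)' to 'intranet.example.com'."""
    labels = re.findall(r"\((\d+)\)([^()]*)", raw)
    return ".".join(label for _, label in labels if label)

def parse_line(line):
    """Return a dict of fields for a matching line, or None."""
    m = LINE_RE.match(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["name"] = decode_name(rec["name"])
    return rec
```

Even this toy parser shows why a purpose-built tool like DNSplice is attractive: the label-count name encoding and whitespace-delimited fields defeat naive column splitting.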

Tearing up Smart Contract Botnets
By Jonathan Sweeny
October 22, 2018

  • The distributed resiliency of smart contracts on private blockchains is enticing to bot herders as a method of maintaining a capable communications channel with the members of a botnet. This research explores the weaknesses that are inherent to this approach of botnet management. These weaknesses, when targeted properly by law enforcement or malware researchers, could limit the capabilities and effectiveness of the botnet. Depending on the weakness targeted, the results vary from partial takedown to total dismantlement of the botnet.

To Block or not to Block? Impact and Analysis of Actively Blocking Shodan Scans
By Andre Shori
October 22, 2018

  • This paper details an experiment constructed to evaluate the effectiveness of blocking Shodan search engine scans in reducing overall attack traffic volumes. Shodan is considered part of an attacker’s toolset, and there is a persistent perception that blocking Shodan scans will reduce an organization’s attack surface. An attempt was made to determine what effect, if any, such a block would have by comparing attacker traffic before and after implementing a block on Shodan scans, and by determining the complexity of performing such a block. The analysis here may provide defenders and managers with useful data when deciding whether or not to devote resources to blocking Shodan or other similar internet-connected device search engines.

Generating Anomalies Improves Return on Investment: A Case Study for Implementing Honeytokens
By Wes Earnest
October 11, 2018

  • Putting the right information security architecture into practice within an organization can be a daunting challenge. Many organizations have implemented a Security Information and Event Management (SIEM) to comply with the logging requirements of various security standards, only to find that it does not meet their information security expectations. According to a recent survey, more than half of respondents say they are not satisfied with their organization's SIEM. The following case study deconstructs these logging requirements and the assumptions that lead to a typical SIEM implementation, and discusses an alternative approach focused on improving the organization’s return on investment, decreasing security risk, and decreasing mean time to detection of a potential security breach.

Testing Web Application Security Scanners against a Web 2.0 Vulnerable Web Application
By Edmund Foster
October 11, 2018

  • Web application security scanners are used to perform proactive security testing of web applications. Their effectiveness is far from certain, and few studies have tested them against modern 'Web 2.0' technologies, which present significant challenges to scanners. In this study, three web application security scanners were tested in 'point-and-shoot' mode against a Web 2.0 vulnerable web application with AJAX and HTML use cases. Significant variations in performance were observed, and almost three-quarters of vulnerabilities went undetected. The web application security scanners did not identify Stored XSS, OS Command, Remote File Inclusion, or Integer Overflow vulnerabilities. This study supports the recommendation to combine multiple web application security scanners and use them in conjunction with a specific scanning strategy.

All-Seeing Eye or Blind Man? Understanding the Linux Kernel Auditing System
By David Kennel
September 21, 2018

  • The Linux kernel auditing system provides powerful capabilities for monitoring system activity. While the auditing system is well documented, the manual pages, user guides, and much of the published writings on the audit system fail to provide guidance on the types of attacker-related activities that are, and are not, likely to be logged by the auditing system. This paper uses simulated attacks and analyzes the logged artifacts for the Linux kernel auditing system in its default state and when configured using the Controlled Access Protection Profile (CAPP) and the Defense Information Systems Agency’s (DISA) Security Technical Implementation Guide (STIG) auditing rules. This analysis provides a clearer understanding of the capabilities and limitations of the Linux audit system in detecting various types of attacker activity and helps to guide defenders on how to best utilize the Linux auditing system.

Which YARA Rules Rule: Basic or Advanced?
By Chris Culling
August 10, 2018

  • YARA rules, if used effectively, can be a powerful tool in the fight against malware. However, it appears that the majority of individuals who use YARA write only the most basic of rules, instead of taking advantage of YARA’s full functionality. Basic YARA rules, which focus primarily on identifying malware signatures via detection of predetermined strings within the target file, folder, or process, can be evaded as malware variants are created. Advanced YARA rules, on the other hand, which often include signatures as well, also focus on the malware’s behavior and characteristics, such as size and file type. While it is not uncommon for strings within malware to change, it is much rarer that its primary behavior will. After analyzing multiple samples of two different malware strains within the same family, it became clear that using both basic and advanced YARA rules is the most effective way for users and analysts to implement this powerful tool. As there are a large number of advanced capabilities contained within YARA, this paper will focus on easy-to-use, advanced features, including YARA's Portable Executable (PE) module, to highlight some of the more powerful aspects of YARA. While it takes more time and effort to learn and utilize advanced YARA rules, in the long run, this method is a worthwhile investment towards a safer networking environment.
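YARA's own rule language expresses these checks directly; purely to illustrate the basic-versus-advanced distinction in this listing's terms, the following Python sketch contrasts a string-only "basic" check with an "advanced" check that, like a rule using YARA's pe module and filesize keyword, also requires PE header characteristics. All strings and size limits here are hypothetical:

```python
import struct

def basic_match(data, strings):
    """'Basic rule' analogue: fire if any known string appears in the sample."""
    return any(s in data for s in strings)

def advanced_match(data, strings, max_size=2_000_000):
    """'Advanced rule' analogue: additionally require PE characteristics
    (file size, DOS 'MZ' magic, 'PE\\0\\0' signature), so a renamed string
    embedded in a non-PE file cannot trigger it."""
    if len(data) > max_size or len(data) < 0x40:
        return False
    if data[:2] != b"MZ":  # DOS header magic
        return False
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]  # PE header offset
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        return False
    return basic_match(data, strings)
```

The equivalent real YARA rule would pair a `strings:` section with conditions on `filesize` and pe-module fields; the point of the sketch is only that the advanced condition narrows matches to files that structurally behave like the malware, not just files that contain its strings.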

Times Change and Your Training Data Should Too: The Effect of Training Data Recency on Twitter Classifiers
By Ryan O'Grady
July 11, 2018

  • Sophisticated adversaries are moving their botnet command and control infrastructure to social media microblogging sites such as Twitter. As security practitioners work to identify new methods for detecting and disrupting such botnets, including machine-learning approaches, we must better understand what effect training data recency has on classifier performance. This research investigates the performance of several binary classifiers and their ability to distinguish between non-verified and verified tweets as the offset between the age of the training data and test data changed. Classifiers were trained on three feature sets: tweet-only features, user-only features, and all features. Key findings show that classifiers perform best at +0 offset, feature importance changes over time, and more features are not necessarily better. Classifiers using user-only features performed best, with a mean Matthews correlation coefficient of 0.95 ± 0.04 at +0 offset, 0.58 ± 0.43 at −8 offset, and 0.51 ± 0.21 at +8 offset. The R² values are 0.90, 0.34, and 0.26, respectively. Thus, the classifiers tested with +0 offset accounted for 56% to 64% more variance than those tested with −8 and +8 offset. These results suggest that classifier performance is sensitive to the recency of the training data relative to the test data. Further research is needed to replicate this experiment with botnet vs. non-botnet tweets to determine if similar classifier performance is possible and the degree to which performance is sensitive to training data recency.
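The Matthews correlation coefficient reported above is computed directly from confusion-matrix counts; a minimal sketch of the standard formula:

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Returns 0.0 when any marginal is empty (degenerate denominator)."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

MCC ranges from −1 to +1 and, unlike accuracy, stays informative on imbalanced classes, which is presumably why the study reports it alongside R².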

Extracting Timely Sign-in Data from Office 365 Logs
By Mark Lucas
May 22, 2018

  • Office 365 is quickly becoming a repository of valuable organizational information, including data that falls under multiple privacy laws. Timely detection of a compromised account, stopping the bad guy before data is exfiltrated or destroyed or the account is used for nefarious purposes, is the difference between an incident and a compromise. Microsoft provides audit logging and alerting tools that can help system administrators find these incidents. An examination of the efficacy and efficiency of these tools, along with their shortcomings and advantages, provides insight into how best to use them to protect individual accounts and the organization as a whole.

Evaluation of Comprehensive Taxonomies for Information Technology Threats
By Steven Launius
March 26, 2018

  • Categorization of all information technology threats can improve communication of risk for an organization’s decision-makers who must determine the investment strategy of security controls. While there are several comprehensive taxonomies for grouping threats, there is an opportunity to establish the foundational terminology and perspective for communicating threats across the organization. This is important because confusion about information technology threats poses a direct risk of damaging an organization’s operational longevity. In order for leadership to allocate security resources to counteract prevalent threats in a timely manner, they must understand those threats quickly. A study that investigates categorization techniques of information technology threats for nontechnical decision-makers through a qualitative review of grouping methods for published threat taxonomies could remedy the situation.

Pick a Tool, the Right Tool: Developing a Practical Typology for Selecting Digital Forensics Tools
By J. Richard “Rick” Kiper, Ph.D.
March 16, 2018

  • One of the most common challenges for a digital forensic examiner is tool selection. In recent years, examiners have enjoyed a significant expansion of the digital forensic toolbox – in both commercial and open source software. However, the increase of digital forensics tools did not come with a corresponding organizational structure for the toolbox. As a result, examiners must conduct their own research and experiment with tools to find one appropriate for a particular task. This study collects input from forty-six practicing digital forensic examiners to develop a Digital Forensics Tools Typology, an organized collection of tool characteristics that can be used as selection criteria in a simple search engine. In addition, a novel method is proposed for depicting quantifiable digital forensic tool characteristics.

PCAP Next Generation: Is Your Sniffer Up to Snuff?
By Scott D. Fether
March 16, 2018

  • The PCAP file format is widely used for packet capture within the network and security industry, but it is not the only standard. The PCAP Next Generation (PCAPng) Capture File Format is a refreshing improvement that adds extensibility, portability, and the ability to merge and append data to a wire trace. While Wireshark has led the way in supporting the new format, other tools have been slow to follow. With advantages such as the ability to capture from multiple interfaces, improved time resolution, and the ability to add per-packet comments, support for the PCAPng format should be developing more quickly than it has. This paper describes the new standard, displays methods to take advantage of new features, introduces scripting that can make the format usable, and makes the argument that migration to PCAPng is necessary.
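The extensibility the abstract describes comes from PCAPng's block structure. A small sketch of parsing the mandatory Section Header Block, using the constants from the pcapng specification (block type 0x0A0D0D0A, byte-order magic 0x1A2B3C4D):

```python
import struct

def parse_shb(buf):
    """Parse a PCAPng Section Header Block per the pcapng spec."""
    block_type, = struct.unpack_from("<I", buf, 0)
    if block_type != 0x0A0D0D0A:  # SHB block type is endian-independent
        raise ValueError("not a PCAPng Section Header Block")
    # The byte-order magic tells us the endianness of everything else.
    bom, = struct.unpack_from("<I", buf, 8)
    if bom == 0x1A2B3C4D:
        endian = "<"
    elif bom == 0x4D3C2B1A:
        endian = ">"
    else:
        raise ValueError("bad byte-order magic")
    total_len, = struct.unpack_from(endian + "I", buf, 4)
    major, minor = struct.unpack_from(endian + "HH", buf, 12)
    return {"total_len": total_len, "version": (major, minor),
            "endian": "little" if endian == "<" else "big"}

# Build a minimal little-endian SHB (28 bytes, no options): type, total
# length, byte-order magic, version 1.0, section length -1 (unknown),
# then the trailing copy of the total length.
shb = struct.pack("<IIIHHqI", 0x0A0D0D0A, 28, 0x1A2B3C4D, 1, 0, -1, 28)
```

The trailing duplicate of the block length is what makes appending and backward scanning cheap, one of the format advantages the paper argues for.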

Bug Bounty Programs: Enterprise Implementation
By Jason Pubal
January 17, 2018

  • Bug bounty programs are incentivized, results-focused programs that encourage security researchers to report security issues to the sponsoring organization. These programs create a cooperative relationship between security researchers and organizations that allows the researchers to receive rewards for identifying application vulnerabilities. Bug bounty programs have gone from obscurity to being embraced as a best practice in just a few years: application security maturity models have added bug bounty programs, and there are standards for vulnerability disclosure best practices. By leveraging a global community of researchers available 24 hours a day, 7 days a week, information security teams can continuously deliver application security assessments, keeping pace with agile development and continuous-integration deployments and complementing existing controls such as penetration testing and source code reviews.

Container Intrusions: Assessing the Efficacy of Intrusion Detection and Analysis Methods for Linux Container Environments
By Alfredo Hickman
January 13, 2018

  • The unique and intrinsic methods by which Linux application containers are created, deployed, networked, and operated do not lend themselves well to the conventional application of methods for conducting intrusion detection and analysis in traditional physical and virtual machine networks. While similarities exist in some of the methods used to perform intrusion detection and analysis in conventional networks as compared to container networks, the effectiveness of the two has not been thoroughly measured and assessed; this presents a gap in application container security knowledge. By researching the efficacy of these methods as implemented in container networks compared to traditional networks, this research will provide empirical evidence to identify the gap and provide data useful for identifying and developing new and more effective methods to secure application container networks.

Looking Under the Rock: Deployment Strategies for TLS Decryption
By Chris Farrell
January 13, 2018

  • Attackers can freely exfiltrate confidential information all while under the guise of ordinary web traffic. A remedy for businesses concerned about these risks is to decrypt the communication to inspect the traffic, then block it if it presents a risk to the organization. However, these solutions can be challenging to implement. Existing infrastructure, privacy and legal concerns, latency, and differing monitoring tool requirements are a few of the obstacles facing organizations wishing to monitor encrypted traffic. TLS decryption projects can be successful with proper scope definition, an understanding of the architectural challenges presented by decryption, and the options available for overcoming those obstacles.

Digital Forensic Analysis of Amazon Linux EC2 Instances
By Ken Hartman
January 13, 2018

  • Companies continue to shift business-critical workloads to cloud services such as Amazon Web Services Elastic Cloud Computing (EC2). With demand for skilled security engineers at an all-time high, many organizations do not have the capability to do an adequate forensic analysis to determine the root cause of an intrusion or to identify indicators of compromise. To help organizations improve their incident response capability, this paper presents specific tactics for the forensic analysis of Amazon Linux that align with the SANS Finding Malware Step by Step process for Microsoft Windows.

BYOD Security Implementation for Small Organizations
By Raphael Simmons
December 15, 2017

  • The rapid evolution of the mobile industry has caused a shift in the way organizations work across all industry sectors. Bring your own device (BYOD) is a current industry trend that allows employees to use their personal devices, such as laptops, tablets, and mobile phones, to connect to the internal network. The number of external devices that can now connect to a company that implements a BYOD policy has allowed for a proliferation of security risks. The National Institute of Standards and Technology lists these high-level threats and vulnerabilities of mobile devices: lack of physical security controls, use of untrusted mobile devices, use of untrusted networks, use of untrusted applications, interaction with other systems, use of untrusted content, and use of location services. A well-implemented Mobile Device Management (MDM) tool combined with network access controls can be used to mitigate the risks associated with a BYOD policy.

Who's in the Zone? A Qualitative Proof-of-Concept for Improving Remote Access Least-Privilege in ICS-SCADA Environments
By Kevin Altman
December 4, 2017

  • Remote access control in many ICS-SCADA environments is of limited effectiveness, leading to excessive privilege for staff whose responsibilities are bounded by region, site, or device. An inability to implement more restrictive least-privilege access controls may result in unacceptable residual risk from internal and external threats. Security vendors and ICS cybersecurity practitioners have recognized this issue and provide options to address these concerns, such as inline security appliances, network authentication, and user-network-based access control. Each of these solutions reduces privileges but has tradeoffs. This paper evaluates network-based access control combined with security zones and its benefits for existing ICS-SCADA environments. A Proof-of-Concept (PoC) evaluates a promising option that is not widely known or deployed in ICS-SCADA.

Hacking Humans: The Evolving Paradigm with Virtual Reality
By Andrew Andrasik
November 22, 2017

  • Virtual reality (VR) systems are evolving from high-end gaming and military applications to being used in day-to-day business operations and daily life. Cyber security professionals must begin now to prepare proactive threat analysis and incident handling plans that cover information systems and users. Previous compromises illustrate the devastating effects malware can have on the confidentiality, integrity, and availability of information systems. These disastrous consequences may be transferred directly to the user given his or her perception of events. Even in the early stages, VR represents a new paradigm within the information age. Today, users view information systems through a monitor that acts as a window into a virtual environment. Within VR, a user may become completely immersed while absorbing information from all five senses. VR represents a dichotomy that adds a potential human component to an information system compromise. This research project examines offensive tactics, techniques, and procedures, then exploits and extrapolates them to a compromised VR system and the user to illustrate the hazards associated with VR.

Leverage Risk Focused Teams to Strengthen Resilience against Cyber Risks
By Dave Bishop
November 17, 2017

  • Information security, risk management, audit, and business continuity teams must continue to evolve and mature to combat the growing cyber risks impacting business operations. Each team has its own standards and frameworks, but they often don't speak the same language or understand how each group intersects in protecting the organization. This research identifies opportunities to reduce resource duplication and integrate information security and risk-focused teams to strengthen the organization's resilience against cyber risks.

The State of Honeypots: Understanding the Use of Honey Technologies Today
By Andrea Dominguez
November 17, 2017

  • The aim of this study is to fill in the gaps in data on the real-world use of honey technologies. The goal has also been to better understand information security professionals' views of and attitudes towards them. While there is a wealth of academic research into cutting-edge honey technologies, there is a dearth of data related to the practical use of these technologies outside of research laboratories. The data for this research was collected via a survey distributed to information security professionals. This research paper includes details on the design of the survey, its distribution, analysis of the results, insights, lessons learned, and two appendices: the survey in its entirety and a summary of the data collected.

Exploring the Effectiveness of Approaches to Discovering and Acquiring Virtualized Servers on ESXi
By Scott Perry
November 17, 2017

  • As businesses continue to move to virtualized environments, investigators need updated techniques to acquire virtualized servers. These virtualized servers contain a plethora of relevant data and may hold proprietary software and databases that are nearly impossible to recreate. Before an acquisition, investigators sometimes rely on the host administrators to provide them with network topologies and server information. This paper will demonstrate tools and techniques for conducting server and network discovery in a virtualized environment and show how to leverage the software used by administrators to acquire virtual machines hosted on vSphere and ESXi.

Tackling the Unique Digital Forensic Challenges for Law Enforcement in the Jurisdiction of the Ninth U.S. Circuit Court
By John Garris
November 17, 2017

  • The creation of a restrictive digital evidence search protocol by the U.S. Ninth Circuit Court of Appeals - the most stringent in the United States - triggered intense legal debate and caused significant turmoil regarding digital forensics procedures and practices in law enforcement operations. Understanding the Court's legal reasoning and the U.S. Department of Justice's counter-arguments regarding this protocol is critical in appreciating how the tension between privacy concerns and the challenges to law enforcement stand at the center of this unique Information Age issue. By focusing on the Court's core assumption that the seizure and search of electronically stored information are inherently overly intrusive, digital forensics practitioners have a worthy target to focus their efforts in the advancement of digital forensics processes, procedures, techniques, and tool-sets. This paper provides an overview of various proposals, developments, and possible approaches to help address the privacy concerns central to the Court's decision, while potentially improving the overall effectiveness and efficiency of digital forensic operations in law enforcement.

Can the "Gorilla" Deliver? Assessing the Security of Google's New "Thread" Internet of Things (IoT) Protocol
By Kenneth Strayer
October 6, 2017

  • Security incidents associated with Internet of Things (IoT) devices have recently gained high visibility, such as the Mirai botnet that exploited vulnerabilities in remote cameras and home routers. Currently, no industry standard exists to provide the right combination of security and ease-of-use in a low-power, low-bandwidth environment. In 2014, the Thread Group, Inc. released the new Thread networking protocol. Google's Nest Labs recently open-sourced their implementation of Thread in an attempt to become a market standard for the home automation environment. The Thread Group claims that Thread provides improved security for IoT devices. But in what way is this claim true, and how does Thread help address the most significant security risks associated with IoT devices? This paper assesses the new IEEE 802.15.4 "Thread" protocol for IoT devices to determine its potential contributions in mitigating the OWASP Top 10 IoT Security Concerns. It provides developers and security professionals a better understanding of what risks Thread addresses and what challenges remain.

Hardening BYOD: Implementing Critical Security Control 3 in a Bring Your Own Device (BYOD) Architecture
By Christopher Jarko
September 22, 2017

  • The increasing prevalence of Bring Your Own Device (BYOD) architecture poses many challenges to information security professionals. These include, but are not limited to: the risk of loss or theft, unauthorized access to sensitive corporate data, and lack of standardization and control. This last challenge can be particularly troublesome for an enterprise trying to implement the Center for Internet Security (CIS) Critical Security Controls for Effective Cyber Defense (CSCs). CSC 3, Secure Configurations for Hardware and Software on Mobile Devices, Laptops, Workstations and Servers, calls for hardened operating systems and applications. Even in traditional enterprise environments, this requires a certain amount of effort, but it is much more difficult in a BYOD architecture, where computer hardware and software are unique to each employee and company control of that hardware and software is constrained. Still, it is possible to implement CSC 3 in a BYOD environment. This paper will examine options for managing a standard, secure Windows 10 laptop as part of a BYOD program, and will also discuss the policies, standards, and guidelines necessary to ensure the implementation of this Critical Security Control is as seamless as possible.

Botnet Resiliency via Private Blockchains
By Jonny Sweeny
September 22, 2017

  • Criminals operating botnets are persistently in an arms race with network security engineers and law enforcement agencies to make botnets more resilient. Innovative features constantly increase the resiliency of botnets but cannot mitigate all the weaknesses exploited by researchers. Blockchain technology includes features which could improve the resiliency of botnet communications. A trusted, distributed, resilient, fully-functioning command and control communication channel can be achieved using the combined features of private blockchains and smart contracts.

OSSIM: CIS Critical Security Controls Assessment in a Windows Environment
By Kevin Geil
September 22, 2017

  • Use of a Security Information and Event Management (SIEM) or log management platform is a recommendation common to several of the “CIS Critical Security Controls For Effective Cyber Defense” (2016). Because the CIS Critical Security Controls (CSC) focus on automation, measurement, and continuous improvement of control application, a SIEM is a valuable tool. AlienVault's Open Source SIEM (OSSIM) is free and capable, making it a popular choice for administrators seeking experience with SIEM. While there is a great deal of documentation on OSSIM, specific information that focuses on exactly what events to examine, and then how to report findings, is not readily accessible. This paper uses a demo environment to provide specific examples and instructions for using OSSIM to assess a CIS Critical Security Controls implementation in a common environment: a Windows Active Directory domain. The 20 Critical Security Controls can be mapped to other controls in most compliance frameworks and guidelines; therefore, the techniques in this document should be applicable across a wide variety of control implementations.

Trust No One: A Gap Analysis of Moving IP-Based Network Perimeters to A Zero Trust Network Architecture
By John Becker
September 22, 2017

  • Traditional IP-based access controls (e.g., firewall rules based on source and destination addresses) have defined the network perimeter for decades. Threats have evolved to evade and bypass these IP restrictions using techniques such as spear phishing, malware, credential theft, and lateral movement. As these threats evolve, so have the demands from end users for increased accessibility: remote employees require secure access to internal resources, cloud services have moved the perimeter outside of the enterprise network, and the DevOps movement has emphasized speed and agility over up-front network designs. This paper identifies implementation gaps for organizations in the discovery phase of migrating to identity-based access controls as described by leading cloud companies.

A Spicy Approach to WebSockets: Enhancing Bro's WebSockets Network Analysis by Generating a Custom Protocol Parser with Spicy
By Jennifer Gates
September 22, 2017

  • Although the Request for Comments (RFC) defining WebSockets was released in 2011, there has been little focus on using the Bro Intrusion Detection System (IDS) to analyze WebSockets traffic. However, there has been progress in exploiting the WebSockets protocol. The ability to customize and expand Bro’s capabilities to analyze new protocols is one of its chief benefits. The developers of Bro are also working on a new framework called Spicy that allows security professionals to generate new protocol parsers. This paper focuses on the development of Spicy and Bro scripts that allow visibility into WebSockets traffic. The research conducted compared the data that can be logged with existing Bro protocol analyzers to data that can be logged after writing a WebSockets protocol analyzer in Spicy. The research shows increased effectiveness in detecting malicious WebSockets traffic using Bro when the traffic is parsed with a Spicy script. Writing Bro logging scripts tailored to a particular WebSockets application further increases their effectiveness.
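A Spicy grammar for WebSockets essentially describes the frame layout defined in RFC 6455. To show what such a parser must handle, here is a hedged Python sketch of the same wire format (header bits, extended lengths, client-side masking); the sample frame at the bottom is constructed for illustration:

```python
import struct

def parse_ws_frame(buf):
    """Parse one WebSocket frame (RFC 6455) and unmask its payload."""
    b0, b1 = buf[0], buf[1]
    fin = bool(b0 & 0x80)
    opcode = b0 & 0x0F           # 0x1 = text, 0x2 = binary, 0x8 = close
    masked = bool(b1 & 0x80)     # client-to-server frames must be masked
    length = b1 & 0x7F
    offset = 2
    if length == 126:            # 16-bit extended payload length
        length, = struct.unpack_from(">H", buf, offset)
        offset += 2
    elif length == 127:          # 64-bit extended payload length
        length, = struct.unpack_from(">Q", buf, offset)
        offset += 8
    key = buf[offset:offset + 4] if masked else b""
    start = offset + len(key)
    payload = buf[start:start + length]
    if masked:                   # XOR-unmask with the 4-byte key
        payload = bytes(b ^ key[i % 4] for i, b in enumerate(payload))
    return {"fin": fin, "opcode": opcode, "payload": payload}

# Build a masked client text frame carrying b"ping".
key = b"\xde\xad\xbe\xef"
frame = (bytes([0x81, 0x80 | 4]) + key +
         bytes(b ^ key[i % 4] for i, b in enumerate(b"ping")))
```

The masking step is why generic string matching on raw traffic misses WebSockets payloads: an IDS must actually parse and unmask frames, which is exactly the visibility the Spicy-generated analyzer adds to Bro.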

Does Network Micro-segmentation Provide Additional Security?
By Steve Jaworski
September 15, 2017

  • Network segmentation is the practice of taking a large group of hosts and creating smaller groups that can communicate with each other without traversing a security control. Each smaller group of hosts has defined security controls, and the groups are independent of each other. Network micro-segmentation takes this further by configuring controls around individual hosts. The goal of network micro-segmentation is to provide more granular security and reduce an attacker's ability to easily compromise an entire network. If an attacker successfully compromises a host, he or she is limited to the network segment on which the host resides. If the host resides in a micro-segment, then the attacker is restricted to only that host. This paper will discuss what network segmentation and network micro-segmentation are, where they apply, whether they add a layer of security, and the levels of complexity they introduce.

HL7 Data Interfaces in Medical Environments: Attacking and Defending the Achilles' Heel of Healthcare
By Dallas Haselhorst
September 12, 2017

  • On any given day, a hospital operating room can be chaotic. The atmosphere can make one's head spin with split-second decisions. In the same hospital environment, medical data also whizzes around, albeit virtually. Beyond the headlines involving medical device insecurities and hospital breaches, healthcare communication standards are equally insecure. This fundamental design flaw places patient data at risk in nearly every hospital worldwide. Without protections in place, a hospital visit today could become a patient's worst nightmare tomorrow. Could an attacker collect the data and sell it to the highest bidder for credit card or tax fraud? Or do they have far more malicious plans, such as causing bodily harm? Regardless of the intentions, healthcare data is under attack, and it is highly vulnerable. This research focuses on attacking and defending HL7, the unencrypted and unverified data standard used in healthcare for nearly all system-to-system communications.

HL7 Data Interfaces in Medical Environments: Understanding the Fundamental Flaw in Healthcare
By Dallas Haselhorst
September 12, 2017

  • Ask healthcare IT professionals where the sensitive data resides and most will inevitably direct attention to a hardened server or database with large amounts of protected health information (PHI). The respondent might even know details about data storage, backup plans, etc. Asked the same question, a penetration tester or security expert may provide a similar answer before discussing database or operating system vulnerabilities. Fortunately, there is likely nothing wrong with the data at that point in its lifetime. It potentially sits on a fully encrypted disk, protected by usernames and passwords, and it might have audit-level tracking enabled. The server may also have some level of segmentation from non-critical servers or access restrictions based on source IP addresses. But how did those bits and bytes of healthcare data get to that hardened server? Typically, in a way no one would ever expect... 100% unencrypted and unverified. HL7 is the fundamentally flawed, insecure standard used throughout healthcare for nearly all system-to-system communications. This research examines the HL7 standard, potential attacks on the standard, and why medical records require better protection than current efforts provide.
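HL7 v2 messages are plain delimited text, which is what makes them so easy to read off the wire. A minimal Python sketch (field values invented) shows how little work is needed to recover PHI from an unencrypted message:

```python
def parse_hl7(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: [fields]}.

    HL7 v2 is delimited text: segments end with <CR>, fields are
    separated by '|' and components by '^' -- nothing is encrypted
    or integrity-checked in transit by the standard itself.
    """
    segments = {}
    for seg in filter(None, message.split("\r")):
        fields = seg.split("|")
        segments[fields[0]] = fields
    return segments

# A made-up ADT (admit/discharge/transfer) message
msg = ("MSH|^~\\&|HIS|HOSPITAL|LAB|HOSPITAL|20170912||ADT^A01|0001|P|2.3\r"
       "PID|1||12345^^^HOSPITAL||DOE^JANE||19800101|F")
parsed = parse_hl7(msg)

# PID-5 is the patient name, readable by anyone who can sniff the segment
family, given = parsed["PID"][5].split("^")[:2]
print(family, given)  # DOE JANE
```

Anyone positioned on the interface network can do exactly this, which is the fundamental flaw the research examines.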

When a picture is worth a thousand products: Image protection in a digital age
By Shawna Turner
September 12, 2017

  • Today, a lack of fashion-industry-specific information security controls and legal protections puts fashion companies at significant risk of intellectual property theft and counterfeiting. This risk is only growing as traditional methods of manufacturing rapidly evolve toward digital models of design and mass production, using Industrial Control System (ICS) approaches for mass production. As mass production moves to digital manufacturing, the loss of new-product 2D and 3D imagery, together with the speed of those losses and the lack of traceability around them, could significantly impact corporate bottom lines and risk profiles.

A Technical Approach at Securing SaaS using Cloud Access Security Brokers
By Luciana Obregon
September 6, 2017

  • The adoption of cloud services allows organizations to become more agile in the way they conduct business, providing scalable, reliable, and highly available services or solutions for their employees and customers. Cloud adoption significantly reduces total cost of ownership (TCO) and minimizes hardware footprint in data centers. This paradigm shift has left security professionals securing abstract environments for which conventional security products are no longer effective. The goal of this paper is to analyze a set of cloud security controls and security deployment models for SaaS applications that are purely technical in nature while developing practical applications of such controls to solve real-world problems facing most organizations. The paper will also provide an overview of the threats targeting SaaS, present use cases for SaaS security controls, test cases to assess effectiveness, and reference architectures to visually represent the implementation of cloud security controls.

Packet Capture on AWS
By Teri Radichel
August 14, 2017

  • Companies using AWS (Amazon Web Services) will find that traditional means of full packet capture using span ports are not possible. As defined in the AWS Service Level Agreement, Amazon runs certain aspects of the cloud platform and does not give customers access to physical networking hardware. Although access to physical network equipment is limited, packet capture is still possible on AWS but needs to be architected in a different way. Instead of using span ports, security professionals can leverage the software that runs on top of the cloud platform. The tools and services provided by AWS may facilitate more automated, cost-effective, scalable packet capture solutions for some companies when compared to traditional data center approaches.

Complement a Vulnerability Management Program with PowerShell
By Colm Kennedy
August 10, 2017

  • A vulnerability management program is a critical task that all organizations should be running. Part of this program involves the need to patch systems regularly and to keep installed software up to date. Once a vulnerability management program is in place, organizations need to remediate discovered vulnerabilities quickly. Occasionally, some discovered vulnerabilities are false positives, and the problem with false positives is that manually vetting them is time-consuming. Tools are available that assist in showing what patches may be missing, such as SCCM, but they can be rather costly. For organizations concerned that such programs would hurt their budgets, there are free options available. PowerShell is free software that, if utilized, can complement an organization's vulnerability management program by assisting in scanning for unpatched systems. This paper presents a PowerShell script that gives administrators further insight into which systems are unpatched and streamlines investigations of possible false positives, at no additional cost.
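The paper's script is PowerShell; the core comparison it performs, diffing each host's installed patches against a required baseline, can be sketched in a few lines. This Python version is only illustrative, and the KB numbers and host names are hypothetical:

```python
# Hypothetical data: KB identifiers and hosts are invented, not a real baseline.
required = {"KB4012212", "KB4019264", "KB4022719"}
hosts = {
    "host-a": {"KB4012212", "KB4019264", "KB4022719"},
    "host-b": {"KB4012212"},
}

def find_missing_patches(installed, required):
    """Return required patches absent from a host; an empty list means
    the scanner finding is likely a false positive worth closing out."""
    return sorted(required - installed)

report = {host: find_missing_patches(patches, required)
          for host, patches in hosts.items()}
print(report)  # host-b is missing two patches; probably a true positive
```

The set difference is the whole trick: a vulnerability flagged on a fully patched host goes straight into the false-positive queue instead of consuming manual vetting time.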

Forensicating Docker with ELK
By Stefan Winkel
July 17, 2017

  • Docker has made an immense impact on how software is developed and deployed in today's information technology environments. The quick and broad adoption of Docker as part of the DevOps movement has not come without cost. The rate at which vulnerabilities are introduced into the development cycle has increased many times over. While efforts like Docker Notary and Security Testing as a Service are trying to catch up and mitigate some of these risks, Docker container escapes through Linux kernel exploits, like the widespread Dirty COW privilege escalation exploit of late 2016, can be disastrous in cloud and other production environments. Organizations increasingly find themselves needing to forensicate Docker setups as part of incident investigations, and centralized event logging of Docker containers is becoming crucial to successful incident response. This paper explores how to use the Elastic Stack (Elasticsearch, Logstash, and Kibana) as part of incident investigations of Docker images. It will describe the effectiveness of ELK in a forensic investigation of a Docker container escape carried out through Dirty COW.
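Docker's default json-file log driver writes one JSON object per line, which is the raw material Logstash ships into Elasticsearch for this kind of investigation. A Python sketch of that parse step (the sample records are invented):

```python
import json

def parse_docker_log(lines):
    """Parse Docker json-file driver output (one JSON object per line),
    the same records a Logstash pipeline would forward to Elasticsearch."""
    for line in lines:
        rec = json.loads(line)
        yield {"ts": rec["time"], "stream": rec["stream"],
               "msg": rec["log"].rstrip("\n")}

# Invented sample lines in the json-file format
sample = [
    '{"log":"sudo: unknown uid 0\\n","stream":"stderr","time":"2017-07-17T10:00:01Z"}',
    '{"log":"GET /health 200\\n","stream":"stdout","time":"2017-07-17T10:00:02Z"}',
]
events = list(parse_docker_log(sample))
# During triage, stderr lines are often the first place to look
suspicious = [e for e in events if e["stream"] == "stderr"]
print(suspicious[0]["msg"])
```

Centralizing these records matters precisely because a successful container escape can let an attacker tamper with anything left on the host itself.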

Using Docker to Create Multi-Container Environments for Research and Sharing Lateral Movement
By Shaun McCullough
July 3, 2017

  • Docker, a program for running applications in containers, can be used to create multi-container infrastructures that mimic a more sophisticated network for research in penetration techniques. This paper will demonstrate how Docker can be used by information security researchers to build and share complex environments for recreation by anyone. The scenarios in this paper recreate previous research done in SSH tunneling, pivoting, and other lateral movement operations. By using Docker to build sharable and reusable test infrastructure, information security researchers can help readers recreate the research in their own environments, enhancing learning with a more immersive and hands-on research project.

No Safe Harbor: Collecting and Storing European Personal Information in the U.S.
By Alyssa Robinson
April 24, 2017

  • When the European Court of Justice nullified the Safe Harbor Framework in October of 2015, it left more than 4,000 companies in legal limbo regarding their transfer of personal data for millions of European customers (Nakashima, 2015). The acceptance of the Privacy Shield Framework in July of 2016 expands the options for U.S. companies that need to transfer EU personal data to the US but does little to ameliorate the upheaval caused by the Safe Harbor annulment. This paper covers the history of data privacy negotiations between Europe and the United States, providing an understanding of how the current compromises were reached and what threats they may face. It outlines the available mechanisms for data transfer, including Binding Corporate Rules, Standard Contractual Clauses, and the Privacy Shield Framework, and compares their requirements, advantages, and risks. With this information, US organizations considering storing or processing European personal data can choose the transfer mechanism best suited to their situation.

Identifying Vulnerable Network Protocols with PowerShell
By David Fletcher
April 6, 2017

  • Microsoft Windows PowerShell has given rise to several exploit frameworks, such as PowerSploit, PowerView, and PowerShell Empire. However, few of these frameworks investigate network traffic for exploitative potential. Analyzing a small amount of network traffic can lead to the discovery of possible network-based attack vectors such as Virtual Router Redundancy Protocol (VRRP), Dynamic Trunking Protocol (DTP), Link-Local Multicast Name Resolution (LLMNR), and PXE boot attacks, to name a few. How does one gather and analyze this traffic when Windows does not include an integrated packet analysis tool? Microsoft Windows PowerShell includes several network analysis and network traffic related capabilities. This paper will explore the use of these capabilities with the goal of building a PowerShell reconnaissance module that captures, analyzes, and identifies commonly misconfigured protocols, without the need to install a third-party tool in a Microsoft Windows environment.
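The detection side of such a module boils down to matching a handful of header fields. As a rough sketch of that classification logic (in Python rather than PowerShell, with packet summaries reduced to hypothetical tuples rather than real capture output):

```python
# Well-known multicast groups for two of the protocols the paper targets
LLMNR_GROUP = "224.0.0.252"   # LLMNR queries: UDP destination port 5355
VRRP_GROUP = "224.0.0.18"     # VRRP advertisements: IP protocol 112

def classify(proto, dst_ip, dst_port):
    """Flag protocols frequently left enabled and abusable on a LAN."""
    if proto == 17 and dst_port == 5355 and dst_ip == LLMNR_GROUP:
        return "LLMNR"        # spoofable name resolution (Responder-style abuse)
    if proto == 112 and dst_ip == VRRP_GROUP:
        return "VRRP"         # gateway takeover if advertisements go unauthenticated
    return None

# Invented packet summaries: (IP protocol number, destination IP, destination port)
packets = [(17, "224.0.0.252", 5355), (112, "224.0.0.18", None), (6, "10.0.0.5", 443)]
print([classify(*p) for p in packets])  # ['LLMNR', 'VRRP', None]
```

Seeing either protocol on a segment is a cue for the reconnaissance module to report a possible misconfiguration worth hardening.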

Securing the Home IoT Network
By Manuel Leos Rivas
April 5, 2017

  • The Internet of Things (IoT) has proven its ability to cause massive service disruption because of the lack of security in many devices. The vulnerabilities that enable these denial-of-service attacks are often caused by poor or absent security practices when developing or installing the products. The common home network is not designed to protect against the design errors in IoT devices that expose the privacy of their users. The affordable price of single-board computers (SBCs), along with their small power requirements and customization capabilities, can help improve the protection of the home IoT network. SBCs can also add powerful features such as auditing, inspection, authentication, and authorization to improve controls over who and what can have access. A properly configured home-control gateway reduces some common risks associated with IoT, such as vendor-embedded backdoors and default credentials. Having an open source, trusted device with a configuration shared and audited by many experts can reduce many of the bugs and misconfigurations introduced by vendor security program deficiencies.

Auto-Nuke It from Orbit: A Framework for Critical Security Control Automation
By Jeremiah Hainly
March 15, 2017

  • Over 83% of security teams report that the use of automation in security needs to increase within the next three years (Algosec, 2016). With automation becoming a reality for a growing number of companies, there will also be an increased demand for open-source scripts to get started. This paper will provide a framework for prioritizing and developing security automation and will demonstrate this process by creating a script to automate a common information security response procedure: the reimaging of an infected endpoint. The primary function of the script will be to access the application program interface (API) of various enterprise software solutions to speed up the manual tasks involved in performing a reimage.
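One simple way to frame the prioritization step is as arithmetic: rank candidate automations by the analyst hours they return each month. This Python sketch is illustrative only; the task names and timings are invented, not figures from the paper:

```python
# Hypothetical candidates: how often a task runs and how long it takes
# manually versus with an API-driven script.
tasks = [
    {"name": "reimage endpoint", "runs_per_month": 20,
     "minutes_manual": 90, "minutes_automated": 10},
    {"name": "disable account", "runs_per_month": 15,
     "minutes_manual": 15, "minutes_automated": 2},
    {"name": "block hash", "runs_per_month": 5,
     "minutes_manual": 30, "minutes_automated": 5},
]

def hours_saved(task):
    """Monthly analyst hours returned if this task is automated."""
    saved_minutes = task["minutes_manual"] - task["minutes_automated"]
    return task["runs_per_month"] * saved_minutes / 60

ranked = sorted(tasks, key=hours_saved, reverse=True)
print([t["name"] for t in ranked])
```

Under these made-up numbers the endpoint reimage tops the list, which is consistent with the paper choosing it as the demonstration case.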

Cloud Security Monitoring
By Balaji Balakrishnan
March 13, 2017

  • This paper discusses how to apply security log monitoring capabilities to Amazon Web Services (AWS) Infrastructure as a Service (IaaS) cloud environments. It provides an overview of AWS CloudTrail and CloudWatch Logs, which can be stored and mined for suspicious events. Security teams implementing AWS solutions will benefit from applying security monitoring techniques to prevent unauthorized access and data loss. Splunk is used to ingest all AWS CloudTrail and CloudWatch Logs, and machine learning models are used to identify suspicious activities in the AWS cloud infrastructure. The audience for this paper is security teams trying to implement AWS security monitoring.
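To give a flavor of what such monitoring looks for, here is a Python sketch that flags a few well-known suspicious CloudTrail event types. The event subset and sample records are illustrative; they are not the paper's machine learning model:

```python
def flag_events(records):
    """Scan CloudTrail records and surface candidates for investigation:
    tampering with the audit trail itself, or failed console logins."""
    hits = []
    for rec in records:
        name = rec.get("eventName")
        failed_login = (name == "ConsoleLogin" and
                        rec.get("responseElements", {}).get("ConsoleLogin") == "Failure")
        if name in {"DeleteTrail", "StopLogging"} or failed_login:
            hits.append((rec.get("sourceIPAddress"), name))
    return hits

# Invented records with the CloudTrail field names used above
sample = [
    {"eventName": "StopLogging", "sourceIPAddress": "203.0.113.9"},
    {"eventName": "ConsoleLogin", "sourceIPAddress": "198.51.100.7",
     "responseElements": {"ConsoleLogin": "Failure"}},
    {"eventName": "DescribeInstances", "sourceIPAddress": "10.0.0.4"},
]
print(flag_events(sample))
```

In practice the same conditions would be expressed as Splunk searches over the ingested CloudTrail index; the rule logic is the portable part.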

In-Depth Look at Tuckman's Ladder and Subsequent Works as a Tool for Managing a Project Team
By Aron Warren
March 1, 2017

  • Bruce Tuckman's 1965 research on modeling group development, titled "Developmental Sequence in Small Groups," laid out a framework consisting of four stages a group will transition between while members interact with each other: forming, storming, norming, and performing. This paper will describe in detail the original Tuckman model as well as derivative research in group development models. Traditional and virtual team environments will both be addressed to assist IT project managers in understanding how a team evolves over time with a goal of achieving a successful project outcome.

Medical Data Sharing: Establishing Trust in Health Information Exchange
By Barbara Filkins
March 1, 2017

  • Health information exchange (HIE) "allows doctors, nurses, pharmacists, other health care providers and patients to appropriately access and securely share a patient's vital medical information electronically--improving the speed, quality, safety and cost of patient care" (, 2014). The greatest gain in the use of HIE is the ability to achieve interoperability across providers that, except for the care of a given patient, are unrelated. But, by its very nature, HIE also raises concern around the protection and integrity of shared, sensitive data. Trust is a major barrier to interoperability.

Tor Browser Artifacts in Windows 10
By Aron Warren
February 24, 2017

  • The Tor network is a popular, encrypted, worldwide, anonymizing virtual network in existence since 2002 and is used by all facets of society such as privacy advocates, journalists, governments, and criminals. This paper will provide a forensic analysis of the Tor Browser version 5 client on a Windows 10 host for an individual or group interested in remnants left by the software. This paper will utilize various free and commercial tools to provide a detailed analysis of filesystem artifacts as well as a comparison between pre- and post-connection to the Tor network using memory analysis.

OS X as a Forensic Platform
By David M. Martin
February 22, 2017

  • The Apple Macintosh and its OS X operating system have seen increasing adoption by technical professionals, including digital forensic analysts. Forensic software support for OS X remains less mature than that of Windows or Linux. While many Linux forensic tools will work on OS X, instructions for how to configure the tool in OS X are often missing or confusing. OS X also lacks an integrated package management system for command line tools. Python, which serves as the basis for many open-source forensic tools, can be difficult to maintain and easy to misconfigure on OS X. Due to these challenges, many OS X users choose to run their forensic tools from Windows or Linux virtual machines. While this can be an effective and expedient solution, those users miss out on much of the power of the Macintosh platform. This research will examine the process of configuring a native OS X forensic environment that includes many open-source forensic tools, including Bulk Extractor, Plaso, Rekall, Sleuthkit, Volatility, and Yara. This process includes choosing the correct hardware and software, configuring it properly, and overcoming some of the unique challenges of the OS X environment. A series of performance tests will help determine the optimal hardware and software configuration and examine the performance impact of virtualization options.

Indicators of Compromise TeslaCrypt Malware
By Kevin Kelly
February 16, 2017

  • Malware has become a growing concern in a society of interconnected devices and real-time communications. This paper will show how to analyze live ransomware samples and how the malware behaves locally, over time, and within the network. Analyzing live ransomware gives a unique three-dimensional perspective, visually locating crucial signatures and behaviors efficiently. Instead of reverse engineering or parsing the malware executable's infrastructure, live analysis provides a simpler method of rooting out indicators. Ransomware touches just about every file and many of the registry keys, so analysis can be done, but it needs to be focused. Analysis of malware capabilities from different datasets, including process monitoring, flow data, registry key changes, and network traffic, will yield indicators of compromise. These indicators will be collected using various open source tools such as the Sysinternals suite, Fiddler, Wireshark, and Snort, to name a few, and used to produce defensive countermeasures against unwanted advanced adversary activity on a network. A virtual appliance platform with a simulated production Windows 8 OS will be created, infected, and processed to collect indicators for securing enterprise systems. Different tools will leverage these datasets to gather indicators, view malware on multiple layers, contain compromised hosts, and prevent future infections.

Impediments to Adoption of Two-factor Authentication by Home End-Users
By Preston Ackerman
February 10, 2017

  • Cyber criminals have proven to be both capable and motivated to profit from compromised personal information. The FBI has reported that victims have suffered over $3 billion in losses through compromise of email accounts alone (IC3 2016). One security measure which has been demonstrated to be effective against many of these attacks is two-factor authentication (2FA). The FBI, the Department of Homeland Security US Computer Emergency Readiness Team (US-CERT), and the internationally recognized security training and awareness organization, the SANS Institute, all strongly recommend the use of two-factor authentication. Nevertheless, adoption rates of 2FA are low.

Dissect the Phish to Hunt Infections
By Seth Polley
February 3, 2017

  • Internal defense is a perilous problem facing many organizations today. The sole reliance on external defenses is all too common, leaving the internal organization largely unprotected. When internal defense is actually considered, how many think beyond fallible antivirus (AV) or immature data loss prevention (DLP) solutions? Considering the rise of phishing emails and other social engineering campaigns, there is a significantly increased risk that an organization's current external and internal defenses will fail to prevent compromises. How would a cyber security team detect an attacker establishing a foothold within the center of the organization, or undetectable malware being downloaded internally, if a user were to fall for a phishing attempt?

Forensication Education: Towards a Digital Forensics Instructional Framework
By J. Richard “Rick” Kiper
February 3, 2017

  • The field of digital forensics is a diverse and fast-paced branch of cyber investigations. Unfortunately, common efforts to train individuals in this area have been inconsistent and ineffective, as curriculum managers attempt to plug in off-the-shelf courses without an overall educational strategy. The aim of this study is to identify the most effective instructional design features for a future entry-level digital forensics course. To achieve this goal, an expert panel of digital forensics professionals was assembled to identify and prioritize the features, which included general learning outcomes, specific learning goals, instructional delivery formats, instructor characteristics, and assessment strategies. Data was collected from participants using validated group consensus methods such as Delphi and cumulative voting. The product of this effort was the Digital Forensics Framework for Instruction Design (DFFID), a comprehensive digital forensics instructional framework meant to guide the development of future digital forensics curricula.

Superfish and TLS: A Case Study of Betrayed Trust and Legal Liability
By Sandra Dunn
January 24, 2017

  • Superfish, the bloatware/adware included in Lenovo consumer laptops from 2014-2015, intentionally broke TLS, exposed users' personal data to compromise and theft, and altered search result ads in users' browsers, severely damaging Lenovo's brand reputation. There have been other high-profile cases of intentionally modifying and breaking TLS through questionable and deceptive practices, but few generated as much attention or provide as clear an example of a chain of missteps between Lenovo, Superfish, and their customers. A case study of the Superfish mishap exposes the danger, risk, legal liability, and potential for government investigation facing organizations that deploy TLS certificates and keys in ways that break or weaken the security design and put private data or people at risk. The Superfish case further demonstrates the importance of a company's disclosure transparency to avoid accusations of deceptive practices if breaking TLS is required to protect users or an organization's data.

Minimizing Legal Risk When Using Cybersecurity Scanning Tools
By John Dittmer
January 19, 2017

  • When cybersecurity professionals use scanning tools on the networks and devices of organizations, there can be legal risks that need to be managed by individuals and enterprises. Often, scanning tools are used to measure compliance with cybersecurity policies and laws, so they must be used with due care. There are protocols that should be followed to ensure proper use of the scanning tools to prevent interference with normal network or system operations and to ensure the accuracy of the scanning results. Several challenges will be examined in depth, such as measuring scanner accuracy, proper methods of obtaining written consent for scanning, and how to set up a scanning session for optimum examination of systems or networks. This paper will provide cybersecurity professionals and managers with a better understanding of how and when to use the scanning tools while minimizing the legal risk to themselves and their enterprises.

Data Breach Impact Estimation
By Paul Hershberger
January 3, 2017

  • Internal and external auditors spend a significant amount of time planning their audit processes to align their efforts with the needs of the audited organization. The initial phase of that audit cycle is the risk assessment. Establishing a firm understanding of the likelihood and impact of risk guides the audit function and aligns its work with the risks the organization faces. The challenge many auditors and security professionals face is effectively quantifying the potential impact of a data breach to their organization. This paper compares the data breach cost research of the Ponemon Institute and the RAND Corporation, measuring both models against breach costs reported by publicly traded companies under Securities and Exchange Commission (SEC) reporting requirements. The comparisons will show that the RAND Corporation's approach provides organizations with a more accurate and flexible model to estimate the potential cost of data breaches as they relate to the direct cost of investigating and remediating a breach and the indirect financial impact associated with regulatory and legal action. Additionally, the comparison indicates that data breach-related impacts to revenue and stock valuation are only realized in the short term.

Real-World Case Study: The Overloaded Security Professional's Guide to Prioritizing Critical Security Controls
By Phillip Bosco
December 27, 2016

  • Using a real-world case study of a recently compromised company as a framework, we will step inside the aftermath of an actual breach and determine how the practical implementation of Critical Security Controls (CSC) may have prevented the compromise entirely while providing greater visibility inside the attack as it occurred. The breached company's information security "team" consisted of a single over-worked individual, who found it arduous to identify which critical controls he should focus his limited time implementing. Lastly, we will delve into real-world examples, using previously unpublished research, that serve as practical approaches for teams with limited resources to prioritize and schedule which CSCs will provide the largest impact towards reducing the company's overall risk. Ideally, the observations and approaches identified in this research paper will assist security professionals who may be in similar circumstances.

Finding Bad with Splunk
By David Brown
December 16, 2016

  • There is such a deluge of information that it can be hard for information security teams to know where to focus their time and energy. This paper will recommend common Linux and Windows tools to scan networks and systems, store results to local filesystems, analyze results, and pass any new data to Splunk. Splunk will then help security teams narrow in on what has changed within the networks and systems by alerting the security teams to any differences between old baselines and new scans. In addition, security teams may not even be paying attention to controls, like whitelisting blocks, that successfully prevent malicious activities. Monitoring failed application execution attempts can give security teams and administrators early warnings that someone may be trying to subvert a system. This paper will guide the security professional on setting up alerts to detect security events of interest like failed application executions due to whitelisting. To solve these problems, the paper will discuss the first five Critical Security Controls and explain what malicious behaviors can be uncovered as a result of alerting. As the paper progresses through the controls, the security professional is shown how to set up baseline analysis, how to configure the systems to pass the proper data to Splunk, and how to configure Splunk to alert on events of interest. The paper does not revolve around how to implement technical controls like whitelisting, but rather how to effectively monitor the controls once they have been implemented.
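The baseline-versus-new-scan comparison at the heart of this approach is, at bottom, a set difference. A Python sketch with hypothetical port-scan results (host names and ports invented):

```python
# Hypothetical scan output: open TCP ports per host from two scan runs
baseline = {"web01": {22, 80, 443}, "db01": {22, 5432}}
latest = {"web01": {22, 80, 443, 8080}, "db01": {5432}}

def diff_scans(old, new):
    """Report only what changed between runs -- the events worth
    forwarding to Splunk for alerting, instead of the full scan."""
    changes = {}
    for host in old.keys() | new.keys():
        opened = new.get(host, set()) - old.get(host, set())
        closed = old.get(host, set()) - new.get(host, set())
        if opened or closed:
            changes[host] = {"opened": sorted(opened), "closed": sorted(closed)}
    return changes

print(diff_scans(baseline, latest))
# web01 gained 8080 (new listener?); db01 lost 22 (SSH service down?)
```

Forwarding only the delta keeps the Splunk index small and turns every alert into a concrete question: who opened this port, and was it authorized?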

Continuous Monitoring: Build A World Class Monitoring System for Enterprise, Small Office, or Home
By Austin Taylor
December 15, 2016

  • For organizations that wish to prevent data breaches, incident prevention is ideal, but detection of an attempted or successful breach is a must. This paper outlines guidance for network visibility, threat intelligence implementation, and methods to reduce analyst alert fatigue. Additionally, this document includes a workflow for Security Operations Centers (SOC) to efficiently process events of interest, thereby increasing the likelihood of detecting a breach. Methods include Intrusion Detection System (IDS) setup with tips on efficient data collection, sensor placement, and identification of critical infrastructure, along with network and metric visualization. These recommendations are useful for enterprises, small offices, or homes that wish to implement threat intelligence and network analysis.

Detecting Malicious SMB Activity Using Bro
By Richie Cyrus
December 13, 2016

  • Attackers utilize the Server Message Block (SMB) protocol to blend in with network activity, often carrying out their objectives undetected. Post-compromise, attackers use file shares to move laterally, looking for sensitive or confidential data to exfiltrate out of a network. Traditional methods for detecting such activity call for storing and analyzing large volumes of Windows event logs, or deploying a signature-based intrusion detection solution. For some organizations, processing and storing large amounts of Windows events may not be feasible, and pattern-based intrusion detection solutions can be bypassed by malicious entities, potentially failing to detect malicious activity. The Bro Network Security Monitor (Bro) provides an alternative solution, allowing for rapid detection through custom scripts and log data. This paper introduces methods to detect malicious SMB activity using Bro.
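As a rough illustration of the kind of heuristic such Bro scripts encode, here is a Python sketch over entries shaped loosely like Bro's smb_files.log. The field values and action strings are invented for the example, not taken from the paper:

```python
# Administrative shares and file types commonly abused for lateral movement
ADMIN_SHARES = {"ADMIN$", "C$", "IPC$"}
RISKY_EXT = (".exe", ".dll", ".ps1", ".bat")

def suspicious_smb(entries):
    """Flag writes of executable content to administrative shares --
    a classic lateral-movement pattern (e.g., a PsExec-style service copy)."""
    hits = []
    for action, share, name in entries:
        if (action == "FILE_WRITE" and share in ADMIN_SHARES
                and name.lower().endswith(RISKY_EXT)):
            hits.append((share, name))
    return hits

# Invented log entries: (action, share name, file name)
log = [("FILE_WRITE", "ADMIN$", "psexesvc.exe"),
       ("FILE_OPEN", "public", "report.docx")]
print(suspicious_smb(log))
```

In Bro itself the same condition becomes an event handler on SMB file activity, raising a notice the moment the write occurs rather than during an after-the-fact log review.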

Active Defense via a Labyrinth of Deception
By Nathaniel Quist
December 5, 2016

  • A network baseline allows for the identification of malicious activity in real time. However, a baseline requires that every listed action is known and accounted for, a nearly impossible task in any production environment due to an ever-changing application footprint, system and application updates, changing project requirements, and, not least of all, unpredictable user behaviors. Each obstacle presents a significant challenge in the development and maintenance of an accurate, false-positive-free network baseline. To surmount these hurdles, network architects would need to design a network free from continuous change, including changing company requirements, untested system or application updates, and the presence of unpredictable users. Creating a static, never-changing environment is the goal. However, this completely removes the functionality of a production network. Or does it? Within this paper, I will detail how this type of static environment, referred to as the Labyrinth, can be placed in front of a production environment and provide real-time defensive measures against hostile and dispersed attacks from both human actors and automated machines. I expect to prove the Labyrinth is capable of detecting changes in its environment in real time. It will provide a listing of dynamic defensive capabilities: identifying attacking IP addresses, rogue-process start commands, modifications to registry values, and alterations in system memory, while recording an attacker's tactics, techniques, and procedures. At the same time, the Labyrinth will add these values to a block list, protecting the production network lying behind it. Successful accomplishment of these goals will prove the viability and sustainability of a Labyrinth defending network environments (Revelle, 2011).

Next Generation of Privacy in Europe and the Impact on Information Security: Complying with the GDPR
By Edward Yuwono
December 5, 2016

  • Human rights have a strong place within Europe, and among them is the fundamental right to privacy. Over the years, individual privacy has been strengthened through various European directives. With the evolution of privacy continuing in Europe through the release of the General Data Protection Regulation (GDPR), how will the latest iteration of European Union (EU) regulation affect organisations, and what will information security leaders need to do to meet this change? This paper will explore the evolution of privacy in Europe, the objectives and changes this iteration of EU privacy regulation will bring, the challenges organisations will experience, and how information security could be leveraged to satisfy the regulation.

A Checklist for Audit of Docker Containers
By Alyssa Robinson
November 22, 2016

  • Docker and other container technologies are increasingly popular methods for deploying applications in DevOps environments, due to advantages in portability, efficiency in resource sharing and speed of deployment. The very properties that make Docker containers useful, however, can pose challenges for audit, and the security capabilities and best practices are changing rapidly. As adoption of this technology grows, it is, therefore, necessary to create a standardized checklist for audit of Dockerized environments based on the latest tools and recommendations.

Security Assurance of Docker Containers
By Stefan Winkle
November 22, 2016

  • With recent movements like DevOps and the shift towards application security as a service, the IT industry is in the middle of a set of substantial changes in how software is developed and deployed. In the infrastructure space, we see the uptake of lightweight container technology, while application architectures are moving towards distributed microservices. There has been a recent explosion in the popularity of package managers and distributors like OneGet, NPM, RubyGems, and PyPI. More and more, software development depends on small, reusable components written by many different developers and often distributed by infrastructures outside our control. In the midst of this all, we often find application containers like Docker, LXC, and Rocket used to compartmentalize software components. The Notary project, recently introduced in Docker, is built upon the assumption that the software distribution pipeline can no longer be trusted. Notary attempts to protect against attacks on the software distribution pipeline by associating trust and separation of duties with Docker containers. In this paper, we explore the Notary service and take a look at security testing of Docker containers.

Implementing Full Packet Capture
By Matt Koch
November 7, 2016

  • Full Packet Capture (FPC) provides a network defender an after-the-fact investigative capability that other security tools cannot provide. Uses include capturing malware samples, network exploits and determining if data exfiltration has occurred. Full packet captures are a valuable troubleshooting tool for operations and security teams alike. Successful implementation requires an understanding of organization-specific requirements, capacity planning, and delivery of unaltered network traffic to the packet capture system.
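The capacity-planning step mentioned above reduces to simple arithmetic: sustained link rate times retention window, plus some allowance for capture-file overhead. A hedged sketch (the 10% overhead figure is an assumption for illustration, not a value from the paper):

```python
def fpc_storage_bytes(avg_bps, retention_days, overhead=1.1):
    """Rough disk space needed for full packet capture.

    avg_bps: average link utilization in bits per second.
    retention_days: how long captures must be kept.
    overhead: multiplier for pcap headers/indexes (assumed ~10%).
    """
    bytes_per_day = avg_bps / 8 * 86400  # bits/s -> bytes/day
    return bytes_per_day * retention_days * overhead

# e.g. a link averaging 1 Gb/s, retained for 7 days:
tb = fpc_storage_bytes(1e9, 7) / 1e12
print(f"{tb:.1f} TB")  # ~83.2 TB
```

Numbers like these are why requirements gathering comes before hardware selection: retention targets drive storage cost almost linearly.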

Intrusion Detection Through Relationship Analysis
By Patrick Neise
October 24, 2016

  • With the average time to detection of a network intrusion in enterprise networks assessed to be 6-8 months, network defenders require additional tools and techniques to shorten detection time. Perimeter, endpoint, and network traffic detection methods today are mainly focused on detecting individual incidents while security incident and event management (SIEM) products are then used to correlate the isolated events. Although proven to be able to detect network intrusions, these methods can be resource intensive in both time and personnel. Through the use of network flows and graph database technologies, analysts can rapidly gain insight into which hosts are communicating with each other and identify abnormal behavior such as a single client machine communicating with other clients via Server Message Block (SMB). Combining the power of tools such as Bro, a network analysis framework, and neo4j, a native graph database that is built to examine data and its relationships, rapid detection of anomalous behavior within the network becomes possible. This paper will identify the tools and techniques necessary to extract relevant network information, create the data model within a graph database, and query the resulting data to identify potential malicious activity.
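The relationship query described above can be sketched without a full Bro-plus-neo4j deployment. In Cypher one might write something like `MATCH (a:Client)-[:CONNECTED {port: 445}]->(b:Client) RETURN a, b` (a hypothetical schema, not the paper's); the pure-Python equivalent below flags client-to-client SMB, which is rarely legitimate:

```python
# Assumed asset inventory separating clients from servers (illustrative).
CLIENTS = {"10.0.1.10", "10.0.1.11", "10.0.1.12"}

def client_to_client_smb(flows, clients=CLIENTS):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples.
    Returns edges where one client talks SMB directly to another."""
    return [(src, dst) for src, dst, port in flows
            if port == 445 and src in clients and dst in clients]

flows = [
    ("10.0.1.10", "10.0.2.5", 445),   # client -> file server: expected
    ("10.0.1.10", "10.0.1.11", 445),  # client -> client: suspicious
    ("10.0.1.11", "10.0.2.5", 80),
]
print(client_to_client_smb(flows))  # [('10.0.1.10', '10.0.1.11')]
```

A graph database earns its keep when the question involves multi-hop paths (who did the suspicious client touch next?), which flat filtering like this cannot answer cheaply.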

Building a Home Network Configured to Collect Artifacts for Supporting Network Forensic Incident Response
By Gordon Fraser
September 21, 2016

  • A commonly accepted incident response process includes six phases: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. Preparation is key; it sets the foundation for a successful incident response. The incident responder does not want to be left figuring out where to collect the information necessary to quickly assess the situation and respond appropriately, nor hoping that the information needed is available at the level of detail necessary to analyze the situation effectively and make informed decisions on the best course of action. This paper identifies artifacts that are important for supporting network forensics during incident response and discusses an architecture and implementation for a home lab that supports collecting them. It then validates the architecture using an incident scenario.

Using Vagrant to Build a Manageable and Sharable Intrusion Detection Lab
By Shaun McCullough
September 20, 2016

  • This paper investigates how the Vagrant software application can be used by Information Security (InfoSec) professionals looking to provide their audience with an infrastructure environment to accompany their research. InfoSec professionals conducting research or publishing write-ups can provide opportunities for their audience to replicate or walk through the research themselves in their own environment. Vagrant is a popular DevOps tool for providing portable and repeatable production environments for application developers, and may solve the needs of the InfoSec professional. This paper will investigate how Vagrant works, the pros and cons of the technology, and how it is typically used. The paper describes how to build or repurpose three environments, highlighting different features of Vagrant. Finally, the paper will discuss lessons learned.
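For readers unfamiliar with the tool, a Vagrant environment is defined in a single Ruby-DSL `Vagrantfile` checked in alongside the research. A minimal sketch of a two-machine IDS lab follows; the box names, IPs, and the `install_ids.sh` provisioning script are illustrative assumptions, not taken from the paper:

```ruby
# Vagrantfile (sketch): a sensor VM and an attacker VM on a private network.
Vagrant.configure("2") do |config|
  config.vm.define "sensor" do |sensor|
    sensor.vm.box = "ubuntu/trusty64"
    sensor.vm.network "private_network", ip: "192.168.56.10"
    sensor.vm.provision "shell", path: "install_ids.sh"  # hypothetical script
  end
  config.vm.define "attacker" do |attacker|
    attacker.vm.box = "kalilinux/rolling"
    attacker.vm.network "private_network", ip: "192.168.56.20"
  end
end
```

Anyone with Vagrant installed can then reproduce the environment with `vagrant up`, which is precisely the repeatability benefit the paper examines.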

Know Thy Network - Cisco Firepower and Critical Security Controls 1 & 2
By Ryan Firth
September 19, 2016

  • Previously known as the SANS Top 20, the Critical Security Controls are based on real-world attack and security breach data from around the world, and are objectively the most effective technical controls against known cyber-attacks. Due to competing priorities and demands, however, organizations may not have the expertise to figure out how to implement and operationalize the Critical Security Controls in their environments. This paper will help bridge that gap for security and network teams using Cisco Firepower.

Windows Installed Software Inventory
By Jonathan Risto
September 7, 2016

  • The 20 Critical Controls provide a guideline for the controls that need to be placed in our networks to manage and secure our systems. The second control states that there should be a software inventory containing the names and versions of the products installed on all devices within the infrastructure. The challenge for a large number of organizations is maintaining accurate inventory information with minimal impact on tight IT budgets. This paper will discuss the Microsoft Windows command line tools that gather this information and provide example scripts that can be run by the reader.
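As a flavor of what such inventory output looks like, a command such as `wmic product get Name,Version /format:csv` emits CSV that is easy to post-process. The sketch below parses a sample of that shape into (name, version) pairs; the sample rows are illustrative, and the real command must be run on the Windows hosts themselves:

```python
import csv
import io

# Illustrative sample of wmic's CSV output (real output includes a Node column).
SAMPLE = """\
Node,Name,Version
PC01,7-Zip 19.00 (x64),19.00
PC01,Google Chrome,89.0.4389.90
"""

def parse_wmic_csv(text):
    """Turn wmic /format:csv output into a list of (name, version) tuples."""
    rows = csv.DictReader(io.StringIO(text))
    return [(r["Name"], r["Version"]) for r in rows]

inventory = parse_wmic_csv(SAMPLE)
print(inventory)
```

Collected per-host and diffed over time, such lists give the control-2 inventory without new tooling costs.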

In but not Out: Protecting Confidentiality during Penetration Testing
By Andrew Andrasik
August 22, 2016

  • Penetration testing is imperative for organizations committed to security. However, independent penetration testers are rarely greeted with open arms when initiating an assessment. As firms implement the Critical Security Controls or the Risk Management Framework, independent penetration testing will likely become standard practice as opposed to supplemental exercises. Ethical hacking is a common tactic to view a company's network from an attacker's perspective, but inviting external personnel into a network may increase risk. Penetration testers strive to gain superuser privileges wherever possible and utilize thousands of open-source tools and scripts, many of which do not originate from validated sources.

Introduction to Rundeck for Secure Script Executions
By John Becker
August 11, 2016

  • Many organizations today support physical, virtual, and cloud-based systems across a wide range of operating systems. Providing least-privilege access to those systems can mean a complex mesh of sudoers files, profiles, policies, and firewall rules. While configuration management tools such as Puppet or Chef help ensure consistency, they do not inherently simplify the process for users or administrators. Additionally, today's DevOps teams are pushing changes faster than ever, and keeping pace with new services and applications often forces sysadmins to use more general access rules and thus expose broader access than necessary. Rundeck is a web-based orchestration platform with powerful ACLs and SSH-based connectivity to a wide range of operating systems and devices. It couples a simple user interface with DevOps-friendly REST APIs and YAML or XML configuration files. Using Rundeck for server access improves security while keeping pace with rapidly changing environments.

Legal Aspects of Privacy and Security: A Case- Study of Apple versus FBI Arguments
By Muzamil Riffat
June 3, 2016

  • The debate over privacy versus security has been going on for some time now. The matter is complicated because privacy is a subjective phenomenon, shaped by factors such as cultural norms and geographical location. In a paradoxical situation, rapid advancements in technology are fast making technology both the guardian and the invader of privacy. Governments and organizations around the globe are using technology to achieve their objectives in the name of security and convenience. The sporadic fights between the proponents of privacy and security eventually found an avenue for expression: the U.S. court system. In February 2016, the FBI obtained a court order requiring Apple to modify the security features of an iPhone to enable the law enforcement agency to access the contents of the device. Apple, backed by other leading technology firms, vehemently opposed the idea and intended to file a legal appeal against the court order. Before both parties could present their arguments in court, the FBI dropped the case, claiming it was able to access the contents of the device without Apple's assistance. Using FBI vs. Apple as a case study, this paper discusses the legal aspects of both parties' positions. With the pervasiveness of advanced technology, it can reasonably be anticipated that such requests by law enforcement and government agencies will become more frequent. The paper presents the privacy concerns that should be taken into consideration regarding all such requests.

Under The Ocean of the Internet - The Deep Web
By Brett Hawkins
May 27, 2016

  • The Internet was a revolutionary invention, and its use continues to evolve. People around the world use the Internet every day for social media, shopping, email, reading news, and much more. However, such activity makes up only a very small piece of the Internet; the rest is an area known as the Deep Web.

Securing Jenkins CI Systems
By Allen Jeng
April 8, 2016

  • With over 100,000 active installations worldwide, Jenkins has become the top choice for continuous integration and automation. A survey conducted by CloudBees during the 2012 Jenkins Users Conference concluded that 83 percent of respondents consider Jenkins to be mission critical. The remotely exploitable Java deserialization vulnerability disclosed in November 2015 stresses the need to lock down and monitor Jenkins systems. Exploitation of this weakness enables hackers to gain access to critical assets, such as the source code that Jenkins manages. Enabling password security is the general recommendation for securing Jenkins. Unfortunately, this necessary security measure can easily be defeated with a packet sniffer because passwords are transmitted over the wire as clear text. This paper will look at ways to secure Jenkins systems as well as the deployment of intrusion detection systems to monitor critical assets controlled by Jenkins CI systems.

Secure Network Design: Micro Segmentation
By Brandon Peterson
February 29, 2016

  • Hackers, once on a network, often go undetected as they move freely from system to system looking for valuable information to steal. Credentials, intellectual property, and personal information are all at risk. It is generally accepted that the attacker has the upper hand and can eventually penetrate most networks. A secure network design that focuses on micro segmentation can slow the rate at which an attacker moves through a network and provide more opportunities for detecting that movement. Organizations that implement a secure network design will find that the added cost and complexity of micro segmentation is more than offset by a reduction in the number and severity of incidents. In fact, the effort expended in learning, classifying, and segmenting the network adds value and strengthens all of the organization's controls.

Selling Your Information Security Strategy
By David Todd
February 18, 2016

  • It is the Chief Information Security Officer's (CISO) responsibility to identify the gaps between the most significant security threats and vulnerabilities and the organization's current state. The CISO should develop an information security strategy that aligns with the strategic goals of the organization and sell the gap mitigation strategy to executive management and the board of directors. Before embarking on this new adventure, clearly articulate what success looks like for your organization. What is the result you are driving to accomplish? Then develop a strategy to get you there. Take a play directly from the sales organization's playbook: know yourself, know your customer, and know the benefits from your customer's perspective. Following this simple strategy will help the CISO close the deal when selling the information security strategy.

Don't Always Judge a Packet by Its Cover
By Gabriel Sanchez
February 16, 2016

  • Distinguishing between friend and foe as millions of packets traverse a network at any given moment can be a tedious and trying objective. Packets can carry viruses, malware, and botnet traffic, which makes detecting them quickly essential. However, chasing every packet often becomes unmanageable and can lead to many dead ends. Traditional approaches to this problem rely on heuristics or signatures of known-bad traffic, which tend to be ineffective against the advanced attacker. Instead, this paper goes beyond the known bad and describes a general approach to homing in on packets of interest by profiling the behavior of a network. Behavior analysis and profiling of the packets that ordinarily traverse a network can shine light into the shadows where an enemy lurks beyond traditional detection, since knowing the characteristics of your packets can reveal their true intentions.

Security Systems Engineering Approach in Evaluating Commercial and Open Source Software Products
By Jesus Abelarde
January 29, 2016

  • The use of commercial and free open source software (FOSS) is becoming more common in commercial, corporate, and government settings as these organizations develop complex systems, and it carries a set of risks until the system is retired or replaced. Unfortunately, during project development, the amount of security resources and time necessary to accommodate proper security evaluations is usually underestimated. Also, there is no widely used or standardized evaluation process that engineers and scientists can utilize as a guideline, so the evaluation process usually ends up lacking, or differs widely from project to project and company to company. This paper provides a suggested evaluation process and a set of methodologies, along with the associated costs and risks, that projects can utilize as a guideline when they integrate commercial and FOSS products during the system development life cycle (SDLC).

Network Forensics and HTTP/2
By Stefan Winkel
January 18, 2016

  • In May 2015, a major new version of the HTTP protocol, HTTP/2, was published and finalized in RFC 7540. HTTP/2, based on the SPDY protocol primarily developed by Google, is a multiplexed, binary protocol for which TLS has become the de facto mandatory standard. Most modern web browsers (e.g., Chrome, Firefox, Edge) now support HTTP/2, and some Fortune 500 companies like Google, Facebook, and Twitter have already enabled HTTP/2 traffic to and from their servers. We have also seen a recent uptake in security breaches related to HTTP data compression (e.g., CRIME, BREACH), and compression is part of HTTP/2. From a network perspective, there is currently limited support for analyzing HTTP/2 traffic. This paper will explore how best to analyze such traffic and discuss how the new version might change the future of network forensics.
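The "binary protocol" point is concrete: unlike HTTP/1.x's text lines, every HTTP/2 frame begins with a fixed 9-octet header (RFC 7540 §4.1). A minimal sketch of parsing that header, the kind of primitive any HTTP/2-aware forensics tooling needs:

```python
import struct

def parse_frame_header(buf):
    """Parse the 9-octet HTTP/2 frame header (RFC 7540, section 4.1):
    24-bit length, 8-bit type, 8-bit flags, then a reserved bit plus
    a 31-bit stream identifier."""
    if len(buf) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(buf[0:3], "big")
    ftype, flags = buf[3], buf[4]
    stream_id = struct.unpack(">I", buf[5:9])[0] & 0x7FFFFFFF  # clear R bit
    return length, ftype, flags, stream_id

# A SETTINGS frame (type 0x4) with an empty payload on stream 0:
hdr = b"\x00\x00\x00\x04\x00\x00\x00\x00\x00"
print(parse_frame_header(hdr))  # (0, 4, 0, 0)
```

With TLS effectively mandatory, capturing these frames in the first place requires session keys; parsing them is the easy part.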

Cybersecurity Inventory at Home
By Glen Roberts
January 7, 2016

  • Consumers need better home network security guidance for taking stock of the hardware and software applications installed on their network and devices. The primary sources of information security advice for the average person are TV, magazines, newspapers, websites and social media. Unfortunately, these sources typically repeat the same advice, provide limited guidance and miss key areas of security that should be taken into consideration when securing home networks. On the other hand, enterprises receive comprehensive, prioritized guidance such as the Critical Security Controls from The Center for Internet Security. Unfortunately, these controls were not designed with securing home networks in mind. The wide gap between consumer-media advice columns and highly professional corporate security controls needs to be bridged. This can be done by using the Critical Security Controls as a comprehensive foundation from which to craft an authoritative yet easy-to-understand set of home network security recommendations for individuals. The first step is distilling the guidance for inventorying hardware and software applications.

Infrastructure Security Architecture for Effective Security Monitoring
By Luciana Obregon
December 11, 2015

  • Many organizations struggle to architect and implement adequate network infrastructures to optimize network security monitoring. This challenge often leads to loss of monitored traffic and security event data, increased cost in the new hardware and technology needed to address monitoring gaps, and additional information security personnel to keep up with the overwhelming number of security alerts. Organizations spend a great deal of time, effort, and money deploying the latest and greatest tools without ever addressing the fundamental problem of adequate network security design. This paper provides a best-practice approach to designing and building scalable and repeatable infrastructure security architectures that optimize network security monitoring, expanding on four network security domains: network segmentation, intrusion detection and prevention, security event logging, and packet capturing. The goal is a visual representation of an infrastructure security architecture that will allow stakeholders to understand how to architect their networks to address monitoring gaps and protect their organizations.

Compliant but not Secure: Why PCI-Certified Companies Are Being Breached
By Christian Moldes
December 9, 2015

  • The Payment Card Industry published the Data Security Standard 11 years ago; however, criminals are still breaching companies and gaining access to cardholder data. The number of security breaches in the past two years has increased considerably, even among companies that assessors deemed compliant. In this paper, the author conducts a detailed analysis of why this is still occurring and proposes changes companies should adopt to avoid a security breach.

Web Application File Upload Vulnerabilities
By Matthew Koch
December 7, 2015

  • The ability to upload files can be a key feature of many web applications; without it, cloud backup services, photograph sharing, and other functions would not be possible. That same functionality, however, is a frequent target for attackers, since a poorly validated upload handler can be abused to place malicious content on the server.
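A common class of upload vulnerability is trusting the client-supplied filename. A hedged Python sketch of basic server-side hygiene follows (the extension whitelist is an illustrative policy, and real handlers should also validate content, not just names):

```python
import os
import secrets

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}  # illustrative policy

def safe_upload_name(client_filename):
    """Drop any client-supplied path components, whitelist the extension,
    and return a server-generated name so the original filename never
    reaches the filesystem."""
    base = os.path.basename(client_filename.replace("\\", "/"))
    ext = os.path.splitext(base)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"disallowed extension: {ext!r}")
    return secrets.token_hex(8) + ext

name = safe_upload_name("../../etc/x.png")
print(name)  # a random 16-hex-character name ending in .png
```

Renaming on the server side defeats both path traversal and tricks like double extensions aimed at the original name.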

There's No Going it Alone: Disrupting Well Organized Cyber Crime
By John Garris
November 23, 2015

  • The identification and eventual disruption of a sophisticated criminal enterprise, requiring on-the-fly problem solving and groundbreaking international collaboration, offers a model of how an international cooperative effort can succeed. The efforts that ultimately brought down Rove Digital, an Estonian-based criminal operation that compromised millions of computers, provide just such an example. The approach taken by law enforcement from several countries, coupled with the important roles played by security researchers, can be built upon to address burgeoning threats that can only be tackled cooperatively.

A Network Analysis of a Web Server Compromise
By Kiel Wadner
September 8, 2015

  • Through the analysis of a known scenario, the reader will be given the opportunity to explore a website being compromised. From the initial reconnaissance to gaining root access, each step is viewed at the network level. The benefit of a known scenario is assumptions about the attackers’ reasons are avoided, allowing focus to remain on the technical details of the attack. Steps such as file extraction, timing analysis and reverse engineering an encrypted C2 channel are covered.

Breaking the Ice: Gaining Initial Access
By Phillip Bosco
August 28, 2015

  • While companies are spending an increasing amount of resources on security equipment, attackers are still successful at finding ways to breach networks. This is a compounded problem with many moving parts, due to misinformation within the security industry and companies placing focus on areas of security that yield unimpressive results. A company cannot properly defend and protect against what they do not adequately understand, which tends to be a misunderstanding of their own security defense systems and relevant attacks that cyber criminals commonly use today. These misunderstandings result in attackers bypassing even the most seemingly robust security systems using the simplest methods. The author will outline the common misconceptions within the security industry that ultimately lead to insecure networks. Such misconceptions include a company’s misallocation of their security budget, while other misconceptions include the controversies regarding which methods are most effective at fending off an attacker. Common attack vectors and misconfigurations that are devastating, but are highly preventable, are also detailed.

Forensic Timeline Analysis using Wireshark
By David Fletcher
August 10, 2015

  • The objective of this paper is to demonstrate the analysis of timeline evidence using the Wireshark protocol analyzer. To accomplish this, sample timelines are generated using tools from The Sleuth Kit (TSK) as well as Log2Timeline and then converted into Packet Capture (PCAP) format. Once the timelines are in this format, Wireshark's native analysis capabilities are demonstrated in the context of forensic timeline analysis. The underlying hypothesis is that Wireshark can provide a suitable interface for enhancing an analyst's ability, through the use of built-in features such as analysis profiles, filtering, colorization, marking, and annotation.
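The conversion step can be sketched directly: a libpcap file is just a 24-byte global header followed by per-packet records, so timeline events can be wrapped as "packets" whose timestamps are the event times. This is an illustrative stand-in for the paper's conversion tooling, not its actual code; link type 147 (DLT_USER0) is used since the payloads are not real network frames:

```python
import struct

def timeline_to_pcap(events, path):
    """Write (epoch_seconds, description) timeline events as a libpcap
    file, one record per event, so Wireshark's time-based features can
    be applied to them."""
    with open(path, "wb") as f:
        # global header: magic, ver 2.4, thiszone, sigfigs, snaplen, linktype
        f.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 147))
        for ts, desc in events:
            payload = desc.encode("utf-8")
            # record header: ts_sec, ts_usec, incl_len, orig_len
            f.write(struct.pack("<IIII", int(ts), int((ts % 1) * 1e6),
                                len(payload), len(payload)))
            f.write(payload)

events = [(1439164800.0, "MFT: C:/Users/x/evil.exe created"),
          (1439164860.5, "EVT: logon type 3 from 10.0.0.9")]
timeline_to_pcap(events, "timeline.pcap")
```

Opening the resulting file in Wireshark then gives timestamped, filterable, colorizable rows, which is the interface the paper evaluates.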

Coding For Incident Response: Solving the Language Dilemma
By Shelly Giesbrecht
July 28, 2015

  • Incident responders frequently are faced with the reality of "doing more with less" due to budget or manpower deficits. The ability to write scripts from scratch or modify the code of others to solve a problem or find data in a data "haystack" are necessary skills in a responder's personal toolkit. The question for IR practitioners is what language should they learn that will be the most useful in their work? In this paper, we will examine several coding languages used in writing tools and scripts used for incident response including Perl, Python, C#, PowerShell and Go. In addition, we will discuss why one language may be more helpful than another depending on the use-case, and look at examples of code for each language.

Accessing the inaccessible: Incident investigation in a world of embedded devices
By Eric Jodoin
June 24, 2015

  • There are currently an estimated 4.9 billion embedded systems deployed worldwide; by 2020, that number is expected to grow to 25 billion. Embedded systems can be found virtually everywhere, in consumer products such as smart TVs, Blu-ray players, fridges, thermostats, smartphones, and many other household devices. They are also ubiquitous in businesses, where they are found in alarm systems, climate control systems, and most networking equipment, such as routers, managed switches, IP cameras, and multi-function printers. Unfortunately, recent events have taught us that these devices can also be vulnerable to malware and hackers, so it is highly likely that one of them may become a key source of evidence in an incident investigation. This paper introduces the reader to embedded systems technology. Using a Blu-ray player's embedded system as an example, it demonstrates the process of connecting to, and then accessing data through, the serial console to collect evidence from an embedded system's non-volatile memory.

Honeytokens and honeypots for web ID and IH
By Rich Graves
May 14, 2015

  • Honeypots and honeytokens can be useful tools for examining the follow-up to phishing attacks. In this exercise, we respond to phishing messages using valid email addresses that actually received the phish, but with wrong passwords. We demonstrate using custom single sign-on code to redirect logins that present those fake passwords, as well as any other logins from presumed attacker source IP addresses, to a dedicated phishing-victim web honeypot. Although the proof-of-concept described did not become a production deployment, it provided insight into current attacks.
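The routing decision at the heart of that single sign-on hook can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the seeded password, attacker IP, and honeypot URL are all hypothetical values:

```python
HONEYTOKEN_PASSWORDS = {"Spring2015!"}        # fake credentials we sent the phisher
ATTACKER_IPS = {"203.0.113.7"}                # observed phisher infrastructure
HONEYPOT_URL = "https://victim-portal.example.org"   # hypothetical honeypot
REAL_URL = "https://portal.example.org"              # hypothetical real portal

def route_login(username, password, source_ip):
    """Send honeytoken logins and known-attacker IPs to the honeypot;
    everyone else proceeds to the real application."""
    if password in HONEYTOKEN_PASSWORDS or source_ip in ATTACKER_IPS:
        return HONEYPOT_URL
    return REAL_URL

print(route_login("alice", "Spring2015!", "198.51.100.2"))  # -> honeypot
```

Because the attacker sees a successful-looking login either way, the honeypot can observe post-compromise behavior without exposing real accounts.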

Group Gold Papers

Endpoint Security through Device Configuration, Policy and Network Isolation
By Barbara Filkins & Jonathan Risto
July 15, 2016

  • Sensitive data leaked from endpoints unbeknownst to the user can be detrimental to both an organization and its workforce. The CIO of GIAC Enterprises, alarmed by reports from a newly installed, host-based firewall on his MacBook Pro, commissioned an investigation concerning the security of GIAC Enterprise endpoints.