STI master's program candidates conduct research that is relevant, has real-world impact, and often delivers cutting-edge advances to the field of cybersecurity, all under the guidance and review of our world-class instructors. Here are some highlights of their recent findings.
Auto-Nuke It from Orbit: A Framework for Critical Security Control Automation By Jeremiah Hainly March 15, 2017
- Over 83% of security teams report that the use of automation in security needs to increase within the next three years (AlgoSec, 2016). With automation becoming a reality for a growing number of companies, there will also be an increased demand for open-source scripts to get started. This paper will provide a framework for prioritizing and developing security automation and will demonstrate this process by creating a script to automate a common information security response procedure: the reimaging of an infected endpoint. The primary function of the script will be to access the application program interfaces (APIs) of various enterprise software solutions to speed up the manual tasks involved in performing a reimage.
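The paper's prioritization framework is not reproduced in this abstract, but the idea of ranking automation candidates can be sketched as a simple scoring function. The weights, task names, and figures below are illustrative assumptions, not the author's model: frequent, time-consuming, low-risk tasks (such as reimaging) score highest.

```python
# Illustrative scoring model for ranking security tasks as automation
# candidates. All task names and numbers are made-up examples.

def automation_score(frequency_per_month, minutes_per_run, risk_of_error):
    """Higher score = better automation candidate.

    risk_of_error: 1 (safe to automate) .. 5 (risky to automate).
    """
    time_saved = frequency_per_month * minutes_per_run
    return time_saved / risk_of_error

tasks = {
    "reimage infected endpoint": automation_score(20, 90, 2),
    "rotate service passwords": automation_score(4, 30, 3),
    "review phishing reports": automation_score(60, 10, 4),
}

ranked = sorted(tasks, key=tasks.get, reverse=True)
print(ranked[0])  # the top automation candidate
```

Under these assumed numbers, endpoint reimaging ranks first, which matches the paper's choice of it as the demonstration case.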
Cloud Security Monitoring By Balaji Balakrishnan March 13, 2017
- This paper discusses how to apply security log monitoring to Amazon Web Services (AWS) Infrastructure as a Service (IaaS) cloud environments. It provides an overview of AWS CloudTrail and CloudWatch Logs, which can be stored and mined for suspicious events. Security teams implementing AWS solutions will benefit from applying security monitoring techniques to prevent unauthorized access and data loss. Splunk is used to ingest all AWS CloudTrail and CloudWatch Logs, and machine learning models are used to identify suspicious activities in the AWS cloud infrastructure. The intended audience for this paper is security teams trying to implement AWS security monitoring.
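To give a flavor of the kind of suspicious events such monitoring looks for, here is a minimal sketch that triages a CloudTrail record before it is forwarded to a SIEM. The specific event names checked and the sample record are illustrative assumptions; real detection logic (and the paper's machine learning models) would be far richer.

```python
import json

# Flag two classic CloudTrail red flags: someone disabling audit
# logging, and failed console logins. Sample record is fabricated.

def flag_record(record):
    name = record.get("eventName", "")
    if name in {"StopLogging", "DeleteTrail"}:
        return "tampering"   # audit logging being disabled
    if name == "ConsoleLogin" and \
            record.get("responseElements", {}).get("ConsoleLogin") == "Failure":
        return "failed-login"
    return None

sample = json.loads("""
{"eventName": "StopLogging",
 "userIdentity": {"userName": "alice"},
 "awsRegion": "us-east-1"}
""")
print(flag_record(sample))
```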
In-Depth Look at Tuckman's Ladder and Subsequent Works as a Tool for Managing a Project Team By Aron Warren March 1, 2017
- Bruce Tuckman's 1965 research on modeling group development, titled "Developmental Sequence in Small Groups," laid out a framework consisting of four stages a group will transition between while members interact with each other: forming, storming, norming, and performing. This paper will describe in detail the original Tuckman model as well as derivative research in group development models. Traditional and virtual team environments will both be addressed to assist IT project managers in understanding how a team evolves over time with a goal of achieving a successful project outcome.
Medical Data Sharing: Establishing Trust in Health Information Exchange By Barbara Filkins March 1, 2017
- Health information exchange (HIE) "allows doctors, nurses, pharmacists, other health care providers and patients to appropriately access and securely share a patient's vital medical information electronically--improving the speed, quality, safety and cost of patient care" (HealthIT.gov, 2014). The greatest gain in the use of HIE is the ability to achieve interoperability across providers that, except for the care of a given patient, are unrelated. But, by its very nature, HIE also raises concern around the protection and integrity of shared, sensitive data. Trust is a major barrier to interoperability.
Tor Browser Artifacts in Windows 10 By Aron Warren February 24, 2017
- The Tor network is a popular, encrypted, worldwide, anonymizing virtual network that has existed since 2002 and is used by all facets of society, including privacy advocates, journalists, governments, and criminals. This paper will provide a forensic analysis of the Tor Browser version 5 client on a Windows 10 host for an individual or group interested in remnants left by the software. It will utilize various free and commercial tools to provide a detailed analysis of filesystem artifacts, as well as a memory-analysis comparison between pre- and post-connection to the Tor network.
OS X as a Forensic Platform By David M. Martin February 22, 2017
- The Apple Macintosh and its OS X operating system have seen increasing adoption by technical professionals, including digital forensic analysts. Forensic software support for OS X remains less mature than that of Windows or Linux. While many Linux forensic tools will work on OS X, instructions for how to configure the tools in OS X are often missing or confusing. OS X also lacks an integrated package management system for command line tools. Python, which serves as the basis for many open-source forensic tools, can be difficult to maintain and easy to misconfigure on OS X. Due to these challenges, many OS X users choose to run their forensic tools from Windows or Linux virtual machines. While this can be an effective and expedient solution, those users miss out on much of the power of the Macintosh platform. This research will examine the process of configuring a native OS X forensic environment that includes many open-source forensic tools, including Bulk Extractor, Plaso, Rekall, Sleuth Kit, Volatility, and Yara. This process includes choosing the correct hardware and software, configuring it properly, and overcoming some of the unique challenges of the OS X environment. A series of performance tests will help determine the optimal hardware and software configuration and examine the performance impact of virtualization options.
Indicators of Compromise TeslaCrypt Malware By Kevin Kelly February 16, 2017
- Malware has become a growing concern in a society of interconnected devices and real-time communications. This paper will show how to analyze live ransomware samples: how the malware behaves locally, over time, and within the network. Analyzing live ransomware gives a unique three-dimensional perspective, visually locating crucial signatures and behaviors efficiently. Instead of reverse engineering or parsing the malware executable, live analysis provides a simpler method of rooting out indicators. Ransomware touches nearly every file and many registry keys, so analysis can be done, but it needs to be focused. Analyzing malware capabilities across different datasets, including process monitoring, flow data, registry key changes, and network traffic, will yield indicators of compromise. These indicators will be collected using various open source tools such as the Sysinternals suite, Fiddler, Wireshark, and Snort, to name a few. Malware indicators of compromise will be collected to produce defensive countermeasures against unwanted advanced adversary activity on a network. A virtual appliance platform simulating a production Windows 8 OS will be created, infected, and processed to collect indicators to be used to secure enterprise systems. Different tools will leverage these datasets to gather indicators, view malware on multiple layers, contain compromised hosts, and prevent future infections.
Impediments to Adoption of Two-factor Authentication by Home End-Users By Preston Ackerman February 10, 2017
- Cyber criminals have proven to be both capable and motivated to profit from compromised personal information. The FBI has reported that victims have suffered over $3 billion in losses through compromise of email accounts alone (IC3, 2016). One security measure which has been demonstrated to be effective against many of these attacks is two-factor authentication (2FA). The FBI, the Department of Homeland Security US Computer Emergency Readiness Team (US-CERT), and the internationally recognized security training and awareness organization, the SANS Institute, all strongly recommend the use of two-factor authentication. Nevertheless, adoption rates of 2FA are low.
Dissect the Phish to Hunt Infections By Seth Polley February 3, 2017
- Internal defense is a perilous problem facing many organizations today. Sole reliance on external defenses is all too common, leaving the internal organization largely unprotected. Even when internal defense is actually considered, how many think beyond fallible antivirus (AV) or immature data loss prevention (DLP) solutions? Considering the rise of phishing emails and other social engineering campaigns, there is a significantly increased risk that an organization's current external and internal defenses will fail to prevent compromises. How would a cyber security team detect an attacker establishing a foothold within the center of the organization, or undetectable malware being downloaded internally, if a user were to fall for a phishing attempt?
Forensication Education: Towards a Digital Forensics Instructional Framework By J. Richard “Rick” Kiper February 3, 2017
- The field of digital forensics is a diverse and fast-paced branch of cyber investigations. Unfortunately, common efforts to train individuals in this area have been inconsistent and ineffective, as curriculum managers attempt to plug in off-the-shelf courses without an overall educational strategy. The aim of this study is to identify the most effective instructional design features for a future entry-level digital forensics course. To achieve this goal, an expert panel of digital forensics professionals was assembled to identify and prioritize the features, which included general learning outcomes, specific learning goals, instructional delivery formats, instructor characteristics, and assessment strategies. Data was collected from participants using validated group consensus methods such as Delphi and cumulative voting. The product of this effort was the Digital Forensics Framework for Instruction Design (DFFID), a comprehensive digital forensics instructional framework meant to guide the development of future digital forensics curricula.
Superfish and TLS: A Case Study of Betrayed Trust and Legal Liability By Sandra Dunn January 24, 2017
- Superfish, the adware bundled with Lenovo consumer laptops from 2014-2015, intentionally broke TLS, exposed users' personal data to compromise and theft, and altered search result ads in users' browsers, severely damaging Lenovo's brand reputation. There have been other high-profile cases of intentionally modifying and breaking TLS through questionable and deceptive practices, but few have generated as much attention or provide such a clear example of a chain of missteps between Lenovo, Superfish, and their customers. A case study of the Superfish mishap exposes the danger, risk, legal liability, and potential government investigation facing organizations that deploy TLS certificates and keys in ways that break or weaken the security design and put private data or people at risk. The Superfish case further demonstrates the importance of a company's disclosure transparency to avoid accusations of deceptive practices if breaking TLS is required to protect users or an organization's data.
Minimizing Legal Risk When Using Cybersecurity Scanning Tools By John Dittmer January 19, 2017
- When cybersecurity professionals use scanning tools on the networks and devices of organizations, there can be legal risks that need to be managed by individuals and enterprises. Often, scanning tools are used to measure compliance with cybersecurity policies and laws, so they must be used with due care. There are protocols that should be followed to ensure proper use of the scanning tools, to prevent interference with normal network or system operations, and to ensure the accuracy of the scanning results. Several challenges will be examined in depth, such as measuring scanner accuracy, proper methods of obtaining written consent for scanning, and how to set up a scanning session for optimum examination of systems or networks. This paper will provide cybersecurity professionals and managers with a better understanding of how and when to use scanning tools while minimizing the legal risk to themselves and their enterprises.
Data Breach Impact Estimation By Paul Hershberger January 3, 2017
- Internal and external auditors spend a significant amount of time planning their audit processes to align their efforts with the needs of the audited organization. The initial phase of that audit cycle is the risk assessment. Establishing a firm understanding of the likelihood and impact of risk guides the audit function and aligns its work with the risks the organization faces. The challenge many auditors and security professionals face is effectively quantifying the potential impact of a data breach on their organization. This paper compares the data breach cost research of the Ponemon Institute and the RAND Corporation, measuring the models against breach costs reported by publicly traded companies under Securities and Exchange Commission (SEC) reporting requirements. The comparisons will show that the RAND Corporation's approach provides organizations with a more accurate and flexible model for estimating the potential cost of data breaches, covering both the direct cost of investigating and remediating a breach and the indirect financial impact associated with regulatory and legal action. Additionally, the comparison indicates that data breach-related impacts to revenue and stock valuation are only realized in the short term.
Real-World Case Study: The Overloaded Security Professional's Guide to Prioritizing Critical Security Controls By Phillip Bosco December 27, 2016
- Using a real-world case study of a recently compromised company as a framework, we will step inside the aftermath of an actual breach and determine how the practical implementation of Critical Security Controls (CSC) might have prevented the compromise entirely while providing greater visibility into the attack as it occurred. The breached company's information security "team" consisted of a single overworked individual, who found it arduous to identify which critical controls to focus his limited time on implementing. Lastly, we will delve into real-world examples, using previously unpublished research, that serve as practical approaches for teams with limited resources to prioritize and schedule the CSCs that will provide the largest impact in reducing the company's overall risk. Ideally, the observations and approaches identified in this research paper will assist security professionals who may be in similar circumstances.
Finding Bad with Splunk By David Brown December 16, 2016
- There is such a deluge of information that it can be hard for information security teams to know where to focus their time and energy. This paper will recommend common Linux and Windows tools to scan networks and systems, store results to local filesystems, analyze results, and pass any new data to Splunk. Splunk will then help security teams home in on what has changed within the networks and systems by alerting the security teams to any differences between old baselines and new scans. In addition, security teams may not even be paying attention to controls, like whitelisting blocks, that successfully prevent malicious activities. Monitoring failed application execution attempts can give security teams and administrators early warning that someone may be trying to subvert a system. This paper will guide the security professional through setting up alerts to detect security events of interest, such as failed application executions due to whitelisting. To solve these problems, the paper will discuss the first five Critical Security Controls and explain what malicious behaviors can be uncovered as a result of alerting. As the paper progresses through the controls, the security professional is shown how to set up baseline analysis, how to configure the systems to pass the proper data to Splunk, and how to configure Splunk to alert on events of interest. The paper does not revolve around how to implement technical controls like whitelisting, but rather how to effectively monitor the controls once they have been implemented.
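The alerting idea can be sketched independently of Splunk: count blocked-execution events per host and alert when a host crosses a threshold. AppLocker's enforced-block event is ID 8004; the simplified records and the threshold below are stand-in assumptions for what a SIEM would actually ingest.

```python
from collections import Counter

# Count AppLocker "blocked execution" events (ID 8004) per host and
# alert on hosts exceeding a threshold. Records are fabricated samples.

THRESHOLD = 3

def hosts_to_alert(events, threshold=THRESHOLD):
    blocked = Counter(e["host"] for e in events if e["event_id"] == 8004)
    return sorted(h for h, n in blocked.items() if n >= threshold)

events = (
    [{"host": "wks-01", "event_id": 8004}] * 4 +
    [{"host": "wks-02", "event_id": 8004}] * 1 +
    [{"host": "wks-02", "event_id": 8002}] * 5   # allowed executions
)
print(hosts_to_alert(events))  # only wks-01 crosses the threshold
```

In practice the same logic is a one-line Splunk search with `stats count by host`; the point is that the alert fires on repeated blocks, not single ones.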
Continuous Monitoring: Build A World Class Monitoring System for Enterprise, Small Office, or Home By Austin Taylor December 15, 2016
- For organizations that wish to prevent data breaches, incident prevention is ideal, but detection of an attempted or successful breach is a must. This paper outlines guidance for network visibility, threat intelligence implementation, and methods to reduce analyst alert fatigue. Additionally, this document includes a workflow for Security Operations Centers (SOCs) to efficiently process events of interest, thereby increasing the likelihood of detecting a breach. Methods include Intrusion Detection System (IDS) setup, with tips on efficient data collection, sensor placement, and identification of critical infrastructure, along with network and metric visualization. These recommendations are useful for enterprises, small offices, or homes that wish to implement threat intelligence and network analysis.
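One common alert-fatigue reduction technique, consistent with the paper's theme though not necessarily its exact method, is suppressing duplicate alerts for the same source and signature within a time window. The window length and alert format here are assumptions for illustration:

```python
# Suppress repeated (source, signature) alerts inside a sliding
# window so analysts see each ongoing event once, not hundreds of times.

WINDOW = 3600  # suppression window in seconds (assumed)

def dedupe(alerts, window=WINDOW):
    """alerts: list of (timestamp, src_ip, signature), sorted by time."""
    last_seen = {}
    kept = []
    for ts, src, sig in alerts:
        key = (src, sig)
        if key not in last_seen or ts - last_seen[key] >= window:
            kept.append((ts, src, sig))
        last_seen[key] = ts   # sliding window: quiet period resets it
    return kept

alerts = [
    (0,    "10.0.0.5", "ET SCAN Nmap"),
    (60,   "10.0.0.5", "ET SCAN Nmap"),   # duplicate, suppressed
    (120,  "10.0.0.9", "ET SCAN Nmap"),   # new source, kept
    (4000, "10.0.0.5", "ET SCAN Nmap"),   # quiet period elapsed, kept
]
print(len(dedupe(alerts)))  # 3
```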
Detecting Malicious SMB Activity Using Bro By Richie Cyrus December 13, 2016
- Attackers utilize the Server Message Block (SMB) protocol to blend in with network activity, often carrying out their objectives undetected. Post-compromise, attackers use file shares to move laterally, looking for sensitive or confidential data to exfiltrate from the network. Traditional methods for detecting such activity call for storing and analyzing large volumes of Windows event logs or deploying a signature-based intrusion detection solution. For some organizations, processing and storing large amounts of Windows events may not be feasible, and pattern-based intrusion detection solutions can be bypassed by malicious entities, potentially failing to detect malicious activity. The Bro Network Security Monitor (Bro) provides an alternative solution, allowing rapid detection through custom scripts and log data. This paper introduces methods to detect malicious SMB activity using Bro.
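As a taste of what such detection looks like, the sketch below flags a host touching an unusually large number of files over SMB, a pattern consistent with share crawling. It parses a tab-separated log shaped like Bro's smb_files.log, but the field names kept here, the sample data, and the threshold are simplified assumptions; the paper's detections are written as native Bro scripts.

```python
import csv
import io
from collections import defaultdict

# Fabricated log in the spirit of Bro's smb_files.log: one host reads
# 50 distinct files from a share, another reads one.
LOG = "id.orig_h\tid.resp_h\tpath\tname\n" + "".join(
    f"10.1.1.7\t10.1.1.20\t\\\\fs01\\finance\tdoc{i}.xlsx\n" for i in range(50)
) + "10.1.1.8\t10.1.1.20\t\\\\fs01\\public\treadme.txt\n"

def crawlers(log_text, threshold=25):
    """Return hosts that accessed >= threshold distinct files over SMB."""
    files_per_host = defaultdict(set)
    for row in csv.DictReader(io.StringIO(log_text), delimiter="\t"):
        files_per_host[row["id.orig_h"]].add((row["path"], row["name"]))
    return sorted(h for h, f in files_per_host.items() if len(f) >= threshold)

print(crawlers(LOG))  # ['10.1.1.7']
```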
Active Defense via a Labyrinth of Deception By Nathaniel Quist December 5, 2016
- A network baseline allows for the identification of malicious activity in real time. However, a baseline requires that every listed action is known and accounted for, a nearly impossible task in any production environment due to an ever-changing application footprint, system and application updates, changing project requirements, and, not least of all, unpredictable user behaviors. Each obstacle presents a significant challenge in the development and maintenance of an accurate and false-positive-free network baseline. To surmount these hurdles, network architects need to design a network free from continuous change, including changing company requirements, untested system or application updates, and the presence of unpredictable users. Creating a static, never-changing environment is the goal. However, this completely removes the functionality of a production network. Or does it? Within this paper, I will detail how this type of static environment, referred to as the Labyrinth, can be placed in front of a production environment and provide real-time defensive measures against hostile and dispersed attacks from both human actors and automated machines. I expect to prove that the Labyrinth is capable of detecting changes in its environment in real time. It will provide a listing of dynamic defensive capabilities, such as identifying attacking IP addresses, rogue-process start commands, modifications to registry values, and alterations in system memory, and it will record an attacker's tactics, techniques, and procedures. At the same time, the Labyrinth will add these values to a block list, protecting the production network lying behind it. Successful accomplishment of these goals will prove the viability and sustainability of a Labyrinth defending network environments (Revelle, 2011).
Next Generation of Privacy in Europe and the Impact on Information Security: Complying with the GDPR By Edward Yuwono December 5, 2016
- Human rights have a strong place within Europe; this includes the fundamental right to privacy. Over the years, individual privacy has been strengthened through various European directives. With the evolution of privacy continuing in Europe through the release of the General Data Protection Regulation (GDPR), how will the latest iteration of European Union (EU) regulation affect organisations, and what will information security leaders need to do to meet this change? This paper will explore the evolution of privacy in Europe, the objectives and changes this iteration of EU privacy regulation will bring, what challenges organisations will experience, and how information security could be leveraged to satisfy the regulation.
A Checklist for Audit of Docker Containers By Alyssa Robinson November 22, 2016
- Docker and other container technologies are increasingly popular methods for deploying applications in DevOps environments, due to advantages in portability, efficiency in resource sharing and speed of deployment. The very properties that make Docker containers useful, however, can pose challenges for audit, and the security capabilities and best practices are changing rapidly. As adoption of this technology grows, it is, therefore, necessary to create a standardized checklist for audit of Dockerized environments based on the latest tools and recommendations.
Security Assurance of Docker Containers By Stefan Winkle November 22, 2016
- With recent movements like DevOps and the shift towards application security as a service, the IT industry is in the middle of substantial changes in how software is developed and deployed. In the infrastructure space, we see the uptake of lightweight container technology, while application architectures are moving towards distributed microservices. There has been a recent explosion in the popularity of package managers and distributors like OneGet, NPM, RubyGems, and PyPI. More and more, software development depends on small, reusable components developed by many different developers and often distributed by infrastructures outside our control. In the midst of this, we often find application containers like Docker, LXC, and Rocket used to compartmentalize software components. The Notary project, recently introduced in Docker, is built upon the assumption that the software distribution pipeline can no longer be trusted. Notary attempts to protect against attacks on the software distribution pipeline by associating trust and separation of duties with Docker containers. In this paper, we explore the Notary service and take a look at security testing of Docker containers.
Implementing Full Packet Capture By Matt Koch November 7, 2016
- Full Packet Capture (FPC) provides network defenders with an after-the-fact investigative capability that other security tools cannot provide. Uses include capturing malware samples, recording network exploits, and determining whether data exfiltration has occurred. Full packet captures are a valuable troubleshooting tool for operations and security teams alike. Successful implementation requires an understanding of organization-specific requirements, capacity planning, and delivery of unaltered network traffic to the packet capture system.
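The capacity-planning step the abstract mentions is largely arithmetic: sustained capture rate times retention period gives required storage. The link speed, utilization, and retention window below are illustrative assumptions, not figures from the paper:

```python
# Back-of-the-envelope FPC storage sizing: convert a sustained capture
# rate into terabytes needed for a retention window.

def storage_tb(link_gbps, utilization, days):
    bytes_per_day = link_gbps * 1e9 / 8 * utilization * 86400
    return bytes_per_day * days / 1e12

# Example: a 1 Gbps link at 30% average utilization, kept for 7 days.
print(round(storage_tb(1, 0.30, 7), 1))  # ~22.7 TB
```

Even a modest link adds up to tens of terabytes per week, which is why requirements gathering and capacity planning come before hardware selection.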
Intrusion Detection Through Relationship Analysis By Patrick Neise October 24, 2016
- With the average time to detection of a network intrusion in enterprise networks assessed to be 6-8 months, network defenders require additional tools and techniques to shorten detection time. Perimeter, endpoint, and network traffic detection methods today are mainly focused on detecting individual incidents while security incident and event management (SIEM) products are then used to correlate the isolated events. Although proven to be able to detect network intrusions, these methods can be resource intensive in both time and personnel. Through the use of network flows and graph database technologies, analysts can rapidly gain insight into which hosts are communicating with each other and identify abnormal behavior such as a single client machine communicating with other clients via Server Message Block (SMB). Combining the power of tools such as Bro, a network analysis framework, and neo4j, a native graph database that is built to examine data and its relationships, rapid detection of anomalous behavior within the network becomes possible. This paper will identify the tools and techniques necessary to extract relevant network information, create the data model within a graph database, and query the resulting data to identify potential malicious activity.
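The client-to-client SMB example from the abstract can be sketched without a graph database: build a who-talks-to-whom map from flow records and flag SMB (tcp/445) edges between two workstations, since clients rarely serve files to each other. In the paper this is done with Bro feeding neo4j; here a plain adjacency map stands in for the graph, and the workstation-subnet prefix is an assumption.

```python
from collections import defaultdict

CLIENTS = "10.2."   # assumed workstation subnet prefix

def client_to_client_smb(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples."""
    graph = defaultdict(set)
    for src, dst, port in flows:
        if port == 445:                # SMB only
            graph[src].add(dst)
    return sorted(
        (s, d) for s, peers in graph.items() for d in peers
        if s.startswith(CLIENTS) and d.startswith(CLIENTS)
    )

flows = [
    ("10.2.0.11", "10.1.5.5", 445),   # client -> file server: normal
    ("10.2.0.11", "10.2.0.14", 445),  # client -> client: suspicious
    ("10.2.0.14", "10.2.0.15", 80),   # not SMB, ignored
]
print(client_to_client_smb(flows))  # [('10.2.0.11', '10.2.0.14')]
```

In a graph database the same question becomes a short path query over host-to-host relationship edges, which is the scalability argument the paper makes for neo4j.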
Building a Home Network Configured to Collect Artifacts for Supporting Network Forensic Incident Response By Gordon Fraser September 21, 2016
- A commonly accepted incident response process includes six phases: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned. Preparation is key: it sets the foundation for a successful incident response. The incident responder does not want to be scrambling to figure out where to collect the information necessary to quickly assess the situation and respond appropriately to the incident. Nor does the incident responder want to hope that the needed information is available at the level of detail necessary to analyze the situation effectively and make informed decisions on the best course of action. This paper identifies artifacts that are important to support network forensics during incident response and discusses an architecture and implementation for a home lab to support their collection. It then validates the architecture using an incident scenario.
Using Vagrant to Build a Manageable and Sharable Intrusion Detection Lab By Shaun McCullough September 20, 2016
- This paper investigates how the Vagrant software application can be used by Information Security (InfoSec) professionals looking to provide their audience with an infrastructure environment to accompany their research. InfoSec professionals conducting research or publishing write-ups can provide opportunities for their audience to replicate or walk through the research themselves in their own environment. Vagrant is a popular DevOps tool for providing portable and repeatable production environments for application developers, and may solve the needs of the InfoSec professional. This paper will investigate how Vagrant works, the pros and cons of the technology, and how it is typically used. The paper describes how to build or repurpose three environments, highlighting different features of Vagrant. Finally, the paper will discuss lessons learned.
Know Thy Network - Cisco Firepower and Critical Security Controls 1 & 2 By Ryan Firth September 19, 2016
- Previously known as the SANS Top 20, the Critical Security Controls are based on real-world attack and security breach data from around the world, and are objectively the most effective technical controls against known cyber-attacks. Due to competing priorities and demands, however, organizations may not have the expertise to figure out how to implement and operationalize the Critical Security Controls in their environments. This paper will help bridge that gap for security and network teams using Cisco Firepower.
Windows Installed Software Inventory By Jonathan Risto September 7, 2016
- The 20 Critical Controls provide a guideline for the controls that need to be placed in our networks to manage and secure our systems. The second control states there should be a software inventory that contains the names and versions of the products for all devices within the infrastructure. The challenge for a large number of organizations is the ability to have accurate information available with minimal impact on tight IT budgets. This paper will discuss the Microsoft Windows command line tools that will gather this information, and provide example scripts that can be run by the reader.
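As an illustration of working with the output of such built-in tools, the sketch below parses CSV-formatted inventory output of the shape produced by `wmic product get Name,Version /format:csv`. The sample output is a simplified, fabricated stand-in; real wmic output includes more columns and machine-specific rows.

```python
import csv
import io

# Simplified stand-in for `wmic product get Name,Version /format:csv`.
WMIC_OUTPUT = """Node,Name,Version
WKS-01,7-Zip 19.00 (x64),19.00
WKS-01,Google Chrome,89.0.4389.90
"""

def parse_inventory(text):
    """Turn wmic-style CSV output into (name, version) pairs."""
    reader = csv.DictReader(io.StringIO(text))
    return [(row["Name"], row["Version"]) for row in reader]

for name, version in parse_inventory(WMIC_OUTPUT):
    print(f"{name}\t{version}")
```

Collecting this per machine and diffing the results over time gives the inventory the second control asks for, at no licensing cost.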
In but not Out: Protecting Confidentiality during Penetration Testing By Andrew Andrasik August 22, 2016
- Penetration testing is imperative for organizations committed to security. However, independent penetration testers are rarely greeted with open arms when initiating an assessment. As firms implement the Critical Security Controls or the Risk Management Framework, independent penetration testing will likely become standard practice as opposed to supplemental exercises. Ethical hacking is a common tactic to view a company's network from an attacker's perspective, but inviting external personnel into a network may increase risk. Penetration testers strive to gain superuser privileges wherever possible and utilize thousands of open-source tools and scripts, many of which do not originate from validated sources.
Introduction to Rundeck for Secure Script Executions By John Becker August 11, 2016
- Many organizations today support physical, virtual, and cloud-based systems across a wide range of operating systems. Providing least privilege access to systems can be a complex mesh of sudoers files, profiles, policies, and firewall rules. While configuration management tools such as Puppet or Chef help ensure consistency, they do not inherently simplify the process for users or administrators. Additionally, current DevOps teams are pushing changes faster than ever. Keeping pace with new services and applications often forces sysadmins to use more general access rules and thus expose broader access than necessary. Rundeck is a web-based orchestration platform with powerful ACLs and ssh-based connectivity to a wide range of operating systems and devices. Rundeck's simple user interface is coupled with DevOps-friendly REST APIs and YAML or XML configuration files. Using Rundeck for server access improves security while keeping pace with rapidly changing environments.
Legal Aspects of Privacy and Security: A Case- Study of Apple versus FBI Arguments By Muzamil Riffat June 3, 2016
- The debate regarding privacy versus security has been going on for some time. The matter is complicated by the fact that privacy is a subjective phenomenon, shaped by factors such as cultural norms and geographical location. In a paradoxical situation, rapid advancements in technology are fast making technology both the guardian and the invader of privacy. Governments and organizations around the globe are using technology to achieve their objectives in the name of security and convenience. The sporadic fights between the proponents of privacy and of security eventually found an avenue for expression: the U.S. court system. In February 2016, the FBI obtained a court order requiring Apple to modify the security features of an iPhone to enable the law enforcement agency to access the contents of the device. Apple, backed by other leading technology firms, vehemently opposed the idea and intended to file a legal appeal against the court order. Before both parties could present their arguments in court, the FBI dropped the case, claiming it was able to access the contents of the device without Apple's assistance. Using FBI vs. Apple as a case study, this paper discusses the legal aspects of both parties' positions. With the pervasiveness of advanced technology, it can be reasonably anticipated that such requests by law enforcement and government agencies will become more frequent. The paper presents the privacy concerns that should be taken into consideration regarding all such requests.
Under The Ocean of the Internet - The Deep Web By Brett Hawkins May 27, 2016
- The Internet was a revolutionary invention, and its use continues to evolve. People around the world use the Internet every day for things such as social media, shopping, email, reading news, and much more. However, this only makes up a very small piece of the Internet, and the rest is filled by an area called The Deep Web.
Securing Jenkins CI Systems By Allen Jeng April 8, 2016
- With over 100,000 active installations worldwide, Jenkins has become the top choice for continuous integration and automation. A survey conducted by CloudBees during the 2012 Jenkins Users Conference concluded that 83 percent of respondents consider Jenkins to be mission critical. The November 2015 remotely exploitable Java deserialization vulnerability underscores the need to lock down and monitor Jenkins systems. Exploitation of this weakness enables hackers to gain access to critical assets, such as the source code that Jenkins manages. Enabling password security is the general recommendation for securing Jenkins. Unfortunately, this necessary security measure can easily be defeated with a packet sniffer, because passwords are transmitted over the wire as clear text. This paper will look at ways to secure Jenkins systems, as well as the deployment of intrusion detection systems to monitor critical assets controlled by Jenkins CI systems.
Secure Network Design: Micro Segmentation By Brandon Peterson February 29, 2016
- Hackers, once on a network, often go undetected as they move freely from system to system looking for valuable information to steal. Credentials, intellectual property, and personal information are all at risk. It is generally accepted that the attacker has the upper hand and can eventually penetrate most networks. A secure network design that focuses on micro segmentation can slow the rate at which an attacker moves through a network and provide more opportunities for detecting that movement. Organizations that implement a secure network design will find that the added cost and complexity of micro segmentation is more than offset by a reduction in the number and severity of incidents. In fact, the effort expended in learning, classifying, and segmenting the network adds value and strengthens all of the organization's controls.
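The core idea of micro segmentation, restricting which segments may talk to which, can be illustrated with a small default-deny policy model. This is a sketch for illustration only, not the paper's design; the zone names and ports are invented:

```python
# Explicit allow-list of (source zone, destination zone, port) flows.
# Anything not listed is denied, so lateral movement between peers
# (e.g., workstation to workstation, or workstation to database) is blocked.
POLICY = {
    ("user", "web", 443),   # workstations may browse the web tier
    ("web", "app", 8443),   # web tier may call the application tier
    ("app", "db", 5432),    # only the app tier may query the database
}

def flow_allowed(src_zone, dst_zone, port):
    """Default-deny: traffic crosses a segment boundary only if explicitly allowed."""
    return (src_zone, dst_zone, port) in POLICY

print(flow_allowed("web", "app", 8443))  # permitted tier-to-tier flow
print(flow_allowed("user", "db", 5432))  # denied: no direct path to the database
```

Every denied flow is also a detection opportunity: a workstation attempting to reach the database directly is exactly the kind of movement the paper argues segmentation makes visible.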
Selling Your Information Security Strategy By David Todd February 18, 2016
- It is the Chief Information Security Officer’s (CISO) responsibility to identify the gaps between the most significant security threats and vulnerabilities, compared with the organization's current state. The CISO should develop an information security strategy that aligns with the strategic goals of the organization and sells the gap mitigation strategy to executive management and the board of directors. Before embarking on this new adventure, clearly articulate what success looks like to your organization. What is the result you are driving to accomplish? Then develop a strategy to get you there. Take a play directly from the Sales organization’s playbook – Know yourself; know your customer; and know the benefits from your customer’s perspective. Following this simple strategy will help the CISO close the deal of selling your Information Security Strategy.
Don't Always Judge a Packet by Its Cover By Gabriel Sanchez February 16, 2016
- Distinguishing between friend and foe as millions of packets traverse a network at any given moment can be a tedious and trying objective. Packets can carry viruses, malware, and botnet traffic, which must be detected quickly. However, chasing every packet often becomes unmanageable and can lead to many dead ends. Traditional approaches to this problem rely on heuristics or signatures of known-bad traffic, which tend to be ineffective against the advanced attacker. Instead, this paper goes beyond the known bad and describes a general approach for homing in on packets of interest by profiling the behavior of a network. Behavior analysis and profiling of the packets that ordinarily traverse a network can shine a light into the shadows where enemies lurk and expose traffic that bypasses traditional detection. Such profiling is especially valuable because knowing the normal characteristics of your packets can reveal the true intentions of the abnormal ones.
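The profiling idea described above can be sketched in a few lines: build a baseline of each host's normal traffic and score new flows against it, so never-before-seen behavior stands out. This is a minimal illustration of the general approach, not the paper's implementation; the host address and ports are invented:

```python
from collections import Counter, defaultdict

def build_profile(flows):
    """Count how often each source host talks to each destination port."""
    profile = defaultdict(Counter)
    for src, dst_port in flows:
        profile[src][dst_port] += 1
    return profile

def score_flow(profile, src, dst_port):
    """Fraction of the host's past traffic matching this port.
    A score near 0 marks behavior the host has rarely or never exhibited."""
    seen = profile.get(src)
    if not seen:
        return 0.0
    return seen[dst_port] / sum(seen.values())

# Baseline: a workstation that normally speaks HTTPS with occasional DNS.
baseline = build_profile([("10.0.0.5", 443)] * 95 + [("10.0.0.5", 53)] * 5)

print(score_flow(baseline, "10.0.0.5", 443))   # routine behavior, high score
print(score_flow(baseline, "10.0.0.5", 4444))  # never seen before, score 0.0
```

Flows with low scores become the "packets of interest" worth an analyst's attention, without chasing every packet on the wire.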
Security Systems Engineering Approach in Evaluating Commercial and Open Source Software Products By Jesus Abelarde January 29, 2016
- The use of commercial and free open source software (FOSS) is becoming more common in commercial, corporate, and government settings as they develop complex systems, and it carries a set of risks until the system is retired or replaced. Unfortunately, during project development, the amount of security resources and time necessary to accommodate proper security evaluations is usually underestimated, and there is no widely used or standardized evaluation process that engineers and scientists can use as a guideline. As a result, the evaluation process usually ends up lacking, or differs widely from project to project and company to company. This paper provides a suggested evaluation process and a set of methodologies, along with associated costs and risks, that projects can use as a guideline when they integrate commercial and FOSS products during the system development life cycle (SDLC).
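A repeatable evaluation process typically reduces to scoring candidate products against weighted security criteria. The sketch below shows the general shape of such a scorecard; the criteria, weights, and ratings are invented for illustration and are not taken from the paper:

```python
def weighted_score(ratings, weights):
    """Combine per-criterion ratings (0-10) into one weighted score (0-10)."""
    assert set(ratings) == set(weights), "every criterion needs both a rating and a weight"
    total_weight = sum(weights.values())
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

# Hypothetical criteria a project might weigh when vetting a FOSS component.
weights = {"patch cadence": 3, "license risk": 2, "audit history": 2, "community size": 1}
candidate = {"patch cadence": 8, "license risk": 6, "audit history": 4, "community size": 9}

print(weighted_score(candidate, weights))  # 6.625
```

Applying the same scorecard to every candidate is what makes the evaluation comparable from project to project, the consistency the paper notes is usually missing.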
Network Forensics and HTTP/2 By Stefan Winkel January 18, 2016
- Last May, a major new version of the HTTP protocol, HTTP/2, was published and finalized in RFC 7540. HTTP/2, based on the SPDY protocol primarily developed by Google, is a multiplexed, binary protocol for which TLS has become the de facto mandatory standard. Most modern web browsers (e.g., Chrome, Firefox, Edge) now support HTTP/2, and some Fortune 500 companies such as Google, Facebook, and Twitter have already enabled HTTP/2 traffic to and from their servers. We have also seen a recent uptick in security breaches related to HTTP data compression (e.g., CRIME, BEAST), which is part of HTTP/2. From a network perspective, there is currently limited support for analyzing HTTP/2 traffic. This paper explores how best to analyze such traffic and discusses how the new version might change the future of network forensics.
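Because HTTP/2 is binary rather than text-based, even the first step of analysis, reading a frame, requires decoding fixed-width fields. As a taste of what an analyzer must do, this sketch parses the 9-octet frame header defined in RFC 7540 section 4.1 (the frame-type table here lists only a few of the defined types):

```python
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x4: "SETTINGS", 0x8: "WINDOW_UPDATE"}

def parse_frame_header(header: bytes):
    """Decode the fixed 9-octet HTTP/2 frame header (RFC 7540, section 4.1)."""
    if len(header) != 9:
        raise ValueError("HTTP/2 frame header is exactly 9 octets")
    length = int.from_bytes(header[0:3], "big")              # 24-bit payload length
    ftype = header[3]                                        # 8-bit frame type
    flags = header[4]                                        # 8-bit flags
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return {"length": length, "type": FRAME_TYPES.get(ftype, ftype),
            "flags": flags, "stream_id": stream_id}

# The empty SETTINGS frame every HTTP/2 connection begins with: stream 0, no payload.
print(parse_frame_header(bytes([0, 0, 0, 0x4, 0, 0, 0, 0, 0])))
# {'length': 0, 'type': 'SETTINGS', 'flags': 0, 'stream_id': 0}
```

In real traffic these frames usually arrive inside TLS, which is one reason network-level HTTP/2 forensics is harder than inspecting plaintext HTTP/1.1.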
Cybersecurity Inventory at Home By Glen Roberts January 7, 2016
- Consumers need better home network security guidance for taking stock of the hardware and software applications installed on their network and devices. The primary sources of information security advice for the average person are TV, magazines, newspapers, websites and social media. Unfortunately, these sources typically repeat the same advice, provide limited guidance and miss key areas of security that should be taken into consideration when securing home networks. On the other hand, enterprises receive comprehensive, prioritized guidance such as the Critical Security Controls from The Center for Internet Security. Unfortunately, these controls were not designed with securing home networks in mind. The wide gap between consumer-media advice columns and highly professional corporate security controls needs to be bridged. This can be done by using the Critical Security Controls as a comprehensive foundation from which to craft an authoritative yet easy-to-understand set of home network security recommendations for individuals. The first step is distilling the guidance for inventorying hardware and software applications.
Infrastructure Security Architecture for Effective Security Monitoring By Luciana Obregon December 11, 2015
- Many organizations struggle to architect and implement network infrastructures adequate for effective security monitoring. This challenge often leads to loss of monitored traffic and security-event data, increased cost for the new hardware and technology needed to address monitoring gaps, and additional information security personnel to keep up with the overwhelming number of security alerts. Organizations spend a great deal of time, effort, and money deploying the latest and greatest tools without ever addressing the fundamental problem of adequate network security design. This paper provides a best-practice approach to designing and building scalable and repeatable infrastructure security architectures that optimize network security monitoring, expanding on four network security domains: network segmentation, intrusion detection and prevention, security event logging, and packet capture. The goal is a visual representation of an infrastructure security architecture that allows stakeholders to understand how to architect their networks to close monitoring gaps and protect their organizations.
Compliant but not Secure: Why PCI-Certified Companies Are Being Breached By Christian Moldes December 9, 2015
- The Payment Card Industry published the Data Security Standard 11 years ago; however, criminals are still breaching companies and gaining access to cardholder data. The number of security breaches in the past two years has increased considerably, even among companies that assessors deemed compliant. In this paper, the author conducts a detailed analysis of why this is still occurring and proposes changes companies should adopt to avoid a security breach.
Web Application File Upload Vulnerabilities By Matthew Koch December 7, 2015
- File upload can be a key feature of many web applications; without it, cloud backup services, photograph sharing, and other functions would not be possible.
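The classic mistake with upload handlers is trusting the filename alone. A common defensive pattern, shown here as an illustrative sketch rather than anything from the paper, is to allowlist extensions and confirm that the file's magic bytes agree with the claimed type:

```python
import os

ALLOWED = {".jpg", ".jpeg", ".png", ".gif"}
MAGIC = {b"\xff\xd8\xff": ".jpg",          # JPEG
         b"\x89PNG\r\n\x1a\n": ".png",     # PNG
         b"GIF87a": ".gif", b"GIF89a": ".gif"}

def is_safe_upload(filename, content):
    """Accept an upload only if the extension is allowlisted AND the file's
    leading magic bytes match that type. Checking the name alone would let an
    attacker upload a script whose bytes are not an image at all."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED:
        return False
    for magic, magic_ext in MAGIC.items():
        if content.startswith(magic):
            # .jpg and .jpeg share the same magic; treat them as equivalent.
            return magic_ext == ext or {ext, magic_ext} <= {".jpg", ".jpeg"}
    return False

print(is_safe_upload("cat.png", b"\x89PNG\r\n\x1a\n...."))  # True
print(is_safe_upload("shell.php", b"\x89PNG\r\n\x1a\n"))    # False: extension denied
print(is_safe_upload("fake.png", b"<?php echo 1; ?>"))      # False: content mismatch
```

Production handlers also typically rename uploads, store them outside the web root, and cap file size, since extension checks alone are routinely bypassed.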
There's No Going it Alone: Disrupting Well Organized Cyber Crime By John Garris November 23, 2015
- The identification and eventual disruption of a sophisticated criminal enterprise, requiring on-the-fly problem solving and groundbreaking international collaboration, offer a model of how an international cooperative effort can succeed. The efforts that ultimately brought down Rove Digital, an Estonian-based criminal operation that compromised millions of computers, provide just such an example. The approach taken by law enforcement agencies from several countries, coupled with the important roles played by security researchers, can be built upon to address burgeoning threats that can only be tackled cooperatively.
A Network Analysis of a Web Server Compromise By Kiel Wadner September 8, 2015
- Through the analysis of a known scenario, the reader is given the opportunity to explore a website compromise, from initial reconnaissance to the attacker gaining root access, with each step viewed at the network level. The benefit of a known scenario is that assumptions about the attacker's motives are avoided, allowing the focus to remain on the technical details of the attack. Steps such as file extraction, timing analysis, and reverse engineering an encrypted C2 channel are covered.
Breaking the Ice: Gaining Initial Access By Phillip Bosco August 28, 2015
- While companies are spending an increasing amount of resources on security equipment, attackers are still successful at finding ways to breach networks. This is a compound problem with many moving parts, driven by misinformation within the security industry and by companies focusing on areas of security that yield unimpressive results. A company cannot properly defend against what it does not adequately understand, and many companies misunderstand both their own security defense systems and the attacks cyber criminals commonly use today. These misunderstandings result in attackers bypassing even the most seemingly robust security systems using the simplest methods. The author outlines the common misconceptions within the security industry that ultimately lead to insecure networks, including the misallocation of security budgets and the controversies over which methods are most effective at fending off an attacker. Common attack vectors and misconfigurations that are devastating, yet highly preventable, are also detailed.
Forensic Timeline Analysis using Wireshark By David Fletcher August 10, 2015
- The objective of this paper is to demonstrate the analysis of timeline evidence using the Wireshark protocol analyzer. To accomplish this, sample timelines are generated using tools from The Sleuth Kit (TSK) as well as Log2Timeline and then converted into Packet Capture (PCAP) format. Once in this format, Wireshark's native analysis capabilities are demonstrated in the context of forensic timeline analysis. The underlying hypothesis is that Wireshark can provide a suitable interface for enhancing an analyst's abilities, through built-in features such as analysis profiles, filtering, colorization, marking, and annotation.
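The conversion idea, packing timeline events into a PCAP so Wireshark can order, filter, and colorize them, can be sketched with nothing but the libpcap file layout: a 24-byte global header followed by per-record headers carrying each event's timestamp. This is a minimal illustration of the concept, not the author's tool; the events and link type choice (DLT_USER0) are assumptions:

```python
import struct

def timeline_to_pcap(events, path):
    """Write (timestamp, description) timeline events into a libpcap file,
    one 'packet' per event, so Wireshark sequences them by time."""
    with open(path, "wb") as f:
        # Global header: magic, version 2.4, tz offset 0, sigfigs 0,
        # snaplen 65535, link type 147 (DLT_USER0: raw user-defined payload).
        f.write(struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 147))
        for ts, description in sorted(events):
            payload = description.encode()
            # Record header: seconds, microseconds, captured length, original length.
            f.write(struct.pack("<IIII", int(ts), int((ts % 1) * 1_000_000),
                                len(payload), len(payload)))
            f.write(payload)

events = [(1438387200.25, "MFT: C:/Users/bob/evil.exe created"),
          (1438387201.50, "Prefetch: EVIL.EXE executed")]
timeline_to_pcap(events, "timeline.pcap")
```

Opened in Wireshark, each event appears as a frame whose arrival time is the forensic timestamp, so time-based display filters and colorization rules apply directly to the timeline.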
Coding For Incident Response: Solving the Language Dilemma By Shelly Giesbrecht July 28, 2015
- Incident responders are frequently faced with the reality of "doing more with less" due to budget or staffing deficits. The ability to write scripts from scratch, or to modify the code of others to solve a problem or find data in a data "haystack," is a necessary skill in a responder's personal toolkit. The question for IR practitioners is which language will be the most useful to learn for their work. This paper examines several languages used for writing incident response tools and scripts, including Perl, Python, C#, PowerShell, and Go; discusses why one language may be more helpful than another depending on the use case; and looks at example code in each language.
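The "needle in a haystack" task the abstract mentions often amounts to a few lines of scripting. As a hedged example of the kind of quick IR tooling being compared (in Python, one of the languages the paper covers; the patterns and log lines are invented), here is an indicator extractor for raw logs:

```python
import re

# Regexes for two common indicator types; real IOC sets would be larger.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5":  re.compile(r"\b[a-fA-F0-9]{32}\b"),
}

def extract_iocs(lines):
    """Scan log lines and collect indicator matches, grouped by type."""
    hits = {name: set() for name in IOC_PATTERNS}
    for line in lines:
        for name, pattern in IOC_PATTERNS.items():
            hits[name].update(pattern.findall(line))
    return hits

log = ["GET /update.bin from 203.0.113.77",
       "dropped file hash d41d8cd98f00b204e9800998ecf8427e"]
print(extract_iocs(log))
```

The same few dozen lines could be written in Perl, PowerShell, or Go; the paper's point is that the choice hinges on the use case, not on any one language being universally best.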
Accessing the inaccessible: Incident investigation in a world of embedded devices By Eric Jodoin June 24, 2015
- There are currently an estimated 4.9 billion embedded systems distributed worldwide, and by 2020 that number is expected to grow to 25 billion. Embedded systems can be found virtually everywhere, in consumer products such as smart TVs, Blu-ray players, fridges, thermostats, smartphones, and many other household devices. They are also ubiquitous in businesses, where they appear in alarm systems, climate control systems, and most networking equipment, such as routers, managed switches, IP cameras, and multi-function printers. Unfortunately, recent events have taught us that these devices can also be vulnerable to malware and hackers; it is therefore highly likely that one of them will become a key source of evidence in an incident investigation. This paper introduces the reader to embedded systems technology. Using a Blu-ray player's embedded system as an example, it demonstrates the process of connecting to the serial console and then accessing data through it to collect evidence from an embedded system's non-volatile memory.
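Evidence collected over a serial console typically arrives as text, for example a bootloader memory dump, which then has to be reassembled into bytes for analysis. The sketch below assumes a U-Boot-style `md.b` dump format, which may differ from the device in the paper; the capture step (commented out) would use the pyserial library with a port name and baud rate specific to the device:

```python
def parse_hexdump(dump_lines):
    """Reassemble raw bytes from a bootloader-style memory dump captured over
    the serial console, lines shaped like '<addr>: <hex bytes>  <ascii>'."""
    data = bytearray()
    for line in dump_lines:
        _, _, rest = line.partition(": ")
        hex_part = rest.split("  ")[0]  # drop the trailing ASCII column
        data.extend(int(b, 16) for b in hex_part.split())
    return bytes(data)

# Capturing the dump itself would use a serial library such as pyserial:
#   import serial
#   console = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)  # illustrative port/baud
#   console.write(b"md.b 0x82000000 0x100\n")

captured = ["82000000: 7f 45 4c 46 01 01  .ELF.."]
print(parse_hexdump(captured).hex())  # 7f454c460101
```

Here the recovered bytes begin with the ELF magic number, the sort of artifact that turns a console transcript into usable evidence.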
Honeytokens and honeypots for web ID and IH By Rich Graves May 14, 2015
- Honeypots and honeytokens can be useful tools for examining the follow-up to phishing attacks. In this exercise, we respond to phishing messages using valid email addresses that actually received the phish, paired with deliberately wrong passwords. We demonstrate using custom single sign-on code to redirect logins that present those fake passwords, along with any other logins from presumed attacker source IP addresses, to a dedicated phishing-victim web honeypot. Although the proof-of-concept described here did not become a production deployment, it provided insight into current attacks.
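The redirect logic at the heart of that proof-of-concept can be sketched simply: if a login presents a planted honeytoken credential, or comes from a suspected attacker address, route it to the honeypot instead of the real application. The addresses, credentials, and hostnames below are invented for illustration; the paper's actual single sign-on code is not shown:

```python
# Planted credential pairs handed to phishers, and IPs seen abusing them.
HONEYTOKENS = {"jdoe@example.edu": "Spring2015!"}
ATTACKER_IPS = {"198.51.100.23"}

def login_destination(user, password, source_ip):
    """Route a login attempt: honeytoken credentials or known-bad sources go
    to the phishing-victim honeypot; everything else reaches the real app."""
    if HONEYTOKENS.get(user) == password or source_ip in ATTACKER_IPS:
        return "https://honeypot.example.edu/webmail"
    return "https://webmail.example.edu/"

print(login_destination("jdoe@example.edu", "Spring2015!", "203.0.113.9"))  # honeypot
print(login_destination("jdoe@example.edu", "RealPass#7", "192.0.2.10"))    # real app
```

The payoff is that attackers who took the bait interact with an instrumented environment, revealing their tooling and tactics without touching production accounts.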
Group Gold Papers
Endpoint Security through Device Configuration, Policy and Network Isolation By Barbara Filkins & Jonathan Risto July 15, 2016
- Sensitive data leaked from endpoints unbeknownst to the user can be detrimental to both an organization and its workforce. The CIO of GIAC Enterprises, alarmed by reports from a newly installed, host-based firewall on his MacBook Pro, commissioned an investigation concerning the security of GIAC Enterprise endpoints.