The Evolution of Cybersecurity and the Rise of Threat Hunting

By Rohit Dhamankar

Security approaches need to evolve. Most IT and security professionals still believe the best defense against cybercrime is a hardened network perimeter to keep attackers out, combined with antivirus software for endpoint protection.

The need for security approaches that improve response time and enable faster action has never been more apparent, as threats (unfortunately) evolve faster than security strategies. Countless breaches worldwide have proven that, regardless of financial or human resources, we must change our security operations mindset to protect critical assets, reduce attacker dwell time and limit risk.

First Wave: Deep Packet Inspection Technology Shift

Intrusion detection/prevention technology first made its mark a decade ago, and the security industry considered those systems a viable solution. Their promise of examining the contents of packets and streams at the bit and byte level made it feasible to detect and stop attacks at the network level.

Protocols were less complex then and, more importantly, many packets were not encrypted. It quickly became apparent that a chain of firewall rules alone was not sufficient to stop an attack from propagating through the network, or to protect a web, mail or DNS server.

While organizations were scrambling to fold intrusion detection/prevention technology into their IT strategies, the network perimeter was still well defined. As a result, they found that:

  • Sensors could easily be deployed, primarily at the network perimeter, to inspect traffic entering and leaving the network.
  • Attacks against operating systems and applications, and their variations, could be handled with protocol decoders, regular expressions and string matching.
  • Attacks that require statistical observation, such as sweeps and DDoS across networks, could also be handled at the sensor level.

In addition, advances in hardware (FPGAs and ASICs) made it possible to build high-performance appliances that could block malicious packets and streams without degrading network performance. A new term was soon coined to describe the technology behind these appliances: Deep Packet Inspection (DPI).
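To make the string- and regex-matching portion of this approach concrete, here is a minimal sketch in Python. The signature names, patterns and sample payload are entirely hypothetical; real DPI engines add protocol decoders, stream reassembly and hardware acceleration on top of this kind of matching.

```python
import re

# Hypothetical signature set: each entry pairs a rule name with a byte-level
# pattern, loosely modeled on how an IDS/IPS signature might look.
SIGNATURES = {
    "http_cmd_injection": re.compile(rb"(?:;|\|\|?|&&)\s*(?:cat|wget|curl)\b"),
    "shellcode_nop_sled": re.compile(rb"\x90{16,}"),
    "sql_union_select": re.compile(rb"(?i)union\s+all\s+select"),
}

def inspect_payload(payload: bytes) -> list[str]:
    """Return the names of all signatures that match the packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(payload)]

if __name__ == "__main__":
    # Fabricated HTTP request carrying a command-injection attempt.
    sample = b"GET /index.php?id=1;cat%20/etc/passwd HTTP/1.1\r\nHost: example.com\r\n\r\n"
    print(inspect_payload(sample))  # -> ['http_cmd_injection']
```

Even this toy version hints at the operational burden the article describes: the value of the appliance is only as good as the signature set, which must be updated constantly.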

Early DPI technology brought a radical shift in how network security was architected. Over time, other forms of DPI have evolved, such as performing analytics on metadata obtained from DPI, correlating that metadata with sources of threat intelligence, and using DNS data to find compromised hosts. DPI remains a very popular first line of defense for many organizations, and the technology has been successfully integrated into fundamental networking components, such as enterprise and home-grade switches, as well as solutions with 100G+ capacity.

The security world continues to look to next-generation DPI technology to keep network traffic clean. There is an ongoing need to update signatures on many of these appliances, as new vulnerabilities are discovered weekly. Threat research teams continue to study publicly released vulnerabilities and discover many more on their own or through bug bounty programs.

So, why isn't DPI putting a stop to all breaches? Why isn't it immediately alerting us to breaches, so the attacker isn't lurking on the network for over six months (on average) at a time? Simply put, the fundamental assumptions made about deployment and monitoring with DPI aren't panning out because:

  • There is a lot more encrypted traffic on the networks, with HTTPS now the de-facto standard for all sensitive communications.
  • Databases use encrypted connections, and many other protocols are being encrypted to satisfy compliance requirements.
  • Activity at the core of the network, where lateral movement takes place (often over Windows protocols), has never been a strong point of most DPI appliances.
  • The ways in which an attack can evade detection are well documented, and the tools required for those evasions are publicly available.

As a result, today's best DPI can be described as a technology that stops most common attacks. A small percentage of attacks still make their way into an IT environment, and when successful, they lead to the breaches we see in the news.

Second Wave: Big Data and Analytics Attempt to Fill the Gap

Another strategy employed by the security industry involves the use of big data analytics to find compromises lurking in the network that have bypassed DPI technology.

SIEM (Security Information and Event Management) vendors have made it feasible to combine large sets of data from various security products, such as firewall logs, IDS/IPS alerts, web proxy logs, threat intelligence data and endpoint logs, into a correlated data set. There is a continuous effort to build analytics around the various stages of the "kill chain" in the hope that a breach can be discovered much sooner.
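As a rough illustration of the kind of correlation a SIEM performs, the Python sketch below groups normalized events by internal host and flags hosts whose traffic touches known-bad threat-intelligence indicators. The field names, log records and indicator list are all hypothetical and not tied to any particular SIEM schema.

```python
from collections import defaultdict

# Hypothetical normalized events from two log sources, plus a small
# threat-intelligence set of known-bad IPs and domains.
ids_alerts = [
    {"host": "10.0.4.17", "dest_ip": "203.0.113.9", "signature": "sql_union_select"},
]
proxy_logs = [
    {"host": "10.0.4.17", "domain": "bad-domain.example", "bytes_out": 48_213},
    {"host": "10.0.7.2", "domain": "update.vendor.example", "bytes_out": 1_102},
]
threat_intel = {"203.0.113.9", "bad-domain.example"}

def correlate(events_by_source):
    """Group events per internal host and flag hosts touching known-bad indicators."""
    per_host = defaultdict(list)
    for source, events in events_by_source.items():
        for event in events:
            per_host[event["host"]].append((source, event))

    findings = {}
    for host, events in per_host.items():
        hits = [
            (source, event) for source, event in events
            if event.get("dest_ip") in threat_intel or event.get("domain") in threat_intel
        ]
        if hits:
            findings[host] = hits
    return findings

if __name__ == "__main__":
    for host, hits in correlate({"ids": ids_alerts, "proxy": proxy_logs}).items():
        print(f"{host}: {len(hits)} correlated indicator hit(s)")
```

The weakness the article goes on to describe is visible even here: a correlated hit says only that a host touched something suspicious, not whether the host itself is actually compromised.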

Security analysts at various MSSP and MDR providers are even using AI, such as IBM's Watson, to collect information about an IP address, domain name or threat actor so they can triage alerts from these analytics for true positives. However, one problem analysts frequently encounter is the prevalence of false positives, which monopolize the human and financial resources spent sifting through alerts to find the actual issue.

Ultimately, one fundamental question remains, "Is a particular endpoint compromised?" Neither DPI nor SIEM data analytics can conclusively answer this question in many instances.

An accurate snapshot can only be obtained through comprehensive examination of each endpoint. There is no substitute for an aggressive threat hunting strategy, no matter how many endpoints there are on a network.

Third Wave Required: Threat Hunting Using Deep Host Inspection (Forensic State Analysis)

A conclusive answer can only be gleaned by deeply examining the question, "What is the current state of an endpoint?" There is no substitute for performing that examination, especially in a large enterprise, where the question must be answered for a large number of endpoints.

At a fundamental level, Forensic State Analysis (FSA) enables you to examine:

  • The volatile memory of the endpoint, to see whether malicious code has been injected into common processes.
  • Any kernel modifications or hooks that may have been made to the system.
  • Any persistence mechanisms a malicious program uses to restart itself (a minimal sketch of one such check follows this list).
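As one concrete example of the persistence check mentioned in the last bullet, the sketch below (Python standard library, Windows-only) enumerates the common auto-start "Run" registry keys that malware frequently abuses. This covers just one of the many persistence locations and mechanisms a real FSA tool would examine.

```python
import winreg  # standard library; available on Windows only

# Common auto-start ("Run") keys frequently abused for persistence.
RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"Software\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_CURRENT_USER, r"Software\Microsoft\Windows\CurrentVersion\Run"),
]

def enumerate_run_entries():
    """Yield (key_path, value_name, command) for every auto-start entry found."""
    for hive, key_path in RUN_KEYS:
        try:
            with winreg.OpenKey(hive, key_path) as key:
                index = 0
                while True:
                    try:
                        name, command, _ = winreg.EnumValue(key, index)
                        yield (key_path, name, command)
                        index += 1
                    except OSError:
                        break  # no more values in this key
        except OSError:
            continue  # key does not exist or is not readable

if __name__ == "__main__":
    # In practice these entries would be compared against a known-good baseline
    # or reputation data; here they are simply printed for review.
    for key_path, name, command in enumerate_run_entries():
        print(f"{key_path}\\{name}: {command}")
```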

Forensic State Analysis of an endpoint enables "Deep Host Inspection," analogous to "Deep Packet Inspection," and it is the only way to conclusively determine whether or not a host is compromised. Embracing the FSA approach is the paradigm shift required to contain and stop the data breaches making daily headlines.

It is FSA's Deep Host Inspection capability that helps analysts at MSSP and MDR providers tie network anomaly indicators to conclusive proof of a compromise on a host system.

Technology that employs FSA to proactively hunt for threats reduces the burden of false positives from SIEM systems and lets an enterprise focus on prioritizing remediation activities. As more communication leaving endpoints is encrypted, DPI will become less and less relevant. In an extreme scenario, the only use for DPI would be detecting attacks, such as DDoS, that require correlation across a number of endpoints, while FSA-based technologies do the heavy lifting.

Beyond improving monitoring and hunt processes, FSA enables entirely new use cases:

  • Laptops, mobile devices and other transient systems not previously under management can now be validated as they come onto the network.
  • Systems without endpoint monitoring (due to policy, mismanagement or tampering) can be identified and periodically assessed.
  • For organizations that don't have enough historical logs or ability to convert big data into definitive action, FSA provides a huge bang for the buck.
  • For consultants and IR professionals, FSA is the fastest and easiest way to perform a compromise assessment or threat hunting engagement service. Using an agentless method negates the need for most change management processes, simplifying engagements.

Hunting for the Long Haul

One thing is certain: hackers will continue to evolve their techniques, and organizations must acknowledge that. Tools such as Gargoyle have already been released into the public domain to try to evade the new hunting technologies.

It has become imperative for organizations to embrace the "Zero Trust Model." Threat hunting can't be a one-time exercise; instead, organizations must continuously verify endpoints to determine whether they've been compromised, so quick action can be taken to limit damage and restore network integrity when a threat is detected.

About the Author

Rohit Dhamankar is Vice President, Product for Infocyte. He is responsible for defining Infocyte's product strategy and roadmap. He brings more than 15 years of security industry experience across product management, threat research, technical sales and customer solutions. Contact pr@infocyte.com for more information. @InfocyteInc
 
© Copyright 2017 CRC Press