
Deep Packet Inspection Technologies

Anderson Ramos

The explosion of commercial use of the Internet created specific business and technology demands for products that would let organizations exploit the new opportunities without compromising their security. Thousands of internal networks, highly trusted by their owners, were connected to a public and loosely controlled network, exposing those organizations to a series of new security problems.

One of the first concerns was the need for a security mechanism that could support basic definitions of access control. Developing a network security policy to determine which resources could be accessed by which users, including the operations that could be performed, was always recommended as a good first step. Once an organization had this basic definition of the permissions to be enforced at the connecting point with this new external world, it was ready to implement technologies for achieving that goal.

The network security killer application of this emerging era was the firewall. Basically, a firewall can be defined as a system, formed by one or more components, responsible for network access control. These systems have used a number of different technologies to perform their operations; well-known examples are packet filters, proxies, and stateful inspection devices. In general, these technologies analyze packet information, allowing or disallowing its flow based on aspects such as source/destination addresses and ports. Some perform much more complex analysis and offer greater granularity of configuration, but the basic purpose is the same. They have achieved partial success in their objectives.

Partial success means that these technologies were able to guarantee that the multitude of ports that used to be open for communication (and thus exploitation) before the advent of firewalls were, more or less, closed. One of the key success factors here was the default-deny approach, a key security principle, correctly embedded in the design of firewall security policies. The remaining problem that most organizations are now willing to address is how secure the few communication ports still open through their firewalls really are. In other words, how do we guarantee that our few authorized channels are not used in an unauthorized way? This is far more complex.

The reason for this current concern is that, over recent years, attacks have migrated from the network level to the application level. Because firewalls were effective in blocking many ports that would otherwise be open to network exploitation, research into new attacks has concentrated on applications that remain open through most firewall security policies, focusing on protocols such as hypertext transfer protocol (HTTP), simple mail transfer protocol (SMTP), and database access protocols. Additionally, HTTP has become one of the most important paths for a number of new software-development technologies, designed to make the delivery of new Web applications easier and full of rich features that were previously unavailable.

This vast use of HTTP and the other protocols mentioned has forced most network and security administrators to create specific firewall rules allowing these types of communication in an almost unrestricted way. Several developers of applications such as instant messaging or Internet telephony have adapted them to use these open communication channels, in an attempt to evade organization-enforced restrictions and controls. Some have even adapted their code to search for and use any open port in the firewall, through approaches reminiscent of port scanners, tools historically used for network and host security evaluation and invasion, although the reasons for doing so can go beyond network security issues. 1

Network access control needs to become more granular, going beyond the basic functions provided by most technologies. The point is not whether to block or unblock the HTTP port, but to guarantee that this open port is used only for specific types of authorized HTTP traffic. This includes protection against things like:

  • Unauthorized download of mobile code, like ActiveX controls and Java applets
  • Application-level attacks against Web sites
  • Malware propagation through authorized protocols
  • Use of authorized open ports by unauthorized applications
  • Specific behaviors that could characterize an attack.

Different technologies have been used for these tasks, with limited success. Intrusion detection systems (IDSs) were one of them. Although the main purpose of these technologies was to work as an auditing tool, several vendors promised effective protection through firewall integration or active responses, such as connection resets. However, a Gartner report published in 2003,2 pointed out several fundamental issues with the use of those systems, urging customers to replace them with new emerging technologies capable of not only detecting attacks, but blocking them in real time. Basically, the key arguments were:

  • IDS cannot block attacks effectively, only detect them.
  • Their detection capabilities were also limited, with a high number of false positives and negatives.
  • The management burden is huge, theoretically demanding 24-hour monitoring of their functioning.
  • They were not able to analyze traffic at transmission rates greater than 600 Mbps.

Although the report had some flaws,3 including technical errors like the speed limit, it started a huge and passionate debate. Security managers and professionals who had invested their budgets in IDS tried to justify their decisions. Vendors went even further, attempting to discredit Gartner's arguments. Curiously, though, most vendors at that time were already offering new options in their product ranges known as intrusion prevention systems (IPSs), probably the most stable and mature technology capable of performing some of the actions demanded by the research report, which indicates that the vendors themselves were aware of some of their products' limitations. The report also mentioned another recent Gartner research document focused on a technology called deep packet inspection (DPI), which was then new and still loosely defined.

Since then, several products offering DPI capabilities have emerged. The purpose of this document is to investigate what this technology is, its application in the current network/computer security scenario, and how to decide if it is appropriate for your organization's environment.

Deep Packet Inspection Definition
Deep packet inspection (DPI) normally refers to a technology that allows packet-inspecting devices, such as firewalls and IPSs, to analyze packet contents in depth, including information from all seven layers of the OSI model. This analysis is also broader than that of common technologies because it combines techniques such as protocol anomaly detection and signature scanning, traditionally available in IDS and anti-virus solutions.

It is fair to say that DPI is a technology produced by the convergence of traditional network security approaches that used to be performed by different devices. The improvement of hardware platforms and the development of specialized hardware for network security tasks have allowed functions that used to be carried out by separate components to be carried out by just one. However, it is not possible to argue that this convergence is complete; vendors are still maturing their technologies, and there is huge room for improvement.

Due to this convergence, it is important to understand which technologies preceded DPI and what their drawbacks are, because those drawbacks, by not fulfilling all current network security needs, drove the demand for new technologies.

Understanding Previous Technologies
One of the first technologies used for network security was the packet-filtering firewall. These systems were implemented, basically, by using access control lists (ACLs) embedded in routers. Access control was one of the primary concerns in the early age of commercial Internet use in the 1990s. Because routers are the connection point between internal and external networks, their use as access control devices was very natural and appropriate.

Simple packet filters analyze each packet passing through a firewall, matching a small part of its contents against previously defined groups of access control rules. In general, their basic limitations were:

  • Because they analyze individual packets, they could not identify security violations that are only visible by screening more of the traffic flow.
  • Very little information from the packets was analyzed, preventing the identification of several problems that could only be seen at the application layer.
  • The rules were static, creating security problems for protocols that negotiate part of the communication options, like ports and connections, on the fly (the FTP service is a classic example).
  • In general, router ACLs, implemented through command-line parameters, are harder to manage than rules created in easy-to-use graphical user interfaces.

Due to those deficiencies, an alternative known as application-layer firewalls, or proxies, was developed. Designed to solve the security limitations of packet-filtering technology, proxies adopted an approach that is very effective in terms of security, but radical from the networking point of view.

Instead of analyzing packets as they cross the gateway, proxies break the traditional client/server model. Clients are required to forward their requests to a proxy server instead of the real server. After the proxy receives those requests, it will forward them to the real server only if the requests meet a predefined security policy. The real server receives the requests from the proxy, which forces it to believe that the proxy is the real client. This allows the proxy to concentrate all requests and responses from clients and servers.

Because a proxy is normally developed to filter a specific application, its security controls and mechanisms are much stronger than a packet filter's. Instead of simply allowing or denying the application, the proxy can operate with more granularity, specifying exactly which parts of the communication are allowed, which content is allowed, and so on. Using HTTP as an example, it is possible to define that users can access Web sites, but that downloading Java applets or ActiveX controls is prohibited.
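The HTTP example above can be sketched as the kind of check an application proxy performs on each server response before forwarding it. The MIME types and policy below are illustrative assumptions, not a real product's rule set.

```python
# Sketch of an application-level check a proxy can make and a packet
# filter cannot: Web browsing is allowed in general, but responses
# carrying mobile code (Java applets, ActiveX) are refused. The
# blocked content types here are illustrative.
BLOCKED_CONTENT_TYPES = {
    "application/java-archive",      # Java applets packaged as JARs
    "application/x-java-applet",
    "application/x-oleobject",       # ActiveX controls
}

def screen_response(headers: dict) -> bool:
    """Return True if the proxy should forward the response to the client."""
    ctype = headers.get("Content-Type", "").split(";")[0].strip().lower()
    return ctype not in BLOCKED_CONTENT_TYPES

print(screen_response({"Content-Type": "text/html; charset=utf-8"}))   # True
print(screen_response({"Content-Type": "application/x-java-applet"}))  # False
```

A check like this requires reassembling and parsing the application protocol, which is exactly why proxy code is costly to write, as discussed below.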

However, this new paradigm requires applications to be adapted to take advantage of its features. Clients must be aware that there is a proxy in the middle of the communication and must format their requests appropriately. Protocols and toolkits, such as SOCKS, have been developed to make this work easier. More recently, transparent proxies have been solving this issue while keeping the security capabilities of the technology.

But the worst problem was cost, which affects the use of proxy technologies in two ways. First, it is expensive and time-consuming to write code for proxy servers: the programmer must not only know everything about the protocol being "proxied," but must also write specific code to implement the necessary security controls. Second, there is a performance cost. Because connections are always re-created from the proxy to the real server and the analysis performed is more sophisticated, the performance cost is much higher than with packet filters.

Considering that those two technologies are opposites in a number of ways, an intermediate technology, marketed as stateful inspection, focused on improving the security of packet filters. The idea was to keep performance similar to packet filters while improving their security to an acceptable level. This improvement is made possible through the use of state tables. When packets are analyzed by stateful firewalls, important information about the connection is stored in those tables, improving the quality of the screening process because access control decisions consider the flow of information instead of single packets. This mechanism also allows the creation of dynamic rules, intended to permit very specific communication channels to be opened on the fly. If a protocol negotiates a connection on a random port, for example, the firewall can detect this through a full seven-layer analysis of the packet and create a dynamic rule allowing communication on that port, provided the source/destination information is correct, and for a limited time.
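The state table and dynamic rules described above can be sketched as follows. All structures and names are illustrative assumptions; real firewalls track far more per-flow detail (sequence numbers, TCP flags, timeouts per protocol).

```python
# Sketch of the state-table idea behind stateful inspection. Outbound
# connections are recorded; an inbound packet is allowed only if it
# belongs to a known flow. Recording a negotiated port mimics how a
# firewall opens an FTP data channel on the fly, for a limited time.
import time

state_table = {}   # (src, dst, dport) -> expiry timestamp

def record_outbound(src, dst, dport, ttl=60):
    state_table[(src, dst, dport)] = time.time() + ttl

def allow_inbound(src, dst, sport):
    # A reply is allowed only if it matches a live entry in the table.
    key = (dst, src, sport)   # reversed direction of the recorded flow
    expiry = state_table.get(key)
    return expiry is not None and expiry > time.time()

# Client opens a connection to an FTP server's control port...
record_outbound("10.0.0.5", "198.51.100.7", 21)
print(allow_inbound("198.51.100.7", "10.0.0.5", 21))     # True: reply to known flow
print(allow_inbound("198.51.100.7", "10.0.0.5", 21000))  # False: no matching state

# ...then the firewall sees the negotiated data port in the control
# channel and adds a short-lived dynamic rule for it.
record_outbound("10.0.0.5", "198.51.100.7", 20021, ttl=30)
print(allow_inbound("198.51.100.7", "10.0.0.5", 20021))  # True, until the TTL expires
```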

This was a huge security improvement for packet filters, though it could not solve all the security problems. However, developing this kind of "intelligence" for firewalls, adapting them to new protocols as they emerge, is much simpler and easier than developing new application proxies. This resulted in cheaper products, delivered to the market faster than proxy-based solutions, allowing companies that invested in this technology, like CheckPoint, Netscreen (now Juniper), and Cisco, to establish themselves as market leaders.

Although it represented a good improvement over packet filters, stateful inspection still lacked important security capabilities. Network access control was being performed very well, but it still could not detect attacks at the application level. Some vendors used internal transparent application proxies when their customers needed more extensive checks, but as performance needs increased, the stateful inspection/proxy combination did not scale well. Additionally, the number of network attacks was increasing dramatically, and the proxy part of this combination was not being updated to address all of them.

For this reason, many customers willing to add an additional layer of monitoring and protection acquired IDSs. From a network perspective, these systems are basically monitoring devices, although most have some firewall integration features that can provide a degree of reaction and protection. Copies of the packets crossing the monitored networks are sent to the network IDS, which analyzes this information, normally using pattern (signature) matching technologies. This approach is very similar to the one used by anti-virus software, and equally ineffective: only previously known viruses/attacks can be detected. Attempts to solve this issue using statistical analysis, defining an expected baseline and looking for deviations from it, could identify even attacks not present in the signature database, but raised false positives to unsustainable levels.
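Signature matching as performed by a network IDS reduces, at its core, to scanning payloads for known byte patterns. The signatures below are simplified illustrations, not real rules from any IDS.

```python
# Minimal sketch of signature-based detection: payloads are scanned
# for known attack patterns, so only attacks already present in the
# signature database can be detected.
SIGNATURES = {
    "sig-001 directory traversal": b"../../",
    "sig-002 sql injection probe": b"' OR '1'='1",
}

def inspect(payload: bytes) -> list:
    """Return the names of all signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(inspect(b"GET /../../etc/passwd HTTP/1.0"))  # matches sig-001
print(inspect(b"GET /index.html HTTP/1.0"))        # empty: nothing known matched
```

A brand-new exploit simply produces an empty result here, which is the fundamental weakness the protocol-anomaly approach discussed later tries to address.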

However, from a security perspective, pattern-matching approaches are even more ineffective in IDSs than in anti-virus software. Most anti-virus software can block viruses in real time once they are found, while most IDSs can only generate an alert. They can also send a command to the firewall, asking it to block the source of a just-identified attack. However, this approach has at least two serious problems:

  • Some attacks, including several denial-of-service techniques, can be performed using very few packets, disrupting their targets before the firewall responsible for blocking them receives any notification.
  • IDSs are famous for their false positives. In case of a false alarm, the firewall can block legitimate traffic, compromising the availability of the services and creating huge administrative problems.

The most logical evolution of this scenario would be to combine the stateful inspection performed by firewalls with the content inspection performed by IDSs in a single box that could identify and block attacks in real time, while improving detection capabilities to avoid the false-positive issue. In this way, the analyses done by both components would be performed simultaneously.

A single-box approach is appealing. Customers prefer a single security solution that reduces the total cost of ownership (TCO) of the system, in addition to greatly simplifying administration. Vendors would prefer to eliminate their competitors and be the only network security company present on their customers' networks. The Gartner "IDS is dead" report, as it is popularly known, merely served as the kick-off for this probable transition, as mentioned in the previous section.

Deep Packet Inspection Debut
There are two distinct but related types of products using DPI. First, there are firewalls that have implemented content-inspection features present in IDS systems. Second, there are IDS systems deployed in-line, intended to protect networks instead of just detecting attacks against them.

First, regarding firewalls that have incorporated IDS features, two key technologies make this possible: pattern (signature) matching and protocol anomaly detection. The first approach maintains a database of known network attacks and analyzes each packet against it. As previously mentioned, protection normally succeeds only against known attacks whose signatures are already stored in the database. The second approach, protocol anomaly detection, incorporates a key security principle, already mentioned in the first section, known as default deny. The idea is, instead of allowing all packets whose content does not match the signature database, to define what should be allowed based on how the protocol is specified to work. The main benefit is the ability to block even unknown attacks. Because the time window between the discovery of a new vulnerability and its exploitation by tools or worms has decreased dramatically, this ability can be considered almost indispensable nowadays.
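The contrast between the two approaches can be made concrete. Instead of searching for known-bad patterns, protocol anomaly detection defines what a legal message looks like and rejects everything else. The grammar enforced below is a deliberately tiny, assumed subset of HTTP, chosen only for illustration.

```python
# Sketch of the protocol-anomaly (default-deny) idea: a request is
# forwarded only if it conforms to the expected protocol grammar.
# Unknown attacks that violate the grammar are rejected with no
# signature required.
import re

ALLOWED_METHODS = {"GET", "HEAD", "POST"}
REQUEST_LINE = re.compile(r"^([A-Z]+) (/[^ ]*) HTTP/1\.[01]$")

def conforms(request_line: str) -> bool:
    m = REQUEST_LINE.match(request_line)
    return bool(m) and m.group(1) in ALLOWED_METHODS

print(conforms("GET /index.html HTTP/1.1"))          # True: legal request
print(conforms("GET /index.html HTTP/9.9"))          # False: anomalous version
# A malformed request used by an exploit never seen before is also
# rejected, with no prior knowledge of the attack:
print(conforms("XPWN " + "A" * 5000 + " HTTP/1.1"))  # False
```

The trade-off is that the checker must encode an accurate model of each protocol, which is substantial engineering work per protocol supported.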

Additionally, this reduction in the time frame for exploitation forces companies to pay more attention to their patch management procedures. This creates a painful dilemma: should they apply patches as soon as possible, without adequate testing, exposing themselves to availability problems arising from problematic patches, or should they test patches before applying them, exposing themselves to vulnerability exploitation during the test period? This management concern has been exploited by DPI vendors. Some claim4 that their products can protect companies from attacks, giving them the time to test patches adequately and apply them whenever possible. These claims have strong marketing appeal but reflect a poor security vision: the Internet connection is not the only source of attacks that can exploit unpatched systems, although it is the primary one.

Some well-recognized security experts5 argue that the protocol-anomaly approach is not the best implementation of default deny for network security purposes; from their point of view, proxies perform this role much better. Curiously, vendors such as CheckPoint have abandoned mixed architectures using stateful inspection and transparent application-level gateways in favor of DPI approaches.6 This may suggest that proxy-only solutions have even more problems, although that conclusion is very questionable.

Besides the firewall/IDS combination, a number of solutions marketed as IPSs also implement DPI technologies. Generally speaking, an IPS is an in-line IDS: it has almost the same capabilities, but can block attacks in real time as they are detected. Careful and conservative policies are implemented to avoid one of the key limitations of IDS systems: false positives. Using their IDS experience as a reference, several customers were reluctant to purchase IPSs, fearing that they could block legitimate traffic.

Another mechanism commonly implemented to avoid availability problems caused by IPS malfunction is the network pass-through. In case of any problem in the IPS, such as a power supply failure, the pass-through mechanism connects the network cables directly, maintaining network connectivity. Although this is a desirable feature for a device used in combination with a firewall, it should never be implemented in a firewall itself, because it violates a basic security engineering concept known as fail-safe. According to this principle, security components should fail in a way that does not compromise their security goals; in practical terms, a firewall that implements this concept should not allow any traffic when problems arise, as opposed to allowing everything.
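The two failure behaviors contrasted above can be sketched side by side. Function names and the health flag are illustrative assumptions; the point is only the difference in the failure branch.

```python
# Sketch contrasting the two failure behaviors: a pass-through IPS
# "fails open" to preserve connectivity, while a firewall observing
# the fail-safe principle "fails closed" so a fault never results in
# unfiltered traffic being forwarded.
def ips_with_passthrough(packet, healthy, inspect):
    # Fail open: if the device is down, traffic flows uninspected.
    return inspect(packet) if healthy else True

def fail_safe_firewall(packet, healthy, inspect):
    # Fail closed: if the device is down, nothing is forwarded.
    return inspect(packet) if healthy else False

block_all = lambda p: False  # an inspector that would reject this packet

print(ips_with_passthrough("pkt", healthy=False, inspect=block_all))  # True
print(fail_safe_firewall("pkt", healthy=False, inspect=block_all))    # False
```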

In general, IPSs can identify and block many more attacks than firewalls with embedded IDS functionality, but they usually lack the filtering capabilities and administration features present in products that used to be simple firewalls. The fact is that both combinations have been improved to overcome their limitations, producing very broad network security solutions. A number of new technologies are also being embedded in these new products. Some examples include:

  • Anti-spam filters
  • Malware analysis
  • URL filtering
  • Virtual private networks
  • Network address translation
  • Server and link load balancing
  • Traffic shaping.

Besides the numerous benefits of the single-box approach, its drawbacks from the security point of view should not be ignored. Since the early days of network security, defense in depth has been an almost unanimous recommendation. The combination of multiple security controls that complement each other, following solid architectural security principles, increases security and creates resiliency, allowing a longer time frame for detecting and responding to attacks before they reach the most valuable information assets, usually the internal servers.

Additionally, there is a second, no less relevant, problem related to availability: single-box designs inherently create single points of failure. Fortunately, this problem is not so hard to solve, and several vendors offer hot-standby and cluster options for their DPI solutions.

Other Issues
The initial convergence of technologies that produced the first so-called DPI devices involved a paradox. Part of the convergence was made possible by new hardware improvements; however, hard-coding security analysis in chips would prevent vendors from responding quickly and effectively to new demands. This supposed limitation was heavily exploited by vendors producing software-based solutions. 7

At the same time, most of these vendor responses are, basically, updates to their signature databases. A great part of these updates would be unnecessary with a truly effective, well-implemented default-deny approach using protocol-anomaly technologies. This raises the question of whether the signature approach is more interesting to vendors than to their customers, who must depend on software subscriptions and update services to keep their infrastructures running. Formal research on the network attacks discovered in the last few years could help measure the real effectiveness of the protocol-anomaly approach and answer this question more precisely.

Nevertheless, innovative approaches in network hardware appliances seem to be producing solutions to this dilemma, allowing the creation of devices with good performance that retain the ability to receive updates from external sources. This is being achieved through packet analysis optimization methods, which unify hardware and software technologies to parallelize filters and verifications.

Another architectural issue, but a broader one, is that the migration of IDS-like technologies to access-control devices has almost totally ignored other very relevant aspects of intrusion detection as a whole: those related to host-based IDSs and the correlation of their events with network-captured data. Several vendors of DPI technologies offer no host-based protection or even detection systems. The path IDS systems had been following toward improved detection capabilities was almost abandoned.

Some attack behaviors can only be detected, or at least detected more precisely, by correlating host- and network-captured data. Host-based systems can understand local vulnerabilities and analyze the consequences of an attack, beyond merely detecting that a packet was malicious.

This kind of feature is very desirable, especially considering that secure application protocols, designed to provide end-to-end security, seem to be a trend. Furthermore, any type of encryption at the transport or network layer would compromise almost every capability of DPI technologies except basic filtering.

This phenomenon, among other things, has led to the popularization of a radical security approach known as de-perimeterization. This concept, also known as boundary-less information flow, is not new, but it is now being seriously researched and supported by a number of companies and vendors worldwide. 8 The idea is to gradually remove most perimeter security barriers and focus on secure protocols and data-level authentication, making extensive use of encryption to achieve these goals.

Only the future will show whether totally removing perimeters is a reasonable approach, but elements that support the de-perimeterization concept already exist today. Most VPN clients, for example, have personal firewall capabilities whose objective is to protect laptops that are frequently connected directly to the Internet when they leave the corporate network. Critical servers often have host-based IDS solutions that can, in a number of ways, protect against some attacks in real time as well as detect them, working as what could be called a host-based IPS.

These examples may be clear signals that a multilayer approach, one that also protects hosts using technologies that used to be available only for network security, will prevail in the medium and long terms. Integrated management solutions will probably be implemented to administer those layers in a centralized way, reducing TCO and improving the effectiveness of the solutions.

DPI technologies are based on a number of older approaches that used to be implemented by different devices. Hardware and software advances have allowed the convergence of those approaches into single-box architectures that increase the security they provide and make their administration easier.

However, single-box architectures lack defense in depth, a key network security concept that has been used for years, which can lead to unnecessary exposure. Additionally, they create single points of failure that can compromise network availability. Nevertheless, both problems can be solved using technology widely available from most vendors and correct security design principles, implementing network perimeters according to the specific security needs of each network. The popularization of protocols with native encryption reduces the effectiveness of such solutions, but does not make them dispensable. Integrated approaches using intrusion prevention controls, which normally include DPI, at both host and network levels will probably be the best approach in the medium and long terms.

1. Skype Technical FAQ. (accessed October 27, 2006).
2. Pescatore, J., Stiennon, R., and Allan, A. Intrusion detection should be a function, not a product. Research Note QA-20-4654, Gartner Research, July 2003.
3. Ellen Messmer. Security Debate Rages. Network World, October 6, 2003, (accessed October 27, 2006).
4. Tipping Point Intrusion Prevention Systems. (accessed October 27, 2006).
5. Ranum, M. What is "Deep Inspection"? 2005.
6. Check Point Software Technologies Ltd, Check Point Application Intelligence, February 22, 2006, (accessed October 27, 2006).
7. Check Point Software Technologies Ltd, The Role of Specialized Hardware in Network Security Gateways, (accessed October 27, 2006).
8. The Open Group, The Jericho Forum. (accessed October 27, 2006).

From Information Security Management Handbook, Sixth Edition, Volume 3, edited by Harold F. Tipton and Micki Krause. New York: Auerbach Publications, 2009.


© Copyright 2009-2011 Auerbach Publications