Information Security Today Home


Security Event Management

by Glenn Cater

In addition to traditional security devices such as firewalls and intrusion detection systems, most systems on a typical network are capable of generating security events. Examples of security events include authentication events, audit events, intrusion events, and anti-virus events; these events are usually stored in operating system logs, security logs, or database tables.

In many organizations, security policies or business regulations require that security events be monitored and that security logs be reviewed to identify security issues. Information captured in security logs is often critical for reconstructing the sequence of events during investigation of a security incident, and monitoring security logs may identify issues that would otherwise be missed. The problem is that the amount of information generated by security devices and systems can be vast, and manual review is typically not practical. Security event management (SEM), also referred to as security information management (SIM), aims to solve this problem by automatically analyzing all that information to provide actionable alerts. In a nutshell, security event management deals with the collection, transmission, storage, monitoring, and analysis of security events.


When implemented correctly, a security event management solution can benefit a security operations team responsible for monitoring infrastructure security. Implementing SEM can relieve much of the need for hands-on monitoring of security systems such as intrusion detection systems, which typically entails staring at consoles or logs for lengthy periods. This allows the security monitoring team to spend less time monitoring consoles and more time on other tasks, such as improving incident response capabilities.

This improvement is achieved by implementing rules in the SEM system that mimic the know-how and methods used by a security practitioner when reviewing security events on a console or in a log. The SEM system can even go beyond this and look for patterns in the data that would not be detected by human analysis, such as "low and slow" (deliberately stealthy) attacks. Building this intelligence into the system is not a trivial task, however, and it can take many months to start realizing the benefits of a SEM system.

When planning a security event management solution, the following issues should be considered:

  • Which systems should be monitored for security events?
  • Which events are important and what information should be collected from logs?
  • Time synchronization, time zone offsets, and daylight savings
  • Where, how, and for how long should the logs be stored?
  • Security and integrity of the logs during collection and transmission
  • Using the SEM system as a system of record
  • How to process security events to generate meaningful alerts or metrics?
  • Tuning the system to improve effectiveness and reduce false positives
  • Monitoring procedures
  • Requirements for choosing a commercial security event management solution

The remainder of this chapter discusses the factors associated with planning and implementing a security event management (SEM) system, and factors to consider when purchasing a commercial SEM solution.

Selecting Systems and Devices for Monitoring

Systems or devices to be monitored will typically fall into one of three categories:

  1. Security systems: includes systems and devices that perform some security function on your network. For example, authentication systems, firewalls, network intrusion detection and prevention systems (IDS/IPS), virtual private network (VPN) devices, host-based intrusion detection systems (HIDS), wireless security devices, and anti-malware systems.
  2. Business-critical systems: includes systems that are critical to business operations or that store or process high-value business data. When establishing which business systems are most critical, try to determine what the business impact would be if the system were unavailable or its data were compromised.
  3. Critical infrastructure systems: includes those systems that are important for running the network. For example, mail servers, DNS servers, Web servers, and authentication servers. When establishing which infrastructure systems are most critical, try to determine what the business impact would be if the system was unavailable. This category of system also includes more traditional network devices such as routers, switches, and wireless network devices.

Because budgets, time, and resources are not unlimited, you will have to do some up-front work to define the set of systems that should be monitored by the SEM system. It is a good idea to start with a risk assessment to determine which systems are most important to your business. Each of the categories (security, business and infrastructure) above should be taken into account during the assessment. If regulatory requirements are a driving factor, then those requirements will help to define which systems should be monitored.

When ordering the priority in which monitoring should be implemented, take into account the following:

  • The criticality of the system to the business. Critical systems that process high value data will have a higher priority.
  • Risk of inappropriate access. Internet facing systems or systems that process information from untrusted networks should have a higher priority.
  • The "security value" of the available events. If a security system generates events that provide more value than another system, it makes sense to prioritize those first. For example, an IDS system typically generates more valuable information than a firewall.

Determining Which Security Events Are Important

Security logs allow administrators or security personnel to proactively identify security issues or to backtrack through the timeline of events to investigate a security incident after it has occurred. Normally, a company's security policy will outline which security events need to be logged and what the requirements are for storage and review of those events, so it is likely that some or all systems are already configured to log security events.

It is important to perform a review for each type of device that will feed into the SEM system to identify which security events are important. Administrator manuals should provide details on the logging capabilities of a device, although manual review of log samples is recommended to determine which events should be logged.

During this review you will probably find that many of the events being logged do not provide that much value. For example, perimeter firewalls are always dropping packets on their external interfaces due to Internet "noise." Although this information might be useful in rare cases, it is much more useful to know which connections made it through your firewall, or if a connection was allowed somewhere it was not supposed to be allowed. When planning an SEM system, unimportant events like these can be filtered or suppressed so that only more important events are collected and analyzed. This has the advantage of reducing the processing and storage needs of the SEM system.

Use the following checklist when reviewing the logging capabilities for each type of device:

  • Review the manual that describes the logging capabilities.
  • Obtain samples of logs from the device.
  • Ensure that events which must be logged because of security, regulatory or business requirements are included in the log configuration.
  • For other types of events, assess the value of including that type of event in the log configuration.

Some events do not provide much value and can probably be ignored. The overall value of the SEM system is affected by the value of the data it processes and stores, so ensure that valuable data is not missed because of an incorrect logging configuration.

After the review is completed, standard logging configurations can be created for each type of device. Standardization is important to ensure that devices are logging common information. The standard logging configurations can be included with the organization's security requirements, and can be rolled out across all devices during implementation of the SEM system.

Time Synchronization, Time Zones, and Daylight Savings

In addition to defining a standard logging configuration, it is also important to ensure that all monitored devices and systems, and especially the SEM servers, are synchronized with a reliable and accurate time source. For smaller organizations, public Network Time Protocol (NTP) servers can be used for this purpose; lists of public NTP servers are available on the Internet. It is good etiquette to limit usage of public servers and to notify the hosting organization before using their time servers. Larger organizations can set up local NTP servers that are synchronized with public NTP servers. To avoid having to change server names across many devices if authoritative NTP servers change, standard DNS aliases should be created for the time servers and used in lieu of the real server names.

For systems that are geographically dispersed across time zones, time zone offsets become an issue. Even if the systems are all located in the same time zone, it is important to be aware of the time zone so that there is no confusion when presenting logs to third parties such as law enforcement. Ideally, timestamps on all logs should be converted to Universal Time (UTC), as this eliminates the possibility of confusion. Alternatively, the time zone offset can be stored with the logs; for example, -0500 for Eastern Standard Time (meaning five hours behind UTC). Time offset changes due to daylight savings time are also something to be aware of, and there are a couple of ways to deal with the issue. For monitored systems where having local time is important, the time zone and time zone offset can be set as normal on the system; when logs are collected, the collection agent can note the time zone and include it with the logs. Another approach is to create a database on the logging servers that contains the time zone information for each system. The time zone information can then be used during conversion or preprocessing of the data.
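The offset bookkeeping described above can be sketched with Python's standard datetime module; the timestamp and the Eastern Standard Time offset here are purely illustrative:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical raw log timestamp, recorded in Eastern Standard Time (-0500).
raw = "2009-03-01 14:30:00"
offset = timedelta(hours=-5)  # noted by the collection agent alongside the log

# Attach the recorded offset, then normalize to UTC for storage.
local = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S").replace(
    tzinfo=timezone(offset))
utc = local.astimezone(timezone.utc)

print(utc.isoformat())  # 2009-03-01T19:30:00+00:00
```

Storing the normalized UTC value, and converting back to each analyst's local time only for display, gives the behavior described in the next paragraph.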

Possibly the easiest way to deal with the time zone problem is to set the time zone on monitored systems to Universal Time (UTC), then as long as administrators know that the time zone is UTC it becomes a non-issue. This might not be feasible for all systems, but it might work for certain devices, such as routers or intrusion detection systems, that are managed by network operations or security operations teams. Something to note is that although the data is stored with UTC timestamps, it can be shown in reports or on screen as local time with a simple conversion. This is beneficial if personnel are also spread out geographically because timestamps are shown to them in their local time.

Centralized Logging Architecture

Commercial SEM systems all have their own solutions for collection, processing, and storage of security events. Generally, however, the approach is to centralize these functions so that security events are forwarded to centrally managed, dedicated SEM servers. There are many advantages to this approach, such as centralized backups, searching, and analysis capabilities. For scalability, the SEM servers can be organized in a hierarchical manner, with local SEM servers situated near the monitored systems. The function of local SEM servers is to collect, process, and queue events for transmission to the next tier. Figure 1 depicts a hierarchical system with local SEM servers and a master SEM server.

The primary requirement of the master SEM server is plenty of local storage (hard disk, optical disk, tape). If searches, analysis, or other processing is performed on this server, it also needs fast CPUs, RAM, and disk. Local SEM servers will have leaner specifications because they do not need to store or process as much information. In more complex environments, a relational database (RDBMS) is typically used to store security events. Relational databases organize and index security logs, alerts, and other information for rapid searches and report generation. Commercial SEM systems use databases to organize and store security events for analysis, reporting, and display.


Figure 1. Centralized logging architecture.

After security events reach the central SEM server, they will be stored on disk for some period of time. How long the logs are on disk depends on the size of the logs, budget, security requirements, and business requirements. Typically, logs will be stored on disk ("online") for a few weeks or months, and this is mostly dependent on how much disk space is available. It is advantageous to keep logs on disk because this allows for convenient access to the data, and all operations such as searching will be quicker. There might be a security requirement to store logs in a read-only form in which case a write-once, read-many (WORM) form of media such as optical disk will be necessary.

Encryption may also be a requirement, in which case encryption software or a hardware encryption solution will also be necessary. Online storage is usually at a premium, so periodically the logs will need to be archived to cheaper offline storage such as tape and removed from disk to make space for newer logs.

To save disk or tape space in long-term storage, compression techniques such as GZip or Zip will maximize the amount of data that can be stored. Short-term "online" storage should remain uncompressed to improve searching or processing of the data. For example, there may be one month of "online" uncompressed data on disk, another five months of compressed data on disk for quick access, then up to seven years of compressed data archived onto tape. These periods are only an example, and should depend upon business and security policies and the amount of available disk space. Batch jobs can be set up to periodically compress, archive, and remove old data from storage.
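Such a batch job might look like the following sketch; the directory name and retention period are assumptions for illustration, not recommendations:

```python
import gzip
import os
import shutil
import time

# Illustrative archival job: compress logs older than `online_days`, then
# remove the uncompressed originals to free online storage.
LOG_DIR = "/var/log/sem"   # hypothetical online log directory
ONLINE_DAYS = 30           # keep this many days uncompressed

def archive_old_logs(log_dir=LOG_DIR, online_days=ONLINE_DAYS):
    cutoff = time.time() - online_days * 86400
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if not name.endswith(".log") or os.path.getmtime(path) > cutoff:
            continue  # not a log, or still within the online window
        with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
            shutil.copyfileobj(src, dst)   # compress in place as .gz
        os.remove(path)                    # drop the original
```

A job like this would typically run nightly from cron; a second job could move the oldest .gz files off to tape.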

An organization's data retention policies should dictate how long information such as logs must be stored, and what requirements there are for storage and disposal of the information. If there is no data retention policy, then one needs to be defined so that information is kept for as long as it is needed, but no longer than necessary. There may be local legal or regulatory requirements that dictate a minimum term for which information must be stored (often up to seven years, depending on the type of information).

The security of the SEM servers is very important. These servers need to be hardened and locked down to expose only the minimum services to the outside. It is a good idea to firewall these servers from the rest of the network, or to utilize the built-in firewall capability of the operating system to limit access to the servers. When building an SEM server, the following steps should be performed:

  • Implement standard operating system security hardening techniques.
  • Limit services exposed and listening on the network.
  • Limit access to the server only to the administrators or security personnel that require access.
  • Perform periodic network and host-based vulnerability scans on the SEM servers.
  • Use external or built-in firewalls to limit connectivity to and from known hosts only.
  • Ensure that the server is synchronized to a reliable and accurate time source (such as a public NTP server).

To avoid having to change server names across many devices if logging servers change, standard DNS aliases should be created for the SEM servers and used in lieu of the real server names.

Integrating Systems and Collecting Security Events

Commercial SEM systems typically provide "agents" or other mechanisms to securely gather security events or logs from systems, but it is possible that the SEM system does not have an agent or mechanism for every type of system in a network. It is also possible to entirely roll your own SEM system, so some techniques are presented here for gathering and transmitting security logs in a secure manner. Because it is important to maintain the integrity of the security logs, care must be taken in choosing methods for collection and transmission, and methods used must meet the organization's security requirements.

There are three general methods for collection of logs or events. Commercial SEM systems typically use all three approaches, depending on the type of system or device that is being monitored:

  1. Direct transmission of events to the SEM servers, for example via RADIUS accounting or SNMPv3 traps. Direct transmission is a good method if the device supports it and the mechanism is appropriately secure.
  2. Agent-based collection and transmission of logs or events. A software agent runs continuously or periodically on the monitored system and sends new security events over to the SEM servers.
  3. Server-based collection of logs from monitored systems. A SEM server will periodically poll the monitored systems for new security events. This requires that the SEM system has an appropriate level of permission on the target system.

The method chosen will depend on the capability of the target system and security requirements. For example, hosts located within a DMZ (demilitarized zone) usually have strict security policies applied to them, and outbound data connections to the internal network might not be allowed. In this situation, a server-based polling mechanism is probably the best approach if the SEM server is not located within the DMZ.
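The agent-based approach (method 2 above) can be sketched in a few lines. This is a minimal illustration, not a production agent: the transport back to the SEM server is left abstract, and the offset would be persisted between runs.

```python
import os

def collect_new_lines(log_path, offset):
    """Return (new_lines, new_offset) for everything appended since `offset`."""
    size = os.path.getsize(log_path)
    if size < offset:
        offset = 0  # log was rotated or truncated; start over from the top
    with open(log_path, "rb") as f:
        f.seek(offset)          # skip everything already collected
        data = f.read()
    lines = data.decode("utf-8", errors="replace").splitlines()
    return lines, offset + len(data)

# Usage: the agent stores `offset` between runs and forwards `lines` to the
# SEM server over one of the secure transports discussed in this section.
```

Tracking a byte offset (rather than re-reading the whole file) keeps repeated collections cheap and avoids sending duplicate events.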

Generally, encrypted or authenticated connections should be used to transmit events between devices and the SEM servers to maintain integrity of the logs; however, this is not always possible. Following are various options for gathering events (see Figure 2).

  • SSL (Secure Sockets Layer) or TLS (Transport Layer Security). For example, Web servers can be used to "serve" logs to trusted hosts via an SSL connection.
  • SCP (Secure Copy) or SFTP (SSH File Transfer Protocol). SCP and SFTP are simple protocols that can be scripted into batch jobs.
  • IPSec connections or tunnels between systems. IPSec can be used to secure specific connections or all traffic for a host.
  • VPN tunnels. VPN tunnels can be used if the target system and SEM server are far apart, or if the target system does not support any other method of transmission.
  • RADIUS accounting. RADIUS accounting is a good option that is supported by many network devices.
  • SNMPv3 traps, which are common with network devices. SNMPv1 is not encrypted so its use is not recommended.
  • Encrypted file transfer over FTP (using PGP or another file encryption tool). This is another option that can be scripted for use in batch jobs.
  • Secured database connections can be used to read events directly from logs stored in databases.
  • Syslog-ng combined with stunnel. Standard syslog uses cleartext UDP packets, so security and integrity are difficult to maintain. Syslog-ng can use TCP and can be combined with stunnel to transmit logs securely.
  • Native authenticated file sharing mechanisms, such as CIFS (Windows) with appropriate security applied. NFS could be used if secured appropriately. This can be a simple solution if the target system supports it.
  • E-mail alerts sent directly to the SEM server. Often anti-virus, IDS or other systems have the ability to send alerts via e-mail. The SEM server can be configured to receive and process e-mail alerts via SMTP. Although not the most secure method it can be convenient.
  • Third-party monitoring solutions, typically used to monitor and manage the network, have the ability to gather logs from systems. These systems can be configured to send logs to a SEM system for analysis.
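As a sketch of the syslog-ng option above, the following fragments forward local syslog over TCP into a TLS tunnel. The hostnames, ports, and label names are placeholders, not values from this chapter, and exact directive syntax varies by syslog-ng and stunnel version:

```
# syslog-ng on the monitored host: forward local messages to a local
# stunnel endpoint over plain TCP (stunnel adds the encryption).
source s_local { system(); internal(); };
destination d_sem { tcp("127.0.0.1" port(5140)); };
log { source(s_local); destination(d_sem); };
```

```
; stunnel on the monitored host: wrap the TCP stream in TLS on its way
; to the SEM server (sem.example.com is a placeholder).
[syslog-tls]
client = yes
accept = 127.0.0.1:5140
connect = sem.example.com:6514
```

A matching stunnel instance (or a TLS-capable listener) on the SEM server terminates the tunnel and hands the stream to the collector.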

Encryption keys used with SSL, SCP, or other connections should be stored securely and in accordance with security policies. Because security log collection is almost always automated, the agents or batch jobs that perform collection and transmission need to have access to the encryption keys. Possibly the cleanest way to do this is to run the agents with a non-privileged account with just enough permission to read the logs and to access the encryption keys. If security requirements do not prohibit it, the encryption keys can be stored without passwords but with file-level security so that only the agent is allowed to access them.

Each system (including the SEM servers) should have a unique key pair so that compromise of one system does not compromise the whole SEM infrastructure. There are other ways to provide automated access to encryption keys that may be more secure but will also be more difficult to automate and maintain.

Because log files tend to be large, it is beneficial to use compression techniques such as GZip or Zip before encryption or transmission. Text files will usually compress to a fraction of their original size, which saves disk space and network bandwidth. Processes can compress data before encryption and transmission, and uncompress data on the other side after it has been received and decrypted. Compression should always be done before encryption because the random nature of encrypted data makes compression ineffective.
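A quick experiment illustrates why compression must come before encryption: plaintext logs compress well, while high-entropy data (random bytes standing in for ciphertext) does not:

```python
import os
import zlib

# A repetitive plaintext log sample versus high-entropy bytes of equal size.
log_text = b"Oct 12 14:31:07 host1 sshd[2417]: Failed password for admin\n" * 500
random_bytes = os.urandom(len(log_text))  # stands in for encrypted output

compressed_text = zlib.compress(log_text)
compressed_random = zlib.compress(random_bytes)

print(len(log_text), len(compressed_text))        # large reduction
print(len(random_bytes), len(compressed_random))  # little or no reduction
```

Encrypted output is statistically indistinguishable from random data, so a compressor finds no redundancy to exploit; compressing first preserves the savings.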

Whatever collection and transmission mechanisms are used, they should have fault tolerance built in to detect and recover from failures such as system outages, network outages, or insufficient disk space. This is important to ensure integrity and completeness of the collected events.


Figure 2. Secure transmission of data.

Using the SEM System as a System of Record

Because a SEM system collects and stores security logs from many devices across the network, it can be implemented as the "system of record" for security logs. This means the SEM system will be considered the definitive and authoritative source for security logs for the organization. This distinction places additional requirements on the system because it becomes important to ensure the integrity and timeliness of data feeds, so that the SEM system has complete, accurate, and up-to-date logs. Access to the information should be strictly controlled via approved mechanisms, and updates to the information should be logged so that the integrity of the data can be audited. Cryptographic checks such as hashes or digital signatures can help to ensure the integrity of the data from collection through to storage.
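The cryptographic integrity checks mentioned above can be illustrated with a simple hash chain. This is a sketch using SHA-256 with an arbitrary seed, not a prescribed design: each record's hash covers the previous hash, so tampering with any record breaks every subsequent link.

```python
import hashlib

def chain_hashes(records, seed=b"sem-chain-v1"):
    """Return [(record, chained_hash_hex), ...] linking each record to all
    records before it."""
    prev = hashlib.sha256(seed).digest()
    chained = []
    for record in records:
        prev = hashlib.sha256(prev + record.encode("utf-8")).digest()
        chained.append((record, prev.hex()))
    return chained

logs = [
    "14:30:01 host1 login failure for admin",
    "14:30:05 host1 login failure for admin",
    "14:30:09 host1 login success for admin",
]
chain = chain_hashes(logs)

# Editing the middle record changes its hash and every hash after it,
# so an auditor recomputing the chain detects the tampering.
tampered = chain_hashes([logs[0], logs[1] + " (edited)", logs[2]])
print(chain[2][1] == tampered[2][1])  # False
```

Signing the final chain value periodically would let the SEM system prove that the stored logs have not been altered since collection.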

Events, Alerts, and the Rule System

As discussed, "events" are the individual log messages gathered from systems and devices, such as firewalls, intrusion detection systems, hosts, routers, etc. For example, a single "login" event will contain a hostname, a username, and a timestamp. After events are gathered by the SEM system, they pass through a series of "rules" for processing events called the "rule system." The rule system will generate "alerts" based on characteristics of events being processed. Alerts indicate that a significant event or series of events has happened that needs attention. Alerts are typically intended for review by a security analyst, and will normally be displayed on the SEM console and stored in the database for tracking and reporting purposes.

Techniques for Processing Security Events

The goal of the SEM rule system is to reduce the data volume from an unmanageable number of events down to a small number of actionable alerts that can be reviewed by security analysts. Security events are collected by the system and pass through categorization, prioritization, filtering, and other stages in which alerts are generated. Commercial systems generally operate in a similar way, with several processing stages. Figure 3 depicts how processing stages affect event volume.


Figure 3. Effect of processing on event volume.

Following is a discussion of some techniques used to process security events in the SEM rule system. Commercial SEM systems provide pre-built rules to perform many functions and normally allow customized rules to be created to meet customer needs. For this reason, SEM systems need to be very flexible, and are usually scriptable or programmable to allow advanced customization.

Event Parsing
Event parsing is usually the first stage in a SEM system. The goal of this stage is to extract useful information from the security events so that they can be further processed by later stages. Security events are extracted into "fields" of information such as timestamp, event source, event type, username, hostname, source IP address, target IP address, source port, target port, message, etc. Because each device generates events in a different format, specific parsers need to be created for each type of device. The parsing stage needs to be very flexible to handle many event formats. Vendors of commercial SEM systems usually provide a list of devices that they directly support, but the SEM system is usually flexible enough to allow customized parsing rules to be built for unsupported devices.

The output from this stage is a parsed event, with fields separated out so that they are available to the rest of the rule system. Parsed security events may be stored as rows in a database table with fields populated with information from the event. The overall value of the SEM system is affected by the value of the data it processes and stores, so ensure that all valuable fields are parsed and stored properly. Figure 4 depicts a sample "failed authentication" event, and shows how it is parsed into fields for storage in the SEM database. This example also shows why an extensible database schema is useful for capturing important fields from differing message formats.


Figure 4. Example of message parsing.
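A parsing rule of the kind described above can be sketched with a regular expression. The sshd-style message format and the field names here are illustrative assumptions; a real deployment needs one parser per device and event format:

```python
import re

# Parser for one hypothetical "failed password" message format.
FAILED_AUTH = re.compile(
    r"(?P<timestamp>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<hostname>\S+) sshd\[\d+\]: "
    r"Failed password for (?P<username>\S+) "
    r"from (?P<source_ip>[\d.]+) port (?P<source_port>\d+)"
)

def parse_event(line):
    m = FAILED_AUTH.match(line)
    if m is None:
        return None  # unrecognized format; fall through to other parsers
    fields = m.groupdict()          # named groups become SEM fields
    fields["event_type"] = "failed_authentication"
    return fields

line = ("Oct 12 14:31:07 host1 sshd[2417]: "
        "Failed password for admin from 10.1.2.3 port 51514")
event = parse_event(line)
print(event["username"], event["source_ip"])  # admin 10.1.2.3
```

The named groups map directly onto database columns, which is where an extensible schema (as Figure 4 suggests) pays off.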

Event Categorization
After events are parsed, usually the next step is to assign categories and subcategories to the events. For example, an event category of "virus" and a subcategory of "quarantined" means that the event was caused by a virus that was detected and quarantined. Categorization aids in display, analysis, reporting, and further processing of events.

Event Prioritization
After events are categorized, the next step is to assign a priority to the event. Priorities could be on a numeric scale, for example 0-100, with "0" meaning that the event has no relevance and "100" meaning that the event is a critical issue that needs to be investigated. The priority can be used to filter events of little significance to reduce the volume for later processing stages.
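Categorization and prioritization are often table-driven. The following sketch uses the 0-100 priority scale described above; the specific categories and priority values are illustrative assumptions:

```python
# Hypothetical mapping tables: event type -> (category, subcategory),
# and (category, subcategory) -> priority on a 0-100 scale.
CATEGORY_RULES = {
    "failed_authentication": ("authentication", "failure"),
    "virus_quarantined":     ("virus", "quarantined"),
    "port_scan":             ("recon", "scan"),
}
PRIORITY_RULES = {
    ("virus", "quarantined"):      40,  # handled automatically, worth recording
    ("authentication", "failure"): 30,
    ("recon", "scan"):             50,
}

def categorize(event):
    cat, subcat = CATEGORY_RULES.get(event["event_type"],
                                     ("unknown", "unknown"))
    event["category"], event["subcategory"] = cat, subcat
    event["priority"] = PRIORITY_RULES.get((cat, subcat), 10)
    return event

event = categorize({"event_type": "virus_quarantined"})
print(event["category"], event["priority"])  # virus 40
```

Keeping the mappings in tables (or database rows) rather than code makes the tuning described later in this chapter a configuration change instead of a programming change.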

Event Aggregation or Summarization
Event aggregation or summarization functions look for many events that are similar. The events do not necessarily arrive at the same time, so the function must maintain state. Similar events are summarized into one "aggregated" event that is passed to the next stage with an aggregate count indicating how many events it comprises.

For certain types of devices, such as firewalls, this can significantly reduce the volume of data. For example, if a firewall logs 50 connection (SYN) attempts to a particular port, it could be summarized to one event with a count of 50. The aggregated count may then cause another rule to fire because of a high volume of SYN packets, for example. The problem with summarization is that information is lost as part of the summarization operation, so only fields that are included in the summarization operation will be available to the next stage. The more fields that are included, the less effective the summarization becomes. Therefore, with the firewall event example, it is possible that the only fields included in the summarization operations are the event type (SYN) and the port number; all other fields would be discarded.
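The firewall example above can be sketched as follows; the choice of which fields go into the aggregation key is exactly the trade-off described in the text, since everything outside the key is discarded:

```python
from collections import Counter

def aggregate(events, keys=("event_type", "port")):
    """Summarize events into one record per distinct key combination,
    with a count. Fields outside `keys` are lost."""
    counts = Counter(tuple(e[k] for k in keys) for e in events)
    return [dict(zip(keys, key), count=n) for key, n in counts.items()]

# 50 SYN attempts to port 22 from different sources collapse to one record.
events = [{"event_type": "SYN", "port": 22, "src": f"10.0.0.{i}"}
          for i in range(50)]
summary = aggregate(events)
print(summary)  # [{'event_type': 'SYN', 'port': 22, 'count': 50}]
```

Note that the source addresses vanish here; adding "src" to the key would preserve them but yield 50 records instead of one, illustrating why wider keys make summarization less effective.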

Pattern Matching
Pattern matching is a simple technique that looks for patterns in the event fields. Exact matches, substring matches, or regular expressions are used to extract important events from the stream. Typically, the pattern-matched events will then become alerts for display and review. For example, a pattern matching rule could look for the words "buffer overflow" in an IDS event, which could result in that event being promoted to an alert for display.

Scan Detection
Scan detection refers to port scans, vulnerability scans, ping sweeps, and other scanning activities and works best with firewall or IDS events. Scanning is usually a prelude to an attack of some sort, so it is a useful rule. Network worms use this technique to locate systems to infect, so this rule can be useful to identify infected hosts on a network. The scan detection rule looks for a large number of events from a source host with many target hosts, ports, or event types. The scan detection rule may also look for a large number of different types of events against a host, which can indicate a vulnerability scan. Because state can be kept for a long time, scan detection rules can also be tuned to look for "low and slow" or stealthy scanning techniques that would not normally be discovered by human review.
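A minimal scan-detection rule might count distinct target ports per source host; the threshold here is an arbitrary illustration, and a real rule would also age out old state over a time window:

```python
from collections import defaultdict

PORT_THRESHOLD = 20  # illustrative: distinct ports before we call it a scan

def detect_scans(events, threshold=PORT_THRESHOLD):
    """Return source IPs that touched at least `threshold` distinct ports."""
    ports_by_source = defaultdict(set)
    for e in events:
        ports_by_source[e["source_ip"]].add(e["target_port"])
    return [src for src, ports in ports_by_source.items()
            if len(ports) >= threshold]

# One host probes 25 ports; another makes a single normal connection.
events = [{"source_ip": "10.9.9.9", "target_port": p} for p in range(25)]
events += [{"source_ip": "10.0.0.5", "target_port": 443}]
print(detect_scans(events))  # ['10.9.9.9']
```

Because the per-source state is cheap to keep, the same structure can be retained over hours or days to catch the "low and slow" scans mentioned above.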

Event Counts and Rate Thresholds
Event counts are simply counts of a certain type of event, such as virus detections. After this count reaches a pre-defined threshold, the rule will fire and generate an alert for display. Rate thresholds work by calculating the rate of a certain type of event; for example, 20 failed login messages within a minute is indicative of a password-guessing attack.
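The failed-login example above (20 failures within a minute) can be sketched as a sliding-window rate threshold; timestamps here are plain epoch seconds:

```python
from collections import deque

class RateThreshold:
    """Fire when `limit` matching events occur within `window` seconds."""

    def __init__(self, limit=20, window=60):
        self.limit, self.window = limit, window
        self.times = deque()

    def add(self, timestamp):
        """Record one event; return True if the rule fires."""
        self.times.append(timestamp)
        # Evict events that have slid out of the window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) >= self.limit

rule = RateThreshold(limit=20, window=60)
fired = [rule.add(t) for t in range(30)]  # 30 failed logins, 1 second apart
print(fired.index(True))  # 19: the 20th event inside the window fires the rule
```

A plain event-count rule is the degenerate case with an unbounded window.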

Event Correlation
Correlation refers to the ability of a SEM system to take multiple events or pieces of information from various sources and to infer that some activity is happening. For example, if vulnerability scan data is available to a SEM system, it can determine whether an attempted attack on a system is likely to succeed because it can correlate IDS "attack" events with known system vulnerabilities. The priority can then be raised to indicate a successful attack. In another example, host information has been loaded into the SEM system, and a UNIX-specific attack is detected against a Windows host. Because this attack could not succeed, the SEM system can lower the priority and discard the event. Other possibilities exist when correlating events across sources because patterns indicative of malicious behavior can be detected and alerted upon.
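The two correlation examples above can be sketched as lookups against vulnerability-scan and platform data. All hosts, CVE identifiers, and field names here are hypothetical:

```python
# Hypothetical context loaded into the SEM system ahead of time.
VULNS = {"10.0.0.7": {"CVE-2008-4250"}}                  # from scan data
PLATFORMS = {"10.0.0.7": "windows", "10.0.0.8": "windows"}

def correlate(event):
    """Raise priority when the target is known-vulnerable; discard the event
    when the attack cannot apply to the target's platform."""
    target = event["target_ip"]
    required = event.get("required_platform")
    if required is not None and required != PLATFORMS.get(target):
        return None                       # e.g. a UNIX attack on a Windows host
    if event.get("cve") in VULNS.get(target, ()):
        event["priority"] = 90            # attack likely to succeed
    return event

hit = correlate({"target_ip": "10.0.0.7", "cve": "CVE-2008-4250"})
miss = correlate({"target_ip": "10.0.0.8", "required_platform": "unix"})
print(hit["priority"], miss)  # 90 None
```

The value of correlation scales with how much context (asset inventory, scan results, platform data) is fed into the system alongside the raw events.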

Tuning and Customizing the Rule System

After event sources have been integrated into the security event management system and events start to flow, the system will initially generate too many alerts, or a lot of false positive (erroneous) alerts. Like intrusion detection systems, security event management systems need to be tuned to be effective because the default rules are built in a generic way and need to be customized for local conditions. To get the best results, tuning requires expert knowledge of the SEM system, the network, and many of the devices being monitored. Depending on the size of the network, this may require input from many people.

If too many false positives or insignificant alerts are being generated, begin at the event sources (the systems or devices) generating those alerts and find ways to limit the events being collected so that only the more significant events come through. Often the monitored system itself can be configured to filter out insignificant events; for example, IDS sensors can be tuned to drop low-priority "informational" events. Be careful not to tune out events whose absence could adversely affect the value of the SEM system.

Another way to reduce the volume of alerts is to filter out lower priority events after the event prioritization step (see the section on event prioritization above). Care should be taken with this type of "blanket" approach so that significant alerts triggered by low priority events are not affected.
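One way to apply this kind of blanket filter safely is to pair the priority cutoff with an explicit allowlist of rules that must never be suppressed. This is a simple sketch of that idea; the rule names and cutoff value are invented for illustration.

```python
# Post-prioritization filter: drop low-priority alerts, but never
# suppress alerts from rules on the allowlist (names are hypothetical).
MIN_PRIORITY = 3
NEVER_SUPPRESS = {"scan-detection", "correlation-confirmed-attack"}

def passes_filter(alert):
    """Return True if the alert should reach the analyst console."""
    if alert["rule"] in NEVER_SUPPRESS:
        return True   # significant even when built from low-priority events
    return alert["priority"] >= MIN_PRIORITY
```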

To continue tuning, follow the event flow through the rule system to locate the points where alerts are generated, and determine whether the alerting criteria, such as thresholds or counts, are valid. If insignificant alerts are still getting through, there should be ways to reduce or eliminate them without affecting legitimate alerts. If not, a compromise will be necessary that weighs suppressing false positives and insignificant alerts against preserving important ones.

Monitoring the SEM System

Alerts generated by the rule system are usually stored in a relational database (RDBMS) along with the original security events for fast querying capability. Alerts are also normally presented on a console for review by an analyst. Documented procedures should be developed for analysts describing how to monitor the system and respond to alerts. During audits, auditors will look for evidence that these procedures are being followed. SEM systems may provide workflow type features or integrate with ticketing systems to track incidents and document actions taken by analysts and incident managers. This documentation will provide evidence that procedures are being followed.

SEM systems normally offer the ability to "drill down" into alerts to perform "forensic analysis." Typically, the analyst will be able to select the alert and perform various queries to determine what caused the alert to fire. For example, if a "vulnerability scan" alert was raised against a system, the console should allow the operator to query the event store for details of the events that comprised the alert. The analyst can then make an informed decision about the criticality of the alert, and whether to escalate it into an incident. Analysts typically need strong technical and analytical skills to perform this function.
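Behind the console, a drill-down is essentially a join from an alert back to the raw events that triggered it. The sketch below models that with an in-memory SQLite database; the two-table schema, hosts, and signature names are invented for illustration and much simpler than a real SEM event store.

```python
import sqlite3

# Minimal stand-in for a SEM event store: alerts reference the raw
# events that triggered them (schema is illustrative only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (
        id INTEGER PRIMARY KEY, src TEXT, dst TEXT, signature TEXT);
    CREATE TABLE alert_events (alert_id INTEGER, event_id INTEGER);
""")
conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)",
                 [(1, "10.0.0.5", "10.0.1.1", "SYN scan"),
                  (2, "10.0.0.5", "10.0.1.2", "SYN scan")])
conn.executemany("INSERT INTO alert_events VALUES (?, ?)",
                 [(100, 1), (100, 2)])

def drill_down(alert_id):
    """Return the raw events behind an alert, as a console query would."""
    cur = conn.execute(
        "SELECT e.src, e.dst, e.signature FROM events e "
        "JOIN alert_events a ON a.event_id = e.id WHERE a.alert_id = ?",
        (alert_id,))
    return cur.fetchall()
```

The console's drill-down screens generate queries of this shape automatically; the analyst's skill lies in interpreting the returned events.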

A great deal of data is collected by a security event management system, and this data can contain valuable nuggets of information. Data mining tools exist to perform deep analysis of the data to extract information that is not immediately apparent. These tools tend to be CPU and resource intensive, so they need to be used carefully. For larger organizations with numerous security events, it might make sense to take periodic samples of data for analysis, or to run analysis in batch mode at off-peak times. Security event management systems also include the ability to generate pre-canned and custom reports. Reports can be useful for providing metrics to upper management, showing trends and graphs of activity over time.

Criteria for Choosing a Commercial Security Event Management System

It is important to evaluate and compare different solutions when choosing a commercial security event management system. Following are some of the more important factors, other than cost, to take into account during the evaluation:

  • Types of devices supported: Ensure that all devices, software, and operating systems that need to be monitored are supported by the SEM system.
  • Event collection mechanisms: Ensure that methods used to collect events (such as agent based collection) will work within the environment and meet security or architectural requirements.
  • Usability of rule system: Review the rule system to ensure that it is understandable and that alerting criteria are clear. Also review how locally customized rules can be distinguished from built-in system rules.
  • Storage flexibility and completeness: Ensure that the SEM database (or store) is flexible enough to hold all the information fields valuable to your organization. This factor matters because data that is not stored in the database is unavailable for reporting, analysis, or display, which reduces the value of the whole system.
  • Upgrade path: Review the upgrade policy and the process of applying upgrades to the SEM system to ensure that upgrades do not interfere with local customizations.
  • Handling of time zones and daylight savings time: If the monitored systems are spread over multiple time zones, ensure that the SEM system can readily handle time zone offsets and daylight savings time.
  • Scalability and performance: Ensure that the SEM system is capable of processing the maximum expected rate of security events generated from all devices. Also, ensure that there is enough capacity to meet future needs.
  • Security: Ensure that the SEM system meets the organization's security requirements. This includes the security of the whole event collection mechanism, SEM servers and applications, databases, and user interfaces. Also ensure that the system keeps adequate audit logs. Review the requirement to separate functions by role such as system administrator, security analyst, and incident manager, and ensure that the SEM system can accommodate role separation.
  • Usability and functionality of the user interfaces: Monitoring is probably the most important activity performed with the SEM system, so the analyst's console needs to present all information in an understandable and intuitive way. Review the ability to perform analyst functions such as alert inspection, drill down, canned and custom queries, workflow, and escalation. Also review the reporting system to ensure that canned reports are usable and meet requirements, and that custom reports can be created in cases where canned reports are not adequate.
  • Ability to integrate with external databases: If there is a need to integrate with other databases such as Configuration Management Databases (CMDB), ticketing systems, or company directories, then this capability should be reviewed.
  • Programming interface: To allow advanced customizations, programmability is a key feature of a SEM system. The usability and flexibility of the programming interface should be reviewed.


A correctly implemented security event management solution will improve the effectiveness of security monitoring and incident response functions. Analysts will spend less time monitoring consoles and reviewing security logs because this function is automated by the SEM system. Senior analysts can build expert know-how into the rule system to improve the quality of alerts for all analysts, and reduce cases of false positives.

Having all security events collected into one central database is a key benefit of a SEM system. This information is very valuable for security analysts, incident response teams, and other IT teams. Reports and security metrics can be generated for managers and data mining tools can uncover interesting information from the data.

The benefits do come at a cost, however, and it will take several months to start realizing the benefit of implementing a SEM system. Beyond the purchase price of a commercial solution, two of the most resource-intensive efforts are integrating security event sources into the system and tuning the rule system. Vendors offer professional assistance, but it is beneficial for analysts to be involved in the implementation process so they understand the workings of the whole system. Analysts will also need training in the use and administration of the system.

Perhaps the most important factor when implementing a SEM system is to ensure that all data of importance is collected and available within the database. Data that is not collected cannot be queried or displayed, and it is frustrating to run a query or report only to find that a needed field is missing. The SEM system is only as valuable as the information it contains.

About the Author
Glenn Cater, CISSP, has more than 11 years combined experience in Information Security, IT management and application development. Glenn currently holds the position of Director, IT Risk Consulting at Aon Consulting, Inc. In this role, Glenn supports Aon's Electronic Discovery Services, High-Tech Investigations and IT Security Consulting practices. Glenn joined Aon from Lucent Technologies where he held the position of Technical Manager within the IT Security organization. At Lucent, Glenn supervised the Computer Security Incident Response Team, supporting the intrusion prevention and security event management systems. Glenn also worked as Managing Principal of the Reliability and Security Consulting practice at Lucent Worldwide Services, leading and supporting security consulting engagements for LWS clients. Before that, Glenn worked as a senior network security manager at Lucent Technologies managing a development team and supporting internal security solutions. Prior to joining Lucent, Glenn began his career as a software engineer at British Aerospace working on military systems.

* This article is from the Information Security Management Handbook, 2009 CD-ROM Edition, edited by Hal Tipton and Micki Krause.

© Copyright 2009-2010 Auerbach Publications