Reliable Software Group, Computer Science Department, University of California Santa Barbara
Pages: 27-30
Abstract—Most security experts agree that a completely secure system is impossible to achieve. So we must stay alert for attacks.
Suppose a strange man is standing in front of your house. He looks around, studying the surroundings, and then goes to the front door and starts turning the knob. The door is locked. He moves to a nearby window and gently tries to open it. It, too, is locked. It seems your house is secure. So why install an alarm?
This question is often asked of intrusion detection advocates. Why bother detecting intrusions if you've installed firewalls, patched operating systems, and checked passwords for soundness? The answer is simple: because intrusions still occur. Just as people sometimes forget to lock a window, for example, they sometimes forget to correctly update a firewall's rule set.
Even with the most advanced protection, computer systems are still not 100 percent secure. In fact, most computer security experts agree that, given user-desired features such as network connectivity, we'll never achieve the goal of a completely secure system. As a result, we must develop intrusion detection techniques and systems to discover and react to computer attacks.
Originally, system administrators performed intrusion detection by sitting in front of a console and monitoring user activities. They might detect intrusions by noticing, for example, that a vacationing user is logged in locally or that a seldom-used printer is unusually active. Although effective enough at the time, this early form of intrusion detection was ad hoc and not scalable.
The next step in intrusion detection involved audit logs, which system administrators reviewed for evidence of unusual or malicious behavior. In the late '70s and early '80s, administrators typically printed audit logs on fan-folded paper, which was often stacked four to five feet high by the end of an average week. Searching through such a stack was obviously very time consuming. With this overabundance of information and only manual analysis, administrators mainly used audit logs as a forensic tool to determine the cause of a particular security incident after the fact. There was little hope of catching an attack in progress.
As storage became cheaper, audit logs moved online and researchers developed programs to analyze the data [1]. However, analysis was slow and often computationally intensive, so intrusion detection programs were usually run at night, when the system's user load was low. As a result, most intrusions were still detected after they occurred.
In the early '90s, researchers developed real-time intrusion detection systems that reviewed audit data as it was produced. This enabled the detection of attacks and attempted attacks as they occurred, which in turn allowed for real-time response, and, in some cases, attack preemption.
More recent intrusion detection efforts have centered on developing products that users can effectively deploy in large networks. This is no easy task, given increasing security concerns, countless new attack techniques, and continuous changes in the surrounding computing environment.
The goal of intrusion detection is seemingly simple: to detect intrusions. However, the task is difficult, and in fact intrusion detection systems do not detect intrusions at all—they only identify evidence of intrusions, either while they're in progress or after the fact.
Such evidence is sometimes referred to as an attack's "manifestation." If there is no manifestation, if the manifestation lacks sufficient information, or if the information it contains is untrustworthy, then the system cannot detect the intrusion.
For example, suppose a house monitoring system is analyzing camera output that shows a person fiddling with the front door. The camera's video data is the manifestation of the occurring intrusion. If the camera lens is dirty or out of focus, the system will be unable to determine whether the person is a burglar or the owner.
For accurate intrusion detection, we must have reliable and complete data about the target system's activities. Reliable data collection is a complex issue in itself. Most operating systems offer some form of auditing that provides an operations log for different users. These logs might be limited to security-relevant events (such as failed login attempts), or they might offer a complete report on every system call invoked by every process. Similarly, routers and firewalls provide event logs for network activity. These logs might contain simple information, such as network connection openings and closings, or a complete record of every packet that appeared on the wire.
The amount of system activity information a system collects is a trade-off between overhead and effectiveness. A system that records every action in detail could have substantially degraded performance and require enormous disk storage. For example, collecting a complete log of a 100-Mbit Ethernet link's network packets could require hundreds of Gbytes per day.
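The storage figure above is easy to check with a back-of-envelope calculation. The sketch below assumes a hypothetical 25 percent average link utilization (the utilization figure is an illustrative assumption, not a measurement) and arrives at a number in the hundreds of gigabytes per day:

```python
# Back-of-envelope estimate of the storage cost of full packet capture
# on a 100-Mbit/s Ethernet link. The utilization value is an assumed,
# illustrative figure.

LINK_MBPS = 100          # link speed in megabits per second
UTILIZATION = 0.25       # assumed average utilization (hypothetical)
SECONDS_PER_DAY = 86_400

bits_per_day = LINK_MBPS * 1_000_000 * UTILIZATION * SECONDS_PER_DAY
gbytes_per_day = bits_per_day / 8 / 1_000_000_000

print(f"{gbytes_per_day:.0f} GB/day")  # prints "270 GB/day"
```

At full saturation the same link would produce roughly a terabyte per day, so even modest utilization quickly exhausts ordinary disk budgets.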
Collecting information is expensive, and collecting the right information is important. Determining what information to log and where to collect it is an open problem. For example, having your house alarm system monitor the water for pollution levels is an expensive activity that doesn't help detect burglars. On the other hand, if the house's threat model includes terrorist attacks, monitoring the pollution level might be reasonable.
Auditing your system is useless if you don't analyze the resulting information. How intrusion detection systems analyze collected data is an important system characteristic.
There are two basic categories of intrusion detection techniques: anomaly detection and misuse detection.
A basic assumption of anomaly detection is that attacks differ from normal behavior. For example, we can model certain users' daily activity (type and amount) quite precisely. Suppose a particular user typically logs in around 10 a.m., reads mail, performs database transactions, takes a break between noon and 1 p.m., has very few file access errors, and so on. If the system notices that this same user logs in at 3 a.m., starts using compilers and debugging tools, and has numerous file access errors, it will flag this activity as suspicious.
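The user-profile example above can be sketched as a few checks against a learned profile. The profile fields, thresholds, and program names below are illustrative assumptions, not part of any real system:

```python
# A minimal sketch of anomaly detection against a per-user profile.
# All profile fields and thresholds are illustrative assumptions.

profile = {
    "typical_login_hours": range(9, 18),   # user normally active 9am-6pm
    "max_file_errors": 5,                  # few access errors expected
    "usual_programs": {"mail", "db_client"},
}

def is_anomalous(session):
    """Return the list of ways a session deviates from the profile."""
    reasons = []
    if session["login_hour"] not in profile["typical_login_hours"]:
        reasons.append("unusual login time")
    if session["file_errors"] > profile["max_file_errors"]:
        reasons.append("excessive file access errors")
    if not session["programs"] <= profile["usual_programs"]:
        reasons.append("unfamiliar programs")
    return reasons  # an empty list means the session looks normal

# The 3 a.m. compiler-and-debugger session from the text trips all
# three checks:
session = {"login_hour": 3, "file_errors": 42,
           "programs": {"gcc", "gdb"}}
print(is_anomalous(session))
```

Note that nothing here encodes a specific attack; any sufficiently large deviation from the profile is flagged, which is exactly why such systems catch novel attacks but also generate false positives.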
The main advantage of anomaly detection systems is that they can detect previously unknown attacks. By defining what's normal, they can identify any violation, whether it is part of the threat model or not. In actual systems, however, the advantage of detecting previously unknown attacks is paid for in terms of high false-positive rates. Anomaly detection systems are also difficult to train in highly dynamic environments.
Misuse detection systems essentially define what's wrong. They contain attack descriptions (or "signatures") and match them against the audit data stream, looking for evidence of known attacks [5-7]. One such attack, for example, would occur if someone created a symbolic link to a Unix system's password file and then executed a privileged application that accesses the symbolic link. In this example, the attack exploits the lack of file access checks.
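The symlink example can be expressed as a signature over audit events. The sketch below models signatures as predicates over event records; the event field names are illustrative assumptions, not any real audit format:

```python
# A minimal sketch of misuse (signature) detection: each signature is a
# predicate over an audit event, and the detector reports every match.
# The event fields and the signature itself are illustrative assumptions.

signatures = {
    # Symlink attack from the text: a privileged process opens the
    # password file through a symbolic link.
    "symlink-to-passwd": lambda e: (
        e.get("syscall") == "open"
        and e.get("via_symlink")
        and e.get("target") == "/etc/passwd"
        and e.get("privileged")
    ),
}

def match(audit_stream):
    """Yield (signature name, event) for every event matching a signature."""
    for event in audit_stream:
        for name, predicate in signatures.items():
            if predicate(event):
                yield (name, event)

stream = [
    {"syscall": "open", "via_symlink": False,
     "target": "/home/a/mail", "privileged": False},
    {"syscall": "open", "via_symlink": True,
     "target": "/etc/passwd", "privileged": True},
]
print([name for name, _ in match(stream)])  # prints "['symlink-to-passwd']"
```

Because matching is exact, an attack with no predicate in the table passes silently, which is the known-attacks-only limitation discussed next.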
The main advantage of misuse detection systems is that they focus analysis on the audit data and typically produce few false positives.
The main disadvantage of misuse detection systems is that they can detect only known attacks for which they have a defined signature. As new attacks are discovered, developers must model and add them to the signature database.
An intrusion detection system's response is its output or action upon detecting a problem. A response can take many different forms; the most common is to generate an alert that describes the detected intrusion. There are also more aggressive responses, such as paging a system administrator, sounding a siren, or even mounting a counter-attack.
A counterattack might include reconfiguring a router to block the attacker's address or even attacking the culprit. Obviously, aggressive responses can be dangerous, since they could be launched against innocent victims. For example, a hacker can attack a network using spoofed traffic—traffic that appears to come from a certain address, but that is actually generated elsewhere. If the intrusion detection system detected the attack and reconfigured the network routers to block traffic from that address, it would effectively be executing a denial-of-service attack against the impersonated site.
Although intrusion detection has evolved rapidly in the past few years, many important issues remain. First, detection systems must be more effective, detecting a wider range of attacks with fewer false positives. Second, intrusion detection must keep pace with modern networks' increased size, speed, and dynamics. Finally, we need analysis techniques that support the identification of attacks against whole networks.
The challenge for increased system effectiveness is to develop a system that detects close to 100 percent of attacks with minimal false positives. We are still far from achieving this goal.
Today's intrusion detection systems primarily rely on misuse detection techniques. The freely available Snort [8] (www.snort.org) and the commercially available RealSecure (www.iss.net) are two products that use signatures to analyze network traffic. Because they model only known attacks, developers must regularly update their signature sets. This approach is insufficient. We need anomaly detection's ability to detect new attacks, but without the approach's accompanying high rate of false positives. Many researchers advocate using a hybrid misuse-anomaly detection approach, but further investigation is needed [9].
Simply detecting a variety of attacks is not enough. Intrusion detection systems must also keep up with the input-event stream generated by high-speed networks and high-performance network nodes.
Gigabit Ethernet is common, and fast optical links are becoming popular. The network nodes are also getting faster, processing more data and generating more audit logs. This takes us back to the historical problem of a system administrator confronting a mountain of data. There are two ways to analyze this amount of information in real-time: split the event stream or use peripheral network sensors.
In the first approach, a "slicer" component splits the event stream into slimmer, more manageable streams that the intrusion detection sensors can analyze in real-time. To do this, the whole event stream must be accessible at a single location. Therefore, researchers typically advocate stream splitting for centralized systems or network gateways.
The problem with this approach is that the slicer must divide the event stream in a way that guarantees the detection of all relevant attack scenarios. If an event stream is divided randomly, sensors might not receive sufficient data to detect an intrusion, because different parts of the attack manifestation might be assigned to different slices.
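One common way to satisfy this constraint is to slice by connection rather than randomly, so that every packet of a given connection reaches the same sensor. The sketch below hashes a bidirectional flow key; the packet field names are illustrative assumptions:

```python
# A sketch of a flow-aware "slicer": hashing on the connection
# endpoints keeps all packets of one connection in the same slice, so
# a single sensor sees the complete attack manifestation for that
# connection. Packet field names are illustrative assumptions.

def slice_for(packet, n_slices):
    """Assign a packet to a slice by its bidirectional flow key."""
    # A frozenset makes the key direction-independent: request and
    # reply packets of the same connection produce the same key.
    key = frozenset([(packet["src"], packet["sport"]),
                     (packet["dst"], packet["dport"])])
    return hash(key) % n_slices

packets = [
    {"src": "10.0.0.1", "sport": 4242, "dst": "10.0.0.9", "dport": 80},
    {"src": "10.0.0.9", "sport": 80, "dst": "10.0.0.1", "dport": 4242},
]
# Both directions of the same connection land in the same slice:
print(slice_for(packets[0], 4) == slice_for(packets[1], 4))  # prints "True"
```

Per-connection slicing still fails for attacks whose manifestation spans many connections (such as distributed scans), which is why slicing policy remains a research issue.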
A second approach is to deploy multiple sensors at the network periphery, close to the hosts the system must protect. This approach assumes that by moving the analysis to the network's periphery, a natural partitioning of traffic will occur.
The problem with this approach is that it's difficult to deploy and manage a highly distributed set of sensors. First, correct sensor positioning can be difficult: attacks that depend on the network topology, such as routing- and spoofing-based attacks, require that detection sensors be placed at specific positions in the network. Second, there is a control-and-coordination issue. Networks are dynamic entities that evolve over time, and the threats evolve, too. New attacks are invented every day, and the sensing infrastructure must evolve accordingly.
Placing sensors at critical network locations lets administrators detect attacks against the network as a whole. That is, the sensing network is able to provide an integrated, "big picture" view of the network security status. Attacks that might appear irrelevant in the context of a single host might be extremely dangerous when considered across the network.
Consider, for example, an attack that involves multiple steps. Suppose each step is carried out on a different host, but because the system under attack has a shared file system, the effects are evident throughout the network. The system might not identify an individual step as malicious when analyzing a single sensor's information, but a more comprehensive analysis of network activity could reveal the attack pattern. This alert correlation or fusion—identifying intrusion patterns based on different sensor alerts—is one of the most challenging problems in intrusion detection today.
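A very simple form of the correlation described above is to group alerts by a shared attribute and escalate when the pattern spans several hosts. The sketch below groups by account name; all field names and the two-host threshold are illustrative assumptions:

```python
# A sketch of alert correlation (fusion): individually low-severity
# alerts from different sensors are grouped by the account involved,
# and a pattern that spans multiple hosts is escalated. All fields and
# the threshold are illustrative assumptions.

from collections import defaultdict

def correlate(alerts, min_hosts=2):
    """Escalate accounts whose alerts span at least min_hosts hosts."""
    by_account = defaultdict(set)
    for alert in alerts:
        by_account[alert["account"]].add(alert["host"])
    return {acct: hosts for acct, hosts in by_account.items()
            if len(hosts) >= min_hosts}

alerts = [
    {"host": "web1", "account": "mallory", "event": "failed su"},
    {"host": "db1",  "account": "mallory", "event": "passwd file read"},
    {"host": "web1", "account": "alice",   "event": "failed login"},
]
print(correlate(alerts))  # only mallory's activity spans multiple hosts
```

Real correlation engines must also reconcile clock skew, duplicate alerts, and differing sensor vocabularies, which is where the ontology problem discussed below comes in.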
Even as networks become more secure, intrusion detection will always be an integral part of any serious security solution. The current trend to distribute and specialize sensors will result in systems composed of hundreds, possibly thousands, of intrusion detection sensors connected by an infrastructure that supports communication, control, and reconfiguration. Although the infrastructure type and characteristics might vary, all will have to be able to scale up to large numbers [10]. Also, analysis will gradually shift its focus from low-level sensors to high-level analyzers that will give administrators a better, more concise picture of the entire network's important security events.
In the near future, sensor technology will be integrated into our everyday computing environment. We've seen something similar with firewalls, which are now an integral part of operating systems: both Unix and Windows provide some form of host-based firewalling. It's now time for operating systems and network software to integrate intrusion detection sensors. Intrusion detection will no doubt become a default feature, rather than an esoteric option.
That said, a pervasive, ubiquitous sensor network can be deployed only if we can integrate different types of sensors running on different platforms, environments, and systems. We thus need standards that will support interoperability. A first step in this direction is the Intrusion Detection Message Exchange Format (IDMEF) standard proposed by the Internet Engineering Task Force's Intrusion Detection Working Group [11]. IDMEF defines the format of alerts and an alert exchange protocol. Additional effort is needed to provide a common ontology that lets sensors agree on what they see. Without this common way of describing the involved entities, sensors will continue to disagree when detecting the same intrusion.
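To make the format concrete, the sketch below builds a stripped-down alert loosely modeled on IDMEF's XML structure. It is not a schema-complete IDMEF message: only a few elements are shown, and all attribute values are illustrative.

```python
# A simplified alert loosely modeled on IDMEF's XML structure. This is
# NOT a schema-complete IDMEF message; element names follow the IDMEF
# drafts, but the values are illustrative placeholders.

import xml.etree.ElementTree as ET

msg = ET.Element("IDMEF-Message")
alert = ET.SubElement(msg, "Alert", messageid="abc123")
ET.SubElement(alert, "Analyzer", analyzerid="sensor-7")
ET.SubElement(alert, "CreateTime").text = "2003-01-01T10:01:25Z"
ET.SubElement(alert, "Classification", text="portscan")

print(ET.tostring(msg, encoding="unicode"))
```

The value of a shared format like this is that a high-level analyzer can consume alerts from heterogeneous sensors without per-vendor parsers; the remaining gap is semantic, that is, agreeing on what classification names such as "portscan" actually mean.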
Pushing the evolution even further, software-based intrusion detection might evolve into hardware-based sensing technology. New types of pervasive sensors might also open new directions for intrusion detection. Perhaps in the future, our sensor-woven clothing will be capable of detecting pickpockets; the possibilities are intriguing, if not endless.