WO2002048959A2 - A hierarchical neural network intrusion detector - Google Patents

A hierarchical neural network intrusion detector

Info

Publication number
WO2002048959A2
WO2002048959A2 (PCT/US2001/047828)
Authority
WO
WIPO (PCT)
Prior art keywords
neural networks
tier
soft
neural network
outputs
Prior art date
Application number
PCT/US2001/047828
Other languages
French (fr)
Other versions
WO2002048959A3 (en)
Inventor
Susan C. Lee
Original Assignee
The Johns Hopkins University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Johns Hopkins University filed Critical The Johns Hopkins University
Priority to AU2002228988A priority Critical patent/AU2002228988A1/en
Priority to US10/433,713 priority patent/US20040054505A1/en
Publication of WO2002048959A2 publication Critical patent/WO2002048959A2/en
Publication of WO2002048959A3 publication Critical patent/WO2002048959A3/en

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B31/00 - Predictive alarm systems characterised by extrapolation or other computation using updated historic data
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/22 - Electrical actuation
    • G08B13/24 - Electrical actuation by interference with electromagnetic field distribution
    • G08B13/2491 - Intrusion detection systems, i.e. where the body of an intruder causes the interference with the electromagnetic field
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B29/00 - Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
    • G08B29/18 - Prevention or correction of operating errors
    • G08B29/20 - Calibration, including self-calibrating arrangements
    • G08B29/24 - Self-calibration, e.g. compensating for environmental drift or ageing of components
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 - Network architectures or network communication protocols for network security
    • H04L63/14 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416 - Event detection, e.g. attack signature detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Electromagnetism (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Burglar Alarm Systems (AREA)

Abstract

A hierarchical neural network that monitors network functions and acts as a true anomaly detector is disclosed. Detection of an anomaly is achieved by monitoring selected areas of network behavior, such as protocols, that are predictable in advance. Combining the outputs of neural networks within the hierarchy yields satisfactory anomaly detection.

Description

A HIERARCHICAL NEURAL NETWORK INTRUSION DETECTOR
CROSS-REFERENCE TO RELATED APPLICATIONS
The present invention is related to, and claims the benefit of, U.S. Provisional Patent
Application No. 60/255,164 filed December 13, 2000.
Background of the Invention
The present invention relates to detection of intrusion into a computer system such as a computer network. While many commercial intrusion detection systems (IDS) are deployed, the protection they afford is modest. The original concept of intrusion detection, as described in D.E. Denning, "An Intrusion Detection Model," IEEE Transactions on Software Engineering, Vol. 13-2, p. 222, February 1987, was an anomaly detector. Early IDS functioned on this concept, detecting anomalies in system operation. For example, systems like the Intrusion Detection Expert System (IDES) disclosed in H. Javitz and A. Valdes, "The SRI IDES Statistical Anomaly Detector," Proceedings of the Symposium on Research in Security and Privacy, May 1991, pp. 316-326, and the Next-Generation IDES (NIDES) disclosed in D. Anderson, T. Frivold, and A. Valdes, "Next Generation Intrusion Detection Expert System (NIDES): A Summary," SRI International, Menlo Park, CA, Tech. Rep. SRI-CSL-95-07, May 1995, were built around the concept of a statistical anomaly detector. Two difficulties, one practical and the other theoretical, confounded these early systems. The practical difficulty is that nominal usage has high variability and changes over time. To meet this challenge, systems had a fairly loose threshold for tolerance of anomalous behavior, and were designed to learn new nominal statistics as they worked. This solution to the practical limitations of statistical anomaly detectors led to the theoretical difficulty: intruders could work below the threshold of tolerance and "teach" the systems to recognize increasingly abnormal patterns as normal.
To address these difficulties, a new paradigm for intrusion detection was introduced: signature recognition. Attempts have been made to create systems that recognize the signature of "normal" accesses to a computer system. The premise of such systems is that, by recognizing "normal" accesses, attacks, whether known or novel, can be detected because they are not "normal" accesses. Such systems, however, are often confounded by the extreme variability of nominal behavior. In addition, various data sources and types of pattern recognition techniques are used to separate attack signals from normal-usage noise, but the performance of these systems is limited by the signature database they work from. Many known attacks can be easily modified to present many different signatures, as described by T.H. Ptacek and T.N. Newsham, "Insertion, Evasion, and Denial of Service: Eluding Network Intrusion Detection," Secure Networks, Inc., Tech. Rep., 1998. If all variations are not in the database, even known attacks may be missed. Completely novel attacks, by definition, cannot be present in the database, and will nearly always be missed.
A number of IDS involve "training" of neural network detectors - that is, a process by which inputs with known contents are applied to the neural network IDS, and a feedback mechanism is used to adjust the parameters of the IDS until the actual outputs of the IDS match the desired outputs for each input. If such an IDS is to detect novel attacks, it should be trained to distinguish the possible nominal inputs from the possible anomalous inputs. In addition, obtaining training data with known content is difficult. It can be very time consuming to collect real data to use in training, especially if the training data is to represent a full range of nominal conditions. It is difficult, if not impossible, to collect real data representative of all anomalous conditions. If the input representing "anomalous" behavior includes known attacks, the IDS will learn to recognize those particular signatures as bad, but may not recognize other, novel attack signatures.
While there are other hierarchical neural network based IDS, they use the hierarchy to aggregate the outputs of monolithic IDS at a central location. They do not use a hierarchy as a basic detector in the manner disclosed herein. They also use the hierarchy to consolidate information for an operator, not to strengthen detection certainty or to reduce false alarms as does the technology disclosed herein.
Many characteristics of networking or computing can be completely specified in advance. Examples of these are network protocols or an operating system's "user-to-root" transition. A substantial number of attacks distort these specifiable characteristics. For this class of attack, the technology disclosed herein generates training data so that an IDS can be trained to detect novel attacks, not simply those known at the time of training.
Summary of the Invention
It is an object of the present invention to provide a hierarchical neural-network intrusion detector.
It is another object of the present invention to provide a hierarchical neural-network intrusion detector that detects novel intrusions.
It is a further object of the present invention to provide a hierarchical neural-network intrusion detector that reduces the number of false alarms.
It is still another object of the present invention to provide a hierarchical neural-network intrusion detector that has a better probability of detection with lower false alarm rate.
To achieve the above and other objects, the present invention provides a hierarchical neural network for monitoring network functions, comprising: a set of primary neural networks operatively connectable to receive inputs associated with respective ones of the network functions, each of the primary neural networks having an output; and a first tier of neural networks operatively connected to combine selected outputs of the primary neural networks.
Brief Description of the Drawings
Figure 1 is a schematic diagram of a lower portion of an exemplary hierarchical neural network in accordance with the present invention.
Figure 2 is a schematic diagram of an upper portion of an exemplary hierarchical neural network in accordance with the present invention.
Figure 3 is a schematic diagram of an exemplary hierarchical neural network in accordance with the present invention.
Figures 4 (a) - (f) graphically illustrate the output of an exemplary hierarchical neural network in accordance with the present invention. Figure 5 graphically illustrates the performance of six different arrangements of a hierarchy of neural networks.
Figure 6 graphically illustrates a vector map displaying converted n-dimensional vectors in accordance with the present invention for the fast scan, SYN Flood, and surge login events.
Figure 7 graphically illustrates converted n-dimensional vectors in accordance with the present invention for the stealthy scan on an expanded scale.
Detailed Description of the Preferred Embodiments
Figures 1 and 2 are schematic diagrams of portions of an exemplary hierarchical, back propagation neural network to which the present invention can be applied. The use of back propagation in neural networks is well known, as discussed in C. M. Bishop, Neural Networks for Pattern Recognition, New York: Oxford University Press, 1995. In the exemplary embodiment described herein, the training data was created without reference to network data, but was obtained from assertions about network behavior that are embodied in network protocols, such as the TCP protocol. The IDS is evaluated using test data produced by a network simulation. Use of a simulation to produce test data has good and bad features. The model is limited in its fidelity; however, the user and attacker behavior can be controlled (within limits) to produce challenging test cases.
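The back propagation training loop itself is conventional. By way of illustration only, a minimal sketch of a single-hidden-layer back propagation network is given below in Python with NumPy; the layer sizes, learning rate, and initialization are assumptions for illustration and are not taken from the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BackPropNN:
    """Minimal single-hidden-layer back propagation network (illustrative only)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # Small random initial weights; the network must be trained before use.
        self.w1 = rng.normal(0.0, 0.5, size=(n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, size=(n_hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, x):
        self.h = sigmoid(x @ self.w1 + self.b1)       # hidden activations
        self.y = sigmoid(self.h @ self.w2 + self.b2)  # scalar output in (0, 1)
        return self.y

    def train_step(self, x, target, lr=0.1):
        """One gradient-descent update on squared error for a single example."""
        y = self.forward(x)
        err = y - target                        # dE/dy for E = 0.5 * (y - t)^2
        d_out = err * y * (1.0 - y)             # gradient through output sigmoid
        d_hid = (d_out @ self.w2.T) * self.h * (1.0 - self.h)
        self.w2 -= lr * np.outer(self.h, d_out)
        self.b2 -= lr * d_out
        self.w1 -= lr * np.outer(x, d_hid)
        self.b1 -= lr * d_hid
        return float(0.5 * np.dot(err, err))    # squared-error loss
```

In the architecture of Figure 3, each primary neural network could be a small network of this kind, trained (as described later) on artificially generated vectors rather than on captured traffic.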
The exemplary IDS focuses on the TCP protocol; training of a neural network in accordance with the present invention is not, however, limited to any particular protocol. TCP was selected as an exemplary protocol because it has a rich repertoire of well-defined behaviors that can be monitored by the exemplary IDS. The three-way connection establishment handshake, the connection termination handshake, packet acknowledgement, sequence number matching, source and destination port designation, and flag use all follow pre-defined patterns. The exemplary IDS described herein, to which training in accordance with the present invention can be applied, is assumed to be a host-based system protecting a network server. Although the exemplary IDS looked only at TCP network data, it is 'host-based' in the sense that the IDS data are packets received by or sent from the server itself; that is, it did not see all network TCP traffic.
Not all of the richness of the TCP protocol could be exploited in the exemplary setup. For example, packet formation (particularly flag use) would be a very productive area to monitor, but ill-formed packets could not be produced by the network simulation; therefore, the exemplary IDS did not monitor packet formation. The portions of the TCP protocol that could be monitored and tested in the exemplary setup are connection establishment, connection termination and port use.
Figure 3 is a schematic diagram of an exemplary hierarchical neural network in accordance with the present invention. In Figure 3, reference numerals 1 - 26 represent primary neural networks. In accordance with the exemplary embodiment discussed herein, these primary neural networks receive desired inputs from the system being monitored. Reference numerals G1 - G9 represent first-tier groupings; reference numerals G1' - G4' represent second-tier groupings; and reference numeral G1" represents a third-tier grouping. The last tier in Figure 3 is designated TOP.
A hierarchical neural network in accordance with the present invention can include any number of primary neural networks, 1 - n. Each primary neural network monitors some small aspect of the network behavior. In the exemplary embodiment discussed herein it is important that the primary neural networks monitor, i.e., receive inputs representing, a single function, behavior, or networking aspect. The outputs of the primary neural networks are combined into groups that form the inputs to any number of secondary neural networks, G1 - Gn. The formation of the groups may be based on any criteria appropriate to the function, behavior, or aspect monitored by the associated primary neural networks. The grouping of outputs at one level to form the inputs to an arbitrary number of neural networks at a higher level may be repeated an arbitrary number of times, until a single output indicating the intrusion status of the monitored network is achieved. Figure 3 shows four such groupings: at the first tier, the second tier, the third tier, and the last tier.
The neural networks at the first tier (secondary level) and above are trained to combine their inputs using logical combinational functions. In the exemplary embodiment discussed herein, the functions are defined as follows. A "Soft OR" provides a "1" output if it receives a single strong input, or if it receives many moderate inputs; the "Soft OR" provides a "0" output if there are only weak inputs. A "Soft AND" provides a "1" output if the average of its inputs is greater than an arbitrary threshold; the "Soft AND" provides a "0" output if the average of the inputs is below the same threshold. Those skilled in the art will recognize that other combinational functions may be used, and that the functional characteristics of the "Soft OR" and "Soft AND" may be modified from those of the exemplary embodiment if desired. It will also be recognized that the selection of OR and AND, or any other combinational function, for use in a particular neural network at the secondary level or higher depends on the allowable false alarm rate versus the required probability of detection. Any combination of combinational functions can be used until the desired result is achieved.
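By way of illustration only, the sketch below gives hypothetical closed-form stand-ins for the "Soft OR" and "Soft AND" behaviors just described. In the patent these behaviors are produced by trained neural networks, not hard-coded formulas; the noisy-OR form, the steepness, and the threshold values here are assumptions chosen only to mimic the qualitative behavior in the text.

```python
import math

def soft_or(inputs, steepness=8.0, midpoint=0.5):
    """Approaches 1 for a single strong input or many moderate inputs,
    and 0 when all inputs are weak (illustrative stand-in for a trained NN)."""
    # Noisy-OR style evidence: the complement of the product of complements.
    combined = 1.0
    for v in inputs:
        combined *= (1.0 - v)
    evidence = 1.0 - combined
    return 1.0 / (1.0 + math.exp(-steepness * (evidence - midpoint)))

def soft_and(inputs, threshold=0.5, steepness=8.0):
    """Approaches 1 when the average input exceeds the threshold, 0 otherwise."""
    avg = sum(inputs) / len(inputs)
    return 1.0 / (1.0 + math.exp(-steepness * (avg - threshold)))
```

With these illustrative parameters, soft_or([0.9]) is roughly 0.96 while soft_or([0.1, 0.1]) is roughly 0.08, matching the qualitative behavior described above.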
A hierarchical structure such as that disclosed in Figure 3 breaks the task of intrusion detection into small, focused components. It uses the neural networks to monitor each small, primary element of the intrusion detection task. The exemplary structure of Figure 3 then recombines the small, focused components into a comprehensive picture of intrusion by using an arbitrary hierarchical architecture of neural networks with fixed combinational elements above the first level.
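The tiered recombination can be expressed compactly as below. The grouping table and the choice of combining function at each node are hypothetical (they do not reproduce the actual groupings of Figure 3), and the soft_or/soft_and functions are the illustrative stand-ins sketched above.

```python
# Hypothetical hierarchy description: each node names its combining function
# and the child nodes (or primary-NN indices) whose outputs it consumes.
HIERARCHY = {
    "G1":  ("soft_or",  [1, 2, 3]),
    "G2":  ("soft_or",  [4, 5, 6]),
    "G1'": ("soft_and", ["G1", "G2"]),
    "TOP": ("soft_or",  ["G1'"]),
}

def evaluate(node, primary_outputs, combiners):
    """Recursively combine primary-NN outputs up to the named node."""
    if isinstance(node, int):                 # a leaf: output of primary NN 'node'
        return primary_outputs[node]
    fn_name, children = HIERARCHY[node]
    child_values = [evaluate(c, primary_outputs, combiners) for c in children]
    return combiners[fn_name](child_values)

# Example: primary outputs near 0 are "nominal", near 1 are "anomalous".
primary = {1: 0.05, 2: 0.9, 3: 0.1, 4: 0.1, 5: 0.1, 6: 0.1}
status = evaluate("TOP", primary, {"soft_or": soft_or, "soft_and": soft_and})
```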
Table 1 gives the very simple set of assertions utilized by the exemplary IDS. The assertions in Table 1 were applied to the packets associated with each individual service, and to all TCP packets aggregated globally. No assumptions are made about use statistics; the assertions in Table 1 hold regardless of the volume of traffic, packet size distribution, inter-arrival rates, login rates, etc. The assertions do not even include knowledge about the number of, and ports for, services allowed on the monitored server, although this could well be doable for real systems.

Table 1 - Lowest-Level NN Definitions (the full table appears in the original only as figure imgf000008_0001; the rows recoverable from this text are reproduced below)

72: # rec'd data packet source sockets = # sent packet dest. sockets; # rec'd packet dest. ports = # sent packet source ports
S2: # rec'd data packet source sockets <= # open connections; # sent packet dest. sockets <= # open connections
Note: in this server model, all SYN packets are received and all SYN-ACKs are sent; used only in the all-TCP-packets monitor.

The truth of the assertions in Table 1, and more, could be tested precisely by a program that maintained state on every packet sent and received. Writing such a program would be akin to rewriting the TCP network software. If a re-write of TCP is contemplated, it would be more productive simply to put in the error and bounds checking that would prevent exploitation of the protocol for attacks. Rather than maintaining state on every packet and connection, the experiment tested whether or not the assertions would hold well enough over aggregated statistics to detect anomalies. The packet and TCP connection statistics utilized in the exemplary data discussed herein were generated over 30-second windows. The 30-second windows were overlapped by 20 seconds, yielding an IDS input every 10 seconds. The input statistics are given in Table 2.

Table 2 - Input Statistics Definition

# SYNs received
# SYNs dropped
# SYN-ACKs sent
# of new connections made
# of queued SYNs at end of the last window (T-30 sec)
# of queued SYNs at end of this window (T)
# queued SYNs timed-out
Max # of connections open
# FIN-ACKs sent
# FIN-ACKs received
# Resets sent
# Resets received
# of connections closed
# source sockets for received data packets
# destination sockets for sent packets
# destination ports for received packets
# source ports for sent packets
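The windowing just described can be expressed as follows. The packet-record fields and the counted statistics are illustrative assumptions, but the 30-second window advanced in 10-second steps (i.e., 20 seconds of overlap) matches the text.

```python
def windowed_counts(packets, window=30.0, step=10.0):
    """Yield (window_start, counts) for 30 s windows advanced every 10 s.

    `packets` is assumed to be a time-ordered list of dicts with at least a
    'time' field (seconds) and a 'kind' field such as 'SYN' or 'FIN-ACK'.
    """
    if not packets:
        return
    t0 = packets[0]["time"]
    t_end = packets[-1]["time"]
    start = t0
    while start <= t_end:
        # A linear scan per window keeps the sketch simple.
        in_window = [p for p in packets if start <= p["time"] < start + window]
        counts = {}
        for p in in_window:
            counts[p["kind"]] = counts.get(p["kind"], 0) + 1
        yield start, counts          # one IDS input vector per 10-second step
        start += step
```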
The test data included baseline (nominal use) data, and four distinct variations from the baseline. One is an extreme variant of normal use, where multiple users try to use Telnet essentially simultaneously. Three attacks were used: a SYN Flood, a fast SYN port scan, and a "stealthy" SYN port scan. The first three - the high-volume normal use, the SYN Flood and the fast port scan - all cause large numbers of SYN packets to arrive at the server in a short period of time. The "stealthy scan" variant tested the system's threshold of detection.
Figure 1 is a schematic diagram of a lower portion of an exemplary hierarchical neural network (NN) to which the present invention can be applied. Packet and queue statistics are used as input to the lowest-level NNs monitoring the nominal behaviors described in Table 1. The outputs from the Level 1 NNs are combined at Level 2 into connection establishment (CE), connection termination (CT) and port use (Pt, for all-packets only) monitors. Finally, the outputs of the Level 2 NNs are combined at Level 3 into a single status. The hierarchy shown in Figure 1 was replicated to monitor the individual status of the TCP services and the "all-packets" status. Figure 2 is a schematic diagram of an upper portion of an exemplary hierarchical neural network to which the present invention can be applied. This figure shows how each of these status monitors was combined to yield a single TCP status.
While the NNs at the lowest level of the hierarchy are trained to monitor the assertions listed in Table 1, the NNs at higher levels are intended to combine lower-level results in a way that enhances detection while suppressing false alarms. Two combinational operators, OR and AND, were chosen for the higher-level NNs. A soft OR function was implemented that passed high-valued inputs from even a single NN, enhanced low-valued inputs from more than one contributing NN, and tended to suppress single, low-valued inputs. A soft AND function was implemented that enhanced inputs when the average value from all contributing NNs exceeded some threshold, but suppressed inputs whose average value was low.
For the NNs at Levels 2 and 3, both an OR and an AND NN were tried. This resulted in the four arrangements shown in Table 3. At Levels 4 and 5, only OR NNs were used. This seemed logical, since an attack can be directed at a single service (the SYN Flood attack in the test data for this experiment was directed at Telnet only) and some attacks (like a port scan) are only visible to the "all packet" NNs. Using an AND function to combine the status outputs would tend to wash out these attacks.

Table 3 - Hierarchy Combinational Variations (the table appears in the original only as figure imgf000011_0001 and is not recoverable from this text)
In addition to the hierarchy variations described above, two contrasting hierarchies were tested. First, the NNs at Levels 1 and 2 were eliminated, and a single "flat" NN at Level 3 categorized the input statistics. This arrangement tested the value of the hierarchy. Second, the arbitrary hierarchy shown in Figures 1 and 2 was replaced with a hierarchy carefully crafted to give the best performance on the test data. This arrangement demonstrates the built-in biases of the hierarchy.
A back propagation NN is initialized randomly and must undergo "supervised learning" before use as a detector. This requires knowledge of the desired output for each input vector. Often, obtaining training data with known content is difficult. Furthermore, if the input representing "anomalous" behavior contains known attacks, the NN will learn to recognize those particular signatures as bad, but may not recognize other, novel attack signatures. The NNs described herein were trained using data generated artificially, eliminating both problems. Input vectors to each NN comprise random numbers. Each input vector was tested against the assertion monitored by that particular NN. The desired output was set to "nominal" for all random vectors for which the assertion held; the desired output was set to "anomalous" for all other vectors. Because only a few nominal vectors are generated by this approach, the set of nominal inputs was augmented by selecting some elements of the input vector randomly, and then forcing the remaining elements to make the assertion true. In general, training data can be developed for each monitored characteristic having a specifiable property. For each of these properties, assertions are devised about the relationship(s) that hold among the measured network or computing parameters. Examples of such assertions are shown in Table 1. Then random numbers are generated to correspond to each of the measured parameters. Sets of randomly-generated "parameters" (corresponding to the multi-dimensional inputs to the IDS) are tested against the assertion(s) for the monitored characteristic. The desired output is set to "nominal" for all sets of random numbers for which the assertion holds; the desired output is set to "anomalous" for all other sets. In general, the percentage of random number sets for which the assertion holds is small. The percentage of nominal inputs can be augmented by selecting some of the parameters randomly, and then forcing the remaining parameters to make the assertion true. By generating a sufficient number of such training vectors (4,000 - 6,000 were used in the experiment described herein), the n-dimensional space of nominal and anomalous input statistics can be reasonably well-spanned. The NN learns to distinguish the nominal pattern from any anomalous (attack) pattern.
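The training-data construction just described can be sketched as follows. The particular assertion used here (SYN-ACKs sent cannot exceed SYNs received) and the parameter ranges are hypothetical stand-ins for the entries of Table 1, not the actual assertions of the experiment.

```python
import random

def assertion_holds(v):
    """Hypothetical Table-1-style assertion: # SYN-ACKs sent <= # SYNs received."""
    return v["syn_acks_sent"] <= v["syns_received"]

def make_training_set(n_vectors=5000, max_count=200, nominal_fraction=0.5):
    """Random vectors labelled by the assertion; the 'nominal' class is
    augmented by forcing the free parameter so the assertion holds."""
    data = []
    for i in range(n_vectors):
        v = {"syns_received": random.randint(0, max_count),
             "syn_acks_sent": random.randint(0, max_count)}
        if i < n_vectors * nominal_fraction:
            # Force the remaining parameter so the assertion is true.
            v["syn_acks_sent"] = random.randint(0, v["syns_received"])
            label = "nominal"
        else:
            label = "nominal" if assertion_holds(v) else "anomalous"
        data.append((v, label))
    return data
```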
Exemplary test data was generated by running a network simulation developed using MIL3's OPNET Modeler. OPNET is a tool for event-driven modeling and simulation of communications networks, devices and protocols. The modeled network consisted of a server computer, client computers and an attacking computer connected via 10 Mbps Ethernet links and a hub. The server module was configured to provide email, FTP, telnet, and Xwindows services. In the example described herein, the attacking computer module was a standard client module modified to send out only SYN packets. Those packets can be addressed to a single port to simulate a SYN flood attack, or they can be addressed to a range of ports for a SYN port scan. For baseline runs, the attacking computer was a non-participant in the network.
For the surge Telnet login case, the model was configured so that all but two of the clients began telnet sessions at the same time. This created a deluge of concurrent attempts to access the telnet service. The login rate this simulation produced was several hundred times higher than the baseline rate. At the start of the surge of logins, the server is overwhelmed and drops some SYN packets. The other two clients were used to provide consistent traffic levels on the other available services.
Five simulation runs of 37,550 (simulated) seconds were made. Each run contained baseline data plus four events - one "surge" in Telnet logins and the three attacks. Twenty-five different seed values were used for the baseline portions. The port scans were conducted at varying rates and over different numbers of ports to assess the effect of scan packet arrival rate on the IDS's ability to detect the scan. Table 4 describes the characteristics of the simulation runs.

Table 4 - Event Descriptions (the table appears in the original only as figure imgf000013_0001 and is not recoverable from this text)
The following summarizes the results of applying training data in accordance with the present invention to a back propagation hierarchical neural network.
A. Anomaly Detection
After training with the randomly generated data described above, each lower-level NN in the hierarchy was presented with the network simulation data. Figure 4 summarizes the performance of the six exemplary back propagation hierarchies over all five runs. To make these graphs, the maximum, minimum and average output of each hierarchy was calculated for the baseline, the surge logins, and the three attacks. The surge login event was further broken down into two parts: a "nominal" part when the server could handle the incoming login requests, and an "off-nominal" part when the server dropped SYN packets. The length of the bars in Figure 4 shows the range of outputs, while the color changes at the average output.
The first thing to note is that for all hierarchies, the outputs for nominal inputs - baseline and surge logins when no SYNs are dropped - are virtually identical. This is a key result, since true network activity does not follow the normal distributions used in the OPNET network model; instead, it appears to follow heavy-tailed distributions where extreme variability in the network activity is expected. True network data might be expected to have more, and more extreme, variability than was seen in the simulation output baseline. The surge login results suggest that the IDS would tolerate these usage swings without false alarms, so long as the server can keep up with the workload.
The second notable result is that the outputs for the SYN Flood and fast scan attacks are well separated from the nominal output. A threshold can be set for all hierarchies that results in 100% probability of detection (PD) for these attacks, with no false alarms (FA) from nominal data. All hierarchies except the "flat" one detected some part of the stealthy scan.
The wide range of outputs for the stealthy scan reflects the fact that the scan packet rate was varied to test sensitivity. Figure 5 shows the PD for the stealthy scan as a function of scan packet rate. For each hierarchy type, the detection threshold was set just above the maximum output for nominal inputs, so these are PD at zero FA.
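Setting the threshold just above the maximum nominal output, and then scoring PD at zero FA, amounts to the following sketch; the output arrays are placeholders for the hierarchy outputs summarized in Figures 4 and 5.

```python
def pd_at_zero_fa(nominal_outputs, attack_outputs, margin=1e-6):
    """Probability of detection when the threshold is set just above the
    largest output ever produced by nominal data (i.e., zero false alarms)."""
    threshold = max(nominal_outputs) + margin
    detections = sum(1 for y in attack_outputs if y > threshold)
    return detections / len(attack_outputs)
```

A curve like Figure 5 results from repeating this calculation for the outputs observed at each scan packet rate.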
Some of the hierarchies responded to the "off-nominal" surge login, that is, during the time when SYN packets were dropped. This result was not expected. Investigation showed that this FA arises mainly from a mis-formulation of the assertion embodied in NN #3. The change in the queue size depends not on the number of SYNs received, but rather on the number of SYNs processed; that is, on the number of SYNs received less the number dropped. The incorrectly-stated assertion is violated whenever SYN packets are dropped, yielding a strong response during this portion of the surge login. When AND combinational NNs are used at Level 2, this response is suppressed; however, the OR combinational NNs at Level 2 pass this output unchanged to Level 3, and reinforce the weak response to the surge login from other Level 1 NNs. This illustrates the general effect of the AND and OR NNs. Using AND NNs, especially at Level 2, strongly suppressed noise, but also reduced sensitivity to the stealthy scan. Using OR NNs increased sensitivity at the expense of increased noise.
The "flat" hierarchy was unable to detect the stealthy scan at all. This result shows the sensitivity advantage of the deeper hierarchies. What is not evident from this graph is the difference in robustness between the hierarchy and flat IDS. The flat IDS made its detenninations on the basis of just three inputs. A flat NN with only these inputs responds as well as the flat NN with all inputs; a flat NN without just one of these inputs will miss a detection or have a FA at the surge login. This contrasts with the original hierarchy, where the SYN Flood and the scans (fast and stealthy) are each recognized by several Level 1 NNs using different input statistics. This diversity should yield a more robust detector.
The output of the "best" hierarchy shows that the organization of the hierarchy has a strong effect, histead of grouping the Level 1 NNs into CE, CT, and Pt groups, hindsiglit was used to establish three different groups: 1) all NN that responded to the surge login, 2) of the remaining NNs, the ones that respond to the stealthy scan, and 3) all the rest. This hierarchy perfonned as well as could possibly be desired. In fact, as shown in Figure 5, a threshold could be established that resulted in 100% PD at 0% FA, even for scan packet rates of 1 or fewer scan packets per 30-second window. Unfortunately, to rearrange the hierarchy to enhance detection of particular attacks is tantamount to introducing a signature detector into the IDS. A parametric study could quantify the sensitivity of PD and FA to the hierarchy arrangement. B. Anomaly Classification
There are two reasons to replace the upper-level back propagation NNs in the hierarchy with some alternative processing. First, the back propagation hierarchy gives a simple summary nominal/anomaly output, and information about the nature of the anomaly incorporated in the lower-level NNs is lost. Second, as demonstrated above, the hierarchy itself introduces an element of signature recognition into the IDS. To overcome these drawbacks, the NNs at Level 2 were eliminated completely, and the back propagation NNs at Levels 3-5 were replaced with detectors that sort the unique arrangements of inputs into anomaly categories.
The first candidate for these new detectors was a Kohonen Self-Organizing Map (SOM), as described in T. Kohonen, Self-Organizing Maps, New York: Springer-Verlag, 1995. The SOM provides a 2-D mapping of n-dimensional input data into unique clusters. The visualization prospects offered by a "map" of behavior are attractive; however, other properties of a SOM are less appealing in this context. First, a SOM works best when the space spanned by the n-dimensional input vectors is sparsely populated. The Level 1 NN output data had more variability than the SOM could usefully cluster. The SOM was nearly filled with points, and although a line could be drawn around an area where the nominal points seemed to fall, it offered no more insight than the back propagation hierarchy, at a higher computational cost. Second, the SOM only clusters data that is in its training set. The presentation of novel inputs after training produces unpredictable results.
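For reference, a generic Kohonen SOM training rule (not the specific configuration tried in the experiment) looks like the following; the grid size, learning-rate schedule, and neighborhood width are assumed values.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small 2-D self-organizing map; returns the weight grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    dim = data.shape[1]
    w = rng.random((rows, cols, dim))
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Best matching unit: grid cell whose weight vector is closest to x.
            dists = np.linalg.norm(w - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Linearly decaying learning rate and neighborhood width.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            sigma = sigma0 * (1.0 - frac) + 1e-3
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)
            step += 1
    return w
```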
Because the Level 1 NN output vectors appeared stable within an event type, and distinct between events, some means of mapping from the multi-dimensional output space to a 2-D display seemed possible. A simpler mapping technique was devised. An arbitrary vector was chosen as a reference; for this experiment, the reference vector was an average of the baseline hierarchy outputs. Then, for every input vector, the detector calculated the difference in length and angle from the reference vector. X-Y coordinates were generated from the length and angle computed for each input. The numeric values of the X-Y pairs themselves are meaningless, except to separate unlike events on a 2-D plot. These X-Y pairs were plotted like the X-Y pairs generated by the SOM. This is referred to as a "vector map". While the vector map is not guaranteed to map all distinct anomalous vectors into separate places on the map, it worked well for the exemplary data.
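One plausible reading of the length-and-angle construction is sketched below; the exact coordinate convention used in the experiment is not specified in the text, so this polar-to-Cartesian form is an assumption, with the reference vector taken (as in the experiment) to be the mean baseline output vector.

```python
import numpy as np

def vector_map(outputs, reference):
    """Map each n-dimensional Level 1 output vector to an (x, y) point.

    x and y are derived from the length difference and the angle between each
    output vector and the reference vector; the values are meaningful only for
    separating unlike events on a 2-D plot.
    """
    ref = np.asarray(reference, dtype=float)
    points = []
    for v in np.asarray(outputs, dtype=float):
        length_diff = np.linalg.norm(v) - np.linalg.norm(ref)
        denom = np.linalg.norm(v) * np.linalg.norm(ref)
        cos_angle = np.dot(v, ref) / denom if denom > 0 else 1.0
        angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
        points.append((length_diff * np.cos(angle), length_diff * np.sin(angle)))
    return points

# Nominal vectors close to the reference map near (0, 0); anomalous vectors
# map away from the origin in directions that differ by event type.
```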
Figure 6 shows a vector map for the baseline, surge login, SYN Flood and fast scan data from Run 1 (there is little run-to-run variation). Due to the reference vector choice, nominal points (baseline and nominal surge login) all cluster at (0,0). While an attack is ongoing, the fast scan and SYN Flood points are well separated from each other and from nominal. The off-nominal surge login points are distinct from nominal, but are also distinct from both the SYN Flood and fast port scan while those attacks are in progress. Using this technique, this event can be classified as an anomaly, but not a malicious attack.
Other scattered points identified with the true attacks actually occur after the attack is over, but while the residual effects are still felt. For example, for a SYN Flood, after the spoofed SYN packets stop, the queue remains full for 180 seconds. During that time, extra SYN-ACKs are sent to attempt to complete the spoofed connection requests, and legitimate users attempt to log in and fail. These anomalous events map to unique locations.
For clarity, Figure 7 shows the vector map for the stealthy scan on an expanded scale. Distance from nominal increases with the scan packet rate; however, even one scan packet per 30-second window maps to a location distinct from nominal. Thus, over time, even a very stealthy scan, with packet intervals of minutes to hours, will eventually be detectable as an accumulation of points on the map outside the nominal location.
Within the limitations of the exemplary setup, the experiment described herein shows that an IDS can be devised that truly responds to anomalies, not to signatures of known attacks. The exemplary IDS was 100% successful in detecting specific attacks, without a priori information on or training directed toward those attacks. Because of the training method used, it is expected that the IDS would detect any attack that perturbs the parameters visible to the exemplary IDS. To produce this result, the normal behavior must be specifiable in advance. Since network protocols can be formally specified, at least attacks that exploit flaws in protocol implementations should be detectable this way. In other experiments, the approach has been successfully applied to RFC 1256 and IGMP as well as to TCP. Other well-defined procedures, such as obtaining root access, are also candidates for application of this technique. In recent research, formal specifications have been used to define test cases for complete fault coverage, as described in P. Sinha and N. Suri, "Identification of Test Cases Using a Formal Approach," in Proceedings of the 29th Annual International Symposium on Fault-Tolerant Computing, June 15-18, 1999. The exemplary IDS suggests that formal specifications may provide a means for creating intrusion detectors as well. The use of windowed statistics in the exemplary detector demonstrates that this approach does not require a stateful, packet-by-packet analysis of traffic for successful application.
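As an illustration of window-based rather than stateful packet-by-packet processing, the sketch below aggregates packet events into fixed 30-second windows. The record fields 'time' and 'kind', and the particular event types counted, are assumptions made only for this example.

    from collections import Counter

    def windowed_counts(packets, window_seconds=30):
        """Aggregate packet events into fixed-length windows.

        Sketch of window-based feature extraction: each packet contributes
        to a per-window counter rather than to per-connection state.
        """
        windows = {}
        for pkt in packets:
            w = int(pkt["time"] // window_seconds)       # index of the 30-second window
            windows.setdefault(w, Counter())[pkt["kind"]] += 1
        return windows

    # Example: per-window counts of segment types could serve as inputs
    # to low-level detectors, in place of per-packet connection tracking.
    packets = [
        {"time": 2.0, "kind": "SYN"},
        {"time": 14.5, "kind": "SYN-ACK"},
        {"time": 31.0, "kind": "SYN"},
    ]
    print(windowed_counts(packets))
    # {0: Counter({'SYN': 1, 'SYN-ACK': 1}), 1: Counter({'SYN': 1})}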
The techniques demonstrated in this experiment appear to be resilient to variations in normal behavior that might confound another anomaly detector. They do not depend on usage statistics, and traffic volume has little effect on the output. The hierarchical approach is shown to be more sensitive and more robust than a flat implementation: the hierarchy was able to detect more subtle attacks than a single detector using the same inputs, and it used more of those inputs in reaching its determination of detected anomalies.
While the lowest-level detectors in the system are not attack-signature based, the hierarchy itself introduces an element of signature-based detection. This undesirable feature can be overcome by replacing some of the NNs in the hierarchy with alternative detectors. A mapping technique called "vector mapping" worked well in this role. A combination of back propagation NNs and vector maps was able to summarize overall TCP status while distinguishing among types of anomalies. Even very stealthy scans, with scan packets arriving at long intervals, could be detected with this approach. The vector map technique is not limited to use with NN detectors, but might be applied to other low-level IDS outputs.

Claims

1. A hierarchical neural network for monitoring network functions, comprising: a set of primary neural networks operatively connectable to receive inputs associated with respective ones of the network functions, each of the primary neural networks having an output; and a first tier of neural networks operatively connected to combine selected outputs of the primary neural networks.
2. A hierarchical neural network according to claim 1, wherein each of the first tier of neural networks has an output, and the neural network further comprises: a second tier of neural networks operatively connected to combine selected outputs of the first tier of neural networks.
3. A hierarchical neural network according to claim 1, wherein at least some of the first tier of neural networks operate to combine selected outputs of the primary neural networks using a combinational logic function.
4. A hierarchical neural network according to claim 2, wherein at least some of the second tier of neural networks operate to combine selected outputs of the first tier neural networks using a combinational logic function.
5. A hierarchical neural network according to claim 1, wherein at least some of the first tier of neural networks operate to combine selected outputs of the primary neural networks using a combinational logic function.
6. A hierarchical neural network according to claim 3, wherein the combinational logic function includes at least one of a Soft OR and a Soft AND.
7. A hierarchical neural network according to claim 4, wherein the combinational logic function includes at least one of a Soft OR and a Soft AND.
8. A hierarchical neural network according to claim 5, wherein the combinational logic function includes at least one of a Soft OR and a Soft AND.
9. A method of detecting an anomaly using a hierarchical neural network, comprising: applying signals representative of selected network functions to a primary set of neural networks; applying selected outputs of at least some of the primary neural networks to first tier neural networks; and using at least some of the outputs of the first tier neural networks to detect an anomaly.
10. A method of detecting an anomaly according to claim 9, wherein the applying selected outputs of at least some of the primary neural networks to first tier neural networks includes combining at least some of those outputs.
11. A method of detecting an anomaly according to claim 10, wherein the combining at least some of those outputs includes combining those outputs using a combinational logic function.
12. A method of detecting an anomaly according to claim 10, wherein the combining at least some of those outputs includes combining those outputs using at least one of a Soft OR and a Soft AND.
13. A method of detecting an anomaly according to claim 11, wherein the combinational logic function includes at least one of a Soft OR and a Soft AND.
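Claims 6-8, 12 and 13 above recite Soft OR and Soft AND combinational functions realized by the tier neural networks. The closed-form expressions below are not the claimed neural-network implementations; they are a minimal sketch, using common probabilistic forms, intended only to illustrate the kind of combining behavior such tier units would exhibit on continuous detector outputs.

    import numpy as np

    def soft_or(values):
        """Probabilistic ('noisy') OR over detector outputs in [0, 1]."""
        v = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
        return float(1.0 - np.prod(1.0 - v))

    def soft_and(values):
        """Product form of a soft AND over detector outputs in [0, 1]."""
        v = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
        return float(np.prod(v))

    # A first-tier unit might flag an anomaly if any of its primary detectors
    # does (soft OR), while another requires agreement among them (soft AND).
    primary_outputs = [0.1, 0.85, 0.2]
    print(soft_or(primary_outputs), soft_and(primary_outputs))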
PCT/US2001/047828 2000-12-13 2001-12-12 A hierarchial neural network intrusion detector WO2002048959A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2002228988A AU2002228988A1 (en) 2000-12-13 2001-12-12 A hierarchial neural network intrusion detector
US10/433,713 US20040054505A1 (en) 2001-12-12 2001-12-12 Hierarchial neural network intrusion detector

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US25516400P 2000-12-13 2000-12-13
US60/255,164 2000-12-13

Publications (2)

Publication Number Publication Date
WO2002048959A2 true WO2002048959A2 (en) 2002-06-20
WO2002048959A3 WO2002048959A3 (en) 2003-08-14

Family

ID=22967122

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/047828 WO2002048959A2 (en) 2000-12-13 2001-12-12 A hierarchial neural network intrusion detector

Country Status (2)

Country Link
AU (1) AU2002228988A1 (en)
WO (1) WO2002048959A2 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5295227A (en) * 1991-07-09 1994-03-15 Fujitsu Limited Neural network learning system
US5557686A (en) * 1993-01-13 1996-09-17 University Of Alabama Method and apparatus for verification of a computer user's identification, based on keystroke characteristics

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
DENAULT S ET AL: "INTRUSION DETECTION: APPROACH AND PERFORMANCE ISSUES OF THE SECURENET SYSTEM" COMPUTERS & SECURITY. INTERNATIONAL JOURNAL DEVOTED TO THE STUDY OF TECHNICAL AND FINANCIAL ASPECTS OF COMPUTER SECURITY, ELSEVIER SCIENCE PUBLISHERS. AMSTERDAM, NL, vol. 13, no. 6, 1994, pages 495-508, XP000478665 ISSN: 0167-4048 *
LEE S C ET AL: "Building a true anomaly detector for intrusion detection" MILCOM 2000 PROCEEDINGS. 21ST CENTURY MILITARY COMMUNICATIONS. ARCHITECTURES AND TECHNOLOGIES FOR INFORMATION SUPERIORITY (CAT. NO.00CH37155), PROCEEDINGS OF IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM'00), LOS ANGELES, CA, USA, 22-25 OCT. 2000, pages 1171-1175 vol.2, XP002242525 2000, Piscataway, NJ, USA, IEEE, USA ISBN: 0-7803-6521-6 *
LEE S C ET AL: "Training a neural-network based intrusion detector to recognize novel attacks" IEEE TRANSACTIONS ON SYSTEMS, MAN & CYBERNETICS, PART A (SYSTEMS & HUMANS), JULY 2001, IEEE, USA, vol. 31, no. 4, pages 294-299, XP002242526 ISSN: 1083-4427 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040008375A (en) * 2002-07-18 2004-01-31 광주과학기술원 Intrusion detection method and recording media based on common features of abnormal behavior
US7957372B2 (en) 2004-07-22 2011-06-07 International Business Machines Corporation Automatically detecting distributed port scans in computer networks
US8122504B1 (en) 2004-10-14 2012-02-21 Lockheed Martin Corporation Flood attack projection model
US8433768B1 (en) 2004-10-14 2013-04-30 Lockheed Martin Corporation Embedded model interaction within attack projection framework of information system
EP1986391A1 (en) 2007-04-23 2008-10-29 Mitsubishi Electric Corporation Detecting anomalies in signalling flows
US9413779B2 (en) 2014-01-06 2016-08-09 Cisco Technology, Inc. Learning model selection in a distributed network
US9160760B2 (en) 2014-01-06 2015-10-13 Cisco Technology, Inc. Anomaly detection in a computer network
US9450978B2 (en) 2014-01-06 2016-09-20 Cisco Technology, Inc. Hierarchical event detection in a computer network
US9503466B2 (en) 2014-01-06 2016-11-22 Cisco Technology, Inc. Cross-validation of a learning machine model across network devices
US9521158B2 (en) 2014-01-06 2016-12-13 Cisco Technology, Inc. Feature aggregation in a computer network
US9563854B2 (en) 2014-01-06 2017-02-07 Cisco Technology, Inc. Distributed model training
US9870537B2 (en) 2014-01-06 2018-01-16 Cisco Technology, Inc. Distributed learning in a computer network
US10356111B2 (en) 2014-01-06 2019-07-16 Cisco Technology, Inc. Scheduling a network attack to train a machine learning model

Also Published As

Publication number Publication date
AU2002228988A1 (en) 2002-06-24
WO2002048959A3 (en) 2003-08-14

Similar Documents

Publication Publication Date Title
Lee et al. Training a neural-network based intrusion detector to recognize novel attacks
US20040054505A1 (en) Hierarchial neural network intrusion detector
Al-Jarrah et al. Network Intrusion Detection System using attack behavior classification
JP6139656B2 (en) Use of DNS requests and host agents for path exploration and anomaly / change detection and network status recognition for anomaly subgraph detection
Narang et al. Peershark: detecting peer-to-peer botnets by tracking conversations
Labib et al. An application of principal component analysis to the detection and visualization of computer network attacks
Kemp et al. Utilizing netflow data to detect slow read attacks
US20040059947A1 (en) Method for training a hierarchical neural-network intrusion detector
Lysenko et al. A cyberattacks detection technique based on evolutionary algorithms
Mangrulkar et al. Network attacks and their detection mechanisms: A review
Dhir et al. Study of machine and deep learning classifications in cyber physical system
Liu et al. Real-time diagnosis of network anomaly based on statistical traffic analysis
Gandhi et al. Detecting and preventing attacks using network intrusion detection systems
WO2002048959A2 (en) A hierarchial neural network intrusion detector
Langin et al. A self-organizing map and its modeling for discovering malignant network traffic
Lu et al. Botnets detection based on irc-community
Abushwereb et al. Attack based DoS attack detection using multiple classifier
Kemp et al. Detecting slow application-layer DoS attacks with PCA
Kemp et al. An approach to application-layer dos detection
Mariam et al. Performance evaluation of machine learning algorithms for detection of SYN flood attack
Patil et al. A comparative performance evaluation of machine learning-based NIDS on benchmark datasets
Wutyi et al. Heuristic rules for attack detection charged by NSL KDD dataset
WO2002048958A2 (en) Method for training a hierarchical neural-network intrusion detector
US20040088341A1 (en) Method for converting a multi-dimensional vector to a two-dimensional vector
Elsherif et al. DDOS Botnets Attacks Detection in Anomaly Traffic: A Comparative Study.

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: 10433713

Country of ref document: US

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP