June 2003
Abstract
Anomaly based Intrusion Detection Systems are the type of system with the biggest potential within intrusion detection. An anomaly based Intrusion Detection System needs to be able to learn user or system behaviour, because user and system behaviour change over time in today’s dynamic environments. In this research we experiment with user behaviour as the parameters for anomaly intrusion detection. There are several methods for helping an Intrusion Detection System learn user behaviour. The Intrusion Detection System proposed in this research uses a backpropagation neural network to learn user behaviour. Neural networks have previously been applied successfully to areas such as pattern recognition, speech recognition, computer vision, control, prediction and other real world problems.
The training of a neural network takes a lot of time and resources. In this research, we wanted to see whether a neural network could classify normal traffic correctly, and detect known and unknown attacks, without using a huge amount of training data. For the training and testing of the neural network, we used the 1998 DARPA Intrusion Detection Evaluation data sets. We used 197 sessions of traffic for training the neural network; of these, 99 sessions contained normal traffic and 98 sessions contained attacks.
The experiments were separated into three parts. The first, preliminary experiment was conducted to determine when the neural network was properly trained to classify sessions correctly, and when it failed to classify any sessions at all. In this preliminary experiment we used both known and unknown attacks. The next experiment tested the neural network on a small amount of traffic to measure the classification rate, using normal traffic, known attacks and unknown attacks. In the final experiment we tested with a larger amount of traffic.
Unknown attacks are the most threatening, because we do not know what to expect from them. In the final experiments, we achieved a classification rate of 86% on known and unknown attacks. Compared with two other studies, which reported classification rates of 77.3% and 80%, the results of our experiments are very promising.
Table of Contents
Abstract
Table of Contents
List of Figures
List of Tables
Acknowledgements
Statement of Originality
List of Abbreviations
Chapter 3: Research Methodology
3.1 Introduction
3.2 Outline
3.3 Input files to the Neural Network
3.3.1 Log file format
3.3.2 Making the input files
3.4 Testing environment
3.4.1 Training Pair
3.4.2 Parameters for the Neural Network
3.4.3 Criteria for training termination
3.4.4 Different tests
3.4.5 Attacks in different files
Chapter 6: Conclusion
6.1 Introduction
6.2 Intrusion Detection with Neural Networks
6.3 Suggestions for further research
Appendix
Appendix 1: Training data
Appendix 2: Testing data
Appendix 3: Attacks used in this research
References
List of Figures
List of Tables
Table 2-1 Key findings from the two security surveys
Table 2-2 Misuse vs. Anomaly intrusion detection
Table 3-1 Original parameters in the DARPA Data Sets
Table 3-2 Parameters used in this experiment
Table 3-3 Example of the parameters for the Neural Network
Table 3-4 Separated attacks
Table 4-1 The preliminary experiment results
Table 4-2 Second experiment results for normal traffic
Table 4-3 Second experiment results for known attacks
Table 4-4 Second experiment results for unknown attacks
Table 4-5 Final experiment results for normal traffic
Table 4-6 Final experiment results for known attacks
Table 4-7 Final experiment results for unknown attacks
Table 5-1 Known attacks not detected
Table 5-2 Unknown attacks not detected
Acknowledgements
Finally I would like to thank all academic, administrative and technical staff
from the School of Information Technology for their assistance throughout my
research.
Statement of Originality
This work has not previously been submitted for a degree or diploma in any
university. To the best of my knowledge and belief, this thesis contains no
material previously published or written by another person except where due
reference is made in the thesis itself.
Signature:………………………….………………
Date: ………………………………………………
List of Abbreviations
Chapter 1: Introduction
1.1 History
The Internet has almost become a “new world”, and as in the real world, this “new world” has its criminals and vandals. The threat of vandalism and theft has given users a need for security components to protect themselves.
In 1983 the ARPAnet, and every network attached to the ARPAnet, officially adopted the TCP/IP networking protocol. The TCP/IP protocol had been under development since 1973, and had been tested in an internetwork the same year [1]. From 1983, all networks that used TCP/IP were collectively known as the Internet. The standardization of TCP/IP allowed the number of Internet sites and users to grow exponentially [2, 3]. When the Internet started to be widely used, users were so excited about connecting systems that security was forgotten. Everyone just wanted to use the Internet and did not think of the dangers it also brought. The first Internet worm was unleashed on November 2, 1988 by Robert T. Morris Jr [3, 4]. Since then, the number of incidents has grown rapidly each year. In 2002, the number of incidents was 82,094 [5].
All information systems and computer networks are threatened by electronic attacks. Computer systems today face a variety of threats, such as [6]:
• Integrity
• Confidentiality
• Denial of Service
• Authentication
One could say that the silver bullet in network security would be to lock the computers in a bank vault, with no external access at all and armed guards to guard the vault. Even then, there would still be the threat of inside attacks from, for example, the guards. Such a system is of course not possible, because most systems need access to the outside world. This is why we have to raise the security level of connected systems as close as possible to that of computers locked in a bank vault.
Network security can be seen as a chain, and a chain is no stronger than its weakest link. The same can be said about network security: your security level is no higher than the weakest part of your security components. A quotation from the Babylonian Talmud, Tractate Baba Metzia [6], is illustrative here: “It is not the mouse that is the thief, it is the hole that lets the mouse in”.
A few years ago, someone who wanted to break into a computer system had to have very good computer skills. He had to know the security holes and how to exploit them. Today, the intrusion threat is bigger than ever, because applications are available on the Internet that give people with almost no computer experience the ability to break into computer systems. Because of this, attacks against computer systems and networks have increased significantly in recent years [8].
Today, almost everyone can find tools to use for attacks. Someone interested in this can easily search for such tools on, for example, Google [9] and start using them from home. And someone who, for example, just wants to attack a small neighbourhood firm can easily hide their IP address by using public proxy servers found on the Internet.
As Kelly Schupp from GuardedNet noted [10], “We don’t believe there’s one
silver bullet product, nor will there ever be. However, hopefully with the
implementation of newer solutions, life will become a little more manageable
and (at least temporarily) more secure”.
After reading an article saying that the attack methods we see and know today could look puny, like child’s play, compared to the next-generation threats that are coming [11], my interest in the network security area was truly raised.
The aim of this research is to find out what needs to be done to make a computer system safer, without having a system that sends out false alarms that take up much of the time of an already busy system administrator. The job of security administrators is almost impossible today: no matter how many holes they find in their network, and no matter how many bugs they fix to keep intruders out, an intruder just needs to find one hole to get in.
The goal for this research is to develop an Intrusion Detection System that is
able to detect both known and unknown attacks without relying on signatures
or other hard coded updates to stay protected against the latest attacks. For
this we will examine neural networks ability to learn user behaviour, and we
will use this for intrusion detection.
The content of this thesis is divided into six chapters. Chapter one introduces the background, motivation and aim of the research. Chapter two presents an overview of security components and goes deeper into Intrusion Detection Systems. Chapter three proposes and describes the methods used in this research. Chapter four lists the results obtained in the experiments. Chapter five analyzes the results of the experiments and shows which attacks were not detected by the neural network; this chapter also compares the results with other research. Chapter six draws conclusions from the research undertaken.
Chapter 2: Literature Review
2.1 Introduction
The field of network security motivates this research. This chapter provides a
review of the network security components and attack methods that are
mostly used today. The chapter starts with an overview of the threats that
companies connected to the Internet have today. Further it explains some of
the network security components that are used, and goes further into the
Intrusion Detection System technology. It then explains the different types of
Intrusion Detection Systems, and how far the research has come today. This
chapter also gives an explanation of neural networks, and it ends with an
overview of some of the most used attack methods today.
2.2 Background
40% of the respondents also detected Denial of Service attacks against their servers.
A survey similar to the one in the USA has been conducted in Australia [12]. This is the only survey of its type in Australia focusing on the actual extent and nature of security incidents. It showed that 67% of the companies had experienced unauthorised use of their networks in the last 12 months. Of these, 65% had experienced attacks from inside their company.
The key findings from the two security surveys [7,12] can be seen in Table
2-1.
                                                       USA %   Australia %
How many had a firewall installed?                       89        96
How many had an Intrusion Detection System installed?    60        53
How many had Anti-Virus software installed?              90        99
How many had unauthorized use of their network?          60        67
How many attacks from the outside?                       40        89
How many attacks from the inside?                        78        65
How many detected Denial of Service attacks?             40        43
How many had security incidents on their web site?       38        30
Annual reports from the Computer Emergency Response Team (CERT) indicate a significant increase in the number of computer security incidents, from just 6 incidents reported in 1988 to 82,094 reported in 2002 [5].
[Figure: Number of incidents reported to CERT per year, 1988–2002]
The Internet 10 years ago was not a utility in any sense. It is now supporting 10–15 percent of the Gross Domestic Product of the industrialized world [11]. This makes the need for better security higher than ever, as are the threats. Gil Raanan from Sanctum explained why businesses are in such need of better security [14]: “Companies are using their networks for confidential and mission critical functions. In addition, businesses today are more vulnerable than in the past because they are sharing information, such as financial and sales data, over internal and external networks”. William A. Wulf, president of the National Academy of Engineering, explained how important the Internet is today [15]: “We are so dependent on the cyber infrastructure now. We can’t do financial transactions without it. Even the larger infrastructure – the power grid, gas pipelines – depends on it”.
2.3 Firewall
A firewall has at least two network interfaces: one for the network it is protecting, and another for the network it is protecting against. A firewall can not protect a network from attacks originating inside the network.
The firewall is usually placed on the perimeter of the network, but it can also
protect “a corner” of the internal network.
All the traffic between the Internet and the local network goes through the firewall. This gives the system administrator the opportunity to deny or grant specified services access to certain areas.
There are three different types of firewall technologies:
The components of an Intrusion Detection System are [17]:
False positives are those error messages that take much of the system
administrator’s time. A high rate of these errors will degrade the productivity
of the system by invoking unnecessary countermeasures. False negatives
are those errors that are hard to detect because the system sees the attacker
as an ordinary user. These attacks are also the most dangerous and these
errors can cause big losses for a company.
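To make the two error types concrete, here is a small sketch in Python; the counts used are invented for illustration and are not data from this research:

```python
# Sketch: computing false positive and false negative rates for an IDS.
# The example counts below are invented, not results from this research.

def error_rates(true_pos, false_pos, true_neg, false_neg):
    """Return (false positive rate, false negative rate)."""
    # False positives: normal sessions wrongly flagged as attacks.
    fp_rate = false_pos / (false_pos + true_neg)
    # False negatives: attacks the system saw as ordinary use.
    fn_rate = false_neg / (false_neg + true_pos)
    return fp_rate, fn_rate

# Example: 90 attacks detected, 10 missed; 5 false alarms over 95
# correctly accepted normal sessions.
fp, fn = error_rates(90, 5, 95, 10)
print(fp, fn)  # 0.05 0.1
```

Even a low false positive rate translates into many alarms on a busy network, which is why this metric dominates the administrator's workload.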
Intrusion Detection Systems can be divided into on-line and off-line systems [21]. Off-line systems are run periodically and detect intrusions after the fact, based on system logs. On-line systems are designed to detect intrusions
while they are happening, thereby allowing for quicker intervention. Intrusion
Detection Systems can also be classified according to the kind of audit
source location they analyze [19]:
The host based Intrusion Detection System looks at communication in and
out of the computer, checks the integrity of the system files and suspicious
processes. The network based Intrusion Detection System looks at packets
on the network as they pass the intrusion detection sensor. The best solution
will be to have a system that combines these two systems.
There are many different vendors making Intrusion Detection Systems for the strongly needed network security market. Most of them are commercial vendors, but there are also some popular open source systems such as Snort [22].
2.4.2 History
In the late 1970s and early 1980s, system administrators started to use logs in their intrusion detection [26]. They could not yet do any intrusion detection in real time, so they read the logs to check for traces and evidence of any type of intrusion. They had to be lucky to catch an attack in progress.
Some of the earliest work in intrusion detection was performed in the late 1970s by James P. Anderson [27]. Anderson defined an intrusion as any unauthorized attempt to access, manipulate, modify, or destroy information, or to render a system unreliable or unusable.
After a while, all logs were stored online and programs were developed to do the analysis. These programs consumed many of the computer’s resources and were slow, so they had to be run at night. The logs were still just used to check for intrusions and to gather evidence if there had been an attack, but the administrator no longer had to check all the logs manually.
D. Denning describes in her report “An Intrusion-Detection Model” [28] four factors for developing a real-time Intrusion Detection System.

D. Denning’s report was published in the beginning of 1987, but the factors she lists are still very useful in the development of security components today.
Today’s Intrusion Detection Systems have many shortcomings [17] such as:
Many companies now have networks of up to gigabit speeds, and the amount of data that needs to be checked is huge. Also, the high rate of false alarms pulls down the efficiency of Intrusion Detection Systems. The lack of ability to detect new attacks is perhaps the biggest shortcoming of today’s Intrusion Detection Systems.
Much of the current effort seems to be aimed at detecting attacks made by relatively unskilled and unfocused attackers using tools that are freely available on the Internet. But the greatest threat lies in narrowly focused attacks launched by enemies who will make serious attempts to avoid detection. These attacks are most likely not detected by misuse Intrusion Detection Systems.
D. Denning said in her report from 1987 [28] that a person could escape detection through gradual modification of behaviour. This is still possible today, and it is one of the biggest, if not the biggest, problems in anomaly based intrusion detection. She also left several other questions unsolved, and most of them are still not completely answered today:
• Completeness of approach: Does the approach detect most, if not all, intrusions, or will a significant proportion of intrusions go undetected by this method?
• Timeliness of approach: Can we detect most intrusions before
significant damage is done?
• Choice of metrics, statistical models and profiles: Which metrics,
models, and profiles provide the best discriminating power? Which are
most cost-effective? What are the relationships between certain types
of anomalies and different methods of intrusion?
• System design: How should a system based on the model be
designed and implemented?
• Feedback: What effect should detection of an intrusion have on the target system? Should the Intrusion Detection Expert System automatically direct the system to take certain actions?
• Social implications: How will an Intrusion Detection System affect the
user community it monitors? Will it deter intrusions? Will the users feel
their data is better protected? Will it be regarded as a step towards
“big brother”? Will its capabilities be misused to that end?
The high rate of false positives (false alarms) characterizes most Intrusion Detection Systems today. These false alarms can degrade the productivity of the system by invoking unnecessary countermeasures [19], and they also take a lot of the system administrator’s time to check.
The task of anomaly intrusion detection is to determine if an activity is
unusual enough to suspect an intrusion. A basic assumption of anomaly
detection is that attacks differ from normal behaviour [23]. If an organization
implements an anomaly based Intrusion Detection System, they must first
build profiles of normal user and system behaviour to serve as the statistical
base for intrusion detection, and then use deviations from this baseline to
detect possible intrusions [18]. Any activity sufficiently deviant from the
baseline will be reported as anomalous and considered as a possible attack.
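As a minimal sketch of this idea, assume a user's behaviour can be summarised by numeric parameters, and that a simple mean and standard deviation form an adequate statistical base; both the parameter (logins per day) and the deviation factor k are assumptions made for illustration only:

```python
import statistics

# Sketch: build a statistical profile of one behaviour parameter from
# observed normal activity, then flag values that deviate from the
# baseline by more than k standard deviations. Parameter choice and the
# factor k are illustrative assumptions, not values from this research.

def build_profile(observations):
    """observations: numeric samples of normal behaviour."""
    return statistics.mean(observations), statistics.stdev(observations)

def is_anomalous(value, profile, k=3.0):
    mean, std = profile
    return abs(value - mean) > k * std

logins_per_day = [4, 5, 6, 5, 4, 6, 5]   # hypothetical training baseline
profile = build_profile(logins_per_day)
print(is_anomalous(5, profile))    # within the baseline
print(is_anomalous(40, profile))   # far outside: reported as anomalous
```

Real systems track many such parameters per user and combine the deviations, but the principle of comparing activity against a learned baseline is the same.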
1. A user attempting to penetrate the security mechanisms in the
operating system might execute different programs or trigger
more protection violations from attempts to access
unauthorized files or programs. If his attempt succeeds, he will
have access to commands and files not normally permitted to
him.
2. A user trying to leak sensitive documents might log into the
system at unusual times or route data to remote printers not
normally used. A user attempting to obtain unauthorized data
from a database through aggregation and inference might
retrieve more records than usual.
• Denial-of-Service attacks: An intruder able to monopolize a resource
might have abnormally high activity with respect to the resource, while
activity for all other users is abnormally low.
The main advantage of anomaly Intrusion Detection Systems is that they can detect previously unknown attacks. By defining what is normal, they can identify any violation, whether it is part of the threat model or not. In today’s systems, the advantage of detecting previously unknown attacks is paid for in terms of high false-positive rates [26, 18]. Disgruntled employees, bribery and coercion make networks vulnerable to attacks from the inside [13]. Anomaly intrusion detection can detect when employees deviate from their normal routines in attempting an attack.
Disadvantages of anomaly Intrusion Detection Systems are that they are less effective in dynamic environments, where employees have erratic hours or switch project resources frequently. Also, inaccurate or incomplete user and system profiling can lead to false positives [18]. This type of intrusion detection also has difficulty classifying or naming attacks, since it depends only on deviations from normal behaviour [30]. When new users are introduced into the target system, two potential problems occur [28].
Both of these problems will give the system a high rate of false positives, so it is hard to know how to deal with them. One way to “solve” them is to ignore anomalies for a short period, or to raise the deviation threshold. But both of these solutions introduce even more dangerous problems: what if the new users make an intrusion, and what happens if the system is attacked during this period?
The main advantage of misuse Intrusion Detection Systems is that they focus
analysis on the audit data and typically produce few false-positives [18].
Since they rely on signatures, the system knows what kind of attack it is
when it occurs. This way the system can easily assign names to the attacks
when they occur, and the system administrator can see what kind of attack
the system is under. The problem with these systems is that they are script based and recognize only known scripts (“signatures”); they are unable to detect truly novel attacks [10, 30]. Since misuse Intrusion Detection Systems have no capability for autonomous learning, they require frequent updates: as new attacks are discovered, developers must model them and add them to the signature database.
A report from 1999 [13] showed that misuse Intrusion Detection Systems can be very effective in reducing false alarms if they are implemented properly. The problem is that there can be small changes in the attack methods, and to detect these changes new signatures have to be written. Many variations of one signature are often written, and over time this slows down the system because the signature database grows so big.
Today, nearly all Intrusion Detection Systems are signature based. The
performance of these systems is limited by the signature database they work
from. Many known attacks can be easily modified to present many different
signatures. If the database does not contain all the different variations, even
known attacks may be missed [31]. Attackers can also bypass the signatures
by encrypting the code so that the packets do not match any known attack
signatures [13].
There are now systems that combine the two types of Intrusion Detection Systems. Hybrid systems can use a rule base to check for known attacks against a system, and an anomaly algorithm to protect against new types of attacks [18]. This type of Intrusion Detection System takes the advantages of both approaches, but unfortunately it also inherits some of the disadvantages. Misuse detection could be used in combination with anomaly detection to name the attacks. This shortens the response time the system administrator needs, as he can see what type of attack the system is under.
2-4, is made to ease the sharing of information between different Intrusion
Detection Systems [24].
2.4.8 Misuse versus Anomaly Detection
Despite the large amount of research done on intrusion detection, it is pretty clear that anomaly intrusion detection has more potential because of its ability to catch novel attacks. Here are the advantages and disadvantages of misuse and anomaly intrusion detection as they stand today:
Misuse IDS
Advantages:
- Can name attacks
- System administrators can write their own signatures
- Easy to implement
- Properly implemented, it does not give many false alarms
Disadvantages:
- The signature database tends to get big and cluttered after a while, which can slow down the system
- Can not completely detect novel attacks
- Needs to be updated with new signatures to catch newly discovered attacks
- Unprotected against new attacks during the time it takes to write new signatures

Anomaly IDS
Advantages:
- Can easily detect attacks from the inside
- Hard for an intruder to know how to behave without raising an alarm, since profiles can be kept for individual users
- Can detect previously unknown attacks
- Can use more sophisticated rules
Disadvantages:
- Complex to implement
- High rate of false alarms
- Still not satisfactory in a dynamic environment
- Can not name attacks
If, for example, there is an anonymous FTP connection attempt from an outside IP address, this may not make the system suspicious at all. But if the FTP connection attempt comes within a set period of time after a scan from the same IP, the system should become more suspicious. This can be done with anomaly systems. An anomaly intrusion detection system will not grow “big and slow” over time, because it learns the patterns of the users over time.
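The scan-followed-by-FTP scenario above can be sketched as a simple stateful correlation rule; the event format, the IP addresses and the length of the time window are assumptions made for illustration:

```python
# Sketch: raise suspicion only when an FTP connection attempt follows a
# port scan from the same IP within a time window. The window length,
# event format and addresses are illustrative assumptions.

SCAN_WINDOW = 600  # seconds; assumed correlation window

def correlate(events):
    """events: time-ordered (timestamp, ip, kind) with kind 'scan' or 'ftp'."""
    last_scan = {}      # ip -> time of most recent scan from that ip
    suspicious = []
    for t, ip, kind in events:
        if kind == "scan":
            last_scan[ip] = t
        elif kind == "ftp":
            # Suspicious only if this ip scanned us recently.
            if ip in last_scan and t - last_scan[ip] <= SCAN_WINDOW:
                suspicious.append((t, ip))
    return suspicious

events = [(0, "10.0.0.9", "scan"), (120, "10.0.0.9", "ftp"),
          (130, "10.0.0.7", "ftp")]
print(correlate(events))  # [(120, '10.0.0.9')]
```

The lone FTP attempt from the second address raises no alarm, which is exactly the context-dependent judgement a signature lookup cannot make.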
Over the last 20 years, much research has been conducted on intrusion detection, starting with James P. Anderson’s whitepaper “Computer Security Threat Monitoring and Surveillance” in 1980 [27]. Anderson introduced the concept of computer threats and detection of misuse; this is the same concept that is applied in host based Intrusion Detection Systems. Dorothy Denning wrote a report in 1987 [28] that has almost become a cornerstone of the field and has inspired many researchers; almost every research paper on intrusion detection uses it as a reference. Denning introduced the first model for intrusion detection, and most of her work is still relevant today.
parameters that were used in this experiment were username, host, type of
connection and time session started.
A report from late 2000 [30] concluded that all the evaluations performed up to that date indicated that Intrusion Detection Systems were only moderately successful at identifying known intrusions, and considerably worse at identifying those that had not been seen before.
There is still very much that needs to be done before Intrusion Detection Systems work satisfactorily. These are the biggest issues:
• The biggest problem with Intrusion Detection Systems is that they are
reactive, not proactive.
• In anomaly detection there is a problem when there are small changes in a user’s behaviour. This happens from time to time; will the Intrusion Detection System then report it as an attack or misuse?
• Anomaly intrusion detection systems still do not work satisfactorily in a dynamic environment where there are big changes in user behaviour.
• We need effective systems that can detect close to 100% of attack methods without the high rate of false positives that we have today.
• We need security components that are resilient, and that can respond
intelligently to attacks and have countermeasures.
• Current intrusion detection systems have limited response
mechanisms that are inadequate given the current threat. While
intrusion detection system research has focused on better techniques
for intrusion detection, intrusion response remains principally a manual
process.
• A major weakness in today’s Intrusion Detection Systems is that they rely on known attack methods to identify attacks. They lose effectiveness from the time a new attack is discovered until a signature for this attack is made.
The use of several security components can make a network more secure, because misconfigurations or weaknesses in one component can be compensated for by another. Both firewalls and Intrusion Detection Systems deliver functionality that the other component can not. An Intrusion Detection System complements a firewall by detecting what is going on in the network; a firewall is only a kind of fence, so it will not detect what is happening on the inside. The Intrusion Detection System can also catch failed attempts against the network, which is important because it shows how big the threats from the outside are. Even more important, an Intrusion Detection System can catch attacks that pass the firewall, such as Denial of Service attacks.
Honeynet
The idea with several security components is to establish a network
perimeter and to identify all possible points of entry to the network. It is also
recommended to protect sensitive servers with intrusion detection sensors on
every server. The square boxes with magnifying glasses in them illustrate
intrusion detection sensors. The Intrusion Detection Systems’ sensors should
be both host based and network based. Host based sensors are more useful
for protecting critical servers, and network sensors are more useful for
detecting abnormal traffic on the local network. The Central Manager receives reports from both the host based and network based sensors, and processes and correlates these reports to detect intrusions.
The firewall protects the internal network from unwanted and unauthorized
traffic from the outside. Sensors for the Intrusion Detection System should be
placed at strategic places around the network. The first sensor is there to
identify attacks on servers in the demilitarized zone and attacks that are
directed at the company’s network. The second sensor is placed right after
the firewall. This sensor serves to confirm secure configuration and operation
of the firewall, and it can also identify attacks that pass the firewall. The third
sensor identifies any attacks from the inside against the local servers. The
fourth and fifth sensors protect single servers. These
sensors can protect the servers both against attacks from outside that have passed
the firewall and the other sensors, and against attacks from inside. All the
sensors should be configured to report to one central Intrusion Detection
System console.
2.6 Intrusion Prevention System
generalize from previously observed behaviour (normal or malicious) in order
to recognize similar future behaviour [37]. The general assumption is that the
normal behaviour of a system can often be characterized by a series of
observations over time. A simple approach is to define a threshold for each
monitored parameter of the system; if a parameter exceeds its
threshold, it is considered an abnormality [38].
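This threshold approach can be sketched in a few lines; the parameter names and limits below are hypothetical, not taken from [38]:

```python
# A minimal sketch of per-parameter thresholding: each monitored
# parameter gets a threshold, and a value above the threshold is flagged
# as an abnormality. Names and limits are illustrative only.
THRESHOLDS = {
    "failed_logins_per_hour": 5,
    "connections_per_minute": 120,
    "bytes_sent_per_session": 10_000_000,
}

def anomalies(observation):
    """Return the monitored parameters whose observed value exceeds its threshold."""
    return [name for name, limit in THRESHOLDS.items()
            if observation.get(name, 0) > limit]
```

For example, `anomalies({"failed_logins_per_hour": 9})` would flag the failed-login count as abnormal while leaving the other parameters alone.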
2.8 Neural Networks
The work on neural networks was inspired by the human brain. The human
brain consists of neural networks. As a person learns new things, paths
between different parts of the brain are created. If a person does not refresh
his mind from time to time, these paths will eventually vanish.
A neural network is a powerful data modelling tool that is able to capture and
represent complex input/output relationships. This tool can acquire
knowledge through learning from input data. Neural networks are essentially
networks of computational units that jointly implement complex mapping
functions [41]. A neural network consists of a collection of processing elements that are
highly interconnected and transform a set of inputs into a set of desired
outputs. Here are some of the characteristics of a neural network:
The neurons in a neural network are organized into layers. This is shown in
Figure 2-6. The layers are divided into an input layer, one or more hidden layers
and an output layer. The inputs to the input layer are
set by the environment. This layer does not play any significant role in the
computation of the result; it only feeds information into the neural network. The
hidden layers have no external connections; they only have connections with
other layers in the network. The interaction between the hidden layers
continues until some condition is satisfied. The outputs from the output layer
are returned to the environment.
Traditional neural networks are unable to improve their analysis of new data
until they are taken off-line and retrained using representative data that includes
the new information. Today, neural networks are widely used in both software
and hardware products around the world.
The ability to learn is the fundamental point of neural networks. The neural
network learns by making systematic changes to the weights in each neuron.
Most neural networks learn by using an algorithm called backpropagation. The
invention of the backpropagation algorithm [42] has played a large part in the
resurgence of interest in neural networks.
1. Select the next training pair from the training set, and apply input
vector to the network input.
2. Calculate the output of the network.
3. Calculate the error between the network output and the desired
output.
4. Adjust the weights of the network in a way that minimises the error.
5. Repeat steps 1 through 4 for each vector in the training set until the
error for the entire set is acceptably low.
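The five steps above can be sketched as follows. This toy example (Python with NumPy, chosen here for illustration) trains a one-hidden-layer sigmoid network by plain gradient descent; the AND function stands in for the session/label training pairs, and all sizes and rates are illustrative, not those used in the thesis:

```python
import numpy as np

# Toy backpropagation loop: one hidden layer of sigmoid units,
# per-pattern gradient-descent weight updates (steps 1-5 above).
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # input vectors
T = np.array([[0.], [0.], [0.], [1.]])                  # desired outputs (AND)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output
lr = 0.5                                         # learning rate

for _ in range(5000):                 # step 5: repeat until the error is low
    for x, t in zip(X, T):            # step 1: next training pair
        h = sigmoid(x @ W1 + b1)      # step 2: compute the network output
        y = sigmoid(h @ W2 + b2)
        err = y - t                   # step 3: error vs. desired output
        # step 4: adjust the weights in the direction that reduces the error
        d_out = err * y * (1.0 - y)
        d_hid = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= lr * np.outer(h, d_out); b2 -= lr * d_out
        W1 -= lr * np.outer(x, d_hid); b1 -= lr * d_hid

rms = float(np.sqrt(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - T) ** 2)))
```

After training, `rms` is the RMS-error over the whole training set, the same quantity used as a stopping criterion later in this thesis.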
There has already been some successful research on using neural networks
to train an anomaly based Intrusion Detection System [31, 37], and also
research done as far back as 1992 [59]. In the research from 1992 [59],
Debar, Becker and Siboni used user commands on a SUN3 UNIX machine
as input to the neural network. In their paper, Lee and Heinbuch [31] showed
that an Intrusion Detection System can be devised that truly responds to
anomalies, not to signatures of known attacks. To achieve this, the normal
behaviour of the network must be specifiable in advance. Neural networks
have also been used to improve misuse intrusion detection [60].
J. Cannady showed in his report [39] that it is possible for a neural network to
autonomously learn new attacks rapidly through the use of a modified
reinforcement learning method that uses feedback from the protected
system. His system demonstrated the ability to learn new attack patterns
without the complete retraining required in other traditional neural network
approaches.
After reading about the use of neural networks in intrusion detection, and the
ability neural networks have to learn, it was decided that we wanted to look
further into this area of intrusion detection.
Over the last years, a large quantity of data has been gathered by the Lincoln
Laboratory for the purpose of testing and comparing Intrusion Detection
Systems. The data sets from Massachusetts Institute of Technology’s Lincoln
Laboratory are the most well-known and most widely used data sets for Intrusion
Detection System research. These data sets include BSM audit files and
tcpdump files from a variety of UNIX systems, as well as Microsoft Windows
NT audit files [17].
The data sets used in this research are the data sets from the 1998 DARPA
Intrusion Detection Evaluation Program. In the data sets for the UNIX operating
system the attacks were categorized into four categories [44]:
• DoS - Denial of Service: Attacks used were Apache2, Back, Mail bomb,
Neptune, Ping of death, Process table, Smurf, Syslogd and UDP
storm.
• R2L - Unauthorized access from a remote machine to a local machine:
Attacks used were Dictionary, FTP-write, Guest, Imap, Named, Phf,
Sendmail, Xlock and Xnsnoop.
• U2R - Unauthorized access to local super user (root) privileges:
Attacks used were Perl and Xterm.
• Probing - Surveillance and scans of networks to find vulnerabilities:
Attacks used were IP sweep, Mscan, Nmap, Saint and Satan.
The DARPA Intrusion Detection Evaluation is designed to find the strengths
and weaknesses of existing approaches and lead to large performance
improvements and valid assessment of Intrusion Detection Systems. The
concept was to generate a set of realistic attacks, embed them into normal
data, evaluate the false alarm and detection rates of systems with these
data, and then improve the systems to correct the weaknesses found. The data sets
from DARPA are used by many researchers around the world to test new
Intrusion Detection Systems, whether they are anomaly based or misuse based
systems.
There are numerous attack methods to use against a computer system, and
several different types of each method. A good security administrator should
keep himself updated with attack methods by visiting security websites where
new attack methods are shown.
There are several different attack types, and these will be explained further in
this chapter. The attacks can mainly be sorted into three categories [45]:
All these and other attacks have been increasing in sophistication and power
to harm. Attack tool developers are using more advanced techniques, which
makes it more difficult to write signatures for signature-based systems such as anti-
virus software and misuse based Intrusion Detection Systems. We have seen
tools like Code Red and Nimda propagate themselves to a point of global
saturation in less than 18 hours [46].
As Figure 2-7 [25] shows, attacks and attack tools
have grown considerably in complexity. These attack tools have also been
automated, so the skill needed to use them and to launch
attacks has been reduced.
The level of sophistication and knowledge required to carry out an attack has
been decreasing. This is because detailed how-to guides are available on
Web sites all over the world. Hackers constantly invent new attacks and
disseminate them over the Internet [45, 13]. Young and inexperienced
hackers can use these tools with almost the same power as experienced
hackers can. Some of the newer attack methods also use encrypted signals.
This keeps the signals from being recognized by Intrusion Detection Systems
that scan for bit strings from known commands. The malicious code writers
also work with an open source model in which they freely share successive
code improvements, thereby making their attacks more sophisticated.
Denial of Service attacks are attacks where the attacker is not interested in any
information from the network. He just wants to crash the system so that other
users cannot reach the targeted system [47]. In general, Denial of Service
attacks do little harm besides wasting people’s time and bandwidth [48]. The
attacker just wants to deny the legitimate users the services
provided by the attacked server.
In the first versions of Denial of Service attacks [49], hackers usually tried to
block access to a Web site by using a single computer to send millions of
phony requests, thereby overloading the site so it could not respond to
legitimate queries, or even causing the host to crash altogether. But it was
pretty easy to stop these attacks. All requests from the attacking computer
were simply blocked, and the attack was stopped.
A newer version of the Denial of Service attack, called the Distributed Denial
of Service attack or DDoS, has evolved. These attacks are carried out by
using other computers on the Internet to attack a system. In most attacks, the
source address is faked [48]. This means that the attacker uses other
people’s computers to run the attack. The users whose computers are used in such
attacks normally do not know that they have been used in an attack. The
development of automation in attack tools enables a single attacker to install
his tools on, and control, tens of thousands of compromised systems for use in
attacks [46]. Figure 2-8 shows how Distributed Denial of Service attacks are
carried out against a single victim.
Figure 2-8 Distributed Denial of Service attack: an attacker/master controls several zombie hosts, which attack a single victim.
The first known large scale Distributed Denial of Service attack was seen in
August 1999 [49, 50]. This attack used 227 hosts to bring down the network
of University of Minnesota in USA for three days.
browsed the Web for information about the incidents that the entire Internet
slowed down. On the last day of the attacks, the Internet’s performance was
26.8 % worse than the week before [50].
In October 2002 nine of the 13 root-servers around the world were attacked
by a Denial of Service attack. The attacks used commandeered computers to
flood the root servers with Internet control message protocol requests [11].
Distributed Denial of Service attacks are seen as one of the biggest threats
for businesses on the Internet. “Distributed Denial of Service attacks
constitute one of the single greatest threats facing businesses involved in
electronic commerce because an attack can completely shut down a Web
site”, said Morgan Wright from REACT [50]. Others are even more
pessimistic about these attacks. Charles Palmer from IBM [49] had this to
say about Distributed Denial of Service attacks: “You’re not going to be able
to stop denial of service. The best thing you can do is reduce its impact”.
In today’s e-commerce environment, users have a low tolerance for web site
delay or failure. They will simply click their way to another site if the first is
unavailable. Research has been conducted to
develop a new and efficient technique for the detection and alleviation of
Denial of Service attacks [51]. That technique is similar to an Intrusion
Detection System, using anomaly based methods with data mining to detect
attacks.
• Propagate a virus or a worm
• Install a backdoor
• Destroy data
When it is installed, the Trojan Horse gives the intruder access to the data
stored on the victim’s computer. It can also give the attacker access to other
computers if the victim’s computer is in a local network.
Even though a virus is not actually an attack method, it causes much damage
and is expensive and time consuming to deal with, so it should be mentioned. Viruses and
worms are malicious code made to do damage to the infected
system. 85% of the respondents in the FBI/CSI survey [7] reported virus and
worm outbreaks. Computer Economics estimated that the worldwide impact
of Code Red was $2.62 billion and the worldwide impact of Nimda was $635
million in 2002 [7].
Viruses and worms exploit vulnerabilities in the system, and large numbers of
systems can be infected within a matter of hours. The Code Red worm
infected more than 250,000 systems in just 9 hours on 19 July 2001 [25].
A decade ago, viruses were relatively easy to find and fix, and they spread
slowly, generally by floppy disks or LANs. Now, however, increasingly
creative authors are exploiting the Internet, open-source software, peer-to-
peer technology, and other developments to write viruses and worms that
invade computer systems in new ways, propagate around the world quickly,
and wreak havoc on victims [54].
During a virus’ lifetime, it normally goes through 4 stages [6]. These stages
are:
The detection of new viruses has become very difficult. Virus writing has
reached a new level where viruses are polymorphic, use changing
encryption and decryption, and can infect both Windows and Linux platforms
[55]. They infect machines not only by using their own code, but also by
linking to and accessing malicious code from newsgroups and Web sites
[53].
New software from different vendors now requires users to define
which actions they will and will not allow on a computer or network. Joe
Hartman from Trend Micro [53] said: “If a machine suddenly starts to send
hundreds of e-mails, the software will know that something is wrong and
notify the user or system administrator”.
2.11 Honeynets
tools that are used for attacks, the motives and tactics used, and by sharing the
knowledge they have learned. The group gathers information by deploying
networks, called honeynets, which are designed to be compromised. These
networks are real networks with all the hardware that is needed. The
honeynets lure the hackers to a system, and their activities are then analyzed.
The intent is for attackers to break into the system and have every action
captured and controlled without them knowing it [57]. Each computer in the
Honeynet is called a Honeypot.
Honeynets have two critical requirements [57]:
Honeynets are not a security component that will protect a system against an
attack, but they can be used with other components to learn about new attacks and
to see where the attackers come from. With distributed honeynets,
information can be collected on a global scale. This is the real potential of
honeynets, because they can be used to check, for example, how fast worms are
spreading through the Internet.
The use of a honeynet can be very useful for a medium to large company.
The company could use the honeynet to tune its own system. By watching what
kinds of attacks occur, when the attacks happen and how the attackers work, the company
could use this information in, for example, an Intrusion Detection System.
These extra parameters could be considered when, for example, a neural
network is used in the Intrusion Detection System. The honeynet can also be
used to draw the attacker’s attention away from critical systems elsewhere
in the local network.
There are huge amounts of log data the system administrators would have to go
through if they were to do it manually. Log file analysis is becoming the
greatest time consumer for system administrators. A large network, such as
a university network, can have tens of thousands of connections from
thousands of hosts during a week, and an undeterminable number of
unsuccessful connections. The perusal of textual log files for such a system
is totally inadequate, and the system administrator will rarely use his time on
reading log files. To help the system administrator with this, there has been a
study [58] on the visualization of network traffic, with a focus on network
intrusion data. In this study, they present a technique based on a glyph
metaphor, where they visually present the textual log information collected
from the system. This is a very good technique to use because the system
administrator gets the opportunity to see the log files almost like a movie,
with different kinds of arrows pointing at one circle. There are different kinds
of arrows depending on the connection to the monitored system. Unusual or
unexpected activity is highlighted in red. The thickness of the circle
represents the load on the system.
The system administrator can see all the connections at one time, or see them as
a sort of “movie” where the connections change over a time period. The
application also has an interface which looks almost like the control panel on
a VCR. With this, the system administrator can stop, play, play in slow
motion, fast forward, rewind or restart the “movie”. The problem with
analyzing log files is that analyzing textual information is very time
consuming. If the system administrator can see the log files graphically, it will
save him a great deal of time. There is a saying that goes: “One picture
says more than a thousand words”. This saying describes the visualization of
log files very well.
Chapter 3: Research Methodology
3.1 Introduction
3.2 Outline
Figure 3-1 on the next page shows the block diagram of the research
methodology for this research.
Figure 3-1 Block Diagram of Research Methodology
The neural network was trained several times with different numbers of
iterations and hidden units to see how this affected the RMS-error and the
classification rate.
3.3 Input files to the Neural Network
The input files to the neural network were created from the log files in the
data sets from the DARPA Intrusion Detection Evaluation Web site [43]. The
log files contained more information than we needed, so they had to be
“cleaned”, so they only contained the parameters that were needed for this
experiment.
There are 7 weeks of traffic logs available from the 1998 DARPA Intrusion
Detection Evaluation Data Sets [43]. For the experiments conducted in this
research, only data sets from week 3 were collected. From these log files we
used 12 different attacks. Some of these attacks were used both for training
the neural network and for testing the neural network with known attacks.
The rest of the attacks were used for testing the neural network with
unknown attacks.
We did not want to use all these parameters in the experiments. We could
not see any use for a session ID, because it has no value for the detection of
intrusions. The attack name could be useful for naming the attacks, but this
was not something we wanted to look at in this research. The parameters we
wanted were:
Because the DARPA log files had more information than we needed, we had
to “clean” the log files. This was done by writing a Java application that could
delete the parameters we did not want from the log file. The application read
the DARPA logs from a file and wrote the “cleaned” logs out to another file.
To delete the Session ID, the application deleted everything on each line
from the beginning to the first space. We did not want the Attack Score
and Attack Name either; these fields were removed by the same application,
which deleted everything from the 9th space onwards.
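The cleaning step might look like the following sketch. The thesis used a Java application; this Python version assumes single-space-separated fields and uses a sample line from the appendix:

```python
# Sketch of the log-cleaning step: field 0 is the Session ID, and
# everything from the 9th space onwards is the Attack Score and Attack
# Name. Fields are assumed to be separated by single spaces.
def clean_line(line):
    fields = line.rstrip().split(" ")
    return " ".join(fields[1:9])   # keep date .. destination IP

raw = ("4460 06/19/1998 09:59:54 00:00:01 domain/u 1675 53 "
       "192.168.001.010 172.016.112.020 0 -")
cleaned = clean_line(raw)
```

Applied line by line to a DARPA log file, this leaves exactly the eight parameters listed above.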
When the DARPA log files were “cleaned” and just containing the parameters
we wanted, they were ready to be converted into binary format.
For the training of the neural network, we collected 197 sessions from the
data sets. Out of these, there were 99 sessions with normal traffic and 98
sessions with attacks. The input files for the neural network had to be in
binary format, therefore all log files had to be converted to binary format. We
decided that each session should be 132 bits long, and each parameter had
the following bits:
• Date: 9 bits – We just used day and month, so 5 bits were for day and
4 bits were for month.
• Time: 18 bits – Hours, minutes and seconds, together a maximum of
235959.
• Duration: 18 bits – Same as time.
• Service: 3 bits – We decided to have a number only for the 7 most
used services; others were set to 000. The services we had a
number for were http, smtp, domain/u, telnet, ftp, eco/i and imap.
• Source Port: 10 bits – A number for each standard port up to 1024.
Other ports were set to 0000000000.
• Destination Port: 10 bits – Same as source port.
• Source IP: 32 bits – From 0 to 255, 8 bits, times four.
• Destination IP: 32 bits – Same as source IP.
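The encoding above can be sketched as follows. The bit widths follow the list (9 + 18 + 18 + 3 + 10 + 10 + 32 + 32 = 132), but the exact service numbering and field ordering are assumptions:

```python
# Sketch of the 132-bit session encoding; service numbering is a
# hypothetical choice, the bit widths follow the parameter list above.
def to_bits(value, width):
    return format(value, "0{}b".format(width))

SERVICES = {"http": 1, "smtp": 2, "domain/u": 3, "telnet": 4,
            "ftp": 5, "eco/i": 6, "imap": 7}   # assumed numbering; 000 = other

def encode_session(day, month, time_hms, dur_hms, service,
                   src_port, dst_port, src_ip, dst_ip):
    bits = to_bits(day, 5) + to_bits(month, 4)        # Date: 9 bits
    bits += to_bits(time_hms, 18)                     # Time as HHMMSS
    bits += to_bits(dur_hms, 18)                      # Duration, same form
    bits += to_bits(SERVICES.get(service, 0), 3)      # Service: 3 bits
    bits += to_bits(src_port if src_port <= 1024 else 0, 10)
    bits += to_bits(dst_port if dst_port <= 1024 else 0, 10)
    for octet in src_ip + dst_ip:                     # 2 x (4 x 8) bits
        bits += to_bits(octet, 8)
    return bits

# First normal-traffic session from the appendix (port 1675 > 1024 -> 0)
session = encode_session(19, 6, 95954, 1, "domain/u", 1675, 53,
                         (192, 168, 1, 10), (172, 16, 112, 20))
```

The result is a 132-character bit string ready to serve as the first part of a training pair.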
3.4 Testing environment
The input for the neural network has to be in some standard form. The first
part of the training pair was a 132 bit long representation of one session in
the traffic log from the DARPA Intrusion Detection Evaluation data sets. The
second part of the training pair was a 2 bit long output vector. This part tells
the neural network whether the representation of the session is an attack or normal
traffic. If the session is classified as normal traffic, the second part of the training pair
would be 0 1; if it was an attack, it would be 1 0.
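A training pair under this convention might be formed as in the following sketch (the function name is illustrative, not from the thesis):

```python
# Sketch: pairing a 132-bit session string with its 2-bit target vector.
# Convention from the text: attack -> 1 0, normal traffic -> 0 1.
def make_training_pair(session_bits, is_attack):
    assert len(session_bits) == 132
    target = [1, 0] if is_attack else [0, 1]
    return [int(b) for b in session_bits], target

inputs, target = make_training_pair("0" * 132, is_attack=False)
```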
Parameters Value
Number of inputs 132
Number of outputs 2
Number of hidden units 12
Number of Training Pairs 197
Learning rate 0.1
Momentum 0.1
RMS-error 0.0001
Number of iterations 5000
Number of inputs tells how many bits there are in the first part of the training
pair. Number of outputs tells how many bits there are in the second part of
the training pair. Number of hidden units is used to test how well trained the
neural network can be; we used different numbers here in the testing.
Number of training pairs tells how many sessions we used in the training of
the neural network. We used 197, which was 99 sessions with normal traffic
and 98 sessions with attacks. RMS-error is a target value that can be adjusted to
control how the neural network is trained. Number of iterations tells how many
times the neural network should run through the training set to learn the input.
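The RMS-error referred to throughout is, in its standard form (for $N$ training pairs with desired outputs $t_i$ and network outputs $y_i$):

```latex
\mathrm{RMS\text{-}error} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(t_i - y_i\right)^2}
```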
Three different experiments were made with the neural network. In the first
preliminary experiment we tried to use as few iterations and hidden units as
possible. This test was done to find out when the neural network was trained
properly to detect attacks. This test also gave the background for choosing
the number of hidden units and iterations for the training of the neural
network for the last two experiments. The second experiment was done with a relatively
small amount of normal traffic and attacks. The final experiment was done with a higher
amount of normal traffic and attacks. The two last experiments were done
with 100, 1000, 5000 and 20000 iterations in the training of the neural
network. For each of these iteration counts, we also tested with 4, 6, 12 and
24 hidden units in the neural network.
3.4.5 Attacks in different files
When we tested the neural network with known attacks, the same attack type
was used but not exactly the same sessions.
Chapter 4: Experimental Results
4.1 Introduction
This chapter presents the experimental results obtained by using the neural
network based research methodology proposed in the previous chapter. The
experiments were conducted in three parts. The preliminary experiment was
conducted to see how many iterations and how many hidden units were
needed before the neural network was properly trained. The second and the
final experiments were conducted to see what percentage of the normal
traffic and the attacks was classified correctly. The second experiment
was done with 20 sessions of normal traffic, 10 sessions with known attacks
and 10 sessions with unknown attacks. The final experiment was done with
50 sessions of normal traffic, 25 sessions with known attacks and 25
sessions with unknown attacks.
To check whether the neural network was trained correctly, we used the same traffic
sessions as we used for the training of the neural network. Here we tested
with the same numbers of iterations and hidden units as we did in the final
experiment. In all these tests the neural network had a classification rate of
100 %, which means that it classified 197 out of 197 sessions correctly.
In this preliminary experiment we wanted to see how many hidden units
were needed before the neural network was properly trained, and also how
many iterations the neural network needed. In this experiment we used both
known and unknown attacks in the same file. When we used just three
hidden units, no attacks were detected at all. With 4 hidden units, we got a
detection rate of 86 %. The results from this experiment gave the
background for choosing the number of hidden units and iterations used for
the training of the neural network in the last two experiments. This meant that
the number of hidden units had to be over 4, and the number of iterations had to be
over 100.
There is a huge drop in the RMS-error between 3 and 4 hidden units, and
this shows that a lower RMS-error affects the detection rate.
Figure: RMS-error for 2, 3, 4, 5 and 6 hidden units.
This was just a brief test where we ran several trainings and for each one
increased the number of iterations for the neural network by 100 and the
number of hidden units by one. The results here would probably be different
if we tried to increase or decrease the number of iterations by just one, and
tested all iteration counts with different numbers of hidden units. But this was just done
to see the differences in the RMS-error when attacks were not detected and
when attacks were detected.
The testing was separated into normal traffic, known attacks and unknown
(novel) attacks. In the second experiment we used 40 sessions with traffic. Of
the 40 sessions we had 20 sessions with normal traffic, 10 sessions with
known attacks and 10 sessions with unknown attacks.
4.4.1 Normal traffic
For the testing with normal traffic we used sessions that were classified as
normal traffic in the DARPA data sets. Here are the results from the second
experiment with normal traffic:
4.4.2 Known attacks
Known attacks are attack types that the neural network has been trained
with. Here are the results from the second experiment with known attacks:
4.4.3 Unknown attacks
Unknown attacks are attack types that the neural network has not seen
before. Here are the results from the second experiment with unknown
attacks:
In the final experiment we also separated the traffic into normal traffic, known
attacks and unknown attacks. But here we used 100 sessions, divided into
50 sessions with normal traffic and 50 sessions with attacks. The attacks
were separated into 25 sessions with known attacks and 25 sessions with
unknown attacks.
For the testing with normal traffic we used sessions that were classified as
normal traffic in the DARPA data sets. Here we tested with some sessions of
normal traffic that the neural network had seen before, but mainly we used
traffic the neural network had not seen before. Here are the results from the
final experiment with normal traffic:
4.5.2 Known attacks
Known attacks are attack types that the neural network has been trained
with. Here are the results from the final experiment with known attacks:
4.5.3 Unknown attacks
Unknown attacks are attack types that the neural network has not seen
before. Here are the results from the final experiment with unknown attacks:
Chapter 5: Analysis and Comparison
5.1 Introduction
As explained before, the lower the RMS rate is, the better the detection rate
normally is. In Figure 5-1 the differences in the RMS-error when we used
100, 1000, 5000 and 20000 iterations for the training of the neural network
are shown.
Figure 5-1 RMS-error for 100, 1000, 5000 and 20000 training iterations, for different numbers of hidden units.
The RMS-error dropped considerably when we increased the number of hidden units
from 4 to 6, but there were only small changes in the RMS-error between using 6
hidden units and using 24 hidden units.
These are the sessions that were not detected by the neural network in the final
experiment. All the sessions in the normal traffic file were classified correctly,
but there were some sessions in both the known attacks and the unknown
attacks that were not detected.
The two sessions of known attacks that were not detected were two sessions
with an attack method called warezmaster. Warezmaster is the anonymous
upload of warez (usually illegal copies of copyrighted software) onto an FTP
server [43].
Illegal FTP traffic is very difficult to detect. This kind of traffic can easily be
mistaken for a normal upload of files to an FTP server. A way to stop this could
be to allow only a few authorised users to use FTP.
5.3.2 Unknown attacks
The first unknown attack missed was an ipsweep. The second missed attack
was a Denial of Service attack called smurf. The three last missed attacks
were all a Denial of Service attack called neptune.
The surprising thing here is that there were 15 sessions with the attack type
ipsweep and 3 sessions with the attack type smurf, but only one session from
each of these attack types was missed. All three sessions with the attack type
neptune were classified incorrectly. A positive result here is that all sessions with
the Denial of Service attack type back were classified correctly. This suggests
that some Denial of Service attacks can be completely stopped by the use of neural
networks.
5.5 Comparison with other research
Figure: Detection rates (from 50 % to 100 %) of Earlier Research 1 [37], Earlier Research 2 [60] and our research.
In our research we had a detection rate of 86 % on 50 attacks (both known
and unknown attacks), but we had a false positive rate of 0 %. This means
that all normal sessions in our experiment were classified correctly as normal
traffic.
Chapter 6: Conclusion
6.1 Introduction
The attacks on computer networks that we can read about in the news are
only the ones detected. Some of the security breaches can be difficult to
detect without human involvement and today it is quite unrealistic to think that
detection, recovery and maintenance can be done automatically in a secure
environment. Intelligent support systems need to be designed to help
network administrators manage security, not to replace them.
Denial of Service type called back was detected. This shows that unknown
Denial of Service attacks can be stopped with the use of neural networks.
One problem with this result is that the network did not detect another Denial of Service
attack called Neptune. We cannot explain why this Denial of Service type
went completely undetected.
Another surprising result was the classification rate on normal traffic. Here
the neural network had a classification rate of 100 %, which gives a false
positive rate of 0 %. This means that none of the normal sessions were
classified as an attack. If normal traffic was classified as an attack a false
alarm would be raised, and false alarms are one of the biggest problems with
Intrusion Detection Systems today.
In this experiment all traffic was from the same date, and experiments
with traffic from a longer period of time should be conducted. Another issue is
how much detail should be collected. More collection activity is likely to raise
the detection rate, but at the same time too much data collection will slow
down the system. Collecting too little data carries the risk of missing some
attacks, so this is a trade-off that needs to be evaluated in each case of
implementation.
Appendix
The appendix contains all the original sessions we used from the DARPA
Intrusion Detection Evaluation data sets. They are in exactly the same format
as on the DARPA web page [54].
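For readers working with these listings, the sketch below parses one whitespace-separated record. The field names are our assumption, inferred from the columns visible in the data: session id, date, start time, duration, service, source port, destination port, source IP, destination IP, attack flag, and attack name.

```python
# Illustrative sketch (not from the thesis): parsing one record from the
# DARPA list files reproduced below. Field names are inferred from the
# column layout and are an assumption.

def parse_session(line):
    fields = line.split()
    return {
        "id": int(fields[0]),
        "date": fields[1],
        "start": fields[2],
        "duration": fields[3],
        "service": fields[4],
        "src_port": fields[5],      # '-' for ICMP sessions such as eco/i
        "dst_port": fields[6],
        "src_ip": fields[7],
        "dst_ip": fields[8],
        "is_attack": fields[9] == "1",
        "attack_name": fields[10],  # '-' for normal traffic
    }

record = parse_session(
    "4460 06/19/1998 09:59:54 00:00:01 domain/u 1675 53 "
    "192.168.001.010 172.016.112.020 0 -"
)
print(record["service"], record["is_attack"])  # domain/u False
```

Note that the attack flag (0 or 1) and the trailing attack name together label each session, which is how the normal and attack listings below are distinguished.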
Normal traffic
4460 06/19/1998 09:59:54 00:00:01 domain/u 1675 53 192.168.001.010 172.016.112.020 0 -
4461 06/19/1998 10:00:05 00:00:01 http 16507 80 197.218.177.069 172.016.114.050 0 -
4462 06/19/1998 10:00:06 00:00:03 smtp 16515 25 135.008.060.182 172.016.114.168 0 -
4463 06/19/1998 10:00:06 00:00:01 domain/u 1855 53 172.016.112.020 192.168.001.010 0 -
4464 06/19/1998 10:00:06 00:00:01 smtp 16514 25 135.008.060.182 172.016.113.084 0 -
4465 06/19/1998 10:00:06 00:00:01 http 16513 80 197.218.177.069 172.016.114.050 0 -
4466 06/19/1998 10:00:06 00:00:01 http 16512 80 197.218.177.069 172.016.114.050 0 -
4467 06/19/1998 10:00:06 00:00:01 http 16511 80 197.218.177.069 172.016.114.050 0 -
4468 06/19/1998 10:00:06 00:00:01 http 16510 80 197.218.177.069 172.016.114.050 0 -
4691 06/19/1998 10:04:50 00:00:02 smtp 17339 25 194.007.248.153 172.016.113.204 0 -
4692 06/19/1998 10:04:51 00:00:02 smtp 17340 25 194.007.248.153 172.016.112.149 0 -
4693 06/19/1998 10:04:52 00:00:02 smtp 11397 25 172.016.114.168 197.182.091.233 0 -
4694 06/19/1998 10:04:52 00:00:01 smtp 11396 25 172.016.114.168 194.007.248.153 0 -
4695 06/19/1998 10:04:55 00:00:01 http 11398 80 172.016.114.169 198.003.096.170 0 -
4696 06/19/1998 10:05:00 00:00:01 domain/u 1715 53 192.168.001.010 172.016.112.020 0 -
4697 06/19/1998 10:05:00 00:00:01 domain/u 1715 53 192.168.001.010 172.016.112.020 0 -
4698 06/19/1998 10:05:00 00:00:01 domain/u 1755 53 192.168.001.010 172.016.112.020 0 -
4699 06/19/1998 10:05:00 00:00:01 domain/u 1802 53 192.168.001.010 172.016.112.020 0 -
4700 06/19/1998 10:05:00 00:00:01 domain/u 1883 53 192.168.001.010 172.016.112.020 0 -
4701 06/19/1998 10:05:00 00:00:01 domain/u 1891 53 192.168.001.010 172.016.112.020 0 -
4702 06/19/1998 10:05:02 00:00:01 http 11400 80 172.016.112.149 207.049.149.093 0 -
4703 06/19/1998 10:05:02 00:00:01 http 11399 80 172.016.114.169 198.003.096.170 0 -
4704 06/19/1998 10:05:10 00:00:01 domain/u 1924 53 192.168.001.010 172.016.112.020 0 -
4705 06/19/1998 10:05:10 00:00:01 domain/u 1961 53 192.168.001.010 172.016.112.020 0 -
4706 06/19/1998 10:05:11 00:00:01 domain/u 2036 53 192.168.001.010 172.016.112.020 0 -
4707 06/19/1998 10:05:11 00:00:01 domain/u 1985 53 192.168.001.010 172.016.112.020 0 -
5094 06/19/1998 10:16:26 00:00:01 domain/u 1411 53 172.016.112.020 192.168.001.010 0 -
5095 06/19/1998 10:16:26 00:00:01 domain/u 1419 53 172.016.112.020 192.168.001.010 0 -
5096 06/19/1998 10:16:27 00:00:01 smtp 19377 25 195.073.151.050 172.016.114.207 0 -
5097 06/19/1998 10:16:31 00:00:01 smtp 19463 25 197.218.177.069 172.016.114.148 0 -
5098 06/19/1998 10:16:31 00:00:01 domain/u 53 1426 192.168.001.010 172.016.112.020 0 -
5105 06/19/1998 10:16:45 00:00:01 http 19083 80 195.073.151.050 172.016.114.050 0 -
5106 06/19/1998 10:16:50 00:00:01 domain/u 1746 53 192.168.001.010 172.016.112.020 0 -
5107 06/19/1998 10:16:50 00:00:01 domain/u 1746 53 192.168.001.010 172.016.112.020 0 -
5221 06/19/1998 10:19:51 00:00:01 domain/u 1945 53 192.168.001.010 172.016.112.020 0 -
5222 06/19/1998 10:19:53 00:00:01 http 13369 80 172.016.114.168 207.025.071.024 0 -
5223 06/19/1998 10:19:54 00:00:01 smtp 19647 25 195.073.151.050 172.016.112.194 0 -
5224 06/19/1998 10:19:56 00:00:01 smtp 19773 25 195.073.151.050 172.016.113.105 0 -
5225 06/19/1998 10:19:57 00:00:01 http 13371 80 172.016.114.169 204.050.058.005 0 -
5226 06/19/1998 10:20:01 00:00:01 domain/u 1152 53 192.168.001.010 172.016.112.020 0 -
5227 06/19/1998 10:20:01 00:00:01 domain/u 2016 53 192.168.001.010 172.016.112.020 0 -
5228 06/19/1998 10:20:07 00:00:02 finger 19774 79 195.115.218.108 172.016.114.207 0 -
5229 06/19/1998 10:20:08 00:00:01 smtp 19773 25 195.073.151.050 172.016.113.105 0 -
5230 06/19/1998 10:20:11 00:00:01 domain/u 1191 53 192.168.001.010 172.016.112.020 0 -
7323 06/19/1998 11:22:45 00:00:01 domain/u 1679 53 192.168.001.010 172.016.112.020 0 -
7324 06/19/1998 11:22:45 00:00:01 domain/u 1728 53 192.168.001.010 172.016.112.020 0 -
7329 06/19/1998 11:22:52 00:00:06 smtp 21872 25 172.016.114.168 195.073.151.050 0 -
7334 06/19/1998 11:22:57 00:00:12 smtp 21937 25 172.016.112.149 195.115.218.108 0 -
7335 06/19/1998 11:22:57 00:00:01 domain/u 1873 53 192.168.001.010 172.016.112.020 0 -
10881 06/19/1998 13:07:07 00:00:01 domain/u 1423 53 192.168.001.010 172.016.112.020 0 -
10882 06/19/1998 13:07:07 00:00:01 domain/u 1426 53 192.168.001.010 172.016.112.020 0 -
10883 06/19/1998 13:07:07 00:00:01 domain/u 1434 53 192.168.001.010 172.016.112.020 0 -
10884 06/19/1998 13:07:07 00:00:01 domain/u 1439 53 192.168.001.010 172.016.112.020 0 -
10885 06/19/1998 13:07:07 00:00:01 domain/u 1426 53 192.168.001.010 172.016.112.020 0 -
10886 06/19/1998 13:07:09 00:00:01 http 3393 80 172.016.115.234 209.001.112.251 0 -
10887 06/19/1998 13:07:17 00:00:01 domain/u 1487 53 192.168.001.010 172.016.112.020 0 -
10888 06/19/1998 13:07:17 00:00:01 domain/u 1537 53 192.168.001.010 172.016.112.020 0 -
10889 06/19/1998 13:07:22 00:00:01 http 3456 80 172.016.115.234 204.177.145.235 0 -
14606 06/19/1998 14:42:54 00:00:01 http 12392 80 172.016.114.168 208.002.188.061 0 -
14607 06/19/1998 14:42:55 00:00:01 http 12415 80 135.008.060.182 172.016.114.050 0 -
14608 06/19/1998 14:42:55 00:00:03 smtp 12393 25 172.016.113.105 195.115.218.108 0 -
14609 06/19/1998 14:43:03 00:00:01 http 12395 80 172.016.114.168 208.002.188.061 0 -
14610 06/19/1998 14:43:04 00:00:01 http 12416 80 194.007.248.153 172.016.114.050 0 -
14611 06/19/1998 14:43:04 00:00:01 http 12417 80 194.007.248.153 172.016.114.050 0 -
14612 06/19/1998 14:43:18 00:00:01 http 12384 80 172.016.116.044 132.025.001.025 0 -
14613 06/19/1998 14:43:21 00:00:01 domain/u 1359 53 192.168.001.010 172.016.112.020 0 -
17472 06/19/1998 16:04:57 00:00:01 domain/u 53 1488 172.016.112.020 192.168.001.010 0 -
17473 06/19/1998 16:04:57 00:00:01 domain/u 1482 53 192.168.001.010 172.016.112.020 0 -
17474 06/19/1998 16:04:58 00:00:04 smtp 23705 25 172.016.113.084 135.008.060.182 0 -
17475 06/19/1998 16:04:58 00:00:01 http 23706 80 172.016.117.132 131.084.001.031 0 -
17476 06/19/1998 16:05:00 00:00:01 smtp 24737 25 195.073.151.050 172.016.112.149 0 -
Attacks
220 06/19/1998 08:16:42 00:00:01 imap 1029 143 202.049.244.010 172.016.114.050 1 imap
223 06/19/1998 08:16:45 00:00:01 imap 1029 143 202.049.244.010 172.016.114.050 1 imap
224 06/19/1998 08:16:51 00:07:01 imap 1029 143 202.049.244.010 172.016.114.050 1 imap
1190 06/19/1998 08:49:21 00:00:43 imap 1107 143 202.077.162.213 172.016.114.050 1 imap
12949 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.000 1 nmap
12950 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.001 1 nmap
12951 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.002 1 nmap
12952 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.003 1 nmap
12953 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.004 1 nmap
12954 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.005 1 nmap
12955 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.006 1 nmap
12956 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.007 1 nmap
12957 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.008 1 nmap
12958 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.009 1 nmap
12959 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.010 1 nmap
12960 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.011 1 nmap
12961 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.012 1 nmap
12962 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.014 1 nmap
12963 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.013 1 nmap
12964 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.016 1 nmap
12965 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.017 1 nmap
12966 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.018 1 nmap
12967 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.019 1 nmap
12968 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.020 1 nmap
12969 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.021 1 nmap
12970 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.022 1 nmap
12971 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.023 1 nmap
12972 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.024 1 nmap
12973 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.025 1 nmap
12974 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.026 1 nmap
12975 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.027 1 nmap
12976 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.028 1 nmap
12977 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.029 1 nmap
12978 06/19/1998 14:18:25 00:00:01 eco/i - - 208.240.124.083 172.016.112.030 1 nmap
20334 06/19/1998 19:03:44 00:00:11 ftp-data 20 2624 172.016.112.050 206.186.080.111 1 warezmaster
20335 06/19/1998 19:03:55 00:00:11 ftp-data 20 2636 172.016.112.050 206.186.080.111 1 warezmaster
20340 06/19/1998 19:04:05 00:00:03 ftp-data 20 2638 172.016.112.050 206.186.080.111 1 warezmaster
20728 06/19/1998 22:56:41 00:00:01 finger 79 79 172.016.113.050 172.016.113.050 1 land
2203 06/15/1998 09:36:59 00:00:01 eco/i - - 152.169.215.104 172.016.112.050 1 satan
2204 06/15/1998 09:37:01 00:00:01 eco/i - - 152.169.215.104 172.016.112.050 1 satan
2205 06/15/1998 09:37:04 00:00:01 eco/i - - 152.169.215.104 172.016.112.050 1 satan
2206 06/15/1998 09:37:04 00:00:01 1/u 1694 1 152.169.215.104 172.016.112.050 1 satan
2209 06/15/1998 09:37:04 00:00:01 1/u 1694 1 152.169.215.104 172.016.112.050 1 satan
2210 06/15/1998 09:37:04 00:00:01 1/u 1694 1 152.169.215.104 172.016.112.050 1 satan
2211 06/15/1998 09:37:04 00:00:01 ecr/i - - 172.016.112.050 152.169.215.104 1 satan
2212 06/15/1998 09:37:05 00:00:01 1/u 1694 177 152.169.215.104 172.016.112.050 1 satan
2213 06/15/1998 09:37:05 00:00:01 domain/u 1694 53 152.169.215.104 172.016.112.050 1 satan
2215 06/15/1998 09:37:05 00:00:01 1/u 1694 177 152.169.215.104 172.016.112.050 1 satan
2217 06/15/1998 09:37:05 00:00:01 domain/u 1694 53 152.169.215.104 172.016.112.050 1 satan
2219 06/15/1998 09:37:06 00:00:01 1/u 1694 177 152.169.215.104 172.016.112.050 1 satan
2220 06/15/1998 09:37:06 00:00:01 finger 4196 79 152.169.215.104 172.016.112.050 1 satan
2221 06/15/1998 09:37:06 00:00:02 finger 4200 79 152.169.215.104 172.016.112.050 1 satan
2222 06/15/1998 09:37:07 00:00:02 finger 4201 79 152.169.215.104 172.016.112.050 1 satan
2223 06/15/1998 09:37:08 00:00:02 finger 4204 79 152.169.215.104 172.016.112.050 1 satan
2224 06/15/1998 09:37:08 00:00:01 finger 4203 79 152.169.215.104 172.016.112.050 1 satan
2226 06/15/1998 09:37:09 00:00:02 finger 4205 79 152.169.215.104 172.016.112.050 1 satan
2227 06/15/1998 09:37:10 00:00:01 gopher 4211 70 152.169.215.104 172.016.112.050 1 satan
2228 06/15/1998 09:37:10 00:00:01 http 4212 80 152.169.215.104 172.016.112.050 1 satan
2229 06/15/1998 09:37:10 00:00:02 ftp 4214 21 152.169.215.104 172.016.112.050 1 satan
2230 06/15/1998 09:37:10 00:00:04 telnet 4216 23 152.169.215.104 172.016.112.050 1 satan
2231 06/15/1998 09:37:11 00:00:01 smtp 4218 25 152.169.215.104 172.016.112.050 1 satan
2232 06/15/1998 09:37:11 00:00:01 nntp 4219 119 152.169.215.104 172.016.112.050 1 satan
2233 06/15/1998 09:37:11 00:00:02 540 4220 540 152.169.215.104 172.016.112.050 1 satan
2234 06/15/1998 09:37:11 00:00:01 x11 4221 6000 152.169.215.104 172.016.112.050 1 satan
2235 06/15/1998 09:37:13 00:00:01 sunrpc 998 111 152.169.215.104 172.016.112.050 1 satan
2236 06/15/1998 09:37:13 00:00:01 56/u 1003 111 152.169.215.104 172.016.112.050 1 satan
2237 06/15/1998 09:37:13 00:00:01 444/u 32775 1004 172.016.112.050 152.169.215.104 1 satan
2238 06/15/1998 09:37:13 00:00:01 28/u 111 1003 172.016.112.050 152.169.215.104 1 satan
2239 06/15/1998 09:37:13 00:00:01 40/u 1004 32775 152.169.215.104 172.016.112.050 1 satan
4183 06/15/1998 11:14:34 00:00:02 http 24682 80 197.218.177.069 172.016.114.050 1 phf
4464 06/15/1998 11:32:20 00:02:31 telnet 25134 23 202.247.224.089 172.016.112.050 1 ffb
11748 06/15/1998 19:28:06 00:00:01 100 1234 100 207.075.239.115 172.016.114.050 1 portsweep
11749 06/15/1998 19:28:06 00:00:01 99 1234 99 207.075.239.115 172.016.114.050 1 portsweep
11750 06/15/1998 19:28:06 00:00:01 98 1234 98 207.075.239.115 172.016.114.050 1 portsweep
11751 06/15/1998 19:28:06 00:00:01 97 1234 97 207.075.239.115 172.016.114.050 1 portsweep
11752 06/15/1998 19:28:06 00:00:01 96 1234 96 207.075.239.115 172.016.114.050 1 portsweep
11753 06/15/1998 19:28:06 00:00:01 95 1234 95 207.075.239.115 172.016.114.050 1 portsweep
11754 06/15/1998 19:28:06 00:00:01 94 1234 94 207.075.239.115 172.016.114.050 1 portsweep
11755 06/15/1998 19:28:06 00:00:01 93 1234 93 207.075.239.115 172.016.114.050 1 portsweep
11756 06/15/1998 19:28:06 00:00:01 92 1234 92 207.075.239.115 172.016.114.050 1 portsweep
11757 06/15/1998 19:28:06 00:00:01 91 1234 91 207.075.239.115 172.016.114.050 1 portsweep
11758 06/15/1998 19:28:06 00:00:01 90 1234 90 207.075.239.115 172.016.114.050 1 portsweep
11759 06/15/1998 19:28:06 00:00:01 89 1234 89 207.075.239.115 172.016.114.050 1 portsweep
11760 06/15/1998 19:28:06 00:00:01 88 1234 88 207.075.239.115 172.016.114.050 1 portsweep
11761 06/15/1998 19:28:06 00:00:01 87 1234 87 207.075.239.115 172.016.114.050 1 portsweep
11762 06/15/1998 19:28:06 00:00:01 86 1234 86 207.075.239.115 172.016.114.050 1 portsweep
11763 06/15/1998 19:28:06 00:00:01 85 1234 85 207.075.239.115 172.016.114.050 1 portsweep
11764 06/15/1998 19:28:06 00:00:01 84 1234 84 207.075.239.115 172.016.114.050 1 portsweep
11765 06/15/1998 19:28:06 00:00:01 83 1234 83 207.075.239.115 172.016.114.050 1 portsweep
11766 06/15/1998 19:28:06 00:00:01 82 1234 82 207.075.239.115 172.016.114.050 1 portsweep
11767 06/15/1998 19:28:06 00:00:01 81 1234 81 207.075.239.115 172.016.114.050 1 portsweep
11768 06/15/1998 19:28:06 00:00:01 http 1234 80 207.075.239.115 172.016.114.050 1 portsweep
11769 06/15/1998 19:28:06 00:00:01 finger 1234 79 207.075.239.115 172.016.114.050 1 portsweep
Appendix 2: Testing data
Normal traffic
5619 06/19/1998 10:31:22 00:00:01 ftp-data 20 15183 197.218.177.069 172.016.112.207 0 -
5620 06/19/1998 10:31:22 00:00:01 domain/u 1106 53 192.168.001.010 172.016.112.020 0 -
6820 06/19/1998 11:03:42 00:00:01 smtp 23122 25 194.027.251.021 172.016.114.168 0 -
6821 06/19/1998 11:03:42 00:00:01 domain/u 1402 53 192.168.001.010 172.016.112.020 0 -
7334 06/19/1998 11:22:57 00:00:12 smtp 21937 25 172.016.112.149 195.115.218.108 0 -
7335 06/19/1998 11:22:57 00:00:01 domain/u 1873 53 192.168.001.010 172.016.112.020 0 -
16493 06/19/1998 15:35:34 00:00:01 http 19974 80 196.037.075.158 172.016.114.050 0 -
16494 06/19/1998 15:35:34 00:00:01 ftp-data 20 19975 172.016.114.148 135.008.060.182 0 -
16884 06/19/1998 15:49:05 00:03:00 telnet 21004 23 172.016.112.207 194.027.251.021 0 -
16885 06/19/1998 15:49:05 00:00:01 domain/u 53 53 192.168.001.010 192.168.001.020 0 -
17474 06/19/1998 16:04:58 00:00:04 smtp 23705 25 172.016.113.084 135.008.060.182 0 -
17475 06/19/1998 16:04:58 00:00:01 http 23706 80 172.016.117.132 131.084.001.031 0 -
Known attacks
220 06/19/1998 08:16:42 00:00:01 imap 1029 143 202.049.244.010 172.016.114.050 1 imap
223 06/19/1998 08:16:45 00:00:01 imap 1029 143 202.049.244.010 172.016.114.050 1 imap
224 06/19/1998 08:16:51 00:07:01 imap 1029 143 202.049.244.010 172.016.114.050 1 imap
1190 06/19/1998 08:49:21 00:00:43 imap 1107 143 202.077.162.213 172.016.114.050 1 imap
13970 06/19/1998 14:20:11 00:00:01 eco/i - - 208.240.124.083 172.016.112.234 1 nmap
13971 06/19/1998 14:20:11 00:00:01 eco/i - - 208.240.124.083 172.016.112.235 1 nmap
13972 06/19/1998 14:20:11 00:00:01 eco/i - - 208.240.124.083 172.016.112.236 1 nmap
13973 06/19/1998 14:20:11 00:00:01 eco/i - - 208.240.124.083 172.016.112.237 1 nmap
13974 06/19/1998 14:20:11 00:00:01 eco/i - - 208.240.124.083 172.016.112.238 1 nmap
20316 06/19/1998 19:01:32 00:00:01 ftp-data 20 2605 172.016.112.050 206.186.080.111 1 warezmaster
20317 06/19/1998 19:01:32 00:00:01 ftp-data 20 2606 172.016.112.050 206.186.080.111 1 warezmaster
20322 06/19/1998 19:02:03 00:00:11 ftp-data 20 2611 172.016.112.050 206.186.080.111 1 warezmaster
20323 06/19/1998 19:02:13 00:00:12 ftp-data 20 2612 172.016.112.050 206.186.080.111 1 warezmaster
20324 06/19/1998 19:02:24 00:00:11 ftp-data 20 2613 172.016.112.050 206.186.080.111 1 warezmaster
20325 06/19/1998 19:02:34 00:00:11 ftp-data 20 2615 172.016.112.050 206.186.080.111 1 warezmaster
20328 06/19/1998 19:02:44 00:00:11 ftp-data 20 2616 172.016.112.050 206.186.080.111 1 warezmaster
20329 06/19/1998 19:02:54 00:00:12 ftp-data 20 2619 172.016.112.050 206.186.080.111 1 warezmaster
20728 06/19/1998 22:56:41 00:00:01 finger 79 79 172.016.113.050 172.016.113.050 1 land
20729 06/19/1998 22:56:42 00:00:01 finger 79 79 172.016.112.050 172.016.112.050 1 land
20730 06/19/1998 22:56:44 00:00:01 finger 79 79 172.016.114.050 172.016.114.050 1 land
20731 06/19/1998 22:56:45 00:00:01 finger 79 79 172.016.115.234 172.016.115.234 1 land
20732 06/19/1998 22:56:47 00:00:01 finger 79 79 172.016.115.005 172.016.115.005 1 land
20733 06/19/1998 22:56:48 00:00:01 finger 79 79 172.016.115.087 172.016.115.087 1 land
20734 06/19/1998 22:56:50 00:00:01 finger 79 79 172.016.116.194 172.016.116.194 1 land
20735 06/19/1998 22:56:51 00:00:01 finger 79 79 172.016.116.201 172.016.116.201 1 land
Unknown attacks
Appendix 3: Attacks used in this research
References
[7] Richard Power, “2002 CSI/FBI Computer Crime and Security Survey”,
Vol. VIII, No.1, Spring 2002
[11] G. Goth, “Securing the Internet Against Attack”, IEEE Internet
Computing, Vol.7, No. 1, pp. 8-10, January-February, 2003
[12] Deloitte Touche Tohmatsu, AusCERT & NSW Police, “2002 Computer
Crime and Security Survey”, 2002
[19] D. Joo, T. Hong and I. Han, “The Neural Network Models for IDS
based on the asymmetric costs of false negative errors and false
positive errors”, Expert Systems with Applications, Vol. 25, pp. 69-75,
2003
[20] G. Giacinto, F. Roli and L. Didaci, “Fusion of Multiple Classifiers for
Intrusion Detection in Computer Networks”, Pattern Recognition
Letters, Vol. 24, pp. 1795-1803, 2003
[30] R. A. Maxion and K. M. C. Tan, “Anomaly Detection in Embedded
Systems”, IEEE Trans. on Computers, Vol. 51, No. 2, pp. 108-120,
February 2002
[38] D. Dasgupta and F. González, “An Immunity-Based Technique to
Characterize Intrusions in Computer Networks”, IEEE Trans. on
Evolutionary Computation, Vol. 6, No. 3, pp. 1081-1088, June 2002
[46] S. Mukkamala, G. Janoski and A. Sung, “Intrusion Detection Using
Neural Networks and Support Vector Machines”, Proc. of the 2002
International Joint Conference on Neural Networks, Vol. 2, pp. 1702-
1707, May 2002
[51] R. Comerford, “No Longer in Denial”, IEEE Spectrum, Vol. 38, No. 1,
pp. 59-61, January 2001
[55] F. Cohen, “Current Best Practice Against Computer Viruses”, 25th
IEEE International Carnahan Conference on Security Technology, Oct.
1-3, 1991
[59] L. Spitzner, “The Honeynet Project: Trapping the Hackers”, IEEE
Security & Privacy, Vol. 1, pp. 15-23, March/April 2003