
SET-2

Information Security
1. a) Explain the terms related to Buffer overflow:
i. Stack dumping
A stack dump is a listing of the contents of the stack. It is often displayed when an error occurs, so that the state of the program at the point of failure can be inspected.

Smashing the stack


The best known attack against the call stack is the stack-smashing attack. It exploits the lack of array bounds checking in C to overwrite the return address of a function. By planting the right address, the target program can be tricked into jumping to and executing the attacker's code. To illustrate this, consider the following example program:

#include <stdio.h>

int main(void)
{
    char buffer[128];
    /* Remap standard input to a named pipe; in principle it could also be
       the user's terminal or a TCP socket. */
    freopen("fifo", "r", stdin);
    /* gets() performs no bounds checking: this is the vulnerability. */
    gets(buffer);
    return 0;
}

All this program does is read a string from standard input into a buffer. The gets() function reads characters from stdin until a newline or EOF character is read. In this example, stdin has been remapped to a named pipe (fifo in UNIX terms), but in principle it could be the user's terminal or a TCP socket. There is a well-known vulnerability in the gets function: since it does not check the size of the input from stdin, it is possible to overflow the 128-byte buffer that we have set up. Recall that, since buffer is allocated on the stack, it sits immediately before the saved frame pointer, which is followed by the return address (i.e. the memory location holding the return address has a higher address than the last byte of the buffer). By writing past the end of the buffer we can overwrite the return address, so the stack from figure 1 ends up looking like figure 2. At this point, when the function returns it will jump to our shellcode. Notice that we have included some nop instructions (all these do is advance the program counter to the next instruction, i.e. they are a null instruction). This is because the return address is just a guess, and the nops give us some freedom in where we jump to. Other functions vulnerable to this attack include strcpy, as well as the C++ iostreams library.

Smashed call-stack
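To make the layout concrete, the following C sketch shows how such an overflowing input could be assembled. The buffer size, offsets, guessed return address, and shellcode bytes are placeholders for illustration only, not values for any real system.

#include <string.h>

#define BUF_SIZE 128                 /* size of the victim's buffer                         */
#define OVERFLOW (BUF_SIZE + 12)     /* enough bytes to reach the return address (assumed)  */

int build_exploit_input(unsigned char *out)
{
    unsigned long guessed_ret = 0xbffff000UL;   /* guessed address of our code (placeholder) */
    unsigned char shellcode[] = { 0x90, 0x90 }; /* placeholder bytes, not real shellcode     */

    /* 1. Fill the input with nop instructions (0x90 on x86), forming a "sled"
          so the guessed return address only has to land somewhere inside it.   */
    memset(out, 0x90, OVERFLOW);

    /* 2. Place the (placeholder) shellcode near the end of the buffer area.    */
    memcpy(out + BUF_SIZE - sizeof(shellcode), shellcode, sizeof(shellcode));

    /* 3. Overwrite the saved return address with the guessed address.          */
    memcpy(out + OVERFLOW - sizeof(guessed_ret), &guessed_ret, sizeof(guessed_ret));

    return OVERFLOW;
}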

ii. Execute Payload.

Once we control the execution path, we usually want it to execute our own code, so we need to include that code (or instruction sequence) in the exploit. The part of the exploit that allows us to execute arbitrary code is known as the payload. The payload can do virtually everything a computer program can do, with the permissions and rights of the vulnerable program or service.

Shellcode as a payload
A spawned shell may be the simplest way for the attacker to explore the target system interactively. For example, it might give the attacker the ability to explore the internal network and further penetrate other computers. A shell may also allow uploading and downloading of files or databases, which is usually needed as proof of a successful penetration test (pen-test). The attacker can also easily install a Trojan horse, key logger, sniffer, enterprise worm, WinVNC, and so on. A shell is also useful for restarting the vulnerable service to keep it running; more importantly, restarting the vulnerable service usually allows us to attack it again. With a shell we can also clean up traces such as log files and event records. On Windows, we may alter the registry so that our code runs at every system start-up, and stop any antivirus programs. The attacker can also create a payload that loops and waits for commands: the attacker could issue a command to the payload to create a new connection, upload or download a file, or spawn another shell. There are also other payload strategies in which the payload loops and waits for an additional payload from the attacker, such as in multistage exploits and (Distributed) Denial of Service (DDoS/DoS) attacks. Regardless of whether a payload spawns a shell or loops waiting for instructions, it still needs to communicate with the attacker, locally or remotely. There are many things that can be done.

b) Describe in detail the model for internetwork security.

This general model shows that there are four basic tasks in designing a particular security service:
1. Design an algorithm for performing the security-related transformation. The algorithm should be such that an opponent cannot defeat its purpose.
2. Generate the secret information to be used with the algorithm.
3. Develop methods for the distribution and sharing of the secret information.
4. Specify a protocol to be used by the two principals that makes use of the security algorithm and the secret information to achieve a particular security service.

2. a) With a neat illustration, explain the Advanced Encryption Standard algorithm.
Inner Workings of a Round

The algorithm begins with an Add Round Key stage, followed by nine rounds of four stages and a tenth round of three stages. This applies to both encryption and decryption, with the exception that each stage of a round of the decryption algorithm is the inverse of its counterpart in the encryption algorithm. The four stages are as follows:
1. Substitute bytes
2. Shift rows
3. Mix Columns
4. Add Round Key

The tenth round simply leaves out the Mix Columns stage. The first nine rounds of the decryption algorithm consist of the following:
1. Inverse Shift rows
2. Inverse Substitute bytes
3. Inverse Add Round Key
4. Inverse Mix Columns

Again, the tenth round simply leaves out the Inverse Mix Columns stage. Each of these stages will now be considered in more detail.
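The round structure described above can be summarized in the following C sketch. The state type and the stage functions (SubBytes, ShiftRows, MixColumns, AddRoundKey) are assumed to be implemented elsewhere and are named here only for illustration.

typedef unsigned char state_t[4][4];     /* the 4x4 byte state matrix */

/* Stage functions, assumed to be implemented elsewhere. */
void SubBytes(state_t s);
void ShiftRows(state_t s);
void MixColumns(state_t s);
void AddRoundKey(state_t s, const unsigned char *round_key);

/* Encryption structure for AES-128: an initial Add Round Key, nine full
   rounds, and a final (tenth) round that omits Mix Columns. */
void aes128_encrypt_block(state_t s, const unsigned char round_keys[11][16])
{
    AddRoundKey(s, round_keys[0]);              /* initial round key addition      */

    for (int round = 1; round <= 9; round++) {  /* rounds 1..9: all four stages    */
        SubBytes(s);
        ShiftRows(s);
        MixColumns(s);
        AddRoundKey(s, round_keys[round]);
    }

    SubBytes(s);                                /* round 10: Mix Columns left out  */
    ShiftRows(s);
    AddRoundKey(s, round_keys[10]);
}

Decryption follows the same skeleton with each stage replaced by its inverse and the round keys applied in reverse order.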

b) Compare and contrast SHA-1 and HMAC functions.
SHA-1: SHA-1 produces a 160-bit message digest based on principles similar to those used by Ronald L. Rivest of MIT in the design of the MD4 and MD5 message digest algorithms, but has a more conservative design. The original specification of the algorithm was published in 1993 as the Secure Hash Standard, FIPS PUB 180, by the U.S. government standards agency NIST (National Institute of Standards and Technology). This version is now often referred to as SHA-0. It was withdrawn by the NSA shortly after publication and was superseded by the revised version, published in 1995 in FIPS PUB 180-1 and commonly referred to as SHA-1. SHA-1 differs from SHA-0 only by a single bitwise rotation in the message schedule of its compression function; this was done, according to the NSA, to correct a flaw in the original algorithm which reduced its cryptographic security. However, the NSA did not provide any further explanation or identify the flaw that was corrected. Weaknesses have subsequently been reported in both SHA-0 and SHA-1. SHA-1 appears to provide greater resistance to attacks, supporting the NSA's assertion that the change increased the security.
HMAC Function: HMAC can be used in combination with any iterated cryptographic hash function; MD5 and SHA-1 are examples of such hash functions. HMAC also uses a secret key for the calculation and verification of the message authentication values. In essence, HMAC(K, M) = H[(K+ XOR opad) || H[(K+ XOR ipad) || M]], where K+ is the key padded to the hash function's block length and ipad and opad are fixed padding constants. The main goals behind this construction are:
* To use, without modifications, available hash functions; in particular, hash functions that perform well in software and for which code is freely and widely available.
* To preserve the original performance of the hash function without incurring a significant degradation.
* To use and handle keys in a simple way.
* To have a well-understood cryptographic analysis of the strength of the authentication mechanism based on reasonable assumptions about the underlying hash function.
* To allow for easy replaceability of the underlying hash function in case faster or more secure hash functions are found or required.

3. a) Alice and Bob wish to share private messages, where each of them has two separate keys generated. What kind of strategy would you suggest to ensure confidentiality, key management and authentication for the conversation between Alice and Bob? Explain the strategy and also highlight the design issues related to the strategy proposed.
Public-Key Cryptosystem: Secrecy

With the message X and the encryption key PUb as input, A forms the ciphertext Y = [Y1, Y2, ..., YN]:

Y = E(PUb, X)

The intended receiver, in possession of the matching private key, is able to invert the transformation:

X = D(PRb, Y)

An adversary, observing Y and having access to PUb but not having access to PRb or X, must attempt to recover X and/or PRb. It is assumed that the adversary does have knowledge of the encryption (E) and decryption (D) algorithms.
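As a concrete illustration of this notation, suppose RSA is used as the public-key algorithm, with deliberately tiny and insecure numbers chosen only to make the arithmetic easy to follow. Let p = 3 and q = 11, so n = 33 and phi(n) = 20. Choosing e = 3 gives d = 7, since 3 x 7 = 21 = 1 (mod 20); thus PUb = {3, 33} and PRb = {7, 33}. For a message X = 4:

Y = E(PUb, X) = 4^3 mod 33 = 64 mod 33 = 31
X = D(PRb, Y) = 31^7 mod 33 = 4

so the receiver recovers the original message, while an adversary who does not know d cannot (for realistically large key sizes).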
Public-Key Cryptosystem: Authentication

A more efficient way of achieving the same results is to encrypt a small block of bits that is a function of the document. Such a block, called an authenticator, must have the property that it is infeasible to change the document without changing the authenticator. If the authenticator is encrypted with the sender's private key, it serves as a signature that verifies origin, content, and sequencing.
Public-Key Cryptosystem: Authentication and Secrecy
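In the same notation, a brief sketch of how both services can be provided together: the sender first applies its own private key for authentication and then the receiver's public key for secrecy,

Z = E(PUb, E(PRa, X))

and the receiver reverses the two transformations,

X = D(PUa, D(PRb, Z))

The price is that the public-key algorithm, which is complex, must be exercised four times rather than two in each communication.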

3. b) What is the realm in Kerberos environment? Explain with suitable diagram how a service between two realms takes place.
Kerberos
(In Greek mythology, Kerberos is a many-headed dog, commonly three, perhaps with a serpent's tail, the guardian of the entrance of Hades: from Dictionary of Subjects and Symbols in Art, by James Hall, Harper & Row, 1979. Just as the Greek Kerberos has three heads, the modern Kerberos was intended to have three components to guard a network's gate: authentication, accounting, and audit. The last two heads were never implemented.)
We would like for servers to be able to restrict access to authorized users and to be able to authenticate requests for service. In this environment, a workstation cannot be trusted to identify its users correctly to network services. In particular, the following three threats exist:

A user may gain access to a particular workstation and pretend to be another user operating from that workstation.

A user may alter the network address of a workstation so that the requests sent from the altered workstation appear to come from the impersonated workstation.

A user may eavesdrop on exchanges and use a replay attack to gain entrance to a server or to disrupt operations. Two versions of Kerberos are in common use. Version 4 implementations still exist. Version 5 corrects

some of the security deficiencies of version 4 and has been issued as a proposed Internet Standard (RFC 1510). Versions 1 through 3 were internal development versions; version 4 is the "original" Kerberos.
Motivation
Three approaches to security can be envisioned:
1. Rely on each individual client workstation to assure the identity of its user or users and rely on each server to enforce a security policy based on user identification (ID).
2. Require that client systems authenticate themselves to servers, but trust the client system concerning the identity of its user.
3. Require the user to prove his or her identity for each service invoked. Also require that servers prove their identity to clients.
The first published report on Kerberos listed the following requirements:

Secure: A network eavesdropper should not be able to obtain the necessary information to impersonate a user. More generally, Kerberos should be strong enough that a potential opponent does not find it to be the weak link.

Reliable: For all services that rely on Kerberos for access control, lack of availability of the Kerberos service means lack of availability of the supported services. Hence, Kerberos should be highly reliable and should employ a distributed server architecture, with one system able to back up another.

Transparent: Ideally, the user should not be aware that authentication is taking place, beyond the requirement to enter a password.

Scalable: The system should be capable of supporting large numbers of clients and servers. This suggests a modular, distributed architecture. The client saves each service-granting ticket and uses it to authenticate its user to a server each time a

particular service is requested. Let us look at the details of this scheme:
1. The client requests a ticket-granting ticket on behalf of the user by sending its user's ID to the AS, together with the TGS ID, indicating a request to use the TGS service.
2. The AS responds with a ticket that is encrypted with a key derived from the user's password.

Thus, the client now has a reusable ticket and need not bother the user for a password for each new service request. Finally, note that the ticket-granting ticket is encrypted with a secret key known only to the AS and the TGS. This prevents alteration of the ticket. The ticket is re-encrypted with a key based on the user's password. This assures that the ticket can be recovered only by the correct user, providing the authentication. Now that the client has a ticket-granting ticket, access to any server can be obtained with steps 3 and 4:
3. The client requests a service-granting ticket on behalf of the user. For this purpose, the client transmits a message to the TGS containing the user's ID, the ID of the desired service, and the ticket-granting ticket.
4. The TGS decrypts the incoming ticket and verifies the success of the decryption by the presence of its ID. It checks to make sure that the lifetime has not expired. Then it compares the user ID and network address with the incoming information to authenticate the user. If the user is permitted access to the server V, the TGS issues a ticket to grant access to the requested service.
Finally, with a particular service-granting ticket, the client can gain access to the corresponding service with step 5:
5. The client requests access to a service on behalf of the user. For this purpose, the client transmits a message to the server containing the user's ID and the service-granting ticket. The server authenticates by using the contents of the ticket.
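In the notation commonly used for this dialogue, the exchange just described can be outlined as follows (the exact message fields vary between Kerberos versions, so this is only a sketch):

(1) C -> AS:   IDc || IDtgs
(2) AS -> C:   E(Kc, Ticket_tgs)
(3) C -> TGS:  IDc || IDv || Ticket_tgs
(4) TGS -> C:  Ticket_v
(5) C -> V:    IDc || Ticket_v

where Ticket_tgs = E(Ktgs, [IDc || ADc || IDtgs || TS1 || Lifetime1]) and Ticket_v = E(Kv, [IDc || ADc || IDv || TS2 || Lifetime2]); Kc is a key derived from the user's password, ADc is the client's network address, and the timestamp and lifetime fields bound the validity of each ticket.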

This new scenario satisfies the two requirements of only one password query per user session and protection of the user password. Finally, at the conclusion of this process, the client and server share a secret key. This key can be used to encrypt future messages between the two or to exchange a new random session key for that purpose.
Kerberos Realms and Multiple Kerberi
A full-service Kerberos environment consisting of a Kerberos server, a number of clients, and a number of application servers requires the following:
1. The Kerberos server must have the user ID and hashed passwords of all participating users in its database. All users are registered with the Kerberos server.
2. The Kerberos server must share a secret key with each server. All servers are registered with the Kerberos server.
Such an environment, in which all of these entities are registered with a single Kerberos server and share the same Kerberos database, is referred to as a realm. However, users in one realm may need access to servers in other realms, and some servers may be willing to provide service to users from other realms, provided that those users are authenticated. For two realms to support interrealm authentication, a third requirement is added:
3. The Kerberos server in each interoperating realm shares a secret key with the server in the other realm; the two Kerberos servers are registered with each other.
Request for Service in Another Realm
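In outline, a user in one realm obtains service from a server in another realm as follows (a sketch of the usual exchange, with message contents omitted):
1. The client obtains a ticket-granting ticket from its local AS, as before.
2. The client asks its local TGS for a ticket-granting ticket valid for the remote realm's TGS; this is possible because the two Kerberos servers share a secret key.
3. The client presents that ticket to the remote TGS and requests a service-granting ticket for the desired server in the remote realm.
4. The client presents the service-granting ticket to the remote server, which can verify it because the ticket is encrypted with the key that server shares with the remote TGS.
One problem with this approach is scalability: with N realms, on the order of N(N-1)/2 pairwise secure key exchanges are needed so that every realm's Kerberos server can interoperate with all the others.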

4. a) Discuss the requirement of the segmentation and reassembly function in PGP.
Segmentation and Reassembly
E-mail facilities often are restricted to a maximum message length. For example, many of the facilities accessible through the Internet impose a maximum length of 50,000 octets. Any message longer than that must be broken up into smaller segments, each of which is mailed separately. To accommodate this restriction, PGP automatically subdivides a message that is too large into segments that are small enough to send via e-mail. The segmentation is done after all of the other processing, including the radix-64 conversion. At the receiving end, PGP must strip off all e-mail headers and reassemble the entire original block before performing the steps shown in the following figure.

b) Define S/MIME. What are the key algorithms used in S/MIME?

S/MIME (Secure/Multipurpose Internet Mail Extension) is a security enhancement to the MIME Internet e-mail format standard, based on technology from RSA Data Security. To understand S/MIME, we need first to have a general understanding of the underlying e-mail format that it uses, namely MIME. But to understand the significance of MIME, we need to go back to the traditional e-mail format standard, RFC 822, which is still in common use. Accordingly, this section first provides an introduction to these two earlier standards and then moves on to a discussion of S/MIME.
Cryptographic Algorithms Used in S/MIME

Function: Create a message digest to be used in forming a digital signature.
Requirement: MUST support SHA-1. Receiver SHOULD support MD5 for backward compatibility.

Function: Encrypt message digest to form a digital signature.
Requirement: Sending and receiving agents MUST support DSS. Sending agents SHOULD support RSA encryption. Receiving agents SHOULD support verification of RSA signatures with key sizes 512 bits to 1024 bits.

Function: Encrypt session key for transmission with message.
Requirement: Sending and receiving agents SHOULD support Diffie-Hellman. Sending and receiving agents MUST support RSA encryption with key sizes 512 bits to 1024 bits.

Function: Encrypt message for transmission with one-time session key.
Requirement: Sending and receiving agents MUST support encryption with triple DES. Sending agents SHOULD support encryption with AES. Sending agents SHOULD support encryption with RC2/40.

Function: Create a message authentication code.
Requirement: Receiving agents MUST support HMAC with SHA-1. Sending agents SHOULD support HMAC with SHA-1.

5. a) What does authentication header provide? Explain in detail about fields of authentication header.
Authentication Header
The Authentication Header provides support for data integrity and authentication of IP packets; its sequence number field also supports an anti-replay service.

The Authentication Header consists of the following fields:


Next Header (8 bits): Identifies the type of header immediately following this header.
Payload Length (8 bits): Length of the Authentication Header in 32-bit words, minus 2. For example, the default length of the authentication data field is 96 bits, or three 32-bit words. With a three-word fixed header, there are a total of six words in the header, and the Payload Length field has a value of 4.
Reserved (16 bits): For future use.
Security Parameters Index (32 bits): Identifies a security association.
Sequence Number (32 bits): A monotonically increasing counter value, discussed later.
Authentication Data (variable): A variable-length field (must be an integral number of 32-bit words) that contains the Integrity Check Value (ICV), or MAC, for this packet, discussed later.
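As an illustration of the layout just described, here is a minimal C sketch of the header; the struct and field names are illustrative, not taken from any particular implementation.

#include <stdint.h>

/* IPsec Authentication Header (RFC 2402 layout), fixed portion plus ICV. */
struct auth_header {
    uint8_t  next_header;  /* type of the header immediately following           */
    uint8_t  payload_len;  /* AH length in 32-bit words, minus 2                  */
    uint16_t reserved;     /* for future use, sent as zero                        */
    uint32_t spi;          /* Security Parameters Index: identifies the SA        */
    uint32_t seq_no;       /* monotonically increasing anti-replay counter        */
    uint8_t  icv[];        /* Integrity Check Value (MAC), variable length; an
                              integral number of 32-bit words (default 96 bits =
                              3 words, giving payload_len = 4)                    */
};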

Figure 16.3. IPSec Authentication Header

b) Explain briefly how IPSec documents are categorized.


IPSec Documents

The IPSec specification consists of numerous documents. The most important of these, issued in November of 1998, are RFCs 2401, 2402, 2406, and 2408:

RFC 2401: An overview of a security architecture
RFC 2402: Description of a packet authentication extension to IPv4 and IPv6
RFC 2406: Description of a packet encryption extension to IPv4 and IPv6
RFC 2408: Specification of key management capabilities
Beyond these RFCs, the IPSec documents are divided into the following categories:
Architecture: Covers the general concepts, security requirements, definitions, and mechanisms defining IPSec technology.
Encapsulating Security Payload (ESP): Covers the packet format and general issues related to the use of the ESP for packet encryption and, optionally, authentication.

Authentication Header (AH): Covers the packet format and general issues related to the use of AH for packet authentication.

Encryption Algorithm: A set of documents that describe how various encryption algorithms are used for ESP.

Authentication Algorithm: A set of documents that describe how various authentication algorithms are used for AH and for the authentication option of ESP.

Key Management: Documents that describe key management schemes.
Domain of Interpretation (DOI): Contains values needed for the other documents to relate to each other. These include identifiers for approved encryption and authentication algorithms, as well as operational parameters such as key lifetime.

6. a) What are the services SET provides? Give an overview of SET.
SET provides three services:

Provides a secure communications channel among all parties involved in a transaction
Provides trust by the use of X.509v3 digital certificates
Ensures privacy because the information is only available to parties in a transaction when and where necessary

Requirements
Book 1 of the SET specification lists the following business requirements for secure payment processing with credit cards over the Internet and other networks:

Provide confidentiality of payment and ordering information: It is necessary to assure cardholders that this information is safe and accessible only to the intended recipient. Confidentiality also reduces the risk of fraud by either party to the transaction or by malicious third parties. SET uses encryption to provide confidentiality.

Ensure the integrity of all transmitted data: That is, ensure that no changes in content occur during transmission of SET messages. Digital signatures are used to provide integrity.

Provide authentication that a cardholder is a legitimate user of a credit card account: A mechanism that links a cardholder to a specific account number reduces the incidence of fraud and the overall cost of payment processing. Digital signatures and certificates are used to verify that a cardholder is a legitimate user of a valid account.

Provide authentication that a merchant can accept credit card transactions through its relationship with a financial institution: This is the complement to the preceding requirement. Cardholders need to be able to identify merchants with whom they can conduct secure transactions. Again, digital signatures and certificates are used.

Ensure the use of the best security practices and system design techniques to protect all legitimate parties in an electronic commerce transaction: SET is a well-tested specification based on highly secure cryptographic algorithms and protocols.

Create a protocol that neither depends on transport security mechanisms nor prevents their use: SET can securely operate over a "raw" TCP/IP stack. However, SET does not interfere with the use of other security mechanisms, such as IPSec and SSL/TLS.

Facilitate and encourage interoperability among software and network providers: The SET protocols and formats are independent of hardware platform, operating system, and Web software.

b) Explain the Web security considerations.
Web Security Considerations
The World Wide Web is fundamentally a client/server application running over the Internet and TCP/IP intranets. As such, the security tools and approaches discussed so far in this book are relevant to the issue of Web security. But, as pointed out in [GARF97], the Web presents new challenges not generally appreciated in the context of computer and network security:

The Internet is two-way. Unlike traditional publishing environments, even electronic publishing systems involving teletext, voice response, or fax-back, the Web is vulnerable to attacks on the Web servers over the Internet.

The Web is increasingly serving as a highly visible outlet for corporate and product information and as the platform for business transactions. Reputations can be damaged and money can be lost if the Web servers are subverted.

Although Web browsers are very easy to use, Web servers are relatively easy to configure and manage, and Web content is increasingly easy to develop, the underlying software is extraordinarily complex. This complex software may hide many potential security flaws. The short history of the Web is filled with examples of new and upgraded systems, properly installed, that are vulnerable to a variety of security attacks.

A Web server can be exploited as a launching pad into the corporation's or agency's entire computer complex. Once the Web server is subverted, an attacker may be able to gain access to data and systems not part of the Web itself but connected to the server at the local site.

Casual and untrained (in security matters) users are common clients for Web-based services. Such users are not necessarily aware of the security risks that exist and do not have the tools or knowledge to take effective countermeasures.

7. a) Suggest any three password selection strategies and identify their advantages and disadvantages, if any.
Password Selection Strategies
The lesson from the two experiments just described is that, left to their own devices, many users choose a password that is too short or too easy to guess. At the other extreme, if users are assigned passwords consisting of eight randomly selected printable characters, password cracking is effectively impossible. But it would be almost as impossible for most users to remember their passwords. Fortunately, even if we limit the password universe to strings of characters that are reasonably memorable, the size of the universe is still too large to permit practical cracking. Our goal, then, is to eliminate guessable passwords while allowing the user to select a password that is memorable. Four basic techniques are in use:

User education
Computer-generated passwords
Reactive password checking
Proactive password checking

Users can be told the importance of using hard-to-guess passwords and can be provided with guidelines for selecting strong passwords; the drawback is that many users simply ignore the advice. Computer-generated passwords also have problems: if the passwords are quite random in nature, users will not be able to remember them. A reactive password checking strategy is one in which the system periodically runs its own password cracker to find guessable passwords; a drawback is that existing weak passwords remain vulnerable until the checker finds them. The most promising approach to improved password security is a proactive password checker. In this scheme, a user is allowed to select his or her own password, but at the time of selection the system checks whether the password is allowable and, if not, rejects it.

b) Discuss in detail the model of network management that is used by SNMP and explain its key elements.
Network management refers to the activities, methods, procedures, and tools that pertain to the operation, administration, maintenance and provisioning of networked systems. These terms are explained below.
(i) Operation deals with keeping the network, and the services that the network provides, up and running smoothly. It includes monitoring the network to spot problems as soon as possible, ideally before a user is affected.
(ii) Administration involves keeping track of resources in the network and how they are assigned. It deals with all the housekeeping that is necessary to keep things under control.
(iii) Maintenance is concerned with performing repairs and upgrades; for example, when a line card must be replaced, when a router needs a new operating system image with a patch, or when a new switch is added to the network. Maintenance also involves corrective and preventive proactive measures, such as adjusting device parameters as needed and generally intervening as needed to make the managed network run better.
(iv) Provisioning is concerned with configuring resources in the network to support a given service. For example, this might include setting up the network so that a new customer can receive voice service.
A general or basic model of network management architecture is designed with the following key elements:
Management station
Management agent
Management information base
Network management protocol

8. a) What are two default policies that can be taken in a packet filter if there is no match to any rule? Which is more conservative? Explain with example rule sets both the policies.
Packet-Filtering Router
A packet-filtering router applies a set of rules to each incoming and outgoing IP packet and then forwards or discards the packet. The router is typically configured to filter packets going in both directions (from and to the internal network). Filtering rules are based on information contained in a network packet:

Source IP address: The IP address of the system that originated the IP packet (e.g., 192.178.1.1)
Destination IP address: The IP address of the system the IP packet is trying to reach (e.g., 192.168.1.2)
Source and destination transport-level address: The transport-level (e.g., TCP or UDP) port number, which defines applications such as SNMP or TELNET
IP protocol field: Defines the transport protocol
Interface: For a router with three or more ports, which interface of the router the packet came from or which interface of the router the packet is destined for

The packet filter is typically set up as a list of rules based on matches to fields in the IP or TCP header. If there is a match to one of the rules, that rule is invoked to determine whether to forward or discard the packet. If there is no match to any rule, then a default action is taken. Two default policies are possible:

Default = discard: That which is not expressly permitted is prohibited.
Default = forward: That which is not expressly prohibited is permitted.
The discard policy is the more conservative of the two: initially everything is blocked, and services must be added on a case-by-case basis. The forward policy increases ease of use for end users but provides reduced security.
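The behavior of the two default policies can be sketched in C; the packet and rule structures, wildcard convention, and field names below are simplified illustrations for this example, not code from any real firewall.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

enum action { FORWARD, DISCARD };

struct packet {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
};

struct rule {
    uint32_t src_ip, dst_ip;       /* 0 acts as a wildcard in this sketch */
    uint16_t src_port, dst_port;   /* 0 acts as a wildcard in this sketch */
    enum action act;
};

static bool rule_matches(const struct rule *r, const struct packet *p)
{
    return (r->src_ip == 0   || r->src_ip == p->src_ip)   &&
           (r->dst_ip == 0   || r->dst_ip == p->dst_ip)   &&
           (r->src_port == 0 || r->src_port == p->src_port) &&
           (r->dst_port == 0 || r->dst_port == p->dst_port);
}

/* Walk the rule list in order; the first matching rule decides. If no rule
   matches, fall back to the default policy: DISCARD is the conservative
   choice ("not expressly permitted is prohibited"), FORWARD the permissive one. */
enum action filter(const struct rule *rules, size_t n,
                   const struct packet *p, enum action default_policy)
{
    for (size_t i = 0; i < n; i++)
        if (rule_matches(&rules[i], p))
            return rules[i].act;
    return default_policy;
}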

Table 8.1. Packet-Filtering Examples

Rule Set A
action  ourhost      port  theirhost  port   comment
block   *            *     SPIGOT     *      we don't trust these people
allow   OUR-GW       25    *          *      connection to our SMTP port

Rule Set B
action  ourhost      port  theirhost  port   comment
block   *            *     *          *      default

Rule Set C
action  ourhost      port  theirhost  port   comment
allow   *            *     *          25     connection to their SMTP port

Rule Set D
action  src          port  dest       port   flags  comment
allow   {our hosts}  *     *          25            our packets to their SMTP port
allow   *            25    *          *      ACK    their replies

Rule Set E
action  src          port  dest       port   flags  comment
allow   {our hosts}  *     *          *             our outgoing calls
allow   *            *     *          *      ACK    replies to our calls
allow   *            *     *          >1024         traffic to nonservers
A. Inbound mail is allowed (port 25 is for SMTP incoming), but only to a gateway host. However, packets from a particular external host, SPIGOT, are blocked because that host has a history of sending massive files in e-mail messages.
B. This is an explicit statement of the default policy. All rule sets include this rule implicitly as the last rule.
C. This rule set is intended to specify that any inside host can send mail to the outside. A TCP packet with a destination port of 25 is routed to the SMTP server on the destination machine. The problem with this rule is that the use of port 25 for SMTP receipt is only a default; an outside machine could be configured to have some other application linked to port 25. As this rule is written, an attacker could gain access to internal machines by sending packets with a TCP source port number of 25.

D. This rule set achieves the intended result that was not achieved in C. The rules take advantage of a feature of TCP connections: once a connection is set up, the ACK flag of a TCP segment is set to acknowledge segments sent from the other side. Thus, this rule set states that it allows IP packets where the source IP address is one of a list of designated internal hosts and the destination TCP port number is 25. It also allows incoming packets with a source port number of 25 that include the ACK flag in the TCP segment. Note that we explicitly designate source and destination systems to define these rules explicitly.
E. This rule set is one approach to handling FTP connections. With FTP, two TCP connections are used: a control connection to set up the file transfer and a data connection for the actual file transfer. The data connection uses a different port number that is dynamically assigned for the transfer. Most servers, and hence most attack targets, live on low-numbered ports; most outgoing calls tend to use a higher-numbered port, typically above 1023. Thus, this rule set allows:
Packets that originate internally
Reply packets to a connection initiated by an internal machine
Packets destined for a high-numbered port on an internal machine
This scheme requires that the systems be configured so that only the appropriate port numbers are in use.
One advantage of a packet-filtering router is its simplicity. Also, packet filters typically are transparent to users and are very fast. Packet filter firewalls have the following weaknesses:

Because packet filter firewalls do not examine upper-layer data, they cannot prevent attacks that employ application-specific vulnerabilities or functions. For example, a packet filter firewall cannot block specific application commands; if a packet filter firewall allows a given application, all functions available within that application will be permitted.

Because of the limited information available to the firewall, the logging functionality present in packet filter firewalls is limited. Packet filter logs normally contain the same information used to make access control decisions (source address, destination address, and traffic type).

Most packet filter firewalls do not support advanced user authentication schemes. Once again, this limitation is mostly due to the lack of upper-layer functionality by the firewall.

They are generally vulnerable to attacks and exploits that take advantage of problems within the TCP/IP specification and protocol stack, such as network layer address spoofing. Many packet filter firewalls cannot detect a network packet in which the OSI Layer 3 addressing information has been altered. Spoofing attacks are generally employed by intruders to bypass the security controls implemented in a firewall platform.

Finally, due to the small number of variables used in access control decisions, packet filter firewalls are susceptible to security breaches caused by improper configurations. In other words, it is easy to

accidentally configure a packet filter firewall to allow traffic types, sources, and destinations that should be denied based on an organization's information security policy.

b) What are the advantages of decomposing a user operation into elementary actions?
The decomposition of a user operation into elementary actions has three advantages:
1. Because objects are the protectable entities in a system, the use of elementary actions enables an audit of all behavior affecting an object. Thus, the system can detect attempted subversions of access controls (by noting an abnormality in the number of exception conditions returned) and can detect successful subversions by noting an abnormality in the set of objects accessible to the subject.
2. Single-object, single-action audit records simplify the model and the implementation.
3. Because of the simple, uniform structure of the detection-specific audit records, it may be relatively easy to obtain this information, or at least part of it, by a straightforward mapping from existing native audit records to the detection-specific audit records.
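As an illustration, a single-object, single-action (detection-specific) audit record might be represented as in the following C sketch; the struct and field names are illustrative only and follow the classic intrusion-detection model fields (subject, action, object, exception condition, resource usage, time stamp).

#include <time.h>

/* One elementary action by one subject on one object. */
struct audit_record {
    char   subject[32];     /* initiator of the action, e.g. a user ID       */
    char   action[16];      /* elementary operation, e.g. "read", "execute"  */
    char   object[64];      /* entity acted upon, e.g. a file or program     */
    char   exception[16];   /* exception condition raised, if any            */
    long   resource_usage;  /* quantitative usage, e.g. records read or CPU  */
    time_t timestamp;       /* when the action took place                    */
};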

c) What are false negatives and false positives?


False Positive: An event signaling an IDS to produce an alarm when no attack has taken place.
False Negative: A failure of an IDS to detect an actual attack.
Profiles of Behavior of Intruders and Authorized Users

Prepared by

G. DAYANANDAM Prof. and HOD QISIT, ONGOLE.
