Deploying ACE
Overview
Objectives
Connecting the ACE to the Network
ACE Installation Procedure
ACE Appliance GUI
Network Topologies
SNAT
Policy-Based Routing
Virtualization
Resource Management
Authorizing Management Users
Configuring Interfaces
Summary
Overview
Objectives
Class Maps
Syntax Summary
Policy Maps
Policy Type
Match Type
Syntax Summary
Applying Policy Maps
Primary Policy Maps
Secondary Policy Maps
Summary
Overview
Objectives
Permitting Management Traffic
SNMP Manageability
Summary
Security Features
Overview
Objectives
IP Access Control Lists
TCP/IP Fragmentation/Reassembly
TCP/IP Normalization
Network Address Translation
Summary
Health Monitoring
Overview
Objectives
Health Monitoring Overview
Active Health Probes
No ip address Command in Probe
Optional ip address Command Without routed Keyword
Optional ip address routed Command
HTTP Error Code Monitoring
Using TCL Scripting
Summary
High Availability
Overview
Objectives
Redundancy
Object Tracking
Failover
State Replication
Fault-Tolerance Configuration
Displaying Fault-Tolerance Information
Summary
Overview
Objectives
Analyzing Network Requirements
Designing ACE Contexts
Designing ACE Features
Configuring Multiple Integrated Features
Summary
Summary
Self-Check
ACEAP
Objectives
Upon completing this lesson, you will be able to describe how to deploy and configure
intelligent network services using the Cisco ACE appliance. This includes being able to meet
these objectives:
Present the Cisco icons and symbols that are used in this course, and information on where
to find additional technical references
ACEAP v1.01-2
The figure lists the skills and knowledge that you must possess to benefit fully from the course.
Course Goal
Deploy and configure intelligent network services
using the Cisco 4710 Application Control Engine
appliance.
Upon completing this course, you will be able to meet these objectives:
Describe IP application delivery with the Cisco Application Control Engine (ACE)
appliance
Describe the structure and function of the Modular Policy CLI statements used to configure
ACE features
Describe the ACE features that provide IP application-based security on the ACE appliance
Describe the capabilities and configuration of the ACE features used to provide load
balancing of IP-based applications
Identify the Layer 7 processing options used to provide advanced application networking
Describe the ACE web application acceleration, optimization, and compression features
Describe the high-availability features of the ACE appliance, which are used to provide
reliable application networking services
Course Flow
This topic presents the suggested flow of the course materials.
[Figure: Course Flow. A five-day schedule of morning and afternoon sessions covering the lessons Introducing ACE, Deploying ACE, Modular Policy CLI, Managing the ACE Appliance, Security Features, Health Monitoring, Layer 4 Load Balancing, Layer 7 Protocol Processing, Processing Secure Connections, WAA Features, High Availability, and Integrating Multiple Features, with lab activities after each lesson and a lunch break each day.]
The schedule reflects the recommended structure for this course. This structure allows enough
time for the instructor to present the course information and for you to work through the lab
activities. The exact timing of the subject materials and labs depends on the pace of your
specific class.
Additional References
This topic presents the Cisco icons and symbols that are used in this course, and information on
where to find additional technical references.
These are some of the icons and symbols that you will see throughout this course: IP Router, Ethernet Switch, Multilayer Switch, Third-Party FC Director, ACE, Application Server with FC HBA, Application Server with iSCSI, FC RAID Subsystem, NAS Filer, FC Tape Subsystem, and Workstation.
For more information on Cisco terminology, see the Cisco Internetworking Terms and
Acronyms glossary of terms at
http://www.cisco.com/univercd/cc/td/doc/cisintwk/ita/index.htm.
Lesson 1
Objectives
Upon completing this lesson, you will be able to describe IP application delivery with the Cisco
ACE 4710 appliance and identify the features of the appliance. This includes being able
to meet these objectives:
IP Protocol Stack
IP networks are widely deployed and provide data transport for a large percentage of modern
data networks. The IP stack consists of the Internet Protocol and all the protocols that depend
on IP for network services.
The IP stack is often represented as a four-layer architecture. From bottom to top, these layers
are as follows:
Physical and data link protocols define how IP uses each supported physical media type to
transmit data.
IP defines host addressing and the routing of packets between networks. All of the
higher-layer protocols in the stack depend on IP to deliver their data.
TCP and User Datagram Protocol (UDP) define program-to-program communication. This
is done by adding more addressing, in addition to the IP address, in the form of port
numbers. A particular TCP or UDP port is associated with an individual program or
process that is using the IP network for communications. Notice that TCP or UDP port
numbers are significant only within a particular computer system. TCP and UDP also
describe the format of their respective packets. TCP adds additional functionality and
provides reliable data transmission. TCP does this through acknowledgment,
retransmission, and flow-control functions, which are part of the TCP specification.
Application protocols define the format and meaning of the data portion of a TCP or UDP
packet. Examples of application protocols are the Simple Mail Transfer Protocol (SMTP),
which describes how to transmit e-mail from one system to another, and the HTTP used to
request and transmit web pages.
IP Header Fields
[Figure: the layout of the 20-byte IP header that precedes the protocol data (ICMP, TCP, or UDP): Version, Header Length, Type of Service, Total Length, Identification, Flags (3 bits), Fragment Offset, Time to Live, Protocol, Header Checksum, Source IP Address, and Destination IP Address.]
IP packets contain a standard 20-byte header, the layout of which is diagrammed in the figure.
The highlighted fields are the most important to understand when designing ACE solutions.
Optional header fields exist and are placed after the standard headers.
The standard IP header fields are:
Version: Four bits that specify the version of the IP protocol used by the packet. IP
Version 4 (IPv4) is the most prevalent version in modern networks. IP Version 6 (IPv6),
also known as IP Next Generation (IPng), is a newer version of the IP protocol that is
now supported by major network vendors, including Cisco. This course concentrates on
IPv4; however, the concepts outlined here are applicable to IPv6.
Header length: 4 bits that specify the length of the IP header in 32-bit words. The standard
IP header carries a header length of 5. The maximum length of the IP header is 15 words or
60 bytes.
Type of service (ToS): 8 bits that describe the performance characteristics of the service
requested for this packet. ToS is used by quality-of-service (QoS) algorithms to schedule
packet transmission and to drop lower-priority packets if necessary.
Total length: 16 bits that specify the total byte count in the IP packet. This total length
includes the bytes in the header.
Identification: 16-bit field that contains a number identifying this packet. This field is used
to associate the fragments of a fragmented IP datagram together.
Flags: 3 bits, of which only two are used. The first is reserved. The second bit is used to
indicate that a packet should not be fragmented (the Do Not Fragment flag). The third bit is
used to indicate that More Fragments follow this one. If an IP datagram is fragmented, the
More Fragments flag is set on all fragments except the last one.
Fragment offset: 13 bits used to specify the offset at which the data in this fragment
should be placed in the reassembled IP datagram. The fragment offset counts groups of 8
bytes.
Time to Live: 8 bits that indicate how many hops the IP packet can transit. Each router
decrements the Time to Live (TTL) value when it moves a packet from one physical
network to another. If the TTL reaches 0 the packet is dropped.
Protocol: 8 bits that specify the upper-level protocol contained in the packet.
Header checksum: 16-bit checksum value used to detect corruption of any header values.
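The field layout above can be made concrete by unpacking a header with Python's struct module. This is an illustrative sketch, not production packet handling, and the sample header bytes are invented for the example:

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header into its fields."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,             # high nibble of the first byte
        "header_len": (ver_ihl & 0x0F) * 4,  # header length is in 32-bit words
        "tos": tos,
        "total_length": total_len,
        "identification": ident,
        "dont_fragment": bool(flags_frag & 0x4000),
        "more_fragments": bool(flags_frag & 0x2000),
        "fragment_offset": (flags_frag & 0x1FFF) * 8,  # counted in 8-byte groups
        "ttl": ttl,
        "protocol": proto,                   # e.g. 1 = ICMP, 6 = TCP, 17 = UDP
        "header_checksum": checksum,
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: version 4, length 20, DF set, TTL 64, TCP.
sample = bytes([0x45, 0x00, 0x00, 0x28, 0x1C, 0x46, 0x40, 0x00,
                64, 6, 0x00, 0x00, 10, 0, 0, 83, 198, 133, 219, 25])
fields = parse_ipv4_header(sample)
```

Note how the header length and fragment offset fields are scaled (32-bit words and 8-byte groups, respectively), exactly as described in the field list above.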
[Figure: the layout of an IP packet carrying UDP: the 20-byte IP header, the 8-byte UDP header (Source Port Number, Destination Port Number, Length, Checksum), and the application data.]
UDP datagrams contain a standard 8-byte header, the layout of which is diagrammed in the
figure. The fields most relevant to ACE services are highlighted.
The UDP header fields are:
Source Port Number: The 16-bit port address used by the sending process. The operating
system on the transmitting system provides a means to map the port number to the
appropriate process.
Destination Port Number: The 16-bit port address used by the receiving process. The
operating system on the receiving system provides a means to map the port number to the
appropriate process.
Length: 16 bits that contain a count of the bytes in the UDP datagram including both the
header and the data.
Checksum: The 16-bit checksum used to detect corruption in the datagram. The UDP
checksum actually covers more fields than just the UDP header and the data field: the
source IP address, the destination IP address, and the protocol field in the IP header are also
included in the checksum calculation.
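The pseudo-header arithmetic described above can be sketched in Python. This is an illustrative implementation of the one's-complement checksum, not ACE code, and the sample addresses and ports are invented:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, as used by IP, UDP, and TCP."""
    if len(data) % 2:
        data += b"\x00"                      # pad odd-length data with a zero
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                       # fold the carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """Checksum over the pseudo-header (source address, destination address,
    protocol 17, length) plus the whole UDP segment, checksum field zeroed."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(segment))
    return inet_checksum(pseudo + segment)

src, dst = bytes([10, 0, 0, 83]), bytes([198, 133, 219, 25])
# UDP segment: source port 2418, destination port 80, length 12, checksum 0.
segment = bytes([0x09, 0x72, 0x00, 0x50, 0x00, 0x0C, 0x00, 0x00]) + b"ping"
csum = udp_checksum(src, dst, segment)
# A segment carrying the correct checksum verifies to zero.
filled = segment[:6] + bytes([csum >> 8, csum & 0xFF]) + segment[8:]
```

One real-protocol nuance the sketch omits: UDP transmits a computed checksum of zero as 0xFFFF, because a zero field means that no checksum was computed.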
Note: The IP address and the UDP/TCP port number for one end of a particular flow of packets
are often written as [ip-address]:[port-number] or [domain-name]:[port-number]; for example,
198.133.219.25:80 or www.cisco.com:80 specifies the well-known port for the
www.cisco.com web server.
[Figure: the layout of an IP packet carrying TCP: the 20-byte IP header, the 20-byte TCP header (Source Port Number, Destination Port Number, Sequence Number, Acknowledgment Number, Header Length, Reserved (6 bits), Flags (6 bits), Window Size, TCP Checksum, Urgent Pointer), and the application data.]
TCP segments contain a standard 20-byte header, the layout of which is diagrammed in the
figure. The fields most relevant to ACE services are highlighted.
The standard TCP header fields are:
Source port number: The 16-bit port address used by the sending process. The operating
system on the transmitting system provides a means to map the port number to the
appropriate process.
Destination port number: The 16-bit port address used by the receiving process. The
operating system on the receiving system provides a means to map the port number to the
appropriate process.
Sequence number: The 32-bit number used to indicate the position in the sender's byte
stream of the data in this segment. When a TCP connection is opened, each system establishes
a starting sequence number. The sequence number is incremented for each data byte
transmitted.
Acknowledgment number: 32-bit number that indicates the sequence number of the
next byte that the acknowledging system expects to receive.
Header length: 4-bit number containing the number of 32-bit words in the TCP header.
Flags: Six flags used for TCP connection and flow control: urgent (URG),
acknowledgment (ACK), push (PSH), reset (RST), synchronize (SYN), and finish (FIN).
Window size: 16-bit counter of the number of bytes that the receiver is still capable of
receiving.
TCP checksum: 16-bit field used to detect data corruption. The TCP checksum covers the
entire TCP segment and the source and destination address, protocol, and length fields from
the IP packet.
Urgent pointer: 16-bit field that points to the last byte of urgent data.
TCP Connection
[Figure: the packet ladder of a TCP connection between client and server. Initialize: SYN from the client, SYN-ACK from the server, ACK from the client. Use: data segments and acknowledgments flow in both directions. Close: each side sends a FIN, and each FIN is answered with an ACK.]
Protocols that use TCP for transport must first establish a TCP connection. After the connection
is established, application-level data can be transmitted. The figure shows a use of TCP: a web
client establishes a TCP connection with a web server to retrieve information.
TCP connections use a 32-bit sequence number to count each byte of transmitted data. When a
TCP connection is initialized, each system determines the sequence number to use to start the
byte numbering for data it transmits on that connection. This sequence number is then
transmitted to the communications partner. TCP acknowledges the receipt of data by
transmitting acknowledgments that indicate the next expected sequence number. For example,
if an acknowledgment contains the sequence number 64125, all bytes up to and including
64124 have been received.
The TCP packet header contains six flags, three of which are used in the normal setup and
teardown of a TCP connection. These flags are:
SYN: A packet with the SYN flag set is used to synchronize the sequence numbers; this
informs the receiver what sequence number the sender intends to use to start counting
transmitted data.
ACK: A packet with the ACK flag set acknowledges how much data has been received by
the system sending the ACK packet.
FIN: A packet with the FIN flag set indicates that the sending system has finished sending
data and wants to gracefully close the connection.
Many TCP packets have some combination of flags set. For example, a SYN-ACK packet
signifies that this packet is synchronizing the sequence number to be used in one direction and
at the same time acknowledging the packets received in the opposite direction.
The figure shows the packet flow throughout each phase of a TCP connection:
Initialize: A client normally starts a TCP connection by sending a SYN packet. The server
responds with a SYN-ACK, which the client then acknowledges with an ACK packet. At
this point, the connection is ready for use.
Use: Data is transmitted by each system as appropriate. Multiple data packets can be sent in
one direction before they are acknowledged by the receiving system.
Close: Each system informs its communications partner that it is done sending data and
wants to close a connection by transmitting a FIN packet. When each side has transmitted
and acknowledged a FIN packet, the connection is closed.
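All three phases can be observed from ordinary socket code, because the operating system kernel performs the SYN/SYN-ACK/ACK exchange on connect() and accept(), and the FIN/ACK exchange on close(). A minimal loopback sketch (the "ping"/"pong" payloads are invented):

```python
import socket
import threading

def serve_once(srv: socket.socket, result: dict) -> None:
    conn, _ = srv.accept()            # accept completes SYN / SYN-ACK / ACK
    result["data"] = conn.recv(1024)  # "Use" phase: receive application data
    conn.sendall(b"pong")
    conn.close()                      # "Close" phase: kernel sends a FIN

srv = socket.socket()                 # TCP (SOCK_STREAM) socket
srv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
srv.listen(1)
result = {}
t = threading.Thread(target=serve_once, args=(srv, result))
t.start()

cli = socket.socket()
cli.connect(srv.getsockname())        # "Initialize": kernel performs the handshake
cli.sendall(b"ping")
reply = cli.recv(1024)
cli.close()                           # the other FIN / ACK pair closes the connection
t.join()
srv.close()
```

Running a packet capture on the loopback interface while this executes shows exactly the SYN, SYN-ACK, ACK, data, and FIN packets diagrammed in the figure.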
IP Application Review
This topic describes the characteristics of the prominent IP-based applications and the
underlying technologies.
[Figure: the seven OSI layers (7: Application, Presentation, Session, Transport, Network, Data Link, Physical) mapped against the TCP/IP stack (Application Protocols, TCP/UDP, IP, and the physical and data link layers).]
The Open Systems Interconnection (OSI) suite of protocols developed by the ISO defines a
seven-layer network model. The seven layers in the OSI model are:
Physical: Defines the electrical, mechanical, and procedural characteristics used to
transmit raw bits over the transmission medium
Data Link: Defines how devices are addressed and how data is presented and transmitted
over the physical medium
Network: Defines device addressing, services, and data formats that are used to provide
consistent services over many different physical media
Transport: Defines data transit management services such as end-to-end error recovery
and flow control
Session: Defines how communication sessions between systems are established,
managed, and terminated
Presentation: Defines transformations that application data might need to undergo for
transport on a network (for example encryption)
Application: Defines the individual applications and how they transmit and receive
information
The TCP/IP protocol stack uses layer boundaries that are different from those of the OSI
model. However, even when content switching in a TCP/IP environment is described, it is
described in terms of the corresponding OSI layers. The figure shows how the seven-layer OSI
model maps to the TCP/IP stack, which is generally defined with four layers.
In general, the OSI layer numbers are used to describe products and the functionality they
provide. For example, Layer 3 and 4 switching devices make switching decisions based on the
information in the IP or TCP/UDP layers. TCP/UDP application-layer processing is usually
referred to as Layer 5-7 switching, or just Layer 7 switching.
[Figure: after the TCP connection is acknowledged, the client sends an application request and the server responds.]
After a TCP connection is established, the client sends requests to the server. The format of
these requests is defined by the individual protocol specification for the service the client is
requesting.
For example, a web request using HTTP has specific characteristics. The RFCs that describe
HTTP refer to method tokens to define the type of request being sent. The most common
methods are the following:
The standard HTTP request includes a method token, a path to the resource being acted upon,
and the HTTP version number used in the request.
Web server responses start with a numeric return code and a description of the return code. The
meaning of the possible return codes is defined by the HTTP specification and is processed by
the web client. The description of the return code is often used to provide additional readable
information, which can be provided to the end user.
Web server responses then contain any data to be presented to the end user as a result of the
HTTP request. These responses can span several TCP packets.
HTTP has two versions in use, version 1.0 and version 1.1. HTTP interactions include version
information in both the request and the response.
This example shows a GET request. The GET request includes the URL of the resource being
requested (/) and the version of the HTTP protocol being used (1.0). The response starts with a
return code or retcode (200) and description (OK), followed by the data, which requires a
second packet. Finally, the client acknowledges receipt of the data.
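The exchange can be sketched as raw bytes. The host name below is a placeholder, and the canned response is invented; the point is how a client splits the status line into version, return code, and description:

```python
# Wire format of an HTTP/1.0 GET exchange (lines end with CRLF).
request = (
    "GET / HTTP/1.0\r\n"
    "Host: www.example.com\r\n"   # placeholder host for illustration
    "\r\n"                        # blank line ends the request headers
).encode()

# A canned response: version, numeric return code, readable description, data.
response = b"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<html>...</html>"

# The status line is the first CRLF-terminated line of the response.
status_line = response.split(b"\r\n", 1)[0].decode()
version, retcode, description = status_line.split(" ", 2)
```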
Tip: For more information on HTTP, see the relevant RFCs: RFC 1945 for HTTP Version 1.0, and
RFC 2616 and RFC 2817 for HTTP Version 1.1.
Layer 4 Switching
Layer 4 information is always present in the first packet of the flow:
IP protocol
Source and destination IP addresses
Source and destination port addresses (for TCP/UDP)
IP Protocol: This field is used to differentiate between the higher level protocols such as
UDP and TCP that are supported by IP.
Source and destination IP addresses: The IP address of the transmitting system and the
intended recipient.
Source and destination port: The port number being used on the transmitting system and
the intended recipient.
Note: Port numbers are used to direct the IP traffic to a particular application process such as a
web client or server. Well-known port numbers are defined for most IP-based services. For
example, port 80 is used for HTTP.
Layer 4 content-switching decisions can be based on any of the Layer 4 fields listed above.
With TCP connections, the Layer 4 information is consistent for all packets in the connection.
The Layer 4 information is often said to define a flow, which is the communication path for a
particular connection.
The figure shows a flow of packets coming from the client side of the network to an ACE. The
ACE examines the first packet in a new flow or connection, and a Layer 4 switching decision is
made for the flow as a whole. The content switch makes this decision and then records the flow
parameters and the switching decision. This table of switching decisions is used to switch every
subsequent packet in the flow. Information is removed from the switching table when a
connection is closed. In the case of Layer 4 switching of TCP packets, these decisions are
normally made on the basis of SYN and FIN packets and are done at TCP connection setup and
termination. Reset (RST) packets are also analyzed because they are used to refuse a
connection when it is requested or to abort an existing connection.
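The switching-table behavior described above can be sketched as a toy model (this is not the ACE implementation): a decision is made once on the SYN packet, keyed by the 5-tuple, reused for every subsequent packet, and discarded on FIN.

```python
# Toy flow table: one forwarding decision per flow, keyed by the Layer 4
# fields that stay constant for every packet in a TCP connection.
class FlowTable:
    def __init__(self):
        self.flows = {}  # (proto, src_ip, src_port, dst_ip, dst_port) -> server

    def switch(self, key, syn, fin, choose):
        if syn and key not in self.flows:
            self.flows[key] = choose()        # decide once, on the SYN packet
        server = self.flows.get(key, "drop")  # later packets reuse the decision
        if fin:
            self.flows.pop(key, None)         # connection closed: forget the flow
        return server

table = FlowTable()
key = ("tcp", "10.0.0.83", 2418, "198.133.219.25", 80)
first = table.switch(key, syn=True, fin=False, choose=lambda: "server-1")
later = table.switch(key, syn=False, fin=False, choose=lambda: "server-2")
```

Note that the second call ignores its candidate server: once the flow exists, the original decision sticks until the connection closes.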
Layer 7 Switching
Layer 57 information
Received only after TCP connection setup
Might span multiple IP packets
[Figure: the content switch answers the client's SYN with its own SYN-ACK, buffers the request, and only then opens a second TCP connection (SYN, SYN-ACK) to the selected server.]
Layer 7 information is available only after application data has been transmitted, but that
transmission requires a fully functional TCP connection, which leads to a dilemma: a server
must respond to the client to fully start the TCP connection before the client sends the
Layer 7 information that the content switch needs to select the server.
The content switch handles this dilemma by buffering client data and temporarily acting as a
server. To do this, the content switch responds to the incoming SYN packet with its own
SYN_ACK. The content switch then buffers packets until it has enough Layer 7 information to
make a load-balancing decision.
After a destination server has been selected, the content switch must make a connection to the
server on behalf of the client. To establish the TCP connection to the server, a SYN packet is
sent to the server and then the ACE waits for the SYN_ACK packet to be sent from the server.
At this point all buffered packets received from the client are sent to the server.
After the buffered packets have been sent, the two TCP connections can be spliced together by
the content switch. This splicing is done by receiving packets from one connection and
retransmitting them to the other.
Because there are two different TCP connections from the content switch, one to the client and
one to the server, there are probably two sets of sequence numbers in use, one on each
connection. The content switch translates the sequence numbers from one connection to the
other.
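That translation amounts to adding a fixed per-direction offset between the two initial sequence numbers, modulo 2^32. A sketch of the assumed mechanics (not ACE internals; the ISN values are invented):

```python
# Each spliced connection has its own initial sequence number (ISN), so the
# switch rewrites sequence numbers by a fixed offset, wrapping at 2**32.
def make_seq_translator(client_side_isn, server_side_isn):
    delta = (server_side_isn - client_side_isn) % 2**32
    def translate(seq):
        return (seq + delta) % 2**32
    return translate

to_server = make_seq_translator(client_side_isn=1000, server_side_isn=726000)
```

The modulo arithmetic matters because sequence numbers are 32-bit counters that wrap around zero on long-lived connections.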
Digital Encryption
Encryption transforms data in ways that can be used to provide security. Unencrypted data is
often referred to as plaintext, and encrypted data is referred to as cipher text. A pair of
algorithms are used, one for encryption and one for decryption. Plaintext is encrypted to yield
cipher text, and cipher text is decrypted to yield plaintext. Encryption algorithms are designed
in such a way that the cipher text is unusable without specialized or secret knowledge. The
complexity of the algorithm and the specialized or secret knowledge affect the amount of
security provided by the encryption algorithm.
For example, you might take every byte of a data message and invert the order of the bits to
encrypt it. This does, in fact, create cipher text but it is not usable. The problem is that as soon
as the algorithm becomes known, the knowledge required to recover the plaintext is not secret.
This encryption algorithm is therefore not very secure.
Modern encryption algorithms are much more complicated. However, security that requires the
algorithm itself to stay secret does not last. Instead, modern encryption algorithms use a second
piece of data to modify the results of the process. This second piece of data is referred to as the
key. Modern encryption algorithms are designed so that a single plaintext data block encrypted
with two different keys results in two different cipher texts.
The figure shows the encryption and decryption process. Plaintext on the left is modified by a
key-based encryption algorithm to produce the cipher text in the middle. This cipher text is then
modified by a key-based decryption algorithm to produce a copy of the original plaintext.
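A toy repeating-key XOR cipher (deliberately weak, for illustration only, with invented keys) demonstrates the key-dependence property: the same plaintext encrypted under two different keys yields two different cipher texts, and applying the algorithm again with the correct key recovers the plaintext.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same routine encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"attack at dawn"
c1 = xor_cipher(plaintext, b"key-one")
c2 = xor_cipher(plaintext, b"key-two")
recovered = xor_cipher(c1, b"key-one")
```

Like the bit-inversion example above, this scheme is not secure; modern ciphers achieve the same key-dependence with algorithms that resist analysis even when the algorithm itself is public.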
[Figure: a firewall between the Internet (outside network), a DMZ, and the inside network. The state table holds entries for each active connection: source and destination addresses, source and destination ports, sequence numbers, and TCP flags. The policy table reads:

Source    Destination    Permitted?
Outside   DMZ:80         Yes
Outside   DMZ:!80        No
DMZ       Any            Yes
Inside    Any            Yes
Outside   Inside         No (Yes for an established session)]

2007 Cisco Systems, Inc. All rights reserved.
A firewall technology that is used in modern firewall products including the ACE module is
stateful packet filtering. Firewalls employing stateful packet filtering use defined policy entries
to determine under what conditions a connection can be initiated. When a new connection is
detected, it is added to the state table. Packets that are not initiating connections but that are
consistent with the information in the state table are allowed to pass through the firewall.
New connection detection for TCP is a matter of looking for TCP SYN packets. Because UDP
does not use connections, a connection entry is created the first time a packet is allowed by the
policy table. Any other UDP packets that match the source and destination information in the
state table are allowed.
Entries are removed from the state table when no longer needed. Again, this is easy to detect in
TCP because the firewall sees the FIN packets used to close a connection. UDP entries are
removed from the state table after a period of inactivity.
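The mechanism can be sketched as a toy model (not firewall code; the zone names and policy are invented): the policy decides who may initiate, the state table admits subsequent packets of an approved connection, and entries are removed when the connection closes.

```python
# Toy stateful packet filter.
class StatefulFilter:
    def __init__(self, policy):
        self.policy = policy   # callable deciding whether a 5-tuple may initiate
        self.state = set()     # 5-tuples of active connections

    def allow(self, conn, initiating, closing=False):
        if conn in self.state:             # matches the state table: pass
            if closing:                    # FIN seen (UDP: idle timeout instead)
                self.state.discard(conn)
            return True
        if initiating and self.policy(conn):
            self.state.add(conn)           # TCP SYN or first permitted UDP packet
            return True
        return False

# Policy: outside hosts may only initiate connections to the DMZ on port 80.
fw = StatefulFilter(lambda c: c[3] == "dmz-host" and c[4] == 80)
conn = ("tcp", "outside-host", 40000, "dmz-host", 80)
```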
[Figure: NAT between an internal and an external network. Inside systems use inside local (IL) and inside global (IG) addresses; outside systems use outside local (OL) and outside global (OG) addresses. A packet from an inside system carries Source = Inside Local, Destination = Outside Local on the internal network and Source = Inside Global, Destination = Outside Global on the external network. The return packet carries Source = Outside Local, Destination = Inside Local internally and Source = Outside Global, Destination = Inside Global externally.]
Network Address Translation (NAT) translates addresses when traffic transits between two
networks. These two networks are considered the internal and external networks from the
perspective of the NAT function.
NAT uses two different but closely related pairs of terms, inside and outside, and local and
global, to describe the systems attached to the internal and external networks and the
addressing used on them. Inside refers to the internal network and its systems; outside refers
to the external network and its systems. The address terms are defined as follows:
Local addresses are addresses that are valid on the internal network. They are used in
packets transiting the internal network.
Global addresses are addresses that are valid on the external network. They are used in
packets transiting the external network.
The packet flows shown at the bottom of the figure also illustrate how the addresses are used.
[Figure: translation between an internal network (10.0.0.0/24) and the external network 198.133.219.0/24. The inside system uses 10.0.0.83, and the web server uses 198.133.219.25. On the internal network, the outbound packet carries Source 10.0.0.83, Destination 198.133.219.25; on the external network it carries Source 198.133.219.83, Destination 198.133.219.25. The return packet carries Source 198.133.219.25, Destination 198.133.219.83 externally, translated back to Destination 10.0.0.83 internally.]
The figure shows source address translation. The example has an internal network of 10.0.0.0/24
and an external network of 198.133.219.0/24. Network translation is being used to allow a
system on a private network address space used on the internal network to communicate with a
web server that is on the public, external Internet. To perform this function, the network
translation on the firewall is configured to translate the IP address of the inside system to a
valid address in the outside network. An address with the same last octet has been allocated for
this purpose.
[Figure: source and destination translation between an internal network (10.0.0.0/24) and the external network 198.133.219.0/24. On the internal network the packet carries Source 10.0.0.83, Destination 10.0.0.25; on the external network it carries Source 198.133.219.83, Destination 198.133.219.25. The return packet is translated symmetrically: Source 198.133.219.25, Destination 198.133.219.83 externally appears as Source 10.0.0.25, Destination 10.0.0.83 internally.]
The figure shows both source and destination NAT. This example of a network address
translation has an internal network of 10.0.0.0/24 and an external network of 198.133.219.0/24.
Network translation is being used to allow two systems with IP addresses of 10.0.0.83 and
198.133.219.25 to communicate without knowing about the other network. Translation is set up
to leave the last octet of the IP address unchanged.
[Figure: PAT between an internal network (10.0.0.0/24) with clients 10.0.0.83 and 10.0.0.84, and an external network (198.133.219.0/24) with the web server 198.133.219.25. Three requests to 198.133.219.25:80 share the single outside address 198.133.219.83: 10.0.0.83:2418 becomes 198.133.219.83:2418; 10.0.0.84:2417 becomes 198.133.219.83:2417; and 10.0.0.84:2418, which conflicts with the first translation, becomes 198.133.219.83:2419.]
Port Address Translation (PAT) adds port numbers to the translation table. A typical use of
PAT is to provide network access for a large internal network while conserving addresses in the
external network. In this example, one address is used in the external network to provide access
for an internal network with a Class C worth of hosts. The packets show two different systems
generating requests to a web server.
The first request comes from the top system in the internal network. The request is translated on
the external network to use a single outside local address as the source. Because no other
connections are currently using this outside local address, the port number is maintained
through the translation.
A second request comes from the bottom system in the internal network. Again, it is translated
to use the same outside local address as the source address for these packets. No other
connection is currently using the port number in the request, so you can maintain the port
number through the translation.
The third request again comes from the bottom system in the internal network. Again, it is
translated with the single outside local address; however, this time there is a conflict with the
source port that is used by the client. This port is translated to a different port, which is unique
on the IP address being used as the outside local address.
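The port-handling behavior can be sketched as follows. This is a toy model, not the ACE implementation, and the fallback port range starting at 1024 is an invented detail of the sketch (the figure simply shows the conflicting port rewritten to the next free value):

```python
# Toy PAT: many inside (address, port) pairs share one outside address.
class Pat:
    def __init__(self, outside_ip):
        self.outside_ip = outside_ip
        self.table = {}        # (inside_ip, inside_port) -> outside port
        self.in_use = set()
        self.next_port = 1024  # invented fallback range for this sketch

    def translate(self, inside_ip, inside_port):
        key = (inside_ip, inside_port)
        if key not in self.table:
            port = inside_port
            while port in self.in_use:   # conflict: pick an unused port
                port = self.next_port
                self.next_port += 1
            self.table[key] = port       # mapping is stable for the flow
            self.in_use.add(port)
        return (self.outside_ip, self.table[key])

pat = Pat("198.133.219.83")
```

Replaying the three requests from the figure shows the first two keep their source ports, while the third is rewritten because 2418 is already in use.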
[Figure: the three main SNMP interactions between the NMS and a managed device: a GET request naming an OID, answered with data; a SET request naming an OID, answered with a response; and an unsolicited trap sent by the device.]
Simple Network Management Protocol (SNMP) provides methods for remote management of
network attached devices. The managed devices contain a software component that is referred
to as the SNMP agent. This component talks to the Network Management Station (NMS) with
SNMP.
The figure shows the major interactions that most devices utilize out of the entire protocol.
The top interaction shows the NMS retrieving management data from the managed device. This
is done by sending a GET request (of which there are several variations) to the managed device.
The GET request specifies the object ID (OID) of the management information to be retrieved.
OIDs are a hierarchical, numerical namespace used to identify individual manageable
attributes. Most NMSs use data in an MIB definition to associate OIDs with human-readable
variable names.
The middle interaction shows the NMS using a SET request to modify the operation of the
managed device. Some devices support full configuration activities through SNMP SET
commands.
The bottom interaction shows the managed device sending an unsolicited alert message called a
trap to the NMS. SNMP traps are often used to communicate error indications.
HTTP
HTTP is the most common application protocol for transferring resources on the web. HTTP
defines the format and meaning of messages exchanged between systems. There are two
categories of HTTP messages: request and response. Requests come from clients, and
responses are sent by HTTP servers.
All HTTP messages are made up of three fields:
Start line: Specifies the method or reason for the message. The method is a verb or action
word, such as GET.
Header fields: Zero or more header fields follow the start line.
Body: The body is optional and can contain any kind of data, such as web pages and image
and audio files.
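The three-field structure can be demonstrated by splitting a minimal response message (the message bytes are invented) at the blank line that ends the header section:

```python
# Split an HTTP message into start line, header fields, and body.
def split_http_message(raw: bytes):
    head, _, body = raw.partition(b"\r\n\r\n")   # blank line ends the headers
    lines = head.split(b"\r\n")
    start_line = lines[0].decode()
    headers = dict(line.decode().split(": ", 1) for line in lines[1:])
    return start_line, headers, body

msg = b"HTTP/1.1 200 OK\r\nContent-Length: 5\r\nConnection: close\r\n\r\nhello"
start, headers, body = split_http_message(msg)
```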
There are two main versions of HTTP in use: HTTP/1.0 and HTTP/1.1. This overview deals
specifically with HTTP/1.1.
HTTP is a stateless protocol; there is no awareness of previous sessions or conversations, either
on the client or on the server. The clients and servers can cache information that they receive
from the request-response process, but they do not track the previous conversations in the
HTTP protocol. The original intent for not maintaining state was to provide scalability for web
servers. If a server needed to maintain state for a large number of client connections, the use
of resources (such as memory and CPU cycles) and the time for connections would increase,
degrading both the client and the server sessions. For applications that require state to be
maintained across multiple HTTP requests, other enhancements and headers are included (such
as cookies), or other protocols and applications are used.
For more information, see http://www.ietf.org/rfc/rfc2616.txt.
http://www.cisco.com:80/US/partner/index.html
Scheme
Host
Optional Port
Host: www.cisco.com
User-Agent: Mozilla/5.0 (Windows; rv:1.8.1.6) Firefox/2.0.0.6
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
2007 Cisco Systems, Inc. All rights reserved.
The figure shows the different elements of a URL, which is a specialized form of a Uniform
Resource Identifier (URI), and where these fields are located in the HTTP request header.
Every resource (for example, an HTML page, image file, or script) on a server has a distinct name by which the client identifies it. The client requests a specific resource with a URI. URIs have
been known by many names: WWW addresses, Universal Document Identifiers, and most
often as URLs.
URIs have a simple format that identifies three pieces of information: scheme or protocol,
location or Internet address (name or IP address), and a resource (such as path, file, or
mailbox).
http://hostname/path
ftp://host/file
mailto:mailbox@domain
URIs in HTTP can represent an absolute path or a relative path. The syntax of an absolute and a
relative path contain the same information, but in a relative path some information is implied.
For example:
Absolute path, http://www.cisco.com/US/partner/index.html:
Protocol: HTTP over TCP is the protocol to use when communicating with this resource.
This usually also implies the destination port, unless explicitly cited differently from the
default port for this protocol. For example, the default port for HTTP communications is
TCP port 80, but if :8080 were added to the end of the domain name/IP address of the URI,
the client would establish a TCP connection with port 8080 instead (that is,
http://www.example.com:8080/somepath).
Host: The hostname or IP address identifies the target system to which to connect (using
the protocol already specified by the first part of the URI).
Path and/or Filename: The URI path is similar to a file path on a system (such as Windows or UNIX). HTTP URIs are formatted using the UNIX file convention /a/b/c.
Relative Path:
./example.jpg: This implies that protocol, host, and path remain unchanged, but instead of
looking for the index.html (from the absolute path example), look for the image
example.jpg in the same directory on the HTTP server. This could be expanded to
http://www.cisco.com/US/partner/example.jpg.
/swa/i/logo.gif: This implies that protocol and host remain unchanged but now look for the
image file logo.gif in the new absolute path specified relative to the host root directory.
This could be expanded to http://www.cisco.com/swa/i/logo.gif.
test/somefile.html: This implies that protocol and host remain unchanged and append the
directory test to the end of the original directory /US/partner/. This could be expanded to
http://www.cisco.com/US/partner/test/somefile.html.
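These path rules can be checked with Python's standard urllib.parse module. The following sketch, using the same cisco.com URLs as above, splits an absolute URL into its pieces and resolves the three relative forms:

```python
from urllib.parse import urlsplit, urljoin

# Break an absolute URL into its scheme, host, and path components.
base = "http://www.cisco.com/US/partner/index.html"
parts = urlsplit(base)
print(parts.scheme)    # http
print(parts.hostname)  # www.cisco.com
print(parts.path)      # /US/partner/index.html

# Resolve the three relative-path forms against the absolute URL.
print(urljoin(base, "./example.jpg"))       # http://www.cisco.com/US/partner/example.jpg
print(urljoin(base, "/swa/i/logo.gif"))     # http://www.cisco.com/swa/i/logo.gif
print(urljoin(base, "test/somefile.html"))  # http://www.cisco.com/US/partner/test/somefile.html
```

Note that urljoin applies exactly the rules described above: a leading ./ or bare filename replaces the last path segment, while a leading / restarts the path at the host root.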
Figure: the HTTP request methods: GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS, and CONNECT.
HTTP supports several different request commands, called HTTP methods. Every HTTP
request uses a method. The client sends an HTTP request to the web server. The HTTP request
method tells the server what kind of response the client wants to receive or what to do with the
request.
There are eight defined request methods in HTTP/1.1. Not all HTTP servers implement all
methods; all servers are supposed to accept GET, HEAD, and if possible, OPTIONS methods.
GET: Retrieve whatever information is identified by the request. The client requests a
resource identified by the URI. Another option that can be included with a GET request is
an If-Modified-Since message, which transforms the GET request into a conditional GET.
A conditional GET retrieves the information only if the condition is met. It is also possible
to perform a partial GET, and send a range in the request.
HEAD: The HEAD method is identical to the GET request, except that the server must not
return the message body in the response, only the HTTP header with meta-information
identical to what it would have returned to a GET request.
POST: The POST method requests the server to accept information that is associated with
the request-URI previously requested. This method is often used with forms data or for
executing Common Gateway Interface (CGI) scripts on the server.
PUT: The PUT method is similar to the POST method, but instead of just altering an
existing URI, the PUT method creates a new URI if one does not exist. With the PUT
method, the data is stored as part of the request, not as part of the URI.
DELETE: The DELETE method is used to remotely delete the resource identified in the
request URI. Authorization is required to use this method.
TRACE: The TRACE method is used to observe how client messages are altered as they pass through a proxy server (or multiple proxy servers). The request is echoed back to the client, and by using the Max-Forwards and Via headers, the client can determine how each proxy server in the path is altering the request packets.
OPTIONS: The OPTIONS method represents a request for information about the
communication options available for requests and responses for a specific request-URI.
This method allows the client to determine the options and requirements associated with a
resource, or the capabilities of a server, without implying a resource action or initiating a
resource retrieval. Responses to this request are not cacheable.
CONNECT: Converts the request connection into a transparent TCP/IP tunnel, usually to
set up Secure Sockets Layer (SSL)-encrypted communication through an unencrypted
HTTP proxy.
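To make the start-line and method structure concrete, here is a small Python sketch that formats raw HTTP/1.1 request messages. The host, path, and date values are illustrative examples, not output from any real client:

```python
def build_request(method, path, host, extra_headers=None):
    """Format a minimal HTTP/1.1 request message: start line, header fields, blank line."""
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    for name, value in (extra_headers or {}).items():
        lines.append(f"{name}: {value}")
    return "\r\n".join(lines) + "\r\n\r\n"

# GET and HEAD differ only in the method verb on the start line.
get_msg = build_request("GET", "/US/partner/index.html", "www.cisco.com")
head_msg = build_request("HEAD", "/US/partner/index.html", "www.cisco.com")

# A conditional GET adds an If-Modified-Since header to the same request.
cond_msg = build_request("GET", "/US/partner/index.html", "www.cisco.com",
                         {"If-Modified-Since": "Fri, 21 Sep 2007 01:09:00 GMT"})
print(get_msg)
```

The blank line (the final CRLF CRLF) marks the end of the header fields; a request with a body, such as a POST, would carry the body after it.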
Client Request
Host: www.cisco.com
User-Agent: Mozilla/5.0 (Windows; rv:1.8.1.6) Firefox/2.0.0.6
Accept: text/xml,application/xml,application/xhtml+xml,text/html;
q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
The boldface text in the figure is an example of request headers (except Connection: keep-alive, which is an example of a general HTTP header).
HTTP headers determine how the end system (a server, in the case of a request header) handles
an HTTP request. In HTTP/1.1, unrecognized headers should be ignored by the receiving
system. Only the Host header is required in all HTTP/1.1 requests, although some headers are mandatory depending on the request type or server response.
The following are the well-formed request headers (experimental request headers are not
included):
Accept: The Accept request-header field can be used to specify certain media types that are
acceptable for the response.
Authorization: A user agent that wants to authenticate itself with a server (usually, but not necessarily, after receiving a 401 response) does so by including an Authorization request-header field with the request. The Authorization field value consists of credentials containing the authentication information of the user agent for the realm of the resource being requested.
Client-IP: Provides the IP address of the machine on which the client is running (not
defined in RFC2616, but implemented in most client browsers).
Expect: The Expect request-header field is used to indicate that particular server behaviors
are required by the client. A server that does not understand or is unable to comply with
any of the expectation values in the Expect field of a request must respond with appropriate
error status. The server must respond with a 417 (Expectation Failed) status if any of the
expectations can not be met or, if there are other problems with the request, some other 4xx
status.
From: The From request-header field, if given, should contain an Internet e-mail address
for the person sending the client request. This header field can be used for logging purposes
and as a means for identifying the source of invalid or unwanted requests. It should not be
used as an insecure form of access protection.
Host: The Host request-header field specifies the Internet host and port number of the
resource being requested. The Host field value must represent the naming authority of the
origin server or gateway given by the original URL. A host without any trailing port
information implies the default port for the service requested (for example, 80 for an HTTP
URL). A client must include a Host header field in all HTTP/1.1 request messages.
If-Match: The If-Match request-header field is used with a method to make it conditional. A client that has one or more entities previously obtained from the resource can verify that one of those entities is current by including a list of the associated entity tags in the If-Match header field. An entity is the information transferred as the payload of a request or response, together with its meta-information.
If-Range: If a client has a partial copy of an entity in its cache and wants an up-to-date copy of the entire entity, it could use the Range request-header with a conditional GET (using either or both of If-Unmodified-Since and If-Match). However, if the condition fails because the entity has been modified, the client would then have to make a second request to obtain the entire current entity-body. The If-Range header allows a client to short-circuit the second request. Its meaning is: if the entity is unchanged, send me the parts that I am missing; otherwise, send me the entire new entity.
Proxy-Authorization: The Proxy-Authorization request-header field allows the client to identify itself (or its user) to a proxy that requires authentication. The Proxy-Authorization field value consists of credentials containing the authentication information of the user agent for the proxy and/or realm of the resource being requested.
Range: HTTP retrieval requests using conditional or unconditional GET methods can
request one or more subranges of the entity, instead of the entire entity, using the Range
request header, which applies to the entity returned as the result of the request; byte range
specifications in HTTP apply to the sequence of bytes in the entity-body (not necessarily
the same as the message-body). A byte range operation might specify a single range of
bytes, or a set of ranges within a single entity.
Referer: The Referer request-header field allows the client to specify, for the benefit of the
server, the address (URI) of the resource from which the request-URI was obtained (that is,
the referrer, although the header field is misspelled). The Referer request-header allows a
server to generate lists of back-links to resources for interest, logging, optimized caching,
and so on. It also allows obsolete or mistyped links to be traced for maintenance. The
Referer field must not be sent if the request-URI was obtained from a source that does not
have its own URI, such as input from the user keyboard.
User-Agent: The User-Agent request-header field contains information about the user
agent (browser type and version, and system OS and version) originating the request. This
is for statistical purposes, for the tracing of protocol violations, and for automated
recognition of user agents for the sake of tailoring responses to avoid particular user agent
limitations. User agents should include this field with requests.
Client Request
GET /swa/i/logo.gif HTTP/1.1
Host: www.cisco.com
Server Response
HTTP/1.1 200 OK
Content-type: image/gif
Content-length: 4523
The figure shows the response code received from the server when the client requests the URI http://www.cisco.com/swa/i/logo.gif. The server sends the response code 200 (OK) and returns the file along with its content type and content length.
Every HTTP response message (from the server) contains a status code, a three-digit code that
tells the client whether its request succeeded, or if it was unsuccessful, why it was unsuccessful,
and offers possible remedies, either browser initiated or human initiated (like answering a
username/password challenge):
1XX: Informational
100 = Continue
2XX: Success
200 = OK
201 = Created
202 = Accepted
204 = No Content
3XX: Redirection
306 = (Unused)
4XX: Client Error
401 = Unauthorized
403 = Forbidden
409 = Conflict
410 = Gone
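The first digit of the status code carries the category, so a client can branch on it without knowing every individual code. A minimal Python sketch of that mapping:

```python
# Map a three-digit HTTP status code to its category, determined by the
# first digit (1xx informational through 5xx server error).
CLASSES = {
    1: "Informational",
    2: "Success",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
}

def status_class(code):
    """Return the category name for a three-digit HTTP status code."""
    return CLASSES[code // 100]

print(status_class(200))  # Success
print(status_class(304))  # Redirection
print(status_class(404))  # Client Error
```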
HTTP Cookies
Client Request
GET /index.html HTTP/1.1
Host: www.cisco.com
Server Response
HTTP/1.1 200 OK
Set-cookie: id=1234; domain=cisco.com
Client Request
GET /index.html HTTP/1.1
Host: www.cisco.com
Cookie: id=1234
The figure shows how cookies are set by a server and how they are returned by the client on subsequent visits.
Because HTTP is a stateless protocol, and the transactions are composed of multiple short-lived
HTTP sessions*, user persistence (or user identification) must be maintained by something
other than the HTTP protocol. There are many ways for the client to identify users to the server,
such as HTTP headers, client IP address tracking, user login, fat URLs, and cookies. HTTP
cookies have the fewest limitations of those listed to identify a user from a previous visit or
from an action earlier in the current session.
HTTP cookies are created by the server and sent to the client to store. A cookie can store anything the server wants it to store, because the server writes the cookie information before sending it along with the response. Common items stored in a cookie are:
User ID
User preferences
There are two versions of cookies:
Version 0, which was created by Netscape and uses the Set-Cookie attribute
Version 1, which comes from RFC2965 and uses the Set-Cookie2 attribute
Cookies are sent to the user agent (client browser) by the server. The server can write whatever
information it needs in the cookie and then retrieve that information automatically the next time
the user agent visits the web page. Cookies can be specific to a domain, to a host, or to a
directory under a domain or host. The cookie is valid and requestable only for the domain that
created it. This means that the server at www.cisco.com cannot request the cookie for
www.yourbankwithlotsofmoney.com. Multiple cookies are also allowed for a single domain;
restrictions on the number of cookies per domain are configurable in the client browser.
*Even with the advent of HTTP/1.1's persistent and pipelined connections, HTTP state is maintained only through the request-response exchange. When a user clicks an object on the page, the HTTP server knows too little about the user to tie the user to a particular web session or to an event that happened previously in the TCP session.
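Python's standard http.cookies module can illustrate both halves of the exchange. This sketch parses the Set-Cookie value from the earlier figure and rebuilds the Cookie header a client would send on its next request; the id=1234 value is the example's, not a real cookie:

```python
from http.cookies import SimpleCookie

# Parse the Set-Cookie value a server might send.
jar = SimpleCookie()
jar.load("id=1234; domain=cisco.com; path=/")
print(jar["id"].value)      # 1234
print(jar["id"]["domain"])  # cisco.com

# Build the Cookie header the client would return on a subsequent request.
cookie_header = "; ".join(f"{name}={morsel.value}" for name, morsel in jar.items())
print(cookie_header)        # id=1234
```

Note that attributes such as domain and path govern when the client sends the cookie back; only the name=value pair itself travels in the Cookie request header.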
Server Response
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=UTF-8
Set-Cookie: CP_GUTC=10.0.0.1.1191547089294320; path=/; expires=Tue, 28-Sep-32 01:18:09 GMT; domain=.cisco.com
Server: Apache/2.0
Transfer-Encoding: chunked
Date: Fri, 21 Sep 2007 01:09:00 GMT
Connection: close
HTTP response header fields provide information about the server and the response sent to the client. Some are required (such as Content-Type:) and some are optional and provide additional information to the client (such as Server:).
Here is a list of many of the common response headers:
Accept-Ranges: The type of ranges that a server accepts for this resource.
Connection: Options that are specified for a particular connection and must not be
communicated by proxies over further connections.
Content-Base: The base URL for resolving relative URLs within the body.
Content-Language: The natural language that is best used to understand the body.
Content-Range: The range of bytes that this entity represents from the entire resource.
Content-Transfer-Encoding: The MIME encoding transformation applied to the body for transfer.
Expires: The date and time at which this entity will no longer be valid and will need to be
fetched from the original source.
From: E-mail address for the human user who controls the requesting user agent.
Last-Modified: The last date and time when this entity changed.
Max-Forwards: Number of proxies or gateways that can forward the request to the next inbound server.
MIME-Version: Version of the MIME protocol that was used to construct the message.
Pragma: Implementation-specific directives that might apply to any recipient along the
request-response chain.
Proxy-Authorization: Header that is used to identify the users to a proxy that requires
authentication.
Public: A list of request methods the server supports for its resources.
Raw-Headers: All the headers returned by the server. Each header is terminated by \0. An
additional \0 terminates the list of headers.
Raw-Headers-CRLF: All the headers returned by the server. Each header is separated by
a carriage return/line feed (CR/LF) sequence.
Referer: URI of the resource where the requested URI was obtained.
Set-Cookie: Not a true security header, but it has security implications; used to set a token
on the client side that the server can use to identify the client.
Status-Text: Any additional text returned by the server on the response line.
Title: For HTML documents, the title as given by the HTML document source.
Transfer-Encoding: Type of transformation that has been applied to the message body so
it can be safely transferred between the sender and recipient.
URI: Some or all of the URIs by which the request-URI resource can be identified.
Vary: A list of other headers that the server looks at and that might cause the response to
vary; that is, a list of headers the server looks at to pick which is the best version of a
resource to send the client.
Warning: A more detailed warning message than what is in the reason phrase.
Server Response
HTTP/1.1 200 OK
Content-type: text/html
OR
Content-type: audio/x-wav
OR
Content-type: video/mpeg
Content-length: 4523
HTTP has become the workhorse of the Internet. When responding to a request, HTTP can
send almost any kind of data in the body of a message. This can be a simple HTML page, a
picture file in the form of a JPEG or GIF file, or an audio or video file. For the client to accept
only files that it can understand, the client should send an Accept: request header in the request
message. When responding to a request, the server must also let the client know what type of
data is in the body of the message using the Content-Type: response header.
Content-Type is defined using MIME (Multipurpose Internet Mail Extensions), which was
originally created to handle multimedia e-mail. MIME media type names are formatted in the
general media type, like text or image or audio, and then followed by a / and then the specific
type of encoding or format, like html or jpeg or x-wav. The preceding examples would be
expressed, respectively, as text/html, image/jpeg, and audio/x-wav. From this information the
browser can decode the response or hand it off to the appropriate application.
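Most HTTP servers derive the Content-Type from the file extension of the requested resource. Python's standard mimetypes module shows the general/specific split described above for a few sample filenames:

```python
import mimetypes

# Guess the Content-Type a server would advertise for each resource,
# based on the file extension.
for name in ("index.html", "logo.gif", "photo.jpeg"):
    ctype, _encoding = mimetypes.guess_type(name)
    print(name, "->", ctype)
# index.html -> text/html
# logo.gif -> image/gif
# photo.jpeg -> image/jpeg
```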
HTTP Compression
Both HTTP/1.0 and HTTP/1.1 support compression, but compression was not well supported in HTTP/1.0 implementations. HTTP/1.1 can compress the body portion of a response message. The client communicates its decompression capabilities through the Accept-Encoding request header. If the server is configured to compress the body of a response, the server sends the compressed response along with the Content-Encoding response header set to the appropriate encoding algorithm. The standard IANA compression algorithms are gzip, compress, and deflate.
The client can rank the compression algorithms as more or less preferred using quality (q) values in the Accept-Encoding header.
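The negotiation can be sketched in a few lines of Python: the server checks the client's Accept-Encoding list and applies gzip only when the client advertised it. The HTML body below is a placeholder, and the parsing deliberately ignores q-values to keep the sketch short:

```python
import gzip

def compress_if_accepted(body, accept_encoding):
    """Return (body, content_encoding), honoring the client's Accept-Encoding list.

    A server-side sketch: compress with gzip only when the client listed it.
    """
    accepted = [token.split(";")[0].strip() for token in accept_encoding.split(",")]
    if "gzip" in accepted:
        return gzip.compress(body), "gzip"
    return body, "identity"

page = b"<html><body>Hello</body></html>"
body, encoding = compress_if_accepted(page, "gzip,deflate")
print(encoding)  # gzip
assert gzip.decompress(body) == page  # the compressed body round-trips
```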
Figure: a web page assembled from HTML content drawn from multiple sources.
When you visit a website, the content that appears in your web browser looks as if it is coming
from one source, but because of the way HTTP works, the content can come from anywhere.
You have seen how a URI uniquely identifies where to find a resource. When you request or
GET a web page, the objects can come from myriad sources. By using the HREF to a URI, the
browser can request information wherever the data resides.
Content on a web page can be either static or dynamic. Static content is updated only when it is
changed by an administrator maintaining it. Dynamic content can change each time a user visits
a page, or it can be configured by user preferences, as with a user home page (such as
igoogle.com or my.yahoo.com). Dynamic content can also be a sports score feed or real-time
financial information. Often web pages are made up of both static and dynamic content.
Dynamic content is commonly created using JavaScript, Cascading Style Sheets (CSS), and the Document Object Model (DOM), as well as with several browser-specific technologies.
Browser Behavior
Client Request
GET /index.html HTTP/1.1
Host: www.cisco.com
Server Response
HTTP/1.1 200 OK
Client/Server
Because HTTP messaging depends on both a client and a server, the behavior of both clients and servers must be taken into account. To take advantage of HTTP/1.1 features, both must support those features.
User Agent
Although many user agents support HTTP 1.1, not all features are fully supported or supported
at all in the individual software.
Currently most browsers either turn off HTTP pipelining by default or do not support it. This might be because some web servers do not properly handle HTTP pipelined requests. The Firefox and Opera web browsers support pipelining, and when pipelining is enabled, these browsers use built-in heuristics to detect whether the web server supports pipelining.
Currently most browsers support HTTP persistent connections. The RFC (RFC2616) states that clients that use persistent connections "SHOULD limit the number of simultaneous connections that they maintain to a given server." As a result of the vagueness of the RFC, each browser implements HTTP persistent connections differently. Internet Explorer (since 5.0) supports two persistent connections per server with a 60-second inactivity timeout. Other browsers allow the user to configure this feature.
Server Caveats
Implementing pipelining in web servers is a relatively simple matter of making sure that
network buffers are not discarded between requests. For that reason, most modern web servers
handle pipelining well. Exceptions include Microsoft Internet Information Services (IIS) 4 and
5 and some versions of Apache.
Introducing ACE
This topic describes the features and architecture of the Cisco 4710 Application Control Engine
appliance.
Figure: the Application Control Engine applies integrated Layer 4 and Layer 7 rules.
The Cisco ACE 4710 appliance integrates content switching, SSL offload, web application
acceleration, and data center security features on one device. This provides the same
functionality as a collection of appliances with reduced latency through a single point of TCP
termination. Centralized functionality also allows integrated rules configuration and
management along with simplified design through reduced numbers of VLANs and IP
addresses to manage. The ACE 4710 appliance is the newest product line in the Cisco
Application Networking Services (ANS) portfolio.
Note
The ACE 4710 appliance provides data center security features, but it is not a replacement for perimeter firewalls. The ACE is often deployed in conjunction with upstream firewalls as part of a defense-in-depth security strategy.
Back Panel
The front panel of the ACE 4710 appliance has a console port, a USB port, a power button, and a nonmaskable interrupt button. The console port provides console access to the onboard command-line interface (CLI) and is the only mechanism that can be used to access the ACE 4710 appliance if it is in the internal ROM monitor mode. The USB port currently is not supported by the ACE software. The back panel of the ACE 4710 appliance has a PS/2 keyboard and mouse connection (currently unused), a 9-pin serial port for console connections, a VGA port, two Ethernet ports (currently unused), and four Gigabit Ethernet ports.
Figure: the ACE 4710 hardware architecture. An x86 control plane (with serial port) connects over a PCI-X bus to the data plane and its four 1 Gb Ethernet ports.
The figure shows a functional diagram detailing the hardware architecture of the ACE 4710
appliance.
Control plane (x86 architecture):
Routing
Syslog
SNMP
Virtualization
High availability
1 GB Compact Flash:
Configurations
SSL certificates/keys
Boot image
6 GB RAM
Data plane:
Connection/session management
TCP proxy
HTTP parsing
Fastpath
SLB
Inspection
IP fragmentation/reassembly
ACL processing
SSL acceleration
HTTP compression
2 GB RAM
10/100/1000 autosensing ports
PortChannels
The figure lists some new features of the ACE 4710 appliance.
Figure: the Cisco ANS product portfolio, including XML switching (ACE XML Gateway and Manager), ACE modules (4, 8, and 16 Gb/s; up to 64 Gb/s multimodule), the ACE 4710 appliance (1 and 2 Gb/s; up to 12 Gb/s), the ACE GSS (20K DNS RPS), the ACE Networking Manager, and ACE AppScope.
Figure: CSS versus ACE 4710 appliance capacity. CSS: up to 4 Gb/s throughput, up to 2 Gb/s throughput, up to 1 M concurrent connections, up to ~44 K sustained Layer 4 connections per second, and up to 5.6 K SSL transactions per second. ACE 4710: 1 M concurrent connections, ~120 K sustained Layer 4 connections per second (roughly 3x), and 7.5 K SSL transactions per second.
The ACE 4710 appliance provides enhanced functionality and capacity compared to the
Content Services Switch (CSS). The figure shows a comparison of several key attributes.
Although some features appear equal or higher on the CSS, the ACE 4710 appliance is a fixed
device, and features are limited only by the licenses purchased. For example, if a CSS11506
were set up for maximum compression (1.1 Gb/s), the number of I/O and session modules
allowed on the chassis would be limited, reducing the throughput and concurrent connections.
Figure: CSS versus ACE management and licensing. The ACE offers virtualized configuration per context and an IOS-like CLI (session or SSH/Telnet). The CSS requires no software licenses (except GSLB, where GSS is recommended), while the ACE uses software licenses for features or additional performance. The CSS uses a per-box active-standby redundancy model; the ACE supports active-active operation (per-context active-standby).
Figure: application delivery feature comparison across CSS, CSM, ACE SM, and ACE AP, covering configuration scope (per VIP or per box, per box, or per context), TCP termination for generic protocols (fully supported or limited), and HTTP/1.1 pipelining (a future phase on some platforms).
The figure shows application delivery features available with CSS, CSM, the ACE service
module (SM), and the ACE appliance (AP).
Figure: additional application delivery feature comparison across CSM, ACE SM, and ACE AP, covering Layer 7 RDP-aware balancing for Windows Terminal Services (future phase), Layer 7 SIP-aware load balancing (future phase), TCL scripting, VRF-aware routing, and HTTP compression (proprietary on CSS; future phase on some platforms).
The figure shows more application delivery features available with CSS, CSM, the ACE
service module (SM), and the ACE appliance (AP).
Security Features
Figure: security feature comparison across CSS, CSM, ACE SM, and ACE AP, including uRPF check (all four platforms), firewall load balancing (CSM recommended), and SYN cookies (at rates of 300K and 4M SYNs per second; a future phase on the appliance).
The figure shows security features available with CSS, CSM, the ACE service module (SM),
and the ACE appliance (AP).
The ACE service module supports the following protocol inspections: HTTP, FTP, RTSP, ICMP, DNS, H.323, SCCP, and SIP.
The ACE appliance supports the following protocol inspections: HTTP, FTP, RTSP, ICMP, and DNS.
Figure: network management feature comparison across CSS, CSM, ACE SM, and ACE AP. Management interfaces include WebNS on the CSS, the supervisor IOS and CVDM on the CSM, HSE, an onboard IOS-like CLI, and ANM (with some items marked future phase).
The figure shows network management features available with CSS, CSM, the ACE service
module (ACE SM), and the ACE appliance (ACE AP).
SSL Features
Figure: SSL feature comparison across CSS, SSLM, CSM-S, ACE SM, and ACE AP, covering export cipher suites (SSLM 3.1), AES cipher suites, client authentication (future phase on some platforms), URL rewrite (HREF http:// to https://; future phase on some platforms), SSL termination, and back-end SSL.
The figure shows SSL features available with CSS, CSM, the ACE service module (ACE SM),
and the ACE appliance (ACE AP).
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
The IP protocol stack includes data link, IP, TCP, UDP,
and application protocols.
IP applications processed by the ACE correspond to
OSI Layers 5 to 7.
HTTP is a stateless protocol that is part of the TCP/IP
suite. HTTP comprises response and request
messages that can be anything from HTML to image
files to streaming movies.
The ACE 4710 appliance is built specifically to process
IP applications.
Lesson 2
Deploying ACE
Overview
In this lesson, you will learn how to describe the configuration tasks necessary to successfully
deploy an ACE appliance.
Objectives
Upon completing this lesson, you will be able to describe the configuration tasks necessary to
successfully deploy an ACE appliance. This includes being able to meet these objectives:
Describe possible deployment topologies including routed, bridge, and one-arm modes, and
direct server return
Explain the process of granting access to authorized users for management tasks
Figure: a basic topology with a web client on the client VLAN and a web server on the server VLAN, joined through the ACE appliance.
The figure shows a basic network where a Cisco 4710 Application Control Engine (ACE)
appliance is physically connected to a router and server network using Gigabit Ethernet,
PortChannels, and VLAN trunking.
ACE VLANs
Figure: the ACE appliance connects the client VLAN (toward the web client) and the server VLAN (toward the web server).
In this example, the ACE connects to the network over all four Gigabit Ethernet links logically
bonded together using a PortChannel link. Two VLANs are used, one for a connection to
clients (a web client in the figure), and the other for a connection to servers (a web server in the
figure). Diagramming the individual VLAN connections is often necessary to completely
understand and document a network topology. As a result, the ACE appliance is shown
diagrammed as a standalone component of the network in much of this course.
First-Time Bootup
Figure: the power button and serial port used during first-time bootup, along with the boot dialog.
The figure shows the pertinent ACE appliance components needed to start the initial setup
process and the boot dialog.
Note
Accessing the ACE for the first time requires using the console port, a rollover cable, an RJ-45-to-serial adapter, and a terminal emulation application with the settings of 9600 b/s, 8 data bits, no parity, and 1 stop bit, to access the command-line interface (CLI). Only the Admin context is accessible through the console port; all other contexts can be reached through a Telnet or Secure Shell (SSH) remote access session.
Note
When you boot the ACE for the first time and the appliance does not detect a startup-configuration file, a setup script guides you through the process of configuring a management VLAN on the ACE through one of its Gigabit Ethernet ports. The primary intent of the setup script is to simplify connectivity to the Device Manager GUI.
Setup Dialog
The figure shows the Gigabit Ethernet port numbering and the initial setup dialog.
After you specify a Gigabit Ethernet port, a port mode, and a management VLAN, the setup
script automatically applies the following default configuration:
Extended IP access list that allows IP traffic originating from any host address.
Traffic classification (class map and policy map) created for management protocols HTTP,
HTTPS, ICMP, SSH, Telnet, and XML-HTTPS. HTTPS is dedicated for connectivity with
the Device Manager GUI.
VLAN interface configured on the ACE and a policy map assigned to the VLAN interface.
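The generated configuration resembles the following hand-written sketch. The access-list, class-map, and policy-map names, the VLAN number, and the IP address here are illustrative, not the literal output of the setup script:

```
access-list ALL extended permit ip any any

class-map type management match-any REMOTE-MGMT
  match protocol https any
  match protocol ssh any
  match protocol telnet any
  match protocol icmp any
  match protocol http any
  match protocol xml-https any

policy-map type management first-match REMOTE-ACCESS
  class REMOTE-MGMT
    permit

interface vlan 1000
  ip address 172.19.110.29 255.255.255.0
  access-group input ALL
  service-policy input REMOTE-ACCESS
  no shutdown
```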
Figure: the Device Manager GUI, which includes an extensive help library.
After completing the initial setup script, the ACE appliance should be accessible via the
network (using Telnet, SSH, HTTPS, and SNMP). To access the ACE device manager GUI,
use a web browser to log in. Enter the secure HTTP address of your ACE (the VLAN IP
address configured during the setup script) in the browser address field. For example, in the
example in the figure you would use https://172.19.110.29/.
Note
Logging into http://172.19.110.29/ logs you into the System Info screen.
Next, you may be prompted to accept and install a certificate from Cisco Systems, Inc.
This should bring you to the ACE 4710 Device Manager login screen. The initial username
and password that you use are admin and admin.
Click Enter or press the Enter key after completing the password to enter the ACE Device
Manager GUI.
Note
The admin password can be changed, but not the username admin.
GUI Overview
The GUI is organized in a hierarchical structure. The first highlighted box (1)
shows the configuration location in the GUI. In this example, the user is in the
Config > Virtual Contexts > Expert > Class Map section.
Network Topologies
This topic describes possible deployment topologies including routed, bridge, and one-arm
modes, and direct server return.
(Figure: bridged mode — client VLAN 10 and server VLAN 20 bridged within subnet A.)
The ACE can be configured in bridge mode. In this mode, the client and server VLANs are part
of the same IP subnet, as shown in the figure. The ACE uses an Address Resolution Protocol
(ARP) table to track which VLAN contains what physical devices.
In the figure, VLAN 10 is used as the client-side VLAN, and VLAN 20 is the server-side
VLAN. The same IP subnet is used on both VLANs. The physical port attached to the upstream
router is assigned to VLAN 10. Physical ports connected to the servers are assigned to VLAN
20. The servers in a bridge mode environment are configured to use the IP address of the
upstream router interface as their default gateway.
(Figure: routed mode — client VLAN 10 in subnet A, server VLAN 20 in subnet B.)
The ACE can be configured in routed mode. In this mode, the client and server VLANs are part
of different IP subnets, as shown in the figure.
In the figure, VLAN 10 is configured as the client-side VLAN, and VLAN 20 is the server-side
VLAN. Different IP subnets are associated with each VLAN. The physical port attached to the
upstream router is assigned to VLAN 10. Physical ports connected to the servers are assigned
to VLAN 20. The servers in a routed mode environment are configured to use the IP address of
the ACE as their default gateway.
(Figure: one-arm mode — the ACE attaches to the router on VLAN 10 in subnet A; the servers reside on VLAN 20 in subnet B.)
The one-arm mode removes the ACE from a position directly in the transit path for all traffic to
the server farms. This configuration has the advantage that the ACE does not have to process
traffic that is not affected by ACE features. In the figure, VLAN 10 is used for traffic between
the ACE and the router, and VLAN 20 is used for traffic to the server farms. A VLAN 10
interface is configured on the router and an IP address from subnet A is configured on the ACE.
Additional IP addresses from subnet A are used to configure the virtual server IP addresses. A
VLAN 20 interface is configured on the router and is used by the servers as their default
gateway.
Note
Return traffic that the servers generate in response to load-balanced requests must still flow
through the ACE for full functionality. Getting this traffic to flow through the ACE is more
complicated than with an inline configuration. There are two ways to address this situation:
SNAT
SNAT is configured by creating a pool of IP addresses. Client IP addresses are translated to IP
addresses from the pool, and the translated address is used as the source address in the packet
that is sent to the server.
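A minimal SNAT sketch for the one-arm topology, assuming the ACE's single interface is VLAN 10 and that addresses .100 through .110 in subnet A are free for the pool (the pool ID, policy and class names, and addresses are illustrative, and the VIP-WEB class and WEB-FARM load-balancing policy are assumed to be defined elsewhere):

```
interface vlan 10
  nat-pool 1 10.10.10.100 10.10.10.110 netmask 255.255.255.0 pat

policy-map multi-match LB-POLICY
  class VIP-WEB
    loadbalance vip inservice
    loadbalance policy WEB-FARM
    nat dynamic 1 vlan 10
```

Because the servers see a pool address as the source, their responses route back toward the ACE without any changes on the router.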
Deploying ACE
73
Policy-Based Routing
Policy-based routing (PBR) is a router feature available on Cisco IOS-based routers. PBR
allows the router to be configured to select a next hop for a packet based on a configured
policy. This policy overrides the routing decision that would have been made by consulting the
routing database. A routing policy is attached to the ingress interface on the router. Access lists
can be used to limit the traffic to which the policy is applied. For example, web responses being
sent to clients can be load balanced and redirected via policy-based routing, while SNMP
responses from the servers are routed normally.
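On a Cisco IOS router, the PBR arrangement described above might look like the following sketch. The access list, route-map name, subnets, and the ACE one-arm interface address are illustrative:

```
! Match return traffic from the web servers (responses sourced from TCP port 80)
access-list 101 permit tcp 10.20.20.0 0.0.0.255 eq 80 any

route-map ACE-RETURN permit 10
 match ip address 101
 set ip next-hop 10.10.10.5

! Apply to the router interface that is the servers' default gateway
interface Vlan20
 ip policy route-map ACE-RETURN
```

Traffic that does not match access list 101 (for example, SNMP responses) bypasses the route map and is routed normally.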
(Figure: one-arm mode traffic flow between the client, router, ACE VIP, and server, with numbered steps 1 through 5.)
Traffic flow for load-balanced requests is shown in the figure. Packets are handled as follows:
1. Traffic from the client to the virtual IP (VIP) is routed normally by the router.
2. Traffic from the ACE to the server is routed normally by the router. If SNAT is used, the
source IP address is in the client NAT pool. Otherwise, the source IP address remains the
client IP address.
3. Traffic from the server is returned to the router because the router is the server default
gateway.
4. If SNAT is used, the destination IP address in the server response is routed normally to the
ACE. If SNAT is not used, PBR must be used on the router interface that is used as the
server default gateway. The policies configured must match any traffic being sent in
response to a load-balanced request. The IP address specified for the ACE is set as the
next-hop address by PBR.
5. Traffic from the ACE to the client is routed normally by the router. If SNAT is used, the
ACE has translated the destination IP address from the NAT pool IP address to the client IP
address. If PBR is used, the ACE does not need to modify the destination IP address
because the client IP address is already in the packet.
(Figure: direct server return — the ACE and the servers share subnet B on the same VLAN.)
A variation of one-arm mode is a direct server return. The figure shows the architecture of this
variation.
The ACE and the servers are placed in the same VLAN and IP subnet. An interface on that
VLAN is defined on the router and is the default gateway for the ACE and the servers. NAT is
turned off for the server destination address. Return traffic does not flow through the ACE but
returns directly to the client.
The advantage of a direct server return is that web servers can return higher-bandwidth traffic
than can be handled by the ACE. Because the return traffic is not processed by the ACE, the
following restrictions apply:
TCP termination is not possible. This restriction limits load balancing to Layer 4.
(Figure: direct server return traffic flow — the server is configured with a loopback IP equal to the VIP.)
Traffic flow for load-balanced requests is shown in the figure. Packets are handled as follows:
1. Incoming client requests are routed to the server VLAN. The packet is switched to the
ACE.
2. The ACE rewrites the Layer 2 destination MAC address and returns the packet to the
switch processor. The packet is switched to the server. The server uses a loopback interface
configured with the VIP address so that the server accepts a packet destined for the VIP.
3. The server responds directly to the client. This traffic is routed normally because the router
is the default gateway for the server.
When no more traffic is generated by the client on this TCP connection, the connection goes
idle. After the idle timeout, the ACE removes the connection from its session table.
Mixed Modes
The ACE is capable of handling multiple pairs of VLANs and mixed modes. The figure shows
one ACE handling several VLANs. The following mode configurations are possible:
Subnet C on VLAN 102 routed to Subnet D on VLAN 203 or Subnet E on VLAN 204
Note
The bridging versus routing mode selection is not an appliance-wide configuration option.
Rather, it is driven by the interface configuration within the ACE appliance.
After completing the initial setup script, you can configure further network changes from the
GUI or from the CLI. In the GUI, go to Config > Virtual Contexts > Network.
Note
PortChannel interfaces and Gigabit Ethernet interfaces can be configured only in the Admin
context.
Gigabit interfaces can be bundled into PortChannel interfaces to make them appear as one link.
Gigabit Ethernet and Port Channel interfaces can be turned into trunk interfaces so that multiple
VLANs can be bundled on the same link.
Minimal required configuration:
4710/Admin(config)#interface port-channel 255
Configures the port-channel interface for fault tolerance using a dedicated fault-tolerant (FT)
VLAN for communication between the members of an FT group.
4710/Admin(config-if)# port-channel load-balance src-dst-ip
Sets the load-distribution method among the ports in the EtherChannel bundle, for example,
to configure an EtherChannel to balance the traffic load across the links using source or
destination IP addresses
Deploying ACE
79
4710/Admin(config-if)# switchport trunk allowed vlan 101,201,250-260
Specifies the VLANs, including VLAN 101, that are allowed on the port channel when it is
configured as a trunk
Virtualization
This topic describes the use of multiple contexts.
(Figure: a traditional device — single configuration file, single routing table, limited RBAC, limited resource allocation — compared with ACE contexts, each allocated a percentage of the hardware, such as 25 percent.)
The ACE supports the creation of virtual ACE images called contexts. Each context has its own
configuration file and operational data, providing complete isolation from other contexts on
both the control and data levels. Hardware resources are shared among the contexts on a
percentage basis.
Multi-tier Applications
(Figure: on the left, a typical multi-tier design with firewalls and load balancers between the enterprise network, front-end servers, application servers, and database servers; on the right, one ACE providing application infrastructure control and application security through FE, APP, and DB virtual partitions.)
2007 Cisco Systems, Inc. All rights reserved.
One use of ACE contexts is to provide application controls at multiple levels of a multi-tier
application architecture. On the left in the figure is a typical multi-tier architecture with front-end web servers, application or middleware servers, and back-end database servers. Typically,
load-balancing and firewall services are required between layers. Each layer can be
implemented using an ACE context, which maintains separate data flows and security controls
while minimizing the number of devices to be managed.
(Figure: an enterprise network with separate standalone load balancers, one per application, App A through App D.)
Many enterprise networks add load-balanced applications over time. Given various
organizational or capacity issues, these applications are often implemented with standalone
server farms and load-balancing devices. Adding a new application in this mode requires the
purchase of at least one new load balancer.
(Figure: two ACE appliances, each hosting multiple virtual partitions — one per application, App A through App F.)
Multiple ACE contexts can be used to provide the same load-balancing functions needed in the
preceding situation. Additional applications can be added without the purchase of additional
hardware if capacity is available.
Multiple Contexts
(Figure: one physical device hosting an Admin context and Contexts 1 through 3; the Admin context holds the context definitions and resource allocation, and a management station authenticates through AAA.)
Network resources can be dedicated to a single context or shared between contexts, as shown in
the figure.
By default, a context named Admin is created by the ACE. This context cannot be removed or
renamed. Additional contexts, and the resources to be allocated to each context, are defined in
the configuration of the Admin context.
The number of contexts that can be configured is controlled by licensing on the ACE. The base
code allows five contexts to be configured, and licenses are available that expand the
virtualization possible to 250 contexts. The Admin context does not count toward the licensed
limit on the number of contexts.
Configuring Contexts
ACE-AP/Admin# show vlan
Vlans configured on the physical port(s)
vlan33 vlan110 vlan231-238 vlan331-338 vlan431-438
ACE-AP/Admin# config
Enter configuration commands, one per line. End with CNTL/Z.
ACE-AP/Admin(config)# context
ACE-AP/Admin(config-context)#
ACE-AP/Admin(config-context)#
ACE-AP/Admin(config-context)#
ACE-AP/Admin(config)# exit
The show vlan command can be used to verify the VLANs that have been connected to the
ACE over the Gigabit Ethernet ports.
Contexts are defined in the ACE configuration mode, as shown in the figure. The context
context_name command creates a context and gives it a name. Individual VLANs are then
allocated to each context with the allocate-interface vlan vlan_list command.
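Put together, a complete context definition might look like the following sketch. The context name and VLAN ranges are illustrative:

```
ACE-AP/Admin(config)# context lab
ACE-AP/Admin(config-context)# allocate-interface vlan 110
ACE-AP/Admin(config-context)# allocate-interface vlan 231-238
ACE-AP/Admin(config-context)# exit
```

Only VLANs that have been allocated this way are visible for interface configuration inside the context.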
The figure shows configuration of a context using the ACE appliance device manager.
The GUI configuration requires the name of the user-defined management policy map, VLAN
interface, IP address/netmask, and allowed protocols when creating a context.
The show context context_name and show running-config commands can be used to verify
the configuration of ACE contexts.
(Figure: the Device Manager configuration screen — configuration status is shown in one area, and configuration changes are made in another.)
Resource Management
This topic explains the resource management controls available on the ACE.
Resource Control
Per-context control:
Resource levels for each context
Support for oversubscription
Rate-based resources: bandwidth, SSL bandwidth, data connections, management connections, SSL connections
Memory-based resources: access lists, regular expressions, xlates, sticky entries
ACE hardware resources are allocated to individual contexts under the control of resource-level
controls configured in the Admin context. The resources that can be managed fall into two
categories and are listed in the figure. The rate-based resources are amounts of data or number
of events per second. The memory-based resources control the storage for different kinds of
state and configuration objects, which are stored in memory.
Note
The memory available on an appliance-wide basis for each type of state and configuration
data is preallocated.
Resource Classes
(Figure: multiple contexts grouped into resource classes.)
Resource allocation controls are defined in resource classes. Each context is then assigned to a
single resource class. If no resource classes are defined, every context is a member of a
resource class named default, which has unlimited access to system resources. A maximum of
100 resource classes can be configured.
(Figure: the minimum allocation is a guarantee; the maximum is either unlimited or equal to the minimum.)
Individual resource allocation controls specify a minimum and a maximum resource allocation,
which is configured as a percentage of overall system resources. Minimum resource allocations
constitute a guaranteed resource allocation. Maximum resource allocations can either be
specified as unlimited or as equal to the minimum.
The resource allocations defined in a resource class are applied to each context in the resource
class. For example, a resource class that specifies a 20 percent minimum resource allocation
and that has four contexts results in 80 percent of system resources allocated.
Resource Oversubscription
Defining a resource class with a maximum resource allocation of unlimited allows resources to
be oversubscribed. Each context in the resource class receives its guaranteed minimum, but
allocations above the minimum come from the global pool of resources and are not guaranteed.
Allocation requests in the global pool are handled on a first-come, first-served basis. The figure
shows four contexts, all of which are competing for resources in the global pool to satisfy
requirements above their guaranteed minimums.
Resource classes are defined with the resource-class resource_class_name command. Within
the resource-class definition, limit-resource commands are used to define the resource controls
for each type of resource. Contexts are placed in a resource class by configuring the member
resource_class_name command in the context configuration submode.
In the configuration shown in the figure, a resource class named testing, which guarantees a
minimum of 20 percent of all system resource types with an unlimited maximum, is defined.
The IT lab context is then configured to be a member of the testing resource class.
The full syntax of the limit-resource command is as follows:
limit-resource {acl-memory | all | buffer {syslog}| conc-connections |
mgmt-connections | proxy-connections | rate {bandwidth | connections |
inspect-conn | mac-miss | mgmt-traffic | ssl-connections | syslog} |
regexp | sticky | xlates} {minimum percentage} {maximum {equal-to-min
| unlimited}}
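The configuration that the figure describes might look like the following sketch. The resource class and context names are illustrative:

```
ACE-AP/Admin(config)# resource-class testing
ACE-AP/Admin(config-resource)# limit-resource all minimum 20 maximum unlimited
ACE-AP/Admin(config-resource)# exit
ACE-AP/Admin(config)# context IT-lab
ACE-AP/Admin(config-context)# member testing
```

The limit-resource all command applies the same minimum and maximum policy to every resource type; individual limit-resource commands can then override specific types.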
Resource configuration in the ACE Device Manager is located at Config > Virtual Contexts >
Systems > Resource Class.
Note
The default category in the GUI is equivalent to the all option in the CLI.
The figure shows show resource allocation output similar to the following excerpt:

  Parameter            Min       Max       Class
  conc-connections     0.00%     100.00%   default
                       20.00%    200.00%   gold
  mgmt-connections     0.00%     100.00%   default
                       20.00%    200.00%   gold
  proxy-connections    0.00%     100.00%   default
                       20.00%    200.00%   gold
The show resource allocation command shows the aggregate results of all of the resource
class and context configuration. The command output lists each resource type in the left
column. Each resource class that defines allocation controls for that resource type is
enumerated with a minimum and maximum amount, with the resource class name in the
far-right column.
The Min column represents the minimum guarantee defined in the resource class times the
number of member contexts. For example, a minimum of 20 percent might represent two
contexts that are members of a resource class that defines a minimum of 10 percent.
In resource class definitions that specify maximum equal-to-min, the maximum column
contains the same number as the minimum. In cases where maximum unlimited is configured,
the maximum column gives an indication of the oversubscription level for that particular
resource type, given the number of contexts in the resource class. For example, a maximum of
1200 percent represents a resource class with 12 contexts, which permits oversubscription.
The figure shows how to display resource usage in the GUI.
To get to the resource usage page, go to Monitor > Virtual Contexts > Resource Usage.
(Figure: role rules grant Create, Modify, Debug, or Monitor permission over categories of user-accessible commands.)
RBAC defines roles to which management users are assigned. Each role defines the level of
action that the user can perform on predefined categories of commands. The available levels of
actions are:
Monitor: Specifies commands for monitoring resources and objects (show commands)
Debug: Specifies commands for debugging problems (includes monitor commands)
Modify: Specifies commands for modifying existing configurations (includes debug and
monitor commands)
Create: Specifies commands for the creation of new objects or the deletion of existing
objects (includes modify, debug, and monitor commands)
Predefined default roles exist that cannot be modified. However, new roles can be created and
customized appropriately.
The capabilities of each role are shown in the following show role command output:
switch/Admin# show role
Role: Admin (System-defined)
  Description: Administrator
  Number of rules: 4
  ---------------------------------------------
  Rule  Type    Permission  Feature
  ---------------------------------------------
  1.    Permit  Create      all
  2.    Permit  Create      user access
  3.    Permit  Create      system
  4.    Permit  Create      changeto

Role: Network-Admin (System-defined)
  Description: Admin for L3 (IP and Routes) and L4 VIPs
  Number of rules: 7
  ---------------------------------------------
  Rule  Type    Permission  Feature
  ---------------------------------------------
  1.    Permit  Create      interface
  2.    Permit  Create      routing
  3.    Permit  Create      connection
  4.    Permit  Create      nat
  5.    Permit  Create      vip
  6.    Permit  Create      config_copy
  7.    Permit  Create      changeto
  5.    Permit  Create      aaa
  6.    Permit  Create      nat
  7.    Permit  Create      config_copy
  8.    Permit  Create      changeto

Role: SSL-Admin (System-defined)
  Description: Administrator for all SSL features
  Number of rules: 5
  ---------------------------------------------
  Rule  Type    Permission  Feature
  ---------------------------------------------
  1.    Permit  Create      ssl
  2.    Permit  Create      pki
  3.    Permit  Modify      interface
  4.    Permit  Create      config_copy
  5.    Permit  Create      changeto

Role: Network-Monitor (System-defined)
  Description: Monitoring for all features
  Number of rules: 2
  ---------------------------------------------
  Rule  Type    Permission  Feature
  ---------------------------------------------
  1.    Permit  Monitor     all
  2.    Permit  Monitor     changeto
Management Domains
(Figure: a physical appliance with Contexts A and B; within a context, objects such as VIPs, server farms, and SSL certificates are grouped into Domain1 and Domain2.)
Objects can be grouped in management domains to restrict management access. Users can issue
commands that affect objects only if they are members of the same domain. Both users and
objects can be members of multiple domains with a limit of ten domains per context. Objects
created by a user who is a member of a domain are automatically added to that user's domain.
If no domains are configured, all objects are part of the default domain.
(Figure: the Admin context holds the context definitions, resource allocation, and admin management configuration; Contexts A and B each contain domains of objects such as VIPs, farms, and SSL certificates. A management station logs in through AAA, and roles such as Admin, Network/Security, Server Admin, and Monitor control what each user can do.)
The commands that a user is authorized to issue are controlled by both the role and domain
mechanisms. A user is able to issue only commands that are authorized by the role of which the
user is a member against objects in a common domain.
A role is configured with the role role_name command. Within the role definition, the rule
command specifies the command categories that the user can issue.
Domains are configured with the domain domain_name command. Within the domain
definition, the add-object command is used to add objects to a domain.
Management users are created with the username command, which is used to specify the
username, password, role, and domain.
In the figure, a user named c_mgr_a, who can issue debug interface commands against the
VLAN 110 interface, has been created.
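The configuration that the figure describes might look like the following sketch. The rule number and the password value are illustrative:

```
role CONN-MGR
  rule 1 permit debug feature interface

domain GROUP_A
  add-object interface vlan 110

username c_mgr_a password 0 s3cret role CONN-MGR domain GROUP_A
```

The combination works as an intersection: the role grants debug on the interface feature, and the domain restricts that grant to the VLAN 110 interface.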
GUI RBAC
The figure shows the screen for adding users, roles, and domains to a context in the ACE
Device Manager GUI.
The GUI allows you to access many pieces of the configuration simultaneously, as shown in
the figure:
1. Here is the location for making changes to RBAC features: Admin > Role-Based Access
Control > [Users | Active Users | Roles | Domains].
2. This is the context being used. RBAC information is context specific. This means that
creating a user c_mgr_a, role CONN-MGR, and domain GROUP_A would only be
available in the development context.
3. Roles and domains can be created, modified, and deleted from here.
4. Roles and domains can also be created in the user configuration area on the fly if needed.
5. Administrators can also monitor active users here and force them to log off if necessary.
Configuring Interfaces
This topic describes the steps to configure ACE interfaces.
Interface Types
(Figure: within a single context, two VLAN interfaces join a bridge group with its BVI, other VLAN interfaces are routed, and an FT interface carries fault-tolerance traffic.)
VLAN interfaces connect the ACE to regular data transit VLANs. Configuration of these
interfaces determines whether the VLAN is a routed or bridged interface. VLAN interfaces
that are to be routed are configured with Layer 3 information, and VLANs that are to be
bridged are configured to be members of a bridge group.
Bridge group virtual interfaces (BVIs) are software-only interfaces that are used to
configure the Layer 3 information for the ACE to participate in a bridged network.
Note
The ACE appliance can be connected to regular VLANs or to the primary VLAN of a PVLAN
configuration.
Configuring Interfaces
Routed interfaces:
interface vlan 231
description Client vlan
ip address 172.16.31.5 255.255.255.0
no shutdown
Bridged interfaces:
interface vlan 231
bridge-group 3
no shutdown
interface vlan 232
bridge-group 3
no shutdown
interface bvi 3
description Server Access vlan
ip address 172.16.31.5 255.255.255.0
no shutdown
Interface configuration is started using the interface vlan vlan_number command. In interface
configuration submode, a description can be added to the interface with the description
command.
Configuration of routed mode VLAN interfaces must also specify the Layer 3 information with
the ip address ip_address network_mask command.
Configuration of bridged mode VLAN interfaces must be associated with a bridge group with
the bridge-group group_number command. The bridge group number is locally significant
within the context and is not visible to connected resources. The two interfaces to be bridged
must be associated with the same bridge group.
Layer 3 information for a bridge group is configured by defining a BVI interface with the
interface bvi group_number command. The bridge group number must match the bridge group
number that is used to tie the bridged VLAN interfaces together.
Finally, all interfaces must be administratively activated with the no shutdown command.
After the interface is administratively active, its state is controlled by several factors. Routed
mode VLAN interfaces and BVI interfaces must have Layer 3 information configured. Bridged
mode VLAN interfaces must have been assigned to a bridge group and the associated BVI must
be up.
Shared Interfaces
Contexts can share routed interfaces.
No conflicting IPs on a shared interface.
Intercontext traffic must be L3 routed.
(Figure: Contexts A and B sharing VLAN 100, 192.168.100.0/24.)
VLAN interfaces can be shared by more than one context as long as the interfaces are
configured in routed mode. IP addresses on the shared interfaces must be unique and must be in
the same subnet. This includes the IP addresses assigned to the VLAN interface and any
floating IP addresses, such as alias IP addresses or VIP addresses.
Intercontext traffic must be processed by a Layer 3 router outside the ACE, even if the
destination IP address is Layer 2 adjacent to the transmitting context.
The MAC address IDPROM on an ACE contains eight unique MAC addresses. The first is
used as the MAC address of the Supervisor Layer 3 interface in any unshared VLAN attached
to the ACE. The second address in the IDPROM is used as the MAC address for any IP
addresses on unshared interfaces on the ACE.
Shared interfaces are addressed differently. One of the interfaces on a shared VLAN uses the
second MAC address from the IDPROM above. Additional MAC addresses for this VLAN are
allocated from 1 of 16 pools of addresses, each of which contains 1000 MAC addresses. These
16 MAC address pools are shared by every ACE worldwide. The shared-vlan-hostid
command can be used to manually configure which pool the ACE will select.
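For example, to pin the appliance to a specific pool (the pool number is illustrative; valid values are 1 through 16):

```
ACE-AP/Admin(config)# shared-vlan-hostid 2
```

Choosing different pool numbers on ACE devices that share a VLAN avoids MAC address collisions between them.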
Summary
This topic summarizes the key points that were discussed in this lesson.
Lesson 3
Objectives
Upon completing this lesson, you will be able to describe the structure and function of the
Modular Policy CLI statements used to configure ACE features. This includes being able to
meet these objectives:
Class Maps
This topic describes the structure and configuration of class maps.
(Figure: the modular policy steps — define actions, the processing steps to be performed on traffic, in policy maps; then activate the policy by associating it with a traffic stream.)
The first step in configuring the necessary traffic processing in the Cisco 4710 Application
Control Engine (ACE) appliance is to define class maps that contain criteria that classify traffic
as interesting; that is, to be handled by further configuration steps.
(Figure: a class map compares traffic characteristics against a match type and marks each packet as matched or not matched.)
A class map defines the traffic characteristics that are used to analyze traffic. For each packet,
the class map returns an indication of whether the packet matched the defined criteria. Notice
that the class map does not change anything about the packets. It merely decides if a packet is
interesting, based on selection criteria. As shown in the figure, a stream of packets is going to
be processed by the class map. After a packet has been processed, it has an indication of
whether it was classified as interesting (that is, matched) by the class map. In the figure, the
first two packets have been analyzed and only the first packet was marked as interesting.
Several different types of class maps can be defined. Each type of class map is designed to
analyze characteristics that are relevant for different types of traffic processing.
Class maps have four components:
1. Class name
2. Class type
3. Match criteria
4. Match type
Like most objects created in the ACE CLI, class maps are named objects. The names are used
to identify the class map that is to be used when policy maps are constructed. The class names
are case sensitive and are available for Tab-key completion in commands that have a class
name as a parameter. In the example shown in the figure, a class called AuthUsers is created.
Among the class map types is HTTP load balancing, which analyzes the HTTP request fields
used in making load-balancing decisions.
The specific details of the various class map types are discussed on a feature-by-feature basis.
In the example in the figure, the AuthUsers class map is designated as a Layer 3 and 4 class
map.
The match criteria in the class map define the traffic characteristics of the incoming traffic that
is analyzed. The specific match criteria available in a class map depend on the type of the class
map. Multiple criteria can be created within a single class map. The sample class map in the
figure has three criteria, which are used to analyze traffic.
(Figure: with a match-all class map, a packet is marked as matched only when it satisfies all of the match criteria — a logical AND.)
(Figure: with a match-any class map, a packet is marked as matched when it satisfies any one of the match criteria — a logical OR.)
Class Maps
General Structure:
class-map [type class_type] [match_type] class_name
[linenumber] match match_criteria
Class maps are configured with the class-map command in configuration mode. The class-map
command specifies the name, type, and match type of the class being defined. After the
class-map command is entered, you are placed in the class map configuration submode, where
you can configure match statements to specify the match criteria used for this class. Every
match statement has a line number. The line number can be used to easily delete a long match
statement by specifying no line_number in the class map configuration submode. A match
statement can be replaced by configuring a new match statement with the same line number.
Note
The line numbers do not imply a specific order or priority of the match criteria.
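A sketch of a Layer 3 and Layer 4 class map using the structure above. The class name, addresses, port, and line numbers are illustrative:

```
class-map match-all AuthUsers
  2 match source-address 10.1.1.0 255.255.255.0
  3 match port tcp eq 443
```

With match-all, a packet is classified as interesting only if it comes from the 10.1.1.0/24 subnet and is destined for TCP port 443; entering a new statement with line number 2 would replace the source-address criterion.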
Syntax Summary
The following is the full syntax of all possible class map-related configuration statements:
class-map [match-all | match-any] map_name
[line_number] match access-list name
[line_number] match any
[line_number] match destination-address ip_address [mask]
[line_number] match port {tcp | udp} {any | eq {port_number} | range
port1 port2}
[line_number] match source-address ip_address mask
[line_number] match virtual-address vip_address {[netmask]
protocol_number | any | {tcp | udp {any | eq port_number | range port1
port2}}}
class-map type ftp inspect match-any map_name
[line_number] match request-method ftp_command
class-map type http inspect [match-all | match-any] map_name
[line_number] match content expression [offset number]
[line_number] match content length {eq bytes | gt bytes | lt bytes |
range bytes1 bytes2}
2007 Cisco Systems, Inc.
Policy Maps
This topic describes the structure and configuration of policy maps.
(Figure) Define actions: the processing steps to be performed on traffic are defined in policy maps. Activate policy: the defined policy is associated with a traffic stream.
The second step in defining traffic processing in the ACE is to define the actions that are to be
performed on traffic that has been classified by a previously defined class map. This
configuration is performed by creating a policy map.
A policy map defines the actions to be applied to traffic that has been determined to be
interesting. Actions can also be specified for all traffic that has not been classified as
interesting.
Policy maps contain four components, which must be defined:
1. Name of the policy
2. Policy type
3. Match type
4. Classification/action clauses
Like other resources created through the ACE CLI, policy maps have a name. This name is case
sensitive and is available to the Tab-key completion function when the argument of a
command is the name of a policy. In the figure, a sample policy named Passengers has been
started.
(Figure) Policy types and their match types:

Policy Type       Match Type
Layer 3 and 4     multi-match
FTP Inspect       first-match
HTTP Inspect      all-match
Loadbalance       first-match
Management        first-match
Policy maps have a policy type and a match type, which are interrelated. The policy type
defines the types of classifications and actions that are available, and the match type handles
situations in which multiple classification criteria have defined the traffic as interesting.
Policy Type
The available policy types correspond to the class map types and are:
Layer 3 and 4: Processes traffic selected by the IP and TCP/UDP fields in the packet
In the figure, the Passengers policy map is designated as a load-balancing policy map.
Match Type
The match type controls how packets are processed if they are classified as interesting by more
than one of the class maps used in a policy. There are three match types:
all-match: Actions associated with all the matching class maps are performed on the
packet.
first-match: Actions associated with the first class that matches are performed on the
packet, and successive classes are ignored.
multi-match: Multiple classes exist in the policy map that use different features of the
ACE. Processing actions from several features can be applied to the packet, but each
feature operates on a first-match basis within the collection of classification/action clauses
that use that feature.
Each of the defined policy types has a match type that is used for that policy type. The mapping
between policy type and match type is shown in the table.
(Figure) A policy map of type Loadbalance with one classification/action clause: the classification is the class map Business Class (an inline match could be used instead), and the actions, defined in the configuration submode, are Give Food and Give Legroom.
Classification/action clauses specify the actions to be performed on any
packet that is classified as interesting. The clause is started by specifying the classification to be
performed. Class maps of the appropriate type can be used to classify traffic. Inline match
statements can also be used to classify traffic on a single criterion. After a classification
mechanism has been specified, you are in a submode of the policy map configuration mode,
which allows one or more actions to be specified.
The policy map in the figure uses a class map called Business Class to classify traffic. Anyone
who is a member of this class is the recipient of two actions: being given food and legroom.
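Rendered in ACE syntax, a classification/action clause of a load-balancing policy map might be sketched as follows (the policy, class, and server farm names are hypothetical, standing in for the figure's analogy):

```
policy-map type loadbalance first-match PASSENGERS
  class BUSINESS-CLASS
    serverfarm PREMIUM-FARM
```

Here the serverfarm statement is the action applied to traffic matched by the BUSINESS-CLASS class map.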
(Figure) The policy map now includes a class-default clause with the action Give Food, in addition to the Business Class clause.
The predefined class class-default can be used to specify the actions to be performed on
packets that have not matched any of the class maps specified in the policy map. The example
in the figure shows passengers who, if they have not been booked in a better class, are given
food.
(Figure) A First Class clause (Give Food, Give Drinks, Give Legroom) has been inserted before the Business Class clause; class-default still gives food.
The order in which classification/action clauses are processed is important for many of the
policy map types. Normally the classification/action clauses are processed in the order in which
they were configured. However, a mechanism exists to insert a classification/action clause
preceding an existing one. This is done with the insert-before keyword on the classification
definition command.
In the example, policy maps have been extended as shown with the addition of a First Class
classification/action clause preceding the Business Class classification/action clause.
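Sketched in CLI form (names hypothetical), the insert-before keyword places the new clause ahead of an existing one so that it is evaluated first:

```
policy-map type loadbalance first-match PASSENGERS
  class FIRST-CLASS insert-before BUSINESS-CLASS
    serverfarm ELITE-FARM
```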
Policy Maps
General structure:
policy-map [type policy_type] match_type policy_name
class {name [insert-before name] | class-default}
action_specifications
match name match_criteria [insert-before name]
action_specifications
policy-map multi-match map_name
policy-map type inspect ftp first-match map_name
policy-map type inspect http all-match map_name
policy-map type loadbalance first-match map_name
policy-map type management first-match map_name
Policy maps are configured in configuration mode. The policy-map command is used to
specify the policy type, match type, and policy name. After this command is entered, you are
placed in the policy map configuration submode, where you can specify your classification
statements with either the class or match command. Either of these statements places you in a
submode of the policy map configuration submode where you can specify the actions to be
taken for packets matching the classification.
Syntax Summary
The following is the full syntax of all possible policy map-related configuration statements.
policy-map multi-match map_name
class {name1 [insert-before name2] | class-default}
appl-parameter http advanced-options name
connection advanced-options name
inspect {dns [maximum-length bytes]} | {ftp [strict policy
policy_map1]} | {http [policy policy_map2 | url-logging]} | {icmp
[error]} | rtsp
loadbalance policy name
loadbalance vip advertise [active] | [metric number]
loadbalance vip icmp-reply [active]
loadbalance vip inservice
nat dynamic nat_id vlan number
nat static ip_address netmask mask {port1 | tcp eq port2 | udp eq
port3} vlan number
ssl-proxy {client | server} ssl_service_name
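Combining several of the statements above, a Layer 3 and 4 multi-match policy map might be sketched as follows (all names are illustrative, not from the syntax summary):

```
policy-map multi-match CLIENT-VIPS
  class WEB-VIP
    loadbalance policy L7-WEB-POLICY
    loadbalance vip inservice
    loadbalance vip icmp-reply active
```

The class clause references a Layer 3 and 4 class map, and the loadbalance actions nest a secondary load-balancing policy and activate the VIP.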
(Figure) The third step: activate the policy by associating the defined policy map with a traffic stream.
The third step in defining traffic processing in the ACE is to associate previously defined policy
maps with a stream of packets to be classified and processed. This traffic stream contains the
packets that are analyzed for the presence of the criteria configured in the class maps. Packets
that have these criteria are processed according to the actions of the policy map associated with
the traffic stream.
(Figure) Activating a policy: a traffic stream is evaluated against the class maps in the policy map; traffic whose characteristics are matched receives the actions of the corresponding clause, and class-default handles all remaining traffic.
(Figure) Two traffic streams associated with the same policy map: each stream is classified by the same class maps and processed by the same actions.
Multiple traffic streams can be associated with the same policy map. This provides a
mechanism to apply the same traffic-processing function to several different sources of packets.
For example, a policy map can be used to provide the same classification and processing to
packets being received by the ACE module on several different interfaces.
(Figure) Multilayer processing: Layer 3 and 4 and Management policy maps classify traffic directly, while FTP Inspect, HTTP Inspect, and Loadbalance policy maps are applied as a second layer of processing.
Policy maps must be associated with one or more traffic streams. These traffic streams supply
the packets that are analyzed and processed. Two layers of processing are possible with the
ACE; therefore, two methods are used to associate traffic streams with a policy map. Which
method is used depends on the type of policy map.
Using service-policy
Primary policy maps are attached to one or more interfaces with the service-policy command.
This command is issued in the interface configuration submode to attach a policy map to a
single interface. The service-policy command can also be used in global configuration mode to
attach the policy map to all interfaces. Policy maps that are applied to a single interface
override any globally applied policy maps for overlapping classification types.
Several policy maps can be attached to a particular interface. However, only one policy map for
each different feature is recommended.
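For example (interface number and policy name are assumed placeholders):

```
! Attach the policy map to a single interface:
interface vlan 100
  service-policy input CLIENT-VIPS

! Or, in global configuration mode, attach it to all interfaces:
service-policy input CLIENT-VIPS
```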
The configured class and policy maps can be displayed by specifying the class-map or policy-map
arguments to the show running-config command. Information about applied policy maps
and the statistics related to them can be displayed with the show service-policy command.
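For instance (the policy name is a placeholder):

```
show running-config class-map
show running-config policy-map
show service-policy CLIENT-VIPS detail
```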
(Figure) Feature processing order: features such as management traffic classification, source NAT, and destination NAT are applied in a fixed order, regardless of the order in which they were configured.
Traffic streams received on a particular interface can be processed by several features on the
ACE. The Modular Policy CLI enables you to create several policy maps covering many of
these different features and associate them all with the same interface. Rather than using the
order in which the features were configured, the ACE applies the features in a consistent,
predefined order on every interface.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
Class maps are used to classify traffic.
Policy maps match actions with traffic classification (class maps).
Policy maps must be activated for processing to occur by
attaching the service policy (Layer 3 and 4 policy map) to a traffic
stream or interface.
Lesson 4
Objectives
Upon completing this lesson, you will be able to describe the methods used to manage the ACE
appliance. This includes being able to meet these objectives:
Management access to the Cisco 4710 Application Control Engine (ACE) appliance is initially
limited to the console port. The first
step in allowing remote management access is the definition of a management-type class map.
This class map classifies traffic based on the protocol being used and, optionally, on the source
IP address.
The full syntax of the match protocol command is as follows:
match protocol {http | https | icmp | snmp | ssh | telnet} {any |
source-address ip_address mask}
In the figure, a management class map named remote-access is created. The remote-access
class map matches incoming Telnet connections from systems in the 172.16.31.0 255.255.255.0
subnet. It also matches incoming Secure Shell (SSH) or HTTPS connections from anywhere.
The second step in permitting remote access is to configure a management-type policy map.
This policy map associates the permit action with traffic classified as interesting by
management-type class maps used in the policy map. The deny action is also available in a
management-type policy map.
In the figure, the previously defined remote-access class map is used in a new policy map
called remote-mgmt. Any traffic matching the remote-access class is permitted
management access to the ACE.
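Sketched in CLI form, this policy map would look approximately like this:

```
policy-map type management first-match remote-mgmt
  class remote-access
    permit
```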
Now the policy must be associated with a traffic stream to be effective. Management-type
policy maps are attached to one or more interfaces with the service-policy command.
The figure shows the interface configuration for VLAN 231. You attach the remote-mgmt
policy map to this interface to allow traffic received by the interface to be processed by the
policy map.
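The attachment described would be configured roughly as follows (the interface IP address shown is a hypothetical placeholder):

```
interface vlan 231
  ip address 172.16.231.5 255.255.255.0
  service-policy input remote-mgmt
  no shutdown
```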
Subsets of the running-config can be displayed by adding optional keywords to the show running-config command. In the figure, the class-map, policy-map, and interface keywords are used.
Another technique that is useful for finding information in a long configuration is use of the
pipe symbol (|) followed by one of several available filters. Shown here is the use of the begin
filter, which starts the output when the specified regular expression is found in the output.
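For example:

```
show running-config policy-map
show running-config | begin remote-access
```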
The show service-policy policy_name [detail] command can be used to display run-time
information about a service policy. The figure shows the output when this command is applied
to a management policy.
The ACE Device Manager GUI automatically sets up the management class map, policy map,
and service policy.
The sections of the GUI correspond to the CLI as follows:
1. Protocols to allow = class map type management (permit action clauses for policy-map)
2. Policy name = policy map type management
3. VLAN to use = service policy applied to the VLAN interface
Note
Service policies can also be applied globally to all interfaces from the Config > Virtual
Contexts > System > Global Policy area of the Device Manager.
SNMP Manageability
This topic describes SNMP support for multiple contexts.
SNMP versions 1, 2c, and 3 can be used to manage the ACE appliance. The SNMP support
allows for a configurable SNMP engine ID per context. User context information can be
retrieved through SNMP GET requests sent to the Admin context or through SNMP GET
requests to the individual user context.
SNMP management of contexts on the ACE appliance can be accomplished through the NMS
configuration guidelines shown in the figure.
The SNMP contact information and location information are configured with the snmp-server
contact contact_information command and the snmp-server location location command,
respectively. The figure shows the contact information configured for example.com's ACE
appliance in Atlanta Data Center 1.
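For example (the contact string is a placeholder):

```
snmp-server contact "ACE admin, noc@example.com"
snmp-server location "Atlanta Data Center 1"
```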
SNMP community strings used by SNMP versions 1 and 2c are configured with the snmp-server
community community_name [group group_name | ro] command. The ro option
grants a particular community string read-only access. The figure shows
a community string of grand-poobah configured with read-write access and a second
community string of status-keepers that grants only read access.
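A sketch of the two strings described; the read-write group name Network-Admin is an assumption about the ACE predefined roles, so verify it against your software version:

```
snmp-server community grand-poobah group Network-Admin
snmp-server community status-keepers ro
```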
SNMP version 3 user IDs are created with the snmp-server user user_name [group_name]
[auth {md5 | sha} password1 [localizedkey | priv {password2 | aes-128 password2}]]
command. Passwords for users with both SNMP and CLI access are synchronized by the ACE
appliance. The figure shows a user being created.
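Following the syntax above, a user might be created like this (the user name, group, and password are hypothetical):

```
snmp-server user monitor1 Network-Monitor auth sha S3cretPw
```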
snmp-server host 10.1.1.4 grand-poobah traps version 2c
snmp-server enable traps slb vserver
snmp-server trap-source vlan 110
ACE-initiated SNMP notifications can be sent to an NMS in the form of a trap or inform
message. The servers to which these messages will be sent are configured with the snmp-server
host host_address community-string_username {informs | traps} version {1 [udp-port] |
2c [udp-port] | 3 [auth | noauth | priv]} command. Different types of trap
notifications are possible and are configured with the snmp-server enable traps
[notification_type] [notification_option] command. The notification_type and
notification_option parameters are defined as follows:
license: Sends SNMP license manager notifications. This keyword appears only in
the Admin context.
slb: Sends server load balancing notifications. When you specify the slb keyword,
you can specify a notification_option value.
snmp: Sends SNMP notifications. When you specify the snmp keyword, you can
specify a notification_option value.
syslog: Sends error message notifications (Cisco Syslog MIB). Specify the level of
messages to be sent with the logging history level command.
virtual-context: Sends virtual context (ACE user context) change notifications. This
keyword appears only in the Admin context.
153
When you specify the snmp keyword, specify the authentication, coldstart,
linkdown, or linkup keyword to enable SNMP notifications. This selection
generates a notification if the community string provided in the SNMP request is
incorrect, or when a VLAN interface is either up or down. The coldstart keyword
appears only in the Admin context.
When you specify the slb keyword, specify the real or vserver keyword to enable
server load balancing notifications. This selection generates a notification if the
following state change occurs:
The real server changes state (up or down) due to user intervention, ARP
failures, or probe failures.
The virtual server changes state (up or down). The virtual server represents the
servers behind the content switch in the ACE to the outside world and consists
of the following attributes: the destination address (can be a range of IP
addresses), the protocol, the destination port, or the incoming VLAN.
Link up and link down messages can be sent in two different formats. By default, the ACE
appliance uses a Cisco format. You can use a standardized format by configuring the snmp-server trap link ietf command.
The interface that is to be used as the source IP address of SNMP traps is configured with the
snmp-server trap-source vlan number command.
The figure shows the different SNMP characteristics that can be configured in the ACE
appliance Device Manager.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
Management access to the ACE is controlled with the Modular
Policy CLI.
The ACE appliance is fully manageable through SNMP. The
Admin context allows full control of all contexts, while SNMP
actions directed to individual contexts affect only that context.
Lesson 5
Security Features
Overview
In this lesson, you will learn how to describe the ACE features that provide IP application-based security.
Objectives
Upon completing this lesson, you will be able to describe the ACE features that provide IP
application-based security. This includes being able to meet these objectives:
All flows through the Cisco 4710 Application Control Engine (ACE) appliance must be
allowed by an access control list (ACL). Any packets that are not part of an existing flow and
are not permitted by an ACL entry are discarded.
Note
The ACE creates sessions for User Datagram Protocol (UDP) and Internet Control Message
Protocol (ICMP) flows, even though they are connectionless protocols.
(Figure) ACL direction is relative to each interface: a client-to-server flow is input on the client-side interface and output on the server-side interface; the server-to-client return flow reverses the roles.
ACLs can be used by various features. They can also be used as security ACLs to control
traffic through the ACE. Security ACLs are applied to an interface with the access-group
command in the interface configuration submode. The direction specified for the ACL is relative to
the interface; that is, input refers to traffic being received on the interface. An ACL can also be
applied to all interfaces by configuring the access-group command in global
configuration mode. However, ACLs can be applied globally only in the input direction.
Caution
No interface-specific input ACLs can be configured if an input ACL has been applied globally
to all interfaces.
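As an illustrative sketch (the ACL name, addresses, and interface number are hypothetical):

```
access-list INBOUND line 10 extended permit tcp any host 198.133.219.25 eq 80

interface vlan 100
  access-group input INBOUND
```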
TCP/IP Fragmentation/Reassembly
This topic explains IP fragmentation processing.
(Figure) IP fragmentation: a router between a network with a 9000-byte maximum packet size and one with a 1500-byte maximum packet size must fragment large packets.
IP is supported over many different media types, many of which have different maximum
packet sizes. When an IP packet transits from one media type to another, it might be
determined that the packet is too big for the media used on the next hop. In that case, the IP
packet is fragmented into smaller packets. Normally, after a packet has been fragmented, it is
not reassembled until it is received by the target system.
The figure shows an example of an IP router attached to two networks with different packet
sizes. A packet from the right-hand interface, which supports 9000-byte packets, must be
fragmented into multiple 1500-byte packets for transit out the left-hand interface.
If packets need fragmentation when they are being transmitted from the ACE appliance, the
fastpath process running on the dataplane multiprocessor detects this condition and sends the
packet to the fragmentation and reassembly processor. The individual fragments are then
returned to the fastpath process for transmit.
(Figure) The reverse path: fragments received by the fastpath process are passed to the reassembly processor before further processing.
Fragmentation and reassembly are controlled by configuration statements issued in the interface
configuration submode, as shown in the figure.
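The interface-level commands are along these lines; the command names and values are recalled from ACE documentation and should be treated as assumptions to verify on your software version:

```
interface vlan 100
  fragment chain-length 24
  fragment min-mtu 576
  fragment timeout 5
```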
TCP/IP Normalization
This topic explains TCP/IP normalization.
Hardware-Based IP Normalization
(Figure) Checks always performed include dropping packets whose source IP address equals their destination IP address. Configurable checks include icmp-guard and a minimum TTL.
IP packet normalization is performed in hardware by the ACE appliance. Sanity checking of the
source and destination IP address fields is always performed according to the rules listed in the
figure. Further IP normalization is configurable and performs the following functions:
icmp-guard performs security checks on ICMP messages that pass through the ACE to
filter out unsolicited ICMP responses.
Packets with the Do Not Fragment flag can be allowed to pass, or the flag can be cleared by
the ACE appliance.
A minimum Time to Live (TTL) can be specified for packets received by the ACE.
The ACE appliance can verify that the interface that received an IP packet would be the
same interface on which a response packet would be transmitted.
(Figure) Configurable TCP normalization options:

reserved bits    allow | clear | drop
urgent flag      allow | clear | drop
syn-data         allow | drop
exceed-mss       allow | drop
random-seq-num   disable
Reserved bits in the TCP header can be allowed through as set; cleared to all zeroes; or
packets with reserved bits on can be dropped by the ACE appliance.
The Urgent flag can be allowed or cleared, or packets with the Urgent flag set can be
dropped.
Synchronize (SYN) packets can be allowed to have data, or dropped if they contain data.
A maximum MSS value can be configured, and packets with a larger MSS can be allowed
or dropped.
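These options are set in a connection parameter map, which is then applied through a Layer 3 and 4 policy map using the connection advanced-options action shown in the earlier syntax summary (the parameter map and policy names here are hypothetical):

```
parameter-map type connection TCP-NORM
  reserved-bits clear
  urgent-flag allow
  syn-data drop
  exceed-mss allow
  random-seq-num disable

policy-map multi-match NORMALIZE
  class class-default
    connection advanced-options TCP-NORM
```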
(Figure) IP address spoofing: (1) the attacker sends a SYN with the spoofed address of a trusted host; (2) the target sends a SYN-ACK to the trusted host; (3) the attacker sends an ACK with the spoofed address.
IP address spoofing attacks inject TCP packets into a network to create the illusion that the
target system is communicating with a trusted host. The first step in an IP address spoofing
attack is to send the packets necessary to complete the TCP connection handshake on behalf
of a trusted host. The attacker completes this step by transmitting a SYN packet followed by
an acknowledgment (ACK) packet, both with spoofed source addresses. For the ACK packet to
be successful in establishing the connection, it must contain the acknowledgment number
expected by the target system. This information is contained in the SYN-ACK packet that the
target sends out, but the attacker does not see this packet because it gets routed to the trusted
host being spoofed. If the target system uses a deterministic algorithm to select its initial
sequence number (ISN), the required acknowledgment number can be predicted.
After the attacker has established a TCP session with the target, it can send data to the target
application.
(Figure) ISN randomization: the client sends SYN (Seq = X), which the ACE forwards unchanged. The server replies SYN-ACK (Seq = Y, Ack = X + 1), which the ACE rewrites toward the client as SYN-ACK (Seq = Z, Ack = X + 1).
The ACE performs randomization of the sequence numbers returned by protected resources, as
shown in the figure. The client initiates a connection by sending a SYN packet with an ISN of
X, which the ACE passes on to the server. The server sends back a SYN_ACK with an ISN of
Y. The ACE modifies this SYN_ACK packet, replacing Y with a random ISN of Z.
ISN randomization is enabled by default on the ACE.
Control plane connections are exempted.
(Figure) Sequence/acknowledgment adjustment for an established connection:

Client-to-server direction (ACE adjusts the Ack number):
  Client side: SEQ = Start_client + Sent_client, ACK = Start_ACE + Sent_ACE + 1
  Server side: SEQ = Start_client + Sent_client, ACK = Start_server + Sent_server + 1

Server-to-client direction (ACE adjusts the Seq number):
  Server side: SEQ = Start_server + Sent_server, ACK = Start_client + Sent_client + 1
  Client side: SEQ = Start_ACE + Sent_ACE, ACK = Start_client + Sent_client + 1
The ACE has created two half-connections with different outbound ISNs, as discussed
previously. Each packet in the TCP connection must therefore be modified to change the
relevant sequence number fields. Inbound packets have an acknowledgment field based on the
outbound ISN and are adjusted by the ACE. The reverse adjustment is applied to the sequence
number field of all outbound packets. The formulas in the figure show the sequence and
acknowledgment numbers for any packet in the TCP connection.
(Figure) Configuring normalization on the ACE.
Static NAT
(Figure) The real IP address resides on the inside interface; the mapped IP address is reachable on the outside interface. Traffic in each direction is translated between the two.
The concepts of inside and outside interfaces are flexible in the ACE configuration model and
depend on which interface contains the real and mapped IP addresses. The real IP address is
always on the inside interface and the mapped IP address is always reachable on the outside
interface.
Static network address translation (NAT) defines a static mapping between one or more real IP
addresses and a corresponding set of mapped IP addresses. A class map is used to identify traffic
from the real IP address. This class map is referenced in a policy map, which specifies the
outside interface in an action statement. This policy map is then associated with the inside
interface.
In the figure, a server on the inside is being mapped to an address on the outside. Traffic from
the client has its destination IP address translated, and traffic from the server has its source IP
address translated.
(Figure) Real IP address 10.20.42.15 on the inside interface (VLAN 201) is statically mapped to 198.133.219.25 on the outside interface (VLAN 101):
class-map internal-addrs
match source-address 10.20.42.15 255.255.255.255
policy-map multi-match nat-internals
class internal-addrs
nat static 198.133.219.25 netmask 255.255.255.255 vlan 101
interface vlan 201
service-policy input nat-internals
A static NAT configuration is shown in the figure, which maps the address 10.20.42.15 to an
external address of 198.133.219.25.
Security Features
171
Dynamic NAT
(Figure) Real IP addresses on the inside interface are translated to mapped addresses drawn from a pool on the outside interface.
Dynamic NAT allows a real IP address on the inside interface to be dynamically translated to a
member of a pool of addresses on the outside interface. The procedure for configuring dynamic
NAT is similar to the process for configuring static NAT, with the additional step of defining a
NAT pool on the outside interface.
In the figure, the clients on the inside interface are dynamically translated to a member of the IP
address pool on the outside interface. Client-initiated traffic is source NATed, and server-initiated traffic is not translated.
(Figure) Clients on the inside interface (VLAN 101) connecting to 198.133.219.25 are translated to addresses in the pool 10.1.1.1-10.1.1.100 on the outside interface (VLAN 201):
class-map client-traffic
match destination-address 198.133.219.25 255.255.255.255
policy-map multi-match nat-clients
class client-traffic
nat dynamic 1 vlan 201
interface vlan 101
service-policy input nat-clients
interface vlan 201
nat-pool 1 10.1.1.1 10.1.1.100 netmask 255.255.255.0 pat
The configuration in the figure translates the client IP address to an address in the range of
10.1.1.1-10.1.1.100 for any client connection initiated to the IP address of 198.133.219.25.
NAT operations can be verified with the show xlate command, which lists the active
translations. The effects of NAT on a connection can be viewed in the output of the show conn
command. The show service-policy command can be used to display statistics about the
operation of a policy map that is configured for NAT.
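For example, using the policy name from the static NAT configuration shown earlier:

```
show xlate
show conn
show service-policy nat-internals
```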
The figure shows the algorithm for allocating IP addresses from a dynamic NAT pool.
Multiple NAT pools can be used to enhance the chances that a new NAT translation can be
created, as described in the figure.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
IP access control lists must be configured to allow new flows
(except management) to be initiated through the ACE.
ACE reassembles IP fragments before passing them to real
servers.
ACE normalizes TCP/IP headers to comply with RFCs.
NAT and PAT are supported by the ACE appliance.
Lesson 6
Objectives
Upon completing this lesson, you will be able to describe the capabilities and configuration of
the ACE features used to provide load balancing of IP-based applications. This includes being
able to meet these objectives:
Load-Balancing Concepts
This topic describes the concepts of server load balancing and explores the ACE
implementation of these features.
Real Servers
The terminology used by the CSS is different, in that the CSS refers to real servers as
"services."
In the figure, you can see the beginning of a load-balanced content server environment that has
three real servers.
Server Farm
(Figure) Real servers grouped behind a load balancer into a server farm.
To load balance connections across a collection of real servers with a load balancer, you can
group the servers into a collection called a server farm. The server farm consists of servers that
are all capable of responding to the same requests. In the process of creating a server farm, you
also define a load-balancing algorithm. This algorithm maps each request assigned to the server
farm to a particular real server contained in the server farm.
Note
The CSS does not use the server farm as a standalone concept. Rather, groups of real
servers (services, on the CSS) are added to a content rule that specifies how to handle a
particular kind of request.
In the figure, you can see the three real servers grouped together in a server farm and assigned a
load-balancing algorithm.
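On the ACE, real servers and server farms are configured with the rserver and serverfarm commands; a minimal sketch follows (the names, address, and predictor choice are illustrative):

```
rserver host WEB1
  ip address 10.20.42.11
  inservice

serverfarm host WEB-FARM
  predictor leastconns
  rserver WEB1
    inservice
```

The predictor statement selects the load-balancing algorithm used to assign requests to the real servers in the farm.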
A load-balancing environment can have multiple server farms. Each server farm has its own
collection of servers and its own load-balancing algorithm. Different load-balancing algorithms
can be used in different server farms in the same load-balancing environment.
In the figure, you can add a second server farm with two real servers.
Server farms can overlap; in other words, the same server can be part of multiple server farms.
In this type of environment, the load-balancing algorithm used in one server farm does not
track the requests sent to a real server by the load-balancing algorithm of another server farm.
Thus, care should be taken in designing overlapping server farms.
In the figure, you can add a third server farm. This server farm contains two new real servers
and makes use of the resources of one of the real servers in the first server farm.
To provide seamless access to the resources hosted on the load-balanced real servers to clients,
you must present the clients with the image of a single server to which they send all their
requests. This virtual server is implemented by the load balancer and does not exist as a
standalone computer system. The virtual server is configured with a virtual IP (VIP) address
that is used as the destination of client requests. The load balancer receives a client request,
selects a server farm, uses the load-balancing algorithm defined in the server farm to select a
real server, and sends the client request to the real server. Data returned by the real server is
then modified so that it appears to come from the VIP and is returned to the client.
The load-balanced environment shown in the figure has a request coming from a client. The
selection algorithms have selected the right-hand real server of the bottom server farm. The
client request is sent to this server. The response is then processed by the load balancer to
return this data to the client sourced from the VIP.
Health Monitoring
2007 Cisco Systems, Inc. All rights reserved.
Health monitoring is used by the load balancer to monitor the availability of each real server
used in a server farm. If the health monitoring function detects that a server is not available, the
server is taken out of service to keep future client requests from being directed to a server
unable to respond to that request. Each server farm can have different health monitoring
parameters configured. Health monitoring checks are sent to each real server in a server farm.
The sample environment shows the results of health monitoring. The right-hand server of the
bottom server farm has failed some health checks and is deemed to be out of service by health
monitoring. Future client requests are sent only to the left-hand server of this server farm while
this condition persists.
Cisco load balancers can make load-balancing decisions based on Layer 4 information. This
decision is made on the first packet of a TCP connection (that is, the SYN packet from the
client) and on the first packet detected on a UDP flow. Layer 4 load-balancing decisions use the
load-balancing capabilities of a single server farm.
In the network shown in the figure, you see a server farm with two real servers. Incoming client
requests are being balanced between these servers at Layer 4.
Layer 4 Switching
Layer 4 information is always present in the first packet of the flow:
IP protocol
Source and destination IP addresses
Source and destination Layer 4 ports (for TCP/UDP)
IP protocol: This field differentiates between the higher-level protocols, such as UDP and
TCP, that are carried by IP.
Source and destination IP addresses: The IP addresses of the transmitting system and the
intended recipient.
Source and destination port: The port numbers being used on the client or server. Well-known
port numbers are defined for most IP-based services (for example, port 80 is used for
HTTP). Port numbers are used to direct the IP traffic to a particular application process,
such as a web server.
Layer 4 content-switching decisions can be based on any of the Layer 4 fields listed in the
figure. With TCP connections, the Layer 4 information is consistent for all packets in the
connection. The Layer 4 information is often said to define a flow, which is the communication
path for a particular connection.
The figure shows a flow of packets coming from the client side of the network into an ACE.
The ACE examines the first packet in a new flow or connection, and a Layer 4 switching
decision is made for the flow as a whole. The content switch makes this decision and then
records the flow parameters and the switching decision. This table of switching decisions is
used to switch every subsequent packet in the flow. Information is removed from the switching
table when a connection is closed. In the case of Layer 4 switching of TCP packets, these
decisions are normally made on the basis of SYN and FIN packets and are done at TCP
connection setup and termination. Reset (RST) packets are also analyzed because they are used
to refuse a connection when it is requested or to abort an existing connection.
Cisco load balancers are capable of policy-driven load balancing, which extends the server
selection process. Load-balancing policies are used to select a server farm to process a client
request. After a server farm has been selected, the server farm load-balancing algorithm
completes the server selection.
The figure shows a hypothetical policy-driven load-balancing situation. There are three server
farms. Farm A consists of two real servers that are load balancing using the round-robin
algorithm. Farm B consists of three real servers that are also performing round-robin load
balancing. Farm C consists of only one server.
One virtual server IP is configured on the ACE, which all clients will be targeting with their
requests. This virtual server implements the following policies:
1. Authorized users get HTML files from Farm A.
2. Authorized users get image files from Farm B.
3. Unauthorized users get an error page from Farm C.
Each client, in order, is trying to retrieve a home page from the virtual server. The home page
contains two images that the clients will also retrieve. Clients 1 through 3 are authorized;
client 4 is not authorized.
Client 1
1. Client requests the home page.
2. Policy assigns the request to Farm A.
3. Farm A load balancing assigns the request to the left server.
4. Client requests image 1.
5. Policy assigns the request to Farm B.
Client 2
1. Client requests the home page.
2. Policy assigns the request to Farm A.
3. Farm A load balancing assigns the request to the right server.
4. Client requests image 1.
5. Policy assigns the request to Farm B.
6. Farm B load balancing assigns the request to the bottom server.
7. Client requests image 2.
8. Policy assigns the request to Farm B.
9. Farm B load balancing assigns the request to the top server.
Client 3
1. Client requests the home page.
2. Policy assigns the request to Farm A.
3. Farm A load balancing assigns the request to the left server.
4. Client requests image 1.
5. Policy assigns the request to Farm B.
6. Farm B load balancing assigns the request to the middle server.
7. Client requests image 2.
8. Policy assigns the request to Farm B.
9. Farm B load balancing assigns the request to the bottom server.
Client 4
1. Client requests the home page.
2. Policy assigns the request to Farm C.
3. There is only one server in Farm C, so the request is assigned to it.
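The policy logic in this example could be expressed with Layer 7 load-balancing class maps and policy maps. The following is a hypothetical sketch only (the class-map, policy-map, and server farm names are invented, and the authorization check is omitted for brevity):

```
class-map type http loadbalance match-any images
  2 match http url .*\.gif
  3 match http url .*\.jpg
policy-map type loadbalance first-match site-logic
  class images
    serverfarm farmB
  class class-default
    serverfarm farmA
```

Requests whose URL matches an image extension are sent to Farm B; all other requests fall through to Farm A.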
Layer 7 Switching
Layer 5 to Layer 7 information is received only after TCP connection setup and might span multiple packets:
HTTP URLs
Cookies
HTTP request headers
Layer 7 information is available only after application data has been transmitted, but
transmission requires that the TCP connection be fully functional. This leads to a dilemma: a
server must respond to the client to fully start the TCP connection before the client sends the
Layer 7 information that the content switch needs to select the server.
The content switch handles this dilemma by buffering client data and temporarily acting as a
server. To do this, the content switch responds to the incoming SYN packet with its own
SYN-ACK. The content switch then buffers packets until it has enough Layer 7 information to make
a load-balancing decision.
After a destination server has been selected, the content switch must now make a connection to
the server on behalf of the client. To establish the TCP connection to the server, a SYN packet
is sent to the server, and then the ACE waits for the SYN-ACK packet to be sent from the
server. At this point, all buffered packets received from the client are sent to the server.
After the buffered packets have been sent, the two TCP connections can be spliced together by
the content switch. This splicing is done by receiving packets from one connection and
retransmitting them to the other.
Because there are two different TCP connections from the content switch, one to the client and
one to the server, there are probably two sets of sequence numbers in use, one on each
connection. The content switch translates the sequence numbers from one connection to the
other.
[Figure: TCP sequence and acknowledgment numbers after connection setup.
Client to server: SEQ = Start_client + Sent_client, ACK = Start_server + Sent_server + 1.
Server to client: SEQ = Start_server + Sent_server, ACK = Start_client + Sent_client + 1.]
After a TCP connection is established, each packet has a valid sequence and acknowledgment
number. The sequence number is the number of the first byte of the data portion of the packet.
The sequence number is computed by adding the starting sequence number (established in the
SYN packet) to the number of bytes already transmitted. The acknowledgment number is the
number of the next byte of data expected from the other system. The acknowledgment number
is computed by adding 1 to the sum of the other system's starting sequence number and the
number of bytes received from it.
[Figure: sequence-number translation by the ACE.
Client to ACE: SEQ = Start_client + Sent_client, ACK = Start_ACE + Sent_ACE + 1.
ACE to server: SEQ = Start_client + Sent_client, ACK = Start_server + Sent_server + 1 (ACK number adjusted).
Server to ACE: SEQ = Start_server + Sent_server, ACK = Start_client + Sent_client + 1.
ACE to client: SEQ = Start_ACE + Sent_ACE, ACK = Start_client + Sent_client + 1 (SEQ number adjusted).]
When Layer 7 load balancing is configured, the ACE terminates the TCP connection from the
client. A second TCP connection is opened to the server selected by the load-balancing
algorithm. The SYN packet that is used to open the connection to the server uses the sequence
number received in the client SYN packet (Startclient in the figure). As a result, the sequence
numbers of all packets transmitted from the client through the ACE, and to the server, are
consistent. The acknowledged sequence numbers are therefore also consistent from the server
through the ACE to the client.
Both the ACE and the server independently determine sequence numbers for return traffic.
Because of the order in which sequence numbers are determined, the sequence numbers from
the server (Startserver in the figure) and the sequence number from the ACE to the client
(StartACE in the figure) are not consistent. Packets from the server to the client will have their
sequence numbers adjusted. Packets from the client to the server will have their
acknowledgment numbers adjusted.
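As a worked example with invented initial sequence numbers (ISNs):

```
Start_client = 1000    (client ISN)
Start_ACE    = 5000    (ACE ISN on the client-side connection)
Start_server = 9000    (server ISN on the server-side connection)
Delta        = Start_server - Start_ACE = 4000

Server to client: SEQ 9001 is rewritten to 9001 - 4000 = 5001
Client to server: ACK 5001 is rewritten to 5001 + 4000 = 9001
```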
Fastpath
[Figure: proxied or reproxied connections (TCP/HTTP/Inspect/L7) require full TCP stack processing, full TCP state data, and Layer 7 state data. Unproxied connections take the fastpath, with smaller memory requirements and fewer MPs involved, supporting 1 million connections.]
The ACE appliance performs full TCP and Layer 7 processing on portions of a TCP stream as
required. When full TCP stack processing is not required, the connection is unproxied. In this
state, the only state information kept is that which the fastpath multipoint processor (MP) needs
in order to rewrite packets as they transit the appliance.
[Figure: sample output of the show stats http command, showing counters for proxied, unproxied, and reproxied requests, headers removed, and pipeline flushes.]
Statistics on the number of proxy, unproxy, and reproxy operations can be displayed with the
show stats http | include prox command, as shown in the figure.
Load-Balancing Algorithms
This topic describes the load-balancing algorithms available. Load-balancing algorithms, called Predictors in the ACE, select a real server within a server farm and balance new connections.
The load-balancing algorithms are referred to as Predictors in the ACE appliance and are
designed to select a real server to respond to an incoming connection. The load-balancing
algorithms do not necessarily result in identical load characteristics on each server, because the
load-balancing decision occurs at a more granular level. What is really balanced is the number
of connections to each of the real servers. If incoming connections are identical in
characteristics such as duration and processing power needed to respond to the request, this will
result in consistent server load throughout the server farm. Selection of the proper
load-balancing algorithm often requires some knowledge of the traffic and processing
characteristics of the application being load balanced.
Load Predictors
[Figure: least connections with optional weight and slow start; a new connection is assigned to the server with the fewest existing connections.]
Load Predictors monitor one or more server load characteristics to pick the optimal, or least-loaded, server for a new connection.
The least connections load-balancing algorithm monitors the number of connections assigned to
a system as an approximation of the current load being placed on a server. This algorithm then
chooses the least-loaded server to receive the new connection. Servers in the server farm can
have a weighting factor applied to divide connection requests in a manner proportional to the
relative capabilities of each server.
The least connections load-balancing algorithm has an optional slow start capability for
services that have just been placed into service. This capability causes the algorithm to increase
the new connections to a server over a period of time. This allows a new server to take on more
load gradually, rather than being hit with every connection until it has the same load as other
servers in the server farm.
Health monitoring should be used with load Predictors to keep a failed server from black-holing all requests. For example, if a server fails, it will have the least number of connections.
This server will be selected for every new connection by the least connections load-balancing
algorithm.
In the server farm shown in the figure, you see three servers. The top server has a large number
of connections, the middle server has a moderate number of connections, and the bottom server
has few connections. A least connections Predictor will select the bottom server for a new
connection.
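Such a server farm might be configured as follows (a sketch only; the server farm name, rserver names, weights, and slow-start duration are hypothetical):

```
serverfarm host web-farm
  predictor leastconns slowstart 300
  rserver websrv1
    weight 16
    inservice
  rserver websrv2
    weight 8
    inservice
```

With these weights, websrv1 would be offered roughly twice as many connections as websrv2, and a newly in-service server ramps up over the 300-second slow-start period.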
Traffic pattern Predictors are designed to spread traffic around based on characteristics of the
traffic being load balanced.
Round robin selects successive servers each time a new connection comes in. An optional
weight specification configures round robin to select a real server multiple times to allocate
connections proportionately to the specified weight. This is used in situations in which various
servers in the server farm have different connection-processing capacities.
The hash-based traffic pattern Predictors algorithmically analyze various characteristics of the
incoming requests and spread the traffic around accordingly. The source, destination, or both IP
addresses in the request packet can be hashed after applying an optional network mask. The
value of an HTTP cookie, an HTTP header field, or part or all of the URL can be hashed.
Without health monitoring, the failure of X servers in a server farm with N real
servers will cause approximately X out of every N connections to be assigned to dead servers;
therefore, the use of health monitoring is recommended.
The server farm shown in the figure is using the round-robin Predictor and has selected the
bottom server for the next connection.
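A hash-based Predictor might be configured as follows (a sketch with hypothetical names; the mask groups clients by /24 subnet):

```
serverfarm host web-farm
  predictor hash address source 255.255.255.0
```

All clients in the same /24 subnet hash to the same real server.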
The first step in configuring load balancing is configuring real servers. On the ACE, real
servers are configured with the rserver command, which creates a named host. Various
parameters can be specified for the host. At a minimum, the IP address must be configured
and the server placed in service with the inservice command before requests are sent to it.
Note
Taking the server out of service in the rserver configuration submode takes the server out of
service in all server farms of which it is a member.
In the configuration shown in the figure, you see two servers and place them in service.
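Because the figure is not reproduced here, the rserver configuration is roughly as follows (EXT-HOST1 matches the show rserver output later in this topic; the second hostname and both IP addresses are assumptions):

```
rserver host EXT-HOST1
  ip address 192.168.1.11
  inservice
rserver host EXT-HOST2
  ip address 192.168.1.12
  inservice
```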
Monitoring rservers
To view rserver state, use the following command:
ACE/Lab-OPT-17# show rserver [rserver_name] [detail]
ACE/Lab-OPT-17# show rserver EXT-HOST1 detail
 rserver              : EXT-HOST1, type: HOST
 state                : INACTIVE
 description          :
 weight               : 8
 --------------------------------connections--------------------------------
 real                   weight  state         current     total
 ---+------------------+-------+-------------+-----------+------------------
The state field can show the following values:
Inband probe failed: The server has failed the in-band health probe agent.
In service: The server is in use as a destination for server load balancing client
connections.
Operation wait: The server is ready to become operational and is waiting for the
associated redirect virtual server to be in service.
Out of service: The server is not in use by a server load balancer as a destination for client
connections.
Probe failed: The server load balancing probe to this server has failed. No new
connections will be assigned to this server until a probe to this server succeeds.
Probe testing: The server has received a test probe from the server load balancer.
Ready to test: The server has failed, and its retry timer has expired; test connections will
begin flowing to it soon.
Test wait: The server is ready to be tested. This state is applicable only when the server is
used for HTTP redirect load balancing.
Testing: The server has failed and has been given another test connection. The success of
this connection is not known.
Throttle DFP: DFP (Dynamic Feedback Protocol) has lowered the weight of the server to
throttle level; no new connections will be assigned to the server until DFP raises its weight.
Throttle max clients: The server has reached its maximum number of allowed clients.
Throttle max connections: The server has reached its maximum number of connections
and is no longer being given connections.
Grouping servers together into server farms is the next step in a load-balancing configuration.
Server farms are defined with the serverfarm command. In the server farm configuration
submode, various server farm-specific parameters can be entered, including the load-balancing
algorithm to be used and the hosts that are part of the server farm.
In the configuration shown, you see a server farm called ext-servers. You specify the default
load balancing algorithm with the predictor roundrobin command. You also specify that the
two servers defined earlier are part of the new server farm, and that each of these real servers is
in service, from the perspective of the server farm.
Note
Taking a server out of service in a server farm keeps it from receiving traffic based on
load-balancing decisions made through that specific server farm. The system is still in service
and available to other server farms of which it is a member.
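The ext-servers farm described above would look roughly like this (the rserver names are assumptions; predictor roundrobin is shown explicitly even though round robin is the default):

```
serverfarm host ext-servers
  predictor roundrobin
  rserver EXT-HOST1
    inservice
  rserver EXT-HOST2
    inservice
```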
On the ACE, load balancing is a feature that is controlled through the Modular Policy CLI. The
actual load-balancing policy is defined in a policy map of type loadbalance, which allows you
to classify traffic based on HTTP characteristics and perform different actions for different
classifications. For Layer 4 load balancing, you are not interested in analyzing Layer 7
characteristics. Therefore, the load-balancing policy map uses class-default to specify the
actions to be taken on all traffic that is to be load balanced. In the figure, you have created a
policy map called slb-logic for this specific purpose.
A load-balancing policy map is associated with a traffic stream through an action specification
in a Layer 3 and 4 policy map, which must now be configured.
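A minimal sketch of the slb-logic policy map, assuming the ext-servers farm from the previous step:

```
policy-map type loadbalance first-match slb-logic
  class class-default
    serverfarm ext-servers
```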
The first part of configuring the Layer 3 and 4 policy map is to create a Layer 3 and 4 class map
to classify the traffic that is to be load balanced. Traffic is matched using the match
virtual-address command to specify the IP address, optional network mask, and optional
destination port, which defines the VIP.
Caution
A match virtual-address command must be used rather than a match destination-address
command. The match virtual-address command not only matches traffic, but also
has the side effect of defining a VIP.
In the figure, you have created a class map named external-vip, which will match web requests
to an IP address of 198.133.219.25.
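The external-vip class map would look roughly like this (restricting the match to TCP port 80 for web requests is an assumption):

```
class-map match-all external-vip
  2 match virtual-address 198.133.219.25 tcp eq www
```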
Now you are ready to create a Layer 3 and 4 policy map. This policy map will specify the
actions for traffic destined to the VIP. Two commands are required to activate load balancing.
First, the loadbalance vip inservice command places the VIP defined by the matching class
map into service. Second, the loadbalance policy command associates traffic destined to the
matching VIP with the load-balancing policy map.
In the figure, you configure a policy map called client-vips, which load balances traffic
destined for the VIP previously defined in the external-vip class map. Load-balancing
decisions for this traffic are to be controlled by the policy in the slb-logic policy-map.
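A sketch of the client-vips policy map, using the names from the surrounding discussion:

```
policy-map multi-match client-vips
  class external-vip
    loadbalance vip inservice
    loadbalance policy slb-logic
```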
Finally, the Layer 3 and 4 policy map must itself be associated with a traffic stream by
specifying the service-policy input command on one or more interfaces.
Note
No traffic is allowed into the ACE unless it is permitted via an ACL. Management-type class
maps implicitly define an ACL when they are activated on an interface; however, this is not
true for Layer 3 and 4 type policy maps.
In the example, you define an access list called from-231 to allow traffic to the VIP. You
associate this access list and the Layer 3 and 4 policy map to the VLAN 231 interface. Now,
traffic can arrive for the VIP on VLAN 231 and be load balanced to one of the two real servers.
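A sketch of the access list and interface configuration (the ACL entry details and line number are assumptions; only the from-231 name, the VIP, and VLAN 231 come from the example):

```
access-list from-231 line 10 extended permit tcp any host 198.133.219.25 eq www
interface vlan 231
  access-group input from-231
  service-policy input client-vips
  no shutdown
```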
Note
You use this Layer 4 load-balancing configuration as a base configuration for many other
configurations in this course.
Class map and policy map statements are not saved in the ACE configuration files in the same
order in which you have configured them in the example. Listed in the figure are the relevant
portions of running-config showing the order in which resource definitions are stored in the
configuration files.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
Server load balancing is used to allow multiple real servers to
respond to client requests directed to a virtual IP address.
Several load-balancing algorithms are available to make real
server selection.
Load-balancing decisions can be made by analyzing the Layer 4
protocol fields.
Lesson 7
Health Monitoring
Overview
In this lesson, you will learn how to describe the health monitoring capabilities of the Cisco
4710 Application Control Engine (ACE) appliance.
Objectives
Upon completing this lesson, you will be able to describe the health monitoring capabilities of
the ACE appliance. This includes being able to meet these objectives:
Two different types of health monitoring can be performed by Cisco load balancers. Passive
health checks monitor the responses from real servers to the requesting client. If error
conditions are detected in the server response, the server is removed from service. Active health
monitoring functions in the load balancer send out periodic requests called probes to the real
servers. If the server does not respond as expected, the server is removed from service.
Two types of passive health checks are possible. These are in-band health monitoring, which is
supported on the content services switch (CSS), and HTTP return code monitoring, which is
supported on both the ACE and the CSS.
In-band health monitoring analyzes the TCP connection setup and teardown packets from the
server. If a server does not respond to a synchronization (SYN) packet or sends a reset (RST)
packet, an error is detected.
Return code monitoring analyzes the HTTP return codes present in server HTTP responses.
User-configured ranges of return codes are considered to be indicative of an error.
Probe defines what to attempt, what to look for, and the time
interval between each instance.
Probe can be attached to a real server or server farm.
Active health checks are implemented by defining probes. These probes define the
communication to be attempted with a server to analyze its fitness for service, and the results
that are expected from a healthy server. The communication defined by the probe is repeated at
regular intervals defined in the probe configuration. Probes can be associated with a particular
real server, or with a server farm. Probes associated with a server farm are active on each of the
real servers in the server farm.
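For example, a probe attached at the server farm level runs against every rserver in the farm (a sketch; the probe, farm, and rserver names are hypothetical):

```
probe icmp ping-probe
  interval 15
serverfarm host web-farm
  probe ping-probe
  rserver websrv1
    inservice
  rserver websrv2
    inservice
```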
[Figure: the built-in probe types: ICMP, TCP, UDP, Echo TCP, Echo UDP, Finger, HTTP, HTTPS, FTP, Telnet, DNS, SMTP, POP3, IMAP, and RADIUS.]
There are 15 built-in probe types that can be configured. Additional probe processing can be
created by using TCL scripts. The built-in probe types are:
ICMP: Internet Control Message Protocol (ICMP) echo succeeds if an ICMP echo-reply is
received. The ICMP probe operates the same as the ping command.
Generic TCP probe: Open a connection with the server and disconnect with TCP FIN
(default) or TCP RST. Optional parameters allow a string to be sent over the connection
and the response compared to an expected response.
Generic UDP probe: Sends a User Datagram Protocol (UDP) packet. The probe is
considered successful if no ICMP error is received. For UDP ports that respond to packets,
the string to be sent in the probe, and the expected response, can be configured.
Echo tcp: Use the echo service via TCP to send a string to the server, which will echo it
back verbatim.
Echo udp: Use the echo service as with the TCP version, but via UDP.
Finger: Send a finger request to a server. The ID to be fingered can be configured in the
probe.
HTTP probe: Sends a GET/HTTP/1.1 request and compares the return code and the text
of the result to configured expectations.
HTTPS probe: Similar to the HTTP probe, except that the request is sent via an encrypted
Secure Sockets Layer (SSL) connection. Authentication parameters for the SSL session can
be configured in the probe.
FTP probe: A TCP connection is opened with an FTP server and then the FTP QUIT
command is issued. The status code returned by the FTP server is checked against expected
return codes.
Telnet probe: Makes a connection and verifies that a greeting is received from the server.
DNS probe: Makes a request to resolve the domain name configured in the probe and
verifies that the returned IP address is one that the probe is configured to expect.
SMTP probe: Connects to a Simple Mail Transfer Protocol (SMTP) server and sends an
SMTP HELO command. The status code of the response is compared against a list of
expected status codes.
POP3 probe: Makes a TCP connection to a Post Office Protocol (POP) server with
credentials configured in the probe. An optional command is sent to the POP server.
IMAP probe: Makes a TCP connection to an Internet Message Access Protocol (IMAP)
server with credentials configured in the probe. An optional command is sent to the IMAP
server.
RADIUS probe: Sends a query to a RADIUS server for a configured username, password,
and shared secret. Optionally, the ACE can be configured to specify another IP address in
the request as the address of the Network Access Server requesting the authentication.
Server Status
[Figure: a server is taken out of service after faildetect consecutive probe failures and returned to service after passdetect probe successes.]
Two probe attributes control the transitioning of servers from in service to out of service.
The faildetect parameter defines the number of consecutive probe failures that, when reached,
will cause the server to be removed from service.
A server that is listed as out of service is probed periodically to determine if it has come back
into service. The passdetect parameter defines the number of successful probes that, when
reached, will lead to the server being placed back in service.
Probe configuration begins with the probe command, which is used to specify the probe type
and name. This command also places you in the probe configuration submode. In this submode,
you can configure the following probe attributes:
Description: This attribute is used to provide user documentation for the probe.
IP address: This attribute specifies an alternate address to which to send the probe. If this
command is not configured, the IP address of the real server being probed is used.
Port: This attribute specifies the TCP or UDP port number to be used in the request. Each
of the probe types has a default port number that matches the protocol used in the probe.
Faildetect: This attribute specifies the number of fail probes that result in a server being
taken out of service.
Passdetect: This attribute specifies the interval at which failed servers are probed for
possible placement back in service, as well as the number of successful probes necessary
before a server is returned to service.
Receive: This attribute specifies the number of seconds the ACE waits for a response to a
probe before the probe is deemed to have failed.
ICMP probes are defined with all the general probe configuration statements except the port
statement, which is not supported for ICMP probes.
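Putting these attributes together, an ICMP probe might be configured as follows (all values are hypothetical; note that no port statement appears):

```
probe icmp ping-probe
  description hypothetical ICMP health check
  interval 15
  faildetect 3
  passdetect interval 60
  passdetect count 3
  receive 5
```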
[Figure: the server's first interface has IP address s.s.s.s and MAC ssss.ssss.ssss; a second interface has IP address p.p.p.p and MAC pppp.pppp.pppp. Destination addresses used by probe packets:]
Probe Definition               Destination IP    Destination MAC
No ip address command          s.s.s.s           ssss.ssss.ssss
ip address p.p.p.p             p.p.p.p           ssss.ssss.ssss
ip address p.p.p.p routed      p.p.p.p           pppp.pppp.pppp
Destination IP and MAC address fields in packets sent by probes are populated with values
determined by the ip address command configured in the rserver being probed, and optionally
in the probe itself. The network diagram is used to illustrate the options. In the network shown
in the figure, you have a server with an IP address of s.s.s.s, which is configured as a real server
(rserver) on the ACE. The associated MAC address of this interface is ssss.ssss.ssss. The server
in the network has a second interface with an IP address of p.p.p.p, and a MAC address of
pppp.pppp.pppp, which will be used for some probing activities.
Address Resolution Protocol (ARP) entries are maintained for the IP address specified by the ip
address command in the rserver configuration. In the network shown in the figure, the rserver
is configured with an ip address s.s.s.s statement. ARPing for this address results in a MAC
address of ssss.ssss.ssss.
[Figure: FWLB topology with three firewalls at 172.16.1.2, 172.16.2.2, and 172.16.3.2, and a back-end server at 10.0.1.2. The external ACE is configured with serverfarm fws containing rserver fw1, rserver fw2, and rserver fw3, and a probe configured with an ip address statement.]
Firewall load balancing (FWLB) can be used to deploy firewall protection for a network whose
bandwidth requirements exceed the capacity of a single firewall. Two ACE appliances are
required for FWLB: one external to the firewalls, and one internal. The rservers in an FWLB
configuration are the interfaces of the firewalls through which traffic is sent. Several health
monitoring levels are possible in an FWLB configuration.
The simplest health monitoring would be to have each ACE appliance probe the firewall
interface to which it is attached. This would require the firewalls to respond to these ping
requests. Success of a probe verifies that connectivity is functional between the firewall and the
ACE appliance and that the firewall is responding to the probe requests. Several failure modes
involving the firewall process and connectivity beyond the firewall are not detected with this
health monitoring technique.
More-advanced health monitoring is possible by configuring the ip address command in the
probe definition. A probe can be configured to mimic real client traffic that is expected to
transit the firewall. However, the packets generated by these probes should not be destined to
the firewall, but through the firewall. This is accomplished by generating packets that have the
destination IP address of a resource beyond the firewall, but which are switched at Layer 2 to
the firewall being probed.
This technique is illustrated in the figure, with a sample FWLB configuration using three
firewalls. The configuration statements shown would be configured on the external ACE to
probe the firewalls by sending a ping through each firewall to the server behind the firewalls.
Packets generated by this probe are sent with a destination MAC address determined by
ARPing the rserver (that is, the firewall) being probed. The destination IP address in the probe
packet is 10.0.1.2, which is the IP address of the back-end server. The resulting packets are
switched through the firewall being probed, and to the back-end server. Success of this probe
confirms that the external ACE is able to get traffic through the probed firewall to a resource,
and verifies connectivity on both sides of the firewall, as well as the forwarding process of the
firewall. By using a probe type that mimics real client traffic, a probe could also be used to
verify that the correct firewall rules are active on the firewall that is probed.
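The external-ACE configuration sketched in the figure can be written out as follows. This is a minimal sketch: the probe name fw-transit is illustrative, while the rserver names, the serverfarm name, and the back-end server address 10.0.1.2 come from the figure.

```
probe icmp fw-transit
  ip address 10.0.1.2

serverfarm fws
  probe fw-transit
  rserver fw1
    inservice
  rserver fw2
    inservice
  rserver fw3
    inservice
```

Because the ip address command is used without the routed keyword, each probe packet carries the destination IP address 10.0.1.2 but is sent to the MAC address resolved for the firewall (rserver) being probed, so it is switched through that firewall.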
Probing DSR

rserver srv1
  ip address 172.16.1.2
rserver srv2
  ip address 172.16.1.3
serverfarm srvs
  rserver srv1
  rserver srv2
probe icmp pinger
  ip address 198.133.219.25
class-map match-all external-vip
  2 match virtual-address 198.133.219.25

(In the figure, each server also carries the VIP 198.133.219.25 on a loopback interface.)
Servers deployed in a DSR configuration often require the use of probes configured with the ip
address command to effectively test server health. DSR configurations result in client requests
that are load balanced to an rserver, while maintaining a destination address of the virtual IP
(VIP) that the client used in the original request. Server applications are therefore configured to
expect traffic received to be destined to the VIP address.
Mimicking real client traffic with the ACE health monitoring probes can be done by
configuring the ip address command. Packets generated by a probe with this command have
the MAC address determined by the IP address configured in the rserver, while the destination
IP address is the VIP configured in the probe definition.
This technique is illustrated in the network and configurations shown in the figure. Probes
generated by the pinger probe have a destination address of 198.133.219.25, which is the VIP
address used by clients. When this probe is run for the defined rservers, the probe packets have
the destination address determined by ARPing the IP address of the rserver.
Domain Name System (DNS) probes add two more attributes to the general probe
configuration:
Domain: The domain name to be resolved by the DNS query, configured with the domain
command.
IP address: An IP address that is expected as the result of the DNS query. Multiple expect
address commands can be used to configure a list of possible valid responses.
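A DNS probe using these attributes might look like the following minimal sketch; the probe name, domain, and expected addresses are illustrative:

```
probe dns dns-check
  domain www.example.com
  expect address 192.168.1.11
  expect address 192.168.1.12
```

The probe succeeds if the DNS response resolves the domain to any one of the configured expect address values.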
The RADIUS probe also adds two more attributes to the general probe configuration options.
These are:
Credentials: This option defines the credentials to be used in the RADIUS request,
including the username/password combination for the user being authenticated and the
server's shared secret key.
NAS IP address: This option defines the IP address used in the Network Access Server
(NAS) field of the RADIUS request. If this parameter is not configured, the IP address used
to send the request to the RADIUS server is used.
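A RADIUS probe combining these attributes might be sketched as follows; all names, credentials, and addresses are illustrative:

```
probe radius radius-check
  credentials user1 pass1 secret sharedkey1
  nas ip address 192.168.10.1
```

If the nas ip address command were omitted, the ACE would place the IP address it uses to reach the RADIUS server in the NAS field of the request.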
The UDP-based probes Echo UDP and generic UDP add two attributes to the general probe
configuration:
Send-data: This attribute defines the data to be sent in the UDP probe packet.
Expect regex: This attribute defines a regular expression used to match the results received
from the server.
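For example, a generic UDP probe against an echo-style service might be sketched as follows (the port number, payload, and probe name are illustrative):

```
probe udp udp-check
  port 7
  send-data HEALTHCHECK
  expect regex HEALTHCHECK
```

The probe passes only if the returned datagram matches the configured regular expression.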
The TCP-based probes Echo TCP, Finger, generic TCP, and Telnet add four more parameters
to the general probe configuration:
Open: This parameter defines the amount of time allowed for the server to respond to the
initial SYN packet.
Send-data: This parameter defines the data to be sent over the TCP connection after it is
established.
Expect regex: This parameter defines a regular expression used to match the returned
traffic.
Connection term forced: This parameter specifies that the TCP connection should be
aborted with a RST packet. If this command is omitted from the probe configuration, the
TCP connection is closed gracefully with FIN packets.
count <number>
receive <recv-timeout>
open <open-timeout>
send-data <expression>
expect regex <string> [offset <number>]
connection term forced
expect status <min-number> <max-number>
FTP probes add one attribute to the TCP-based probe configuration: a range of expected status
codes. Multiple expect status commands can be configured to cause multiple ranges of status
codes to cause the server to pass the probe. At least one expect status command must be
configured for the probe to succeed.
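A minimal FTP probe sketch illustrating the expect status attribute (the probe name and status ranges are illustrative):

```
probe ftp ftp-check
  expect status 220 230
  expect status 331 331
```

Either range satisfies the probe: a status code between 220 and 230, or exactly 331.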
count <number>
receive <recv-timeout>
open <open-timeout>
send-data <expression>
expect regex <string> [offset <number>]
connection term forced
credentials <username> [<password>]
header <field-name> header-value <value>
request method {get | head} url <path>
expect status <min-number> <max-number>
hash <value>
HTTP probes add attributes to the general TCP probe configuration that are used to define the
HTTP request sent to the server:
Credentials: This attribute defines the username and password used to access password-protected resources.
Header: This attribute allows the specification of many HTTP header fields. Multiple
header commands can be configured in a single probe.
Request method: This attribute defines the HTTP request method used and the URL to be
requested from the web server.
Expected status: This attribute defines the range of HTTP status codes that indicate that
the probe was successful. Multiple expect status commands can be configured in one
probe.
Hash: A Message Digest 5 (MD5) hash value can be supplied for the page to be retrieved.
If this parameter is not configured in the probe configuration, the ACE computes the hash
on the first probe of a server. Subsequent probes compare the hash value to the configured
or remembered hash value.
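Combining these attributes, an HTTP probe might be sketched as follows; the URL, header value, and credentials are illustrative:

```
probe http web-check
  credentials wwwuser wwwpass
  request method get url /health/index.html
  header Host header-value www.example.com
  expect status 200 299
```

Any 2xx response to the GET request marks the server as passing the probe.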
count <number>
receive <recv-timeout>
open <open-timeout>
send-data <expression>
expect regex <string> [offset <number>]
connection term forced
credentials <username> [<password>]
header <field-name> header-value <value>
request method {get | head} url <path>
expect status <min-number> <max-number>
hash <value>
ssl cipher RSA_ANY | <cipher-suite>
ssl version SSLv2 | SSLv3 | TLSv1
HTTPS adds SSL-specific attributes to the HTTP probe, allowing the probe to use particular
SSL versions and cipher suites.
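For example, an HTTPS probe restricted to a particular SSL version might be sketched as follows (the probe name, URL, and values are illustrative):

```
probe https secure-check
  ssl version TLSv1
  ssl cipher RSA_ANY
  request method get url /index.html
  expect status 200 200
```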
The IMAP probe adds several attributes to the base TCP probe configuration, which define the
IMAP user credentials, mailbox, and command to be sent to the server.
count <number>
receive <recv-timeout>
open <open-timeout>
send-data <expression>
expect regex <string> [offset <number>]
connection term forced
credentials <username> [<password>]
request method <command>
The POP probe adds two attributes to the base TCP probe configuration, which define the POP
user credentials and command to be sent to the server.
Activating Probes
probe icmp pinger
probe http get-index
request method get url /index.html
rserver host ext-host1
ip address 192.168.1.11
inservice
probe get-index
rserver host ext-host2
ip address 192.168.1.12
inservice
serverfarm host ext-servers
predictor roundrobin
probe pinger
rserver ext-host1
inservice
rserver ext-host2
inservice
Here you extend the real servers and server farm defined earlier by adding health monitoring.
First, you define two probes: an ICMP probe called pinger and an HTTP probe called
get-index. Server ext-host1 will be probed with the get-index probe. Both servers will be probed
by the pinger probe, because they are both members of the ext-servers server farm. A failure of
either probe directed to ext-host1 will mark the server as out of service.
Current probe status and statistics can be displayed with the show probe detail command, as
shown in the figure.
The ACE adds detail to the disconnect error messages with 49 different explanatory messages.
Show Commands

switch/context# show stats probe
+------------------------------------------+
+----------- Probe statistics -------------+
+------------------------------------------+
 ----- icmp probe -----
 Total probes sent     : 0      Total send failures : 0
 Total probes passed   : 423    Total probes failed : 25
 Total connect errors  : 0      Total conns refused : 0
 Total RST received    : 0      Total open timeouts : 0
 Total receive timeout : 0
 . . . . . .
Global statistics grouped by probe type can be displayed with the show stats probe command.
Passive health checks are configured for each server farm. The retcode min-code max-code
check count command is used to configure passive return code parsing. Only one range of
return codes can be configured per server farm. In the configuration shown, you count the
HTTP return codes between 400 and 404, inclusively.
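The passive return code check described above can be sketched as follows, counting HTTP return codes 400 through 404 on the server farm (the serverfarm name is the one used earlier in this lesson):

```
serverfarm host ext-servers
  retcode 400 404 check count
```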
: R1
 -------------------------------------
 return code   action   total count
 +------------+--------+------------+
 400           count
 401           count
 402           count
 403           count
 404           count
The show serverfarm name retcode command can be used to display the return code statistics
maintained by the passive health check monitoring. In the figure, you see the results of the
previous configuration definitions.
Using TCL
TCL Interpreter Release 8.44.
256 script files can be loaded.
15 sample scripts provided.
Scripts must be loaded into memory to be used.
script file <index> <script-name>
The TCL 8.44 interpreter is built into the ACE operating system and is used to run TCL scripts,
which are used to implement scripted health probes. Scripts must be loaded into memory before
they are used by a scripted probe. This is accomplished with the script file command in
configuration mode. This command loads a script from the disk0: file system into one of the
256 available memory slots used for scripts. A script must be unloaded and reloaded to reflect
changes in the source script file.
Note
To load a script into memory, the script must be in the disk0: directory. The ACE does not
load script files that are in a disk0: subdirectory.
The Cisco Technical Assistance Center (TAC)-supported scripts that come with the ACE are:
CHECKPORT_STD_SCRIPT
ECHO_PROBE_SCRIPT
FINGER_PROBE_SCRIPT
FTP_PROBE_SCRIPT
HTTP_PROBE_SCRIPT
HTTPCONTENT_PROBE
HTTPHEADER_PROBE
HTTPPROXY_PROBE
IMAP_PROBE
LDAP_PROBE
MAIL_PROBE
POP3_PROBE
PROBENOTICE_PROBE
RTSP_PROBE
SSL_PROBE_SCRIPT
TFTP_PROBE
The configuration of scripted probes adds one attribute to the general probe configuration,
which specifies the script name and optional arguments to be passed to the script.
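For example, loading the supplied ECHO_PROBE_SCRIPT into a memory slot and attaching it to a scripted probe might look like this (the slot index and probe name are illustrative):

```
script file 1 ECHO_PROBE_SCRIPT

probe scripted s1
  script ECHO_PROBE_SCRIPT
```

Arguments, if the script accepts any, are appended after the script name on the script command.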
Show Commands

switch/context# show script code ECHO_PROBE_SCRIPT
#!name = ECHO_PROBE_SCRIPT
################################################################################
# scriptname : ECHO_PROBE
#
# Description :
#   Script sends a text string to echo server and expects server to echo back.
#   Probe success only if server returns the exact string sent by script.
#
# ACE version : 1.0+
#
# Parameters :
#   [debugFlag]
#
# debugFlag - default 0. Do not turn on while multiple probe suspects configured
#
# Example config :
#
The contents of the script associated with a scripted probe can be displayed with the show
script code command. In the figure, you see the start of a script called
ECHO_PROBE_SCRIPT. This script is a sample script supplied with the ACE.
(Output abbreviated: the display shows the script name ECHO_PROBE_SCRIPT, the scripted probe s1, its probe association (count=1) with rserver rs1, the exit code 30001, the child PID 1109, the exit message, the panic string, and an internal error count of 2.)
Detailed run-time TCL interpreter information and statistics for a scripted probe can be
displayed with the show script script_name probe_name server_name command. In the figure,
you see the results of runs of the ECHO_PROBE_SCRIPT script being used by the s1 probe to
monitor the health of server rs1.
Code          Description
30001         Probe successful
30002–30005   (description not shown)
30006         Script error
30007–30009   (description not shown)
TCL scripted probes communicate their success and failure status to the health monitoring
component of ACE by setting an exit or return code when they finish. The table in the figure
shows the return codes interpreted by the ACE.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
Health monitoring is used to automatically remove failing real
servers from consideration.
Active health probes periodically test the health of real servers.
HTTP error code monitoring passively monitors the health of real
servers.
TCL scripts can be used to provide custom probe logic.
Lesson 8
Objectives
Upon completing this lesson, you will be able to identify the Layer 7 processing options used to
provide advanced application networking. This includes being able to meet these objectives:
The first step in configuring Layer 7 load balancing is to create the additional servers that will
be used to serve certain types of content. Here you add two new servers, PHP-HOST1 and
PHP-HOST2, to the Cisco 4710 Application Control Engine (ACE) appliance to complement
the servers you have already defined.
The new servers must now be placed into a new server farm, as shown in the figure.
Layer 7 load-balancing decisions are made by first classifying traffic based on HTTP
characteristics. This is done by specifying one or more class maps of type http loadbalance,
which will then be used in the load-balancing policy map. Layer 7 load-balancing policy maps
often use class-default, so normally the class-maps defined are only for those traffic
characteristics that you need to send to a different server farm from the default server farm.
In the example, you create a class map called DYN-CONTENT, which matches any URL that
ends in .php.
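The class map described above can be sketched as follows; the line number is a configuration convention, and the regular expression is an assumption about how the ".php suffix" match is expressed:

```
class-map type http loadbalance match-any DYN-CONTENT
  2 match http url .*\.php
```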
Note
match http commands can use regular expressions to match portions of the URL.
There can be at most 10 cookie or header fields inspected per Layer 3 and 4 class map.
As before, the order in which you define things does not match the order in which the
configuration is stored by the ACE. The Layer 7 configuration is shown in the figure as the
ACE would store it.
Figure: nested class maps combining match-any and match-all class maps to express OR and AND criteria.
Layer 7 load-balancing class maps can be nested up to two levels. The nested class allows you
to classify traffic on an AND and OR set of criteria. Nested class maps are configured by using
the match class statement in the outer class map.
For example, CLASS2 in the figure matches any request in which the user agent header is
Firefox and the URL requested is either /news or /sports.
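A sketch of the nested arrangement described for CLASS2; the inner class map name, the header-value regular expression, and the match class-map keyword form are assumptions based on the description above:

```
class-map type http loadbalance match-any URL-LIST
  2 match http url /news
  3 match http url /sports

class-map type http loadbalance match-all CLASS2
  2 match http header User-Agent header-value .*Firefox.*
  3 match class-map URL-LIST
```

CLASS2 matches a request only when both the User-Agent criterion and one of the nested URL criteria are satisfied.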
By default, all regular expressions are case sensitive, but this can be changed in the
parameter map and applies to all regex parsing in a class associated with the parameter
map.
If you want only certain regular expressions to use case-sensitive matching, use the range
regex operator. For example, use [Ii][Nn][Dd][Ee][Xx] instead of index.
URLs that contain encoded characters are decoded before being matched against regular
expressions. This process is also known as deobfuscation. The encoded characters that are
supported are single-byte characters encoded as %hh and Unicode characters encoded as
%Uhhhh, where h is any hexadecimal digit. Multibyte characters greater than 8 bits are
treated as 255. User-defined regular expressions should be configured to match the
canonical form of a URL. For example, configure match http url .*gif and not match http
url .*%67%69%66.
Regular expressions with wildcard rules of the form .*keyword.* are expensive from a
memory perspective. Eleven such rules can exhaust available regular expression memory.
More-sophisticated syntax can sometimes be used to combine two rules, for example, using
.*[ab].* to replace the rules .*a.* and .*b.*.
Figure: HTTP persistent connection between a client and a web server — after the TCP handshake (SYN, SYN_ACK, ACK), the client sends GET /a.gif HTTP 1.1 and receives HTTP/1.1 200 OK in a single packet; on the same connection it then sends GET /b.gif HTTP 1.1 and receives a response spanning two packets (200 OK plus a continuation); the connection is finally closed with a FIN_ACK exchange.
A persistent connection is used to retrieve multiple resources from a web server without
incurring the overhead of setting up multiple TCP connections. Requests are sent individually
and responded to individually.
The figure illustrates this interaction as follows:
The client opens a TCP connection and sends a GET request for the first object.
The server replies with the requested data. In this example, a small image fits within one
packet.
Instead of closing the TCP connection, the client sends another GET request.
The server replies with the requested data. Note that this time, the requested data spans two
packets.
The client has no more objects to request, and initiates the TCP close.
Note
This client is not using HTTP 1.1 pipelining. Therefore, the client waits for the entire reply to
a GET request before sending out a new request.
Figure: HTTP request pipelining — after the TCP handshake, the client sends GET /a.gif HTTP 1.1 and GET /b.jpg HTTP 1.1 back-to-back before any response arrives; the server then returns both responses in order (the second spanning two packets), and the connection is closed with a FIN_ACK exchange.
Request pipelining takes persistent connections to an additional level. With request pipelining,
multiple requests are sent by the client before the server responds to any of them. These
multiple requests can all be sent in one TCP packet.
The figure illustrates the same requests as the previous figure. However, in this example, the
requests are pipelined as follows:
The client opens a TCP connection and sends both GET requests before any response
arrives.
The server replies with the data requested in the first request. In this example, the data is a
small image that fits within one packet.
The server replies with the data requested in the next request. In this example, the data is a
larger image that spans two packets.
The client has no more objects to request, and initiates the TCP close.
Persistent and pipeline requests can be load balanced as separate requests. This is configured by
creating an HTTP parameter map that specifies the persistence-rebalance command. This
parameter map is then used to modify HTTP processing in a Layer 3 and 4 policy map by
specifying an appl-parameter http advanced-options action statement.
Persistent rebalance causes the ACE to handle pipelined and persistent connections as follows:
Multiple persistent HTTP requests on the same TCP connection are balanced to
(potentially) different rservers, if persistence rebalance is configured.
Pipelined requests are buffered and parsed only after the previous response has been
completely transmitted. In other words, the requests are unpipelined.
To set the maximum number of bytes to parse for cookies, HTTP headers, and URLs, use the
set header-maxparse-length bytes command in HTTP parameter map configuration mode.
The bytes argument specifies the maximum number of bytes to parse for the total length of all
cookies, HTTP headers, and URLs. Enter an integer from 1 to 65535. The default is 2048 bytes.
The length-exceed {continue | drop} command specifies the action to take if the header
exceeds the maximum parse length. The options are:
continue: Continue load balancing. When you specify this keyword, the persistence-rebalance
command is disabled if the total length of all cookies, HTTP headers, and URLs
exceeds the maximum parse length value.
drop: Discard the traffic when the maximum parse length is exceeded.
In the example shown in the figure, you have modified the Layer 4 load-balancing
configuration to include persistent rebalancing.
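A minimal sketch of such a configuration; the parameter map, policy, and class names are illustrative, and the parse-length value is an example:

```
parameter-map type http HTTP-ADV
  persistence-rebalance
  set header-maxparse-length 4096
  length-exceed continue

policy-map multi-match CLIENT-VIPS
  class EXT-VIP
    loadbalance vip inservice
    loadbalance policy SLB-LOGIC
    appl-parameter http advanced-options HTTP-ADV
```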
 . . .                : 0    , HTTP requests      : 7
 Reproxied requests   : 0    , Headers removed    : 0
 HTTP chunks          : 0    , Pipelined requests : 2
The show stats http | include requests command can be used to display statistics relevant to
persistent rebalance.
Server Reuse
This topic explains the reuse of ACE-to-server connections.
TCP Reuse
Figure: TCP reuse — multiple client connections (TCP1, TCP2, TCP3) are multiplexed onto pooled ACE-to-server connections (ACE-TCP1 in Pool1, ACE-TCP2 in Pool2).
TCP connections from the ACE to real servers that are being used for HTTP requests can be
reused by other clients after the full HTTP response has been received. The ACE performs this
function by using HTTP 1.1 persistence for connections to the real servers. TCP connections
are opened in response to client requests and are assigned to a connection pool when the client
has finished using the connection. Subsequent client requests can then be sent to the server over
the open connection. Clients are assigned to connections in the connection pool by matching
the TCP options of the new client connection to the server connection with the best fit of TCP
options.
HTTP 1.1 persistence headers are analyzed in the incoming client request. Any connection
close headers are dropped, and a connection keepalive header is added as necessary. This sets
the request up to maintain the connection with the server.
HTTP responses are analyzed so that the end of the response can be identified. For resources of
fixed length, the content-length header is parsed and the response bytes are counted until the
full response has been received. For resources of variable length that are chunk encoded, the
chunk-encoding fields are analyzed to find the end of the response.
A server connection is placed in a connection pool for reuse after the full HTTP response has
been received, and the client either closes the connection or is rebalanced. Connection pools are
maintained on a per-server, per-server-farm basis.
 . . .                : 1    , HTTP requests      : 4
 . . .                : 0    , Headers removed    : 1
 Headers inserted     : 1    , HTTP redirects     : 0
The show stats http | include reuse and show np np_number me-stats -s icm | grep Reuse
commands can be used to display statistics relevant to TCP reuse.
Session Persistence
This topic explains session persistence.
Figure: a shopper's session spread across three servers by round-robin load balancing — (1) select an item, (2) buy, (3) check out, only to find the cart empty.
Many web applications require multiple interactions between the client and the server. The
challenge with these applications is distinguishing which client is which, when a request is
received by the server. Often the solution is to establish a session ID that is transmitted by the
client with each request. This session ID is then used by the server to retrieve stored
information about former interactions with this client.
Load-balancing applications, such as the ACE, create a potential problem with this approach to
multiple interactions. For example, the shopper in the figure is using an e-commerce
application to purchase an item from a website. Simple round-robin load balancing can result in
the following sequence of interactions:
1. The shopper retrieves a page with details about a product of interest. Load balancing
assigns this request to the top server. The server creates a session ID and sends it along
with the rest of the response to the client.
2. The shopper presses the Buy Now button. The resulting request contains the session ID and
is assigned to the middle server. A record is created in the shopping cart database,
associating the item selected to the session ID. A page is built and returned to the client
with confirmation of the buy decision and checkout link.
3. The shopper presses the checkout link. The resulting request is assigned to the bottom
server. This server uses the session ID in the client request to retrieve information about
what items are in the shopping cart. Finding no entries in the shopping cart database, the
server includes an indication to the client that the cart is empty.
Note
The session ID can be carried in various places, including cookies and the URL.
Figure: with stickiness, the shopper's select and buy requests are all directed to the server that handled the first request, and the checkout completes normally.
The solution to the shopping cart problem and similar problems is session persistence, also
known as stickiness. Stickiness modifies the content-switching decision process. When a
connection first matches certain configured criteria, the ACE makes an entry in the sticky
database. This entry stores the connection criteria that were matched and the results of the
load-balancing decision. Stickiness criteria can be matched on traffic in either direction. For
example, if a cookie is being used for stickiness, the ACE can match the set cookie portion of
the response from the server or the cookie portion of the request from the client.
The shopper in the diagram is using an e-commerce application to purchase an item from a
website. With stickiness, the following sequence of interactions can result:
1. The shopper retrieves a page with details about a product of interest. Load balancing
assigns this request to the top server. The server creates a session ID and sends it with the
rest of the response to the client. The ACE detects the session ID and creates an entry that
associates the session ID with the top server in the sticky database.
2. The shopper presses the Buy Now button. The resulting request contains the session ID.
The ACE finds the session ID in the sticky database, and the request is assigned to the top
server. A record is created in the shopping cart database associating the item selected to the
session ID. A page is built and returned to the client with confirmation of the buy decision
and a checkout link.
3. The shopper presses the checkout link. Again, the ACE finds the session ID in the sticky
database, and the request is assigned to the top server. This server uses the session ID in the
client request to retrieve the list of items in the shopping cart and continues with the
transaction.
Sticky Methods

Figure: protocol fields available for stickiness at each layer — the IP header (source and destination IP addresses), the TCP header (source port number), and the HTTP request and its headers.
Three different methods of stickiness can be configured with the ACE appliance:
IP address stickiness: Tracks the source IP address, the destination IP address, or both
HTTP header stickiness: Tracks the value of an HTTP header field in the HTTP request
Cookie stickiness: Tracks the values of cookies in the HTTP request and response
If entries are removed from a context through changes in the resource management
definitions, then the oldest sticky database entries are removed. This can take some time.
timeout sticky-time
timeout activeconns
replicate sticky
serverfarm name1 [backup name2 [sticky] [aggregate-state]]
Within a context, multiple sticky groups can be defined, each of which uses a different sticky
group method. All three sticky methods share some common parameters, which are configured
in the sticky group.
The attributes are:
Timeout: Configured with the timeout sticky-time command; this is the number of
minutes that a sticky entry is stored in the sticky database. The timer for an entry is reset
each time a new connection or a new HTTP GET is received.
Sticky replication: Configured with the replicate sticky command, this attribute causes
the ACE to replicate sticky database entries to a standby ACE appliance.
Server farm: Configured with the serverfarm command. This command specifies the
server farm to which requests are load balanced after they have been processed by the
sticky feature. If a sticky database entry is found, the client request is sent accordingly. If
no sticky database entry is found, the request is load balanced as normal, and the
load-balancing decision is saved in the sticky database. The backup option specifies a backup
server farm to be used if the primary server farm is down. A backup server farm with the
sticky option specifies that sticky processing is performed for connections sent to the
backup server farm as well. Finally, the aggregate-state option specifies that the status of
the primary server farm is tied to all the real servers in both the primary and backup server
farms.
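Putting the common attributes together, a sticky group might be sketched as follows; the group, serverfarm, and backup serverfarm names are illustrative:

```
sticky ip-netmask 255.255.255.255 address source ext-sticky
  timeout 30
  replicate sticky
  serverfarm EXT-SERVERS backup BACKUP-SERVERS sticky
```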
IP address sticky groups are configured with the sticky ip-netmask command that is shown in
the figure. IP address sticky groups are configured to track entries in the sticky database by
source IP address, destination IP address, or source-destination IP address pair. Static entries
can be configured with the static client command. Three variations of the command syntax are
shown in the figure and correspond to the type of address tracking configured for the sticky
group.
Header Sticky
Header sticky groups are configured with the sticky http-header command that is shown in the
figure. The value in the header field specified by the header_name parameter is hashed and
tracked in the sticky database. An offset and length can be specified to limit the hash function
to a portion of the header value with the header offset command. The offset value can be from
0 to 999, and the length value can be from 1 to 1000.
Cookie Sticky
Cookie sticky groups are configured with the sticky http-cookie command, as shown in the
figure. The value in the cookie specified by the cookie_name parameter is hashed and tracked
in the sticky database. An offset and length can be specified to limit the hash function to a
portion of the cookie value with the cookie offset command. The offset value can be from
0 to 999, and the length value can be from 1 to 1000.
Secondary cookies located in the URL query string can also be used for stickiness, with the
same or a different cookie name.
To use cookie sticky for servers that do not generate cookies, the ACE appliance can be
configured to insert a set-cookie header in the HTTP response with the cookie insert command.
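A cookie sticky group with cookie insertion might be sketched as follows; the cookie name, group name, and serverfarm are illustrative:

```
sticky http-cookie SESSION-ID web-sticky
  cookie insert
  timeout 60
  serverfarm WEB-FARM
```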
In the figure, an IP address-based sticky group called ext-sticky, which caches load-balancing
decisions to the EXT-SERVERS server farm, is defined.
After being defined, sticky groups are activated by specifying the sticky group name in a
sticky-serverfarm action statement in the loadbalance policy map. In the figure, the Layer 4
load-balancing configuration is enhanced by adding an IP source address-based sticky group.
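The activation described above can be sketched as follows; the class line inside the policy map is an assumption, while ext-sticky, EXT-SERVERS, and SLB-LOGIC are the names used in the text:

```
sticky ip-netmask 255.255.255.255 address source ext-sticky
  serverfarm EXT-SERVERS

policy-map type loadbalance first-match SLB-LOGIC
  class class-default
    sticky-serverfarm ext-sticky
```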
Note
The sticky-serverfarm action statement replaces the serverfarm action statement normally
found, as shown in the figure in the modified line of the SLB-LOGIC policy map.
Caution
A maximum of ten total cookie and/or header names can be processed for each Layer 3 and
4 class map. In other words, each VIP can use up to ten cookie and/or header fields for
Layer 7 load-balancing and sticky decisions.
switch/context# show sticky database
sticky group : . . .                  type : HTTP-COOKIE
timeout : 3600                        timeout-activeconns : FALSE
sticky-entry          rserver-instance   time-to-expire   flags
---------------------+------------------+----------------+-------+
587583818767988700    R1:0               215961           -

Headers inserted : 1    , HTTP redirects : 0
The contents of the sticky database can be displayed with the show sticky database command.
If cookie sticky is being used with cookie insert, the number of cookies that are inserted can be
displayed with the show stats http command. In the figure, this command is piped through
the include filter to limit the output to the line of interest.
Protocol Inspection
This topic explains Layer 7 protocol inspection.
Protocol-specific inspection functions on the ACE appliance can be used to analyze or modify
application data that is contained in an IP packet. Compliance with RFCs can also be enforced,
as well as filtering for user-defined interactions, which are denied if attempted.
HTTP Inspection
This topic describes HTTP inspection.
HTTP inspection:
Ensures RFC compliance
Supports policy-based request filtering
The figure shows an HTTP inspection configuration. This configuration uses the class map
called HTTP-DONTS to classify HTTP requests that use the trace method or connect-style
tunneling for further processing. The HTTP-SECURITY policy map uses the reset command to cause the
ACE appliance to reset any connection that attempts to use one of the commands from the
HTTP-DONTS class map. The HTTP-SECURITY policy map also demonstrates the use of an
inline match strict-http statement.
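A sketch of the configuration just described follows. The class-map sequence numbers, the VIP class name VIP-HTTP, the multi-match policy name CLIENT-VIPS, and the inline match name RFC-CHECK are placeholders, and the inline strict-http match is shown as this document describes it; verify the exact syntax against the ACE command reference:

    class-map type http inspect match-any HTTP-DONTS
      2 match request-method rfc trace
      3 match request-method rfc connect

    policy-map type inspect http all-match HTTP-SECURITY
      class HTTP-DONTS
        reset
      match RFC-CHECK strict-http
        reset

    policy-map multi-match CLIENT-VIPS
      class VIP-HTTP
        loadbalance vip inservice
        loadbalance policy SLB-LOGIC
        inspect http policy HTTP-SECURITY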
FTP Inspection
Several of the FTP commands and responses contain embedded IP-address and port
specifications for one of the communicating partners. If the FTP connection is translated by
network address translation (NAT) by way of the ACE appliance, either because of an explicit
NAT configuration or an implicit configuration because of FTP server load balancing, these
embedded addresses must be adjusted based on the NAT tables. The FTP inspection function
performs these translations.
FTP is also somewhat unusual in that the connection from the client is a control connection that
is not used for the bulk of data transfer. Rather, a second data connection is initiated from the
server to the client when a file transfer is started. Because these connections happen
dynamically, it is difficult to account for them in statically defined access control lists (ACLs)
in a way that does not compromise security. The FTP inspection function addresses this issue
by preparing for the secondary data connection when the relevant FTP commands and
responses are processed.
The figure shows the Layer 4 load-balancing configuration that was modified to load balance
FTP server connections. The changes to the VIP and ACL definitions reflect the new port of
interest. The inspect ftp statement that engages the FTP inspection function for any
connections through the FTP VIP has also been added.
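A minimal sketch of such a configuration follows; the VIP address, class-map name, and policy names are illustrative placeholders:

    class-map match-all FTP-VIP
      2 match virtual-address 192.168.1.100 tcp eq 21

    policy-map multi-match CLIENT-VIPS
      class FTP-VIP
        loadbalance vip inservice
        loadbalance policy SLB-LOGIC
        inspect ftp

The access list permitting traffic to the interface must also be updated to allow TCP port 21, as the text above notes.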
If the strict option is enabled, each FTP command and response sequence is tracked for the
following anomalous activity:
Truncated command: The number of commas in the PORT and PASV reply commands
is checked to see if it is five. If it is not five, the PORT command is assumed to be
truncated, and the TCP connection is closed.
Incorrect command: Checks whether the FTP command ends with <CR><LF>
characters, as required by the RFC. If it does not, the connection is closed.
Size of retr and stor commands: These are checked against a fixed constant of 256. If the
size is greater, an error message is logged and the connection is closed.
Command spoofing: The PORT command is always sent from the client. The TCP
connection is denied if a PORT command is sent from the server.
Reply spoofing: The PASV reply command (227) is always sent from the server. The TCP
connection is denied if a PASV reply command is sent from the client. This prevents the
security hole when the user executes quote 227 xxxxx a1, a2, a3, a4, p1, p2.
Invalid port negotiation: The negotiated dynamic port value is checked to see if it is less
than 1024. Port numbers below 1024 are reserved for well-known services. If the
negotiated port falls in this range, the TCP connection is freed.
Command pipelining: The number of characters present after the port numbers in the
PORT and PASV reply command is cross checked with a constant value of 8. If it is more
than 8, the TCP connection is closed.
Strict FTP inspection hides the nonexistence of a user ID that is specified in the FTP user
command by changing any 530 responses from the FTP server to 331 responses. This keeps the
FTP client from being able to verify the existence of a user on the FTP server.
Additionally, each FTP command can be further filtered by specifying an FTP inspection policy
that determines which ftp commands are allowed and which are denied. If a denied command
is issued, the connection is closed.
The figure shows a strict FTP inspection configuration. This configuration uses the class map
called deletes to classify FTP requests of dele and rmd for further processing. The no-deletes
policy map uses the deny command to cause the ACE appliance to reset any connection that
attempts to use one of the commands from the deletes class map. The no-deletes policy map
also demonstrates the use of an inline match statement and the mask command to mask the
reply that the FTP server generates when it receives the ftp syst command.
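The configuration described might be sketched like this, using the names from the figure; the FTP-VIP class and CLIENT-VIPS policy names are placeholders, and the exact command set should be verified against the ACE command reference:

    class-map type ftp inspect match-any deletes
      2 match request-method dele
      3 match request-method rmd

    policy-map type inspect ftp first-match no-deletes
      class deletes
        deny
      match SYST-REPLY request-method syst
        mask-reply

    policy-map multi-match CLIENT-VIPS
      class FTP-VIP
        inspect ftp strict policy no-deletes

The strict keyword also enables the anomaly checks listed earlier, in addition to the command filtering in the no-deletes policy.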
RTSP Inspection
Real Time Streaming Protocol (RTSP) is used by RealAudio, RealNetworks, Apple QuickTime
4, RealPlayer, and Cisco IP/TV connections.
RTSP applications use the well-known port 554 (Cisco IP/TV uses 8554 as well) with TCP,
and rarely with User Datagram Protocol (UDP), as a control channel. ACE inspection supports
only TCP in conformity with RFC 2326. The TCP control channel is used to negotiate the data
channels that are used to transmit audio and video traffic, depending on the transport mode that
is configured on the client. The supported Real Data Transports (RDTs) are rtp/avp,
rtp/avp/udp, x-real-rdt, x-real-rdt/udp, and x-pn-tng/udp. The RTSP inspection feature parses
SETUP response messages with a status code of 200 and opens pinholes for data channels.
Because RFC 2326 does not require that the client and server ports must be in the SETUP
response message, the ACE appliance keeps state information to remember the client ports in
the SETUP message. For example, QuickTime places the client ports in the SETUP message,
and then the server responds with only the server ports.
The ACE appliance does not offer RTSP inspections support for multicast RTSP, RTSP over
UDP, or HTTP cloaking in which RTSP messages are embedded in HTTP transactions.
The figure shows a basic RTSP inspection configuration. The RTSP inspection feature is
activated for the traffic matched by the rtsp-conns class map by specifying the inspect rtsp
command statement in the Layer 3 and 4 policy map.
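A basic RTSP inspection configuration of this kind might be sketched as follows; the VIP address and policy name are illustrative placeholders:

    class-map match-all rtsp-conns
      2 match virtual-address 192.168.1.100 tcp eq 554

    policy-map multi-match CLIENT-VIPS
      class rtsp-conns
        inspect rtsp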
ICMP Inspection
The Internet Control Message Protocol (ICMP) inspection feature creates connection state
information for ICMP packets. ICMP response packets are checked against the outstanding
requests to ensure that only a single response packet is received for each request. Responses
that are not part of an established connection are discarded.
Unsolicited ICMP error messages are generated by intermediate and end nodes when they are
unable to process a packet. The ICMP error packet contains an embedded copy of the packet
that generated the error. The default processing performed by the ACE appliance translates the
source IP address in the ICMP error packet with the destination address that is specified in the
embedded packet, causing all ICMP errors to appear to come from the destination IP address.
ICMP error inspection extends the processing applied to ICMP error messages by allocating
additional NAT entries as needed to translate the address of all intermediate or end nodes that
are generating ICMP errors. The ICMP errors are also checked against the connection entry that
is relevant to the embedded packet, to filter out ICMP error messages that are not part of
existing connections.
ICMP inspection is configured by using the inspect icmp [error] action statement in a Layer 3
and 4 policy map. An example of this type of configuration is shown in the figure.
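As a minimal sketch, assuming a class map (here named ICMP-TRAFFIC as a placeholder) that classifies the ICMP traffic of interest:

    policy-map multi-match CLIENT-VIPS
      class ICMP-TRAFFIC
        inspect icmp error

The error keyword enables the extended ICMP error inspection described above; omit it for basic ICMP inspection.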
DNS Inspection
The Domain Name System (DNS) inspection feature matches DNS responses to requests on a
one-to-one basis. Records that are returned in a DNS response are translated based on the NAT
configuration of the ACE appliance. Maximum packet length and security length checks are
also performed on the DNS response. The security checks include enforcing a maximum label
length of 63 bytes, enforcing a maximum domain name length of 255 bytes, and detection of
compression loops in the DNS response.
DNS queries that are handled by DNS inspection do not time out like regular UDP connections.
The ACE tears down the connection associated with the query as soon as the reply to the DNS
query has been received.
Note
Only forward lookups are translated by NAT. PTR records received in response to a reverse
lookup are not translated.
Because DNS inspection does not involve load balancing, only a Layer 4 class map and policy map are
required.
The maximum-length keyword is optional; if it is omitted, the ACE does not check the DNS
packet size.
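A sketch of such a configuration follows; the VIP address, class-map name, and maximum length value are illustrative placeholders:

    class-map match-all DNS-VIP
      2 match virtual-address 192.168.1.100 udp eq 53

    policy-map multi-match CLIENT-VIPS
      class DNS-VIP
        inspect dns maximum-length 512

Omitting maximum-length disables the packet-size check, as noted above.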
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
Load-balancing decisions can be made by analyzing the Layer 7
application data.
Persistent and pipelined client connections can be rebalanced by
the ACE.
Server reuse allows ACE-to-server connections to be used for
multiple client requests.
Session persistence, also known as stickiness, is used to allow
multitransaction HTTP applications to be load balanced.
Layer 7 protocol inspection provides additional application-level
security.
HTTP inspection can be used to control HTTP transactions
according to user policy.
FTP application inspection is supported.
Several protocols can be inspected for security purposes.
2007 Cisco Systems, Inc. All rights reserved.
Lesson 9
Processing Secure
Connections
Overview
In this lesson, you will learn how to describe the ACE support for SSL protocol processing.
Objectives
Upon completing this lesson, you will be able to describe the ACE support for SSL protocol
processing. This includes being able to meet these objectives:
Symmetric Encryption
Symmetric encryption algorithms rely on a shared key that each communicating party has
access to. This same key is used by both the encryption and decryption algorithms. The
symmetrical usage of keys between Jane and John is shown in the figure. Jane and John have
shared encryption keys. Her key encrypts her plaintext, and his key decrypts her encrypted
message, and vice versa, to accomplish symmetric encryption.
The biggest challenge with symmetric encryption is the management of the shared keys. If
multiple, mutually exclusive secure channels of communication are required, multiple shared
keys need to be deployed. The shared key to be used between two parties must be distributed in
a fashion that limits access to the key to the two intended recipients.
Consider an e-commerce company. Communications with each individual customer should be
secured in such a way that other customers cannot decrypt them. This requires that a shared
key be generated for each customer. Users on the Internet often want to become customers on
the spur of the moment. This requires a shared key to be generated and distributed to the
customer on demand. Distribution of this key is something that must be secured, and yet secure
communications are not possible until the customer has the key.
Asymmetric Encryption
Asymmetric encryption algorithms use a pair of keys. Data encrypted with one key can be
decrypted only with the other key of the pair.
In the figure, Jane uses Key 1 to encrypt data. Now that her data is encrypted, only Key 2 can
decrypt it; John, the holder of Key 2, decrypts her message. The asymmetrical usage of keys is
revealed when John encrypts a message to Jane using Key 2. After the message is encrypted,
only Key 1 can be used to decrypt it. Jane uses Key 1 and decrypts the message.
Compared with symmetric encryption, asymmetric encryption security requires more
processing power.
The figure shows an Internet server that needs to communicate securely with multiple clients.
This can be accomplished with asymmetric encryption. The server uses Key 1 for its encryption
and decryption activities. The clients use Key 2 for their encryption and decryption activities.
The use of an asymmetric encryption algorithm leads to the following conditions:
Key 1 in the figure is referred to as the server's private key, and Key 2 is known as the public
key. The names of the keys lead to the two names for this use of asymmetric encryption:
public/private encryption, and the more common public key encryption.
Public keys can be dispersed in any fashion without concern for privacy; private keys must be
closely secured.
Digital Signatures
Digital signatures are used to verify that data has not been modified. This is done without
encrypting the data being signed.
Hashing functions are one-way algorithms that take a message of any size and reduce it to a
fixed-length hash value, also often known as the digest or fingerprint of the original message.
Sufficiently intricate functions that create a sufficiently long hash value produce fingerprints
that are nearly unique. Common hashing algorithms include MD5 (Message Digest 5) and
SHA1 (Secure Hashing Algorithm 1). MD5 fingerprints are 128 bits long; SHA1 fingerprints
are 160 bits long.
Data to be signed is first run through a hash function. The message digest is then encrypted
with the private key of the signing agency. This encrypted message digest is referred to as a
digital signature. The digital signature and the original data are then usually packaged together
in one data file to be transmitted or stored.
A signed data file can be verified by a receiver. First, the receiver uses the same hash function
as the signer to compute a digest of the signed data. The signer's public key is then used to
decrypt the signature, which is compared to the computed message digest. If the computed and
decrypted message digests are equal, the message has not been modified since it was signed.
Certificates
(Figure: the certificate process. An application generates a key pair; a certificate signing request containing the public key, common name, domain name, location, and e-mail address is submitted to the certificate authority; the CA runs its validation process and returns a signed certificate, which is installed along with the private key on the SSL server.)
Asymmetric encryption provides the assurance that any communicating process that uses a
public key is interacting with a process that has access to the corresponding private key. An
issue not addressed by these algorithms is the identity of the entity that gave you the public key
in the first place. Someone could hand you a disk with a public key for www.irs.gov and point
you to another site that he or she owned and ask you for your tax information. You would be
assured that you were talking to the holder of the corresponding private key, but how do you
know that the private key is really held by the IRS?
The solution to the problem is the use of digital certificates, which are digital documents
attesting to the binding of a public key to an individual or other entity.
Digital certificates consist of a public key that has been signed by one or more certificate
authorities (CAs). Multiple signatures signify a hierarchy of CAs where each CA has a
certificate signed by a higher-level CA. For a certificate to be useful, the signatures must
eventually trace back to a CA that both communicating partners trust implicitly. These CAs are
usually referred to as trusted root CAs.
In the web browser market, each browser has a collection of trusted root CA certificates built
into the executable image of the browser. If a certificate is received from a server that is signed
at the highest level by one of the root certificates, the browser trusts the identity of the
organization running the server; otherwise, the user is prompted for a trust/no-trust decision.
Commercial CAs have a fairly rigorous verification process that they engage in when a
certificate is requested. This process verifies that the public key that they are being asked to
certify is owned by the entity requesting the certification.
The ACE Appliance can hold a maximum of 4096 certificates, and can hold the same number
of key pairs. The ACE supports all major digital certificates from CAs including VeriSign,
Entrust, Netscape iPlanet, Windows 2000 Certificate Server, Thawte, Equifax, and Genuity.
(Figure: SSL session setup. The server's certificate arrives in the server hello; the client browser uses a random number generator to create a shared secret, encrypts it with RSA using the server's public key, and the server recovers it with its private key; both sides then use the shared secret to encrypt and decrypt the data exchange.)
Secure Sockets Layer (SSL) uses a combination of public key encryption and symmetric
encryption.
Public key encryption is used to initiate an SSL session using the following steps:
Step 1: The client sends a client hello to the server.
Step 2: The server sends a server hello set of packets to the client. Included in these packets is a copy of the server's certificate.
Step 3: The client uses the certificate to verify the identity of the server.
Step 4: The client creates a session key to be used as a shared key for symmetric encryption.
Step 5: The client encrypts the session key with the server's public key.
Step 6: The server decrypts the session key with its private key.
Step 7: Both parties use the session key to symmetrically encrypt the remainder of the session.
The ACE appliance supports SSL version 3 and Transport Layer Security (TLS) version 1. A
client capable of both SSL versions 2 and 3 might send a version 2/3 hybrid client hello, which
the ACE supports. However, the ACE does not create an SSL version 2 connection, but replies
to the hybrid hello with an SSL version 3 server hello.
The ACE appliance supports 100,000 simultaneous SSL connections. These connections are
fully proxied throughout the life of the connection because every packet needs to be processed
by the full TCP and SSL stack. The ACE can be licensed to process up to 7500 transactions per
second (TPS); by default, it supports 1000 TPS.
SSL cipher suites can be weighted to affect cipher selection. The cipher suites supported by the
ACE appliance are:
rsa-export1024-with-des-cbc-sha
rsa-export1024-with-rc4-56-md5
rsa-export1024-with-rc4-56-sha
rsa-export-with-des40-cbc-sha
rsa-export-with-rc4-40-md5
rsa-with-3des-ede-cbc-sha
rsa-with-aes-128-cbc-sha
rsa-with-aes-256-cbc-sha
rsa-with-des-cbc-sha
rsa-with-rc4-128-md5
rsa-with-rc4-128-sha
SSL Termination
SSL termination is the ACE terminology for deploying the ACE appliance as an SSL offload
device. When configured for SSL termination, the ACE appliance terminates the SSL
connection from the client, decrypts the request from the client, and sends it as plaintext to the
real servers. Notice that the real servers are selected through the normal load-balancing
functions of the ACE appliance. Responses from the real servers are received by the ACE in
plaintext, encrypted, and sent back over the SSL connection to the client.
SSL Initiation
SSL initiation is used to implement a network design that is often called back-end SSL, in
which the interaction between the client and the ACE is in plaintext, and the traffic between the
ACE and the real servers is encrypted SSL traffic. In SSL initiation, the ACE appliance takes
the role of the SSL client when dealing with the real servers.
End-to-End Encryption
End-to-end encryption combines SSL termination and SSL initiation in one ACE configuration.
This deployment model is often used when highly sensitive data needs to be load balanced
based on Layer 7 criteria but the data is not allowed to exist on any network segment as
plaintext. In this situation, the data is unencrypted only within the ACE appliance.
The supported key sizes are shown in the figure, as are the supported file formats for
certificates. Note that the ACE cannot generate certificates on its own, but relies on external
public key infrastructure (PKI) resources for this process.
A public and private key pair is generated with the crypto generate key command, which
specifies the number of bits in the key and the file in which the key is stored. Including the
nonexportable keyword marks the resulting key file as ineligible to be exported off the ACE
appliance and stored on an external file server.
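For example (the key size and file name are placeholders, and the keyword spelling shown follows this document; verify it against the ACE command reference):

    crypto generate key nonexportable 1024 mykey.pem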
Parameters for the certificate signing request (CSR) are configured with the crypto csr-params
param_name command. Within the CSR parameter configuration submode, the details of the
CSR are defined as follows:
Country name where the SSL site resides is defined with the country-name command.
Country name is a required attribute.
State or province name where the SSL site resides is defined with the state command. State
name is a required attribute.
A locality within a state where the SSL site resides is defined with the locality command.
Locality name is an optional attribute.
The name of the organization that owns the SSL site is defined with the org-name
command. Organizational name is an optional attribute.
The name of the unit within the organization that is responsible for the SSL site is defined
with the org-unit command. Organizational unit name is an optional attribute.
The domain name or host name of the SSL site is defined with the common-name
command. Common name is a required attribute.
The serial number for the certificate to be issued is defined with the serial-number
command. Serial number is a required attribute. Note that the CA might overwrite this
serial number when they generate the certificate.
The e-mail address of a person responsible for this SSL site is defined with the email
command. E-mail is an optional attribute.
The CSR is generated with the crypto generate csr csr_parameter key_file command. As
shown in the figure, the CSR is sent to the terminal as a result of this command. The CSR can
then be copied to another system and sent to the CA for processing.
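Putting the CSR parameters and CSR generation together, a sketch follows. All values are placeholders, and the submode command names are written as this document describes them; verify against the ACE command reference:

    crypto csr-params ACME-PARAMS
      country-name US
      state California
      locality San Jose
      org-name Acme Example, Inc.
      org-unit Web Services
      common-name www.acme.example
      serial-number 1000
      email webmaster@acme.example

    crypto generate csr ACME-PARAMS mykey.pem

The CSR text printed to the terminal is then submitted to the CA, which returns the signed certificate.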
PKI files can be imported into the ACE from other systems with the crypto import command
and can be stored on external servers with the crypto export command. The syntax of both
commands is shown in the figure. The nonexportable keyword on the crypto import command
specifies that the crypto file cannot be exported from the ACE with the crypto export
command.
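For example, using the terminal transfer method, where the file contents are pasted at or printed to the console (the file names are placeholders; other transfer methods such as FTP, SFTP, and TFTP take additional host and credential arguments, as documented in the command reference):

    crypto import nonexportable terminal mycert.pem
    crypto export mykey.pem terminal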
The list of crypto files currently stored on the ACE appliance can be displayed with the show
crypto files command. Sample output is shown in the figure.
PKI files can be deleted with the crypto delete command. They can also be displayed on the
console with the show crypto command.
SSL termination is configured by first defining an SSL termination point with the ssl-proxy
service name command. Within the configuration submode for the SSL proxy, configuration
statements associate the private key and the certificate files with the SSL proxy.
SSL processing is then activated by using the ssl-proxy server name action statement in a
Layer 3 and 4 policy map.
In the figure, the sample Layer 4 load-balancing configuration has been changed to accept
connections from clients via SSL. These connections are then decrypted on the ACE and load
balanced to the back-end servers.
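A sketch of such an SSL termination configuration follows; the service, class, and file names and the VIP address are illustrative placeholders:

    ssl-proxy service SSL-TERM
      key mykey.pem
      cert mycert.pem

    class-map match-all HTTPS-VIP
      2 match virtual-address 192.168.1.100 tcp eq 443

    policy-map multi-match CLIENT-VIPS
      class HTTPS-VIP
        loadbalance vip inservice
        loadbalance policy SLB-LOGIC
        ssl-proxy server SSL-TERM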
crypto chaingroup InternalCAcerts
  cert rootCA.pem
  cert ouCA.pem
  cert deptCA.pem
A chain of certificates can be sent to the SSL peer in addition to the server certificate defined in
the SSL proxy. This chain of certificates is often used to supply root and intermediate CA
certificates when an internal CA structure is used by an enterprise.
The chain group is configured with the crypto chaingroup command. In the chain group
configuration submode, the individual certificates in the chain group are specified with the cert
command. You do not need to enter the chain group in any particular order, because the ACE
determines the proper order from the certificate files. A chain group can be displayed with the
show crypto chaingroup command.
The figure shows the SSL proxy configuration from the preceding figure, to which a certificate
chain group has been added.
SSL parameters for the SSL proxy service can be changed by creating an SSL parameter map
and associating it with the SSL proxy service. Within the parameter map, particular ciphers can
be specified as allowable with the cipher command. The version command can be used to limit
connections to ssl3, tls1, or all versions of SSL. In the figure, the SSL proxy has been changed
to accept only SSL v3 connections.
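Such a parameter map and its association with the SSL proxy service might be sketched as follows; the parameter-map and service names are placeholders:

    parameter-map type ssl SSLV3-ONLY
      version ssl3
      cipher rsa-with-aes-128-cbc-sha

    ssl-proxy service SSL-TERM
      key mykey.pem
      cert mycert.pem
      ssl advanced-options SSLV3-ONLY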
SSL initiation is configured by first defining an SSL service point with the ssl-proxy service
command. For SSL initiation, there are often no other parameters to specify in the
configuration submode for the service point.
The service point is then used to encrypt data that has been load balanced to back-end servers
by specifying the ssl-proxy client action statement in the load-balancing policy map.
In the figure, the Layer 4 load-balancing configuration has been extended to encrypt
connections to the real servers with SSL.
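A minimal SSL initiation sketch follows; the service, policy, and server-farm names are illustrative placeholders:

    ssl-proxy service BACKEND-SSL

    policy-map type loadbalance first-match SLB-LOGIC
      class class-default
        serverfarm WEB-FARM
        ssl-proxy client BACKEND-SSL

As the text notes, the ssl-proxy service submode is often left empty for initiation, because the ACE acts as an SSL client and usually does not need its own key and certificate.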
Client authentication
Session cache reuse
Integration with high availability
SSL statistics and SSL MIB
SSL rehandshake based on time or data transferred
URL rewrite
HTTP header insertion
Federal Information Processing Standard (FIPS) 140-2
The figure identifies the functions that did not make it into ACE SSL v1.0.
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
Digital encryption technologies are used to provide secure HTTP
communications.
SSL support on the ACE can be used to provide SSL offload,
back-end SSL encryption, and end-to-end encryption.
PKI resources such as key pairs and certificates are required to
support SSL.
SSL services on the ACE appliance are provided through SSL
proxies.
Lesson 10
Deploying Application
Acceleration and Optimization
Overview
In this lesson, you will learn how to describe the major application acceleration and
optimization features and their benefits in the overall design.
Objectives
Upon completing this lesson, you will be able to describe the major application acceleration
and optimization features and their benefits in the overall design. This includes being able to
meet these objectives:
(Figure: ACE appliance architecture. The control plane, with a serial port, connects over the PCI-X bus and 1 Gb Ethernet links to the data plane.)
(Figure: data plane components, including request scheduling, buffer management, and content manipulation. Client connections C1 and C2 arrive on VIPs Vip1 through Vipn, cross internal interfaces such as 127.1.0.128 and 127.1.0.192, and are carried over server connections S1 and S2 to the rservers.)
Following are the basics of the static (applied during initialization) and dynamic elements
of the hidden configuration:
Static:
Internal interface
Access control list (ACL) to permit traffic from control plane to data plane on this interface
VIP plus Layer 7 parameter maps to parse the SAM private header
Dynamic:
Any Layer 4 policy configured by the user is applied to the internal interface.
When a Layer 7 optimize policy is configured, a class map with the highest priority is added to
it, with match patterns for traffic that must go to the AVS optimization function.
FlashForward
This topic describes the FlashForward feature of the ACE appliance.
(Figure: GET Index.html is answered with HTTP 200 OK, after which the user agent issues GETs for Foo.gif, Foo.js, Foo.jpg, Foo.css, bar.gif, and bar.js, each answered with HTTP 200 OK.)
When a user agent, usually a web browser, retrieves a web page (an HTML object page), many
other objects might need to be retrieved as well. After the initial HTTP GET, six other HTTP
GET requests are issued by the user agent in this example. The requests can be individual TCP
connections or part of a persistent, pipelined HTTP request. Each time the server sends an
HTTP response to a user agent GET, the server sends the requested object with an HTTP
response code of 200 OK and optional Cache-control: timeout value indicating the freshness of
the object.
(Figure: on a return visit, the user agent sends If-Modified-Since requests for Index.html, Foo.gif, Foo.js, Foo.jpg, Foo.css, and bar.js.)
On a return visit to a site, the browser sends an HTTP GET request to the server, but if the page
was cached, the user agent sends an HTTP If-Modified-Since request to the server. If the object
is still fresh, as determined by the server, the server sends a 304 Not Modified message. If the
server determines that the object is stale, the server sends the new fresh object with a 200 OK
message and an optional Cache-control: timeout value.
This process of verifying the freshness of an object is carried out for each object referenced on
the HTML object page. For pages with a large number of objects, verifying the freshness of
each object can create significant overhead, even if the object is cached.
(Figure: the first visitor's GET Index.html is proxied through the ACE to the server, and the unmodified HTML page is returned through the ACE to the client.)
1. The first user to visit a site with FlashForward enabled sends a request normally.
2. The ACE proxies the request to the server.
3. The server responds to the ACE with the HTML object page.
4. The ACE parses the page looking for cached FlashForward objects. Because this is the first
visitor, there are no cached objects. The ACE passes the HTML page unaltered to the client
user agent.
(Figure: the first visitor's GETs for Foo.gif, Foo.js, Foo.jpg, Foo.css, bar.gif, and bar.js are proxied through the ACE to the server; each 200 OK response is cached by the ACE and passed unaltered to the client.)
1. The first user sends an HTTP GET to the site for all the objects from the HTML object
page.
2. The ACE proxies the request to the server.
3. The server responds to the ACE with the objects.
4. The ACE caches the cacheable objects (defined by the ACE configuration). Because this is
still the first visitor, the ACE passes the objects unaltered to the client user agent.
(Figure: on a later visit, GET Index.html is proxied to the server, which returns the unmodified HTML page; the ACE returns a modified HTML page to the client.)
The server responds to the request normally, and the ACE intercepts the response.
The ACE responds with the HTML container page, which has been modified to reference
the new objects references. If delta optimization is also configured, the HTML container
page is a dynamic HTML page with an embedded JavaScript:
The object name is appended with an MD5 hash based on the object as well as a
version number:
The client receives the response and parses the HTML container page for objects to
request.
The ACE intercepts the request and, according to the refresh rate configuration, fetches the
transformed request's objects (minus the FlashForward alterations) from the server.
Assuming that the object in the ACE cache is fresh, the ACE responds to the client with the
requested object.
(Figure: the client requests the renamed objects, such as Cisco_FF_MD5_Foo.gif, and the ACE answers each with 200 OK from its cache; in parallel, the ACE revalidates the original objects with If-Modified-Since requests to the server, which returns 304 Not Modified for the Foo objects and 200 OK for bar.gif and bar.js.)
The figure shows browser behavior with FlashForward for the first visit after the cache is
primed.
317
Figure: the client sends GET Index.html, the ACE proxies the request, and the server returns HTTP 200 OK with the unmodified HTML page.
ACEAP v1.01-13
On subsequent visits:
318
The server creates and delivers the HTML page to the ACE.
The ACE parses the HTML for all the references to embedded objects and checks whether
the objects are cached locally on the ACE. If the object is cached locally, the ACE issues
an HTTP IMS request to the server:
If the server responds with an HTTP 304 Not Modified message, the ACE uses its
previously created object reference in the HTML container document.
If the server responds with an HTTP 200 OK message (and the modified object), the
ACE caches the object, renames it, changes the expiration date, and adds the new
object name to the HTML container document.
The ACE sends the modified server HTML container document to the client.
Any previously cached FlashForward objects that are referenced do not have an
HTTP GET issued because they have a long expiration date.
Any new FlashForward objects that are referenced are fetched from the ACE.
Object reference: <img src=/images/Foo.jpg>
Transformed object reference: <img src=/images/Foo_CISCO_AAC_FLASHFORWARD_vtmmi14xg2fvmlkxsxuk0ty1xd_V02.jpg>
Figure: the server returns HTTP 200 OK with the unmodified Index.html; the ACE issues IMS requests for each cached object, the server answers 304 Not Modified for the unchanged objects and 200 OK for the modified object (Foo.jpg), and the ACE sends the modified HTML page to the client.
ACEAP v1.01-14
The figure shows browser behavior with FlashForward for subsequent visits.
319
GET Foo_CISCO_AAC_FLASHFORWARD_vtmmi14xg2fvmlkxsxuk0ty1xd_V02.jpg
200 OK Foo_CISCO_AAC_FLASHFORWARD_vtmmi14xg2fvmlkxsxuk0ty1xd_V02.jpg
ACEAP v1.01-15
The figure shows browser behavior with FlashForward for subsequent visits.
320
ACEAP v1.01-16
There are two elements to configure in the FlashForward optimization feature. The figure shows configuring FlashForward for a VIP that is associated with a Layer 3 and Layer 4 class map. The elements of the FlashForward configuration are:
Class map: This examines the Layer 7 elements to classify the traffic.
Action list: This defines which feature to use. (If you are including delta optimization, it is included here as well.)
Parameter map: This is required and is associated with the class map and action list in the optimization policy map.
321
ACEAP v1.01-17
This is the second of two elements to configure in the FlashForward optimization feature. The figure shows configuring the FlashForward object, which defines which objects are eligible for FlashForward optimization. The elements of the FlashForward object configuration are:
Class map: This classifies the object types that qualify for FlashForward.
Parameter map: This is required and is associated with the class map and action list in the optimization policy map.
Action list: This defines which feature to use. (If you are including delta optimization, it is included here as well.)
The ACE 4710 appliance Device Manager easy optimization configuration uses the following
objects by default in the class map:
match http url .*jpg
match http url .*jpeg
match http url .*jpe
match http url .*png
match http url .*vbs
match http url .*xsl
match http url .*xml
match http url .*pdf
match http url .*swf
match http url .*gif
match http url .*css
match http url .*js
match http url .*class
match http url .*jar
match http url .*cab
match http url .*txt
match http url .*ps
FlashForward Config
class-map type http loadbalance match-all CM-FF
  match http url .*
class-map type http loadbalance match-any CM-FFObject
  match http url .*gif
  match http url .*jpg
action-list type optimization http ACTION-LIST-1
  flashforward
action-list type optimization http ACTION-LIST-2
  flashforward-object
parameter-map type optimization http ParMap-FF
parameter-map type optimization http ParMap-FFObject
  cache ttl max 60
  cache ttl min 0
policy-map type optimization http first-match PolMap-Opt
  class CM-FF
    action ACTION-LIST-1 parameter ParMap-FF
  class CM-FFObject
    action ACTION-LIST-2 parameter ParMap-FFObject
policy-map multi-match CLIENT-VIPS
  class VIP-170
    optimize http policy PolMap-Opt
ACEAP v1.01-18
The container page referencing the embedded object must match an application class that includes OptimizationPolicy FlashForward.
The embedded object must have been previously requested and must currently be present in the AVS cache.
The embedded object must not have been explicitly marked noncacheable by the origin server. The following directives allow control over caching headers sent by the client or server: RequestCachePolicy, ResponseCachePolicy, RequestHeader, and ResponseHeader.
The reference to an embedded object must not change with each request of the container page.
323
Delta Optimization
This topic explains the use of delta optimization and its benefits.
Server Response
ACEAP v1.01-20
324
Delta optimization calculates and sends the difference between two visits to a dynamic HTML page.
Delta optimization creates a JavaScript base file that contains the entire first visit to the
page.
Delta optimization uses a JavaScript on the delta page to reassemble the entire page.
325
Server Response
ACEAP v1.01-21
Two modes:
DeltaOptimize
PerUserDeltaOptimize
Delta is generated against a base file, which is shared among all users:
Base files are created based on the URL without query parameters:
The same URL can generate different pages based on query parameters or cookies.
Several different URLs might generate pages that are similar enough to warrant a
single base file.
326
This reduces the overall size of the deltas and increases performance.
ACEAP v1.01-22
The client's browser must send the FGNCDN cookie that is set by the JavaScript.
Delta optimization must be turned on at the global level in fgn.conf with the DeltaOptimize On setting.
The URL must match an application class with an OptimizationPolicy that includes DeltaOptimize.
The page must be larger than 1,024 bytes; this threshold is adjustable using MinCondensablePage.
The page must be smaller than 250,000 bytes; this threshold is adjustable using MaxCondensablePage.
327
The following conditions must also be met for the page to be successfully delta optimized:
328
Page must not return a ContentType listed in mimetypes.conf with the directive
NoDeltaOptimizeMimeType.
Page should contain fewer than five UTF-8 characters. This is configurable using
UTF8Detection and UTF8Threshold.
Size of the delta must not exceed 50 percent of the base file size. This is tunable by
adjusting RebaseDeltaPercent.
Client must be able to retrieve the correct base file from the AVS.
In environments with multiple AVSs, some type of persistence method must ensure that the
delta request and the base file request are sent to the same AVS.
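On the ACE appliance itself, delta optimization follows the same action-list pattern as the FlashForward configuration shown earlier. The following is a minimal sketch, not a complete configuration; the class-map, action-list, parameter-map, and policy-map names are hypothetical, and the class map matching the eligible URLs is assumed to exist:

```
action-list type optimization http ACTION-DELTA
  delta
parameter-map type optimization http ParMap-Delta
policy-map type optimization http first-match PolMap-Delta
  class CM-ALL
    action ACTION-DELTA parameter ParMap-Delta
```

As with FlashForward, the optimization policy map is then applied under the VIP class in the multi-match policy.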
Smart Redirect
This topic explains the use of smart redirect and its benefits.
Smart Redirect
ACE Response
Server Response
<HTML>
<HEAD>
<META HTTP-EQUIV="Refresh"
CONTENT="5;URL=http://www.cisco.com">
</HEAD>
<BODY>
This will be redirected to another page.
</BODY>
</HTML>
ACEAP v1.01-24
Some applications automatically redirect client browsers from one page to another using
HTML meta tags. HTML meta tag-based page redirection can cause poor download times
because the browser issues IMS for each embedded object on the redirected page.
Smart redirect enables the ACE to automatically and transparently convert HTML meta tag-based redirections into more efficient HTTP header-based redirections.
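For illustration, the meta tag-based refresh in the sample page above is equivalent to a header-based redirection such as the following (a generic HTTP response sketch, not ACE output):

```
HTTP/1.1 302 Found
Location: http://www.cisco.com
Content-Length: 0
```

The browser follows the Location header immediately, without first downloading and parsing an interim HTML page.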
Features of smart redirect:
329
Fast Redirect
This topic explains the use of fast redirect and its benefits.
Fast Redirect
1
2
302 Response
Client Request
200 Response
200 Response
ACEAP v1.01-26
Fast redirect intercepts HTTP 302 responses from the origin server and makes a second request
on behalf of the client for the redirect URL, fetches it, and sends it to the client. This applies
only to redirects within the same domain.
Steps of a fast redirect:
1. Client requests URL.
2. Server responds with HTTP 302 Redirect to a server in the same domain as the origin
server.
3. The ACE fast-redirects a request to the server specified in the origin server's response.
4. The redirection server responds to the ACE.
5. The ACE forwards the content from the redirection server seamlessly to the client.
330
FlashConnect
This topic explains the use of FlashConnect and its benefits.
FlashConnect
ACE Response
Server Response
<html>
<img src="img1.jpg">
<img src="img2.jpg">
<img src="img3.jpg">
<img src="img4.jpg">
</html>
<html>
<img src="http://fgn00.flashconnect.myhost/img1.jpg">
<img src="http://fgn01.flashconnect.myhost/img2.jpg">
<img src="http://fgn02.flashconnect.myhost/img3.jpg">
<img src="http://fgn03.flashconnect.myhost/img4.jpg">
</html>
2007 Cisco Systems, Inc. All rights reserved.
ACEAP v1.01-28
FlashConnect dynamically renames embedded objects by adding a prefix and changing the
hostname, making the objects appear to reside on different hosts although they might all reside
on a single host. FlashConnect makes the browser open separate connections to the origin
server for each object, which increases the network performance because the objects are
retrieved in parallel, rather than serially.
FlashConnect:
Makes the browser believe that it is communicating with multiple web servers
331
Figure decision: Is the requested object larger than 250K, or is this object marked as expired or noncacheable?
ACE Response
ACEAP v1.01-30
Dynamic HTML content that is larger than the maximum condensable page size (250 KB)
The browser requests an object, and the ACE fetches it on behalf of the browser request.
The ACE constructs and inserts an entity tag (ETag) header by using an MD5 hash of the content, and then it sends the object to the browser. The ETag header identifies different versions of the same object.
All subsequent requests from the browser echo this ETag value (in the If-None-Match request header), which the ACE compares with a recomputed MD5 hash of the latest origin server content as follows:
If the content has not changed, the ACE returns a 304 (Not Modified) response and no data.
If the content has changed, the ACE retrieves the new content and resets the ETag.
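In the ACE CLI, this behavior is enabled through an optimization action list. A minimal sketch, assuming the dynamic-etag action keyword and hypothetical action-list, class-map, and policy names:

```
action-list type optimization http ACTION-ETAG
  dynamic-etag
policy-map type optimization http first-match PolMap-Etag
  class CM-DYNAMIC
    action ACTION-ETAG
```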
332
ACEAP v1.01-32
Adaptive dynamic caching allows the ACE to answer requests for dynamic or personalized
information, reducing the burden on servers and databases. Adaptive dynamic caching enables
the ACE to cache dynamic content such as Active Server Pages (ASP) scripts.
Adaptive dynamic caching features include:
Cache parameterization: A response can be differentiated by more than the URL and its
query parameters. Cookie values, HTTP header values, and the HTTP method used can
also be used as parameters. Cache parameterization can be applied to both static and
dynamic caching.
Expanded expiration rules: Time to Live of cached content can be based on the server
load; for example, increasing the TTL when the server load increases.
Delta cache: Stores the delta content in a cache (a memory-only object) when the original
HTML content is in the dynamic cache and a rebase has not occurred.
Adaptive dynamic caching can be configured for use with multiple or single users.
333
Compression Overview
This topic describes HTTP compression.
ACEAP v1.01-34
334
HTTP Compression
HTTP standards-based
Compression algorithms: deflate and gzip
ACEAP v1.01-35
Request processing: the ACE verifies that the browser supports compression and modifies the request accordingly.
Response processing: the ACE compresses the response.
335
Enabling Compression
ACEAP v1.01-36
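The figure itself is not reproduced here. As a minimal sketch, compression is enabled per load-balancing class; the policy-map and serverfarm names below are hypothetical:

```
policy-map type loadbalance first-match PM-WEB
  class class-default
    serverfarm WEB-FARM
    compress default-method gzip
```

The default-method keyword selects gzip or deflate when the client's Accept-Encoding header permits either.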
336
Compression Parameters
ACEAP v1.01-37
337
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
WAA features support up to 10,000 concurrent client requests.
FlashForward provides decreased page download time, decreased network
congestion, and decreased number of requests to the original server.
Delta optimization calculates and sends the difference between two visits to a
dynamic HTML page. It creates a JavaScript base file, which contains the entire
first visit to the page, and uses a JavaScript on the delta page to reassemble the
entire page.
Smart redirect allows the ACE to convert HTML redirects to HTTP 302 redirects.
Fast redirect intercepts HTTP 302 responses from the origin server and sends the
redirect URL to the client as a 200 OK response.
FlashConnect multiplexes HTTP responses, helping to circumvent TCP limitations.
Just-in-time object acceleration is used when objects are too large to use delta
optimization or are noncacheable.
Adaptive Dynamic Cache protects overutilized servers by dynamically adapting the age of noncacheable objects.
HTTP compression is HTTP standards based. Compression algorithms include deflate and gzip.
338
ACEAP v1.01-38
Lesson 11
High Availability
Overview
In this lesson, you will learn how to describe the high-availability features of the ACE
appliance, which are used to provide reliable application networking services.
Objectives
Upon completing this lesson, you will be able to describe the high-availability features of the
ACE appliance, which are used to provide reliable application networking services. This
includes being able to meet these objectives:
Redundancy
This topic describes the ACE redundancy model.
Redundancy Model
Figure: ACE-1 and ACE-2 are connected by the FT VLAN; fault-tolerant groups 1 through 4 are each active on one appliance and standby on the other.
ACEAP v1.01-4
The Cisco 4710 Application Control Engine (ACE) appliance provides redundancy through a
pair of ACE appliances. A fault-tolerant group is created for a pair of contexts, one on each
appliance. Within the pair of contexts, one context is active and the other context is on standby.
The ACE appliance hosting the active context can be controlled on a per-context basis.
340
Fault-Tolerant VLAN
ACEAP v1.01-5
The fault-tolerant VLAN is configured in the Admin context and is used for redundancy control
traffic between the ACE contexts. This traffic includes heartbeats, configuration, and state
synchronization.
High Availability
341
Query VLAN
Figure notes: the query VLAN is a data VLAN; the standby ACE pings the active ACE, and a successful ping transitions the standby to the STANDBY_COLD state.
ACEAP v1.01-6
One of the data VLANs can also be designated as a query VLAN. The standby ACE appliance
monitors the fault-tolerant VLAN for heartbeats. Loss of these heartbeats signals the possibility
that the active ACE appliance has failed. If a query VLAN has been configured, the standby
ACE appliance verifies the failure of the primary ACE appliance by sending a ping on the
query VLAN. If this ping fails, the standby ACE appliance transitions to active. If the ping
succeeds, the standby ACE appliance concludes that the primary ACE appliance is still active
and that the fault-tolerant VLAN has failed.
342
Configuration Synchronization
ACEAP v1.01-7
Script files and licenses are not synchronized and must be copied to the peer appliance manually.
High Availability
343
Object Tracking
This topic explains object tracking.
Object Tracking
Operational state of external resources can be tracked:
State of critical interfaces
Health of an external host
Interface
Active
RELINQUISH
ACEAP v1.01-9
Object tracking gives the ACE appliance the ability to track network resources external to the
appliance. If a tracked resource fails, the active appliance can be configured to turn control over
to the standby ACE appliance. Thus the redundant ACE pair can optimize the assignment of
active processing functionality in response to external network events.
344
Interface Tracking
ft track interface vlan-example
  track-interface vlan 204
  peer track-interface vlan 204
  priority 150
ACEAP v1.01-10
You cannot delete an interface if the ACE is using the interface for tracking. Also, you cannot configure the fault-tolerant VLAN for tracking.
High Availability
345
Host Tracking
ft track host server
  track-host 12.10.40.54
  peer track-host 12.10.40.54
  probe PINGER priority 15
  peer probe PINGER priority 15
  priority 150
  peer priority 100
ACEAP v1.01-11
Host tracking uses health monitoring probes to verify the status of a server. Multiple probes can
be configured. Each probe is assigned a priority that is decremented from the appliance priority
if a probe fails.
346
Failover
This topic explains the failover recovery process.
VMAC assigned to all floating IPs:
Alias IP addresses on Layer 3 interfaces
VIPs
ACEAP v1.01-13
Virtual MAC (VMAC) addresses are assigned to IP addresses that will be moved from one
ACE appliance to another in case of a failover. These IP addresses are often referred to as
floating IP addresses and consist of all alias IP addresses configured on Layer 3 interfaces, and
all VIPs. The VMAC assigned is based on the fault-tolerant group ID, which allows
redundancy support for VLANs that are shared among multiple contexts. VMACs are assigned
only if fault tolerance is configured.
The active ACE appliance responds to packets destined to a VMAC and answers Address
Resolution Protocol (ARP) requests for the floating IP addresses. The standby ACE blocks
packets destined to a VMAC and does not respond to ARP requests for the floating IP
addresses.
High Availability
347
Failover Actions
ACEAP v1.01-14
A failover of an ACE context to the redundant ACE appliance might require that switches
attached to VLANs to which the ACE is connected move the MAC address to a different
interface. To expedite these changes, the ACE appliance takes several actions. A dummy
multicast is sent out on every VLAN on which a VMAC has been assigned. This multicast
packet uses the VMAC as the source MAC address. Gratuitous ARPs are generated for all
floating IP addresses. If the ACE appliance has interfaces in bridged mode, it also generates an
ARP request for the gateway IP address and bridges the ARP response.
348
State Replication
This topic explains state replication between ACE appliances.
Connection Replication
ACEAP v1.01-16
Connection replication sends connection entries from the active ACE appliance to the standby
ACE appliance. Connection entries for proxied connections are not eligible for replication. No
special replication is necessary for NAT xlates or load-balancing state information because all
the required information can be derived from the connection entries. The standby ACE
appliance creates connection entries as it receives them via replication, giving it the full state
information needed to continue processing after a failover. Connection replication contains
both bulk and periodic replication processes, which both send one connection entry per
replication packet.
If an ACE appliance transitions from active to standby as a result of object tracking, it flushes
all its connection entries and allows the new active ACE appliance to bulk sync its connection
entry table back to the original appliance.
High Availability
349
show conn
show rserver
show serverfarm
ACEAP v1.01-17
The results of connection replication can also be seen on the standby appliance through the use
of the show conn, show rserver, and show serverfarm commands.
350
Sticky Replication
ACEAP v1.01-18
Replication of sticky database entries is handled by a separate process. Sticky database entries
are also synchronized with both bulk and periodic sync processes. Replication of sticky
database entries must be enabled in the sticky group configuration. Multiple sticky entries are
transmitted in each sticky replication packet.
High Availability
351
Fault-Tolerance Configuration
This topic describes the information used to create appliance and context fault-tolerant
configurations.
ACEAP v1.01-20
Configuration of fault-tolerant and query VLAN interfaces is shown in the figure. This is
configured in the Admin context.
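As a minimal sketch of the lost figure (the VLAN number and IP addresses are hypothetical), the fault-tolerant interface is defined in the Admin context:

```
ft interface vlan 300
  ip address 10.1.1.1 255.255.255.0
  peer ip address 10.1.1.2 255.255.255.0
  no shutdown
```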
352
ACEAP v1.01-21
The ft peer configuration associates the appliance with its peer across the FT VLAN. This is configured in the Admin context.
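A minimal sketch of the peer definition (the peer ID, VLAN numbers, and heartbeat values are hypothetical); the query interface, when used, is also defined here:

```
ft peer 1
  ft-interface vlan 300
  heartbeat interval 300
  heartbeat count 10
  query-interface vlan 40
```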
High Availability
353
ACEAP v1.01-22
There can be up to 251 fault-tolerant groups, one for every context available. Each group can have a maximum of two member contexts (one active context and one standby context). The higher-priority peer is active. When modifying an active fault-tolerant group, take the group out of service with the no inservice command first.
This must be configured for all contexts that require fault tolerance and can be configured in the Admin context or in each individual context.
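A minimal fault-tolerant group sketch (the group number, priorities, and associated context name are hypothetical):

```
ft group 1
  peer 1
  priority 120
  peer priority 100
  preempt
  associate-context Admin
  inservice
```

With preempt configured, the higher-priority appliance reclaims the active role when it recovers.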
354
ACEAP v1.01-23
Configuration of fault-tolerant config sync is shown in the figure. This can be configured in all
contexts.
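A minimal configuration-synchronization sketch; these commands are entered in each context whose configuration should be synchronized to the peer:

```
ft auto-sync running-config
ft auto-sync startup-config
```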
High Availability
355
Forcing a Failover
ACEAP v1.01-24
The syntax required to force a switchover between active and standby contexts is shown in the figure. This is useful for maintenance windows or to verify that the standby context is working correctly. Before issuing this command for a group, no preempt must be added to the fault-tolerant group configuration if preempt is currently configured. This command can be issued from the exec mode of the Admin context or from the exec mode of other contexts (in individual contexts, the group-id argument is not needed; you can only fail over that context).
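As a sketch, from exec mode in the Admin context (the group ID is hypothetical):

```
switch/Admin# ft switchover 1
```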
356
ACEAP v1.01-25
High Availability
357
Figure: sample show ft peer detail output, including the peer ID (1), peer state (FSM_PEER_STATE_COMPATIBLE), maintenance mode (MAINT_MODE_OFF), the local and peer FT VLAN IP addresses (12.1.1.1 and 12.1.1.2), the query VLAN (Not Configured), heartbeat statistics, and compatibility fields (COMPATIBLE).
ACEAP v1.01-27
The show ft peer detail command is used to display information detailing the state of a fault-tolerant peer.
358
FT Group             : 1
Configured Status    : in-service
Maintenance mode     : MAINT_MODE_OFF
My State             : FSM_FT_STATE_ACTIVE
My Config Priority   : 120
My Net Priority      : 120
My Preempt           : Enabled
Peer State           : FSM_FT_STATE_STANDBY_HOT
Peer Config Priority : 50
Peer Net Priority    : 50
Peer Preempt         : Enabled
Peer Id              : 1
Last State Change    : Thu Feb 2 06:38:13 2006
No. of Contexts      : 1
Context Name         : Admin
Context Id           : 0
switch/Admin#
ACEAP v1.01-28
The show ft group detail command displays detailed information about a fault-tolerant group.
Of particular interest are the config and net priority entries. The config priority is the appliance
priority that was configured, and the net priority reflects any adjustments to the appliance
priority as a result of object tracking. Priorities for this appliance and the fault-tolerant peer are
shown in the figure.
High Availability
359
Figure: sample show ft stats output, showing heartbeat send and receive counters (431 and 435) and related drop and error counters.
ACEAP v1.01-29
The show ft stats command displays statistics reflecting the handling of heartbeats.
360
FT Group             : 1
Status               : in-service
Maintenance mode     : MAINT_MODE_OFF
My State             : FSM_FT_STATE_ACTIVE
My Config Priority   : 120
My Net Priority      : 120
My Preempt           : Enabled
Context Name         : Admin
Context Id           : 0
Track type           : TRACK_INTF
Vlan Id              : 40
State                : TRACK_UP
Priority             : 50
Transitions          : 1
ACEAP v1.01-30
The show ft track detail command is used to display the detail status of an object tracking
entry. In the figure you see object tracking details for VLAN 40, which is currently active.
High Availability
361
FT Group             : 1
Status               : in-service
Maintenance mode     : MAINT_MODE_OFF
My State             : FSM_FT_STATE_ACTIVE
My Config Priority   : 120
My Net Priority      : 70
My Preempt           : Enabled
Context Name         : Admin
Context Id           : 0
Track type           : TRACK_INTF
Vlan Id              : 40
State                : TRACK_DOWN
Priority             : 50
Transitions          : 2
ACEAP v1.01-31
The results of VLAN 40 failing can be seen in the figure in the show ft track detail output.
Notice that the interface is listed as down and that the net priority has been changed
accordingly.
362
Summary
This topic summarizes the key points that were discussed in this lesson.
Summary
ACE redundant pairs can be active or standby on a per-context
basis.
Object tracking allows active functions to be moved to a standby
ACE appliance in response to other network events.
The failover process followed by the ACE allows an ACE failure to
be transparent to the rest of the network.
State information is replicated between a pair of redundant ACEs.
Fault-tolerant configurations can be applied to the appliance, to individual contexts, and to sticky sessions.
Fault-tolerant state information can be displayed with show
commands.
ACEAP v1.01-32
High Availability
363
364
Lesson 12
Objectives
Upon completing this lesson, you will be able to describe a methodology used to design and
configure multiple ACE features. This includes being able to meet these objectives:
Router
Servers
ACEAP v1.01-4
Design of an ACE-based network solution starts with an analysis of the processing that is
required of the Cisco 4710 Application Control Engine (ACE) appliance. Of specific
importance are the traffic processing requirements, the management requirements, and the
related network topology.
366
Sample Network
Internet
Servers
Users
Mail server:
SMTP from Internet
POP and SMTP from users
ACEAP v1.01-5
The figure shows the process used to design an ACE-based solution. The network is attached to
the Internet with a perimeter firewall. Behind this firewall is a router to which two LAN
segments are attached, one for servers and one for corporate personnel.
The company using this network has a pair of identical servers that are to be deployed in
support of an e-commerce site. This site allows users to browse a catalog and place electronic
orders. FTP is also used on each of these servers to allow customers to upload sample files and to download support files. Security is required on the FTP server to ensure that FTP clients cannot delete files. A mail server is also present, which is both the Post Office Protocol (POP) mail
server for incoming mail, and the outgoing Simple Mail Transfer Protocol (SMTP) server.
367
Document Flows
Client                       Server                   Protocols / Processing
Internet or internal users   Web/file servers         HTTP LB
Internet mail servers        Mail server              SMTP
Mail server                  Internet mail servers    SMTP
Internal users               Mail server              POP, SMTP
Server admin                 Servers                  Telnet, HTTP, FTP, SSH, RDP, SNMP, X
ACEAP v1.01-6
The network analysis begins with identifying the traffic flows that require processing by the
ACE appliance. The figure shows the client and server endpoints for each type of flow. For
each pair of endpoints, the figure shows the protocols to be used and the ACE processing that
should be applied. The network diagram shows the paths of the flows listed.
368
Manager            Resource           Function
Network admins     ACE
Server admins      Web/file servers
Server admins      VIPs
Helpdesk           VIPs
ACEAP v1.01-7
The figure shows management requirements including the user group performing management
functions, the resource being managed, and the management functions that need to be
permitted.
369
Figure: the network diagram and its VLAN/IP table, documenting the router and firewall segments with VLANs 5, 10, and 20 and the subnets 192.168.1.0/24, 10.0.1.0/24, 10.0.2.0/24, and 10.0.3.0/24.
ACEAP v1.01-8
The topology of the network that connects clients to the servers for which network services are
provided must be documented at Layer 2 and Layer 3. This is done by documenting all the
VLANs and IP subnets involved.
370
Router
ACE
Servers
ACEAP v1.01-10
The process of designing an ACE-based solution includes determining the number of ACE
contexts to use. After the number of contexts has been determined, you can design topological
changes to the network. Some guidelines to consider in determining the number of ACE
contexts include:
Always use at least one non-Admin context for functional configuration. This allows a
second functional context to be added as needed, without the need to move production
configuration from Admin to another context.
Identify the network segments where multiple flows to be processed are in transit.
Contexts can be effectively allocated to points in the network topology where the flows in
transit have common processing and management requirements.
You can split contexts, as a mechanism to segment the size of a configuration file, if the
network topology allows.
371
Transit Segments
Figure: the master flow table annotated with transit and nontransit designations; the flows to be processed transit the segment between the router and the Servers subnet (VLAN 10).
ACEAP v1.01-11
The figure shows a single transit segment in the network. All the flows that require ACE
processing transit from the router to the Servers subnet.
372
ACEAP v1.01-12
Common requirements allow use of a single context. Note that this context must be a routed
context. In the process of adding this context, you must add a VLAN and an IP subnet between
the router and the ACE.
373
ACEAP v1.01-13
This topology would handle situations where routed mode is not an option or where functional,
management, resource, redundancy, or ease of configuration considerations make a single
context unacceptable. You need to add two VLANs between the router and the ACE contexts.
Additionally, if either context is to be configured in routed mode, you must add an IP subnet
between the router and the routed mode ACE contexts.
374
Figure: the routed mode design adds VLAN 100 with subnet 10.0.0.0/24 between the router and the ACE; the VLAN/IP table now lists the router and firewall segments, VLANs 5, 10, 20, and 100, and the subnets 192.168.1.0/24, 10.0.1.0/24, 10.0.2.0/24, 10.0.3.0/24, and 10.0.0.0/24.
ACEAP v1.01-14
ACEAP v1.01-14
The network can be adequately served by a single ACE context, given the single transit
segment and the network requirements. In the figure, the decision has also been made to use a
routed mode ACE configuration. This requires changes to the Layer 2 and Layer 3 topologies,
resulting in the additional VLAN and IP subnet shown.
Caution
Changing the network topology probably requires changes to the configuration of other
network devices. In the network in the figure, the router needs to statically route the Servers
subnet to the ACE appliance.
375
Router
ACE
Servers
ACEAP v1.01-16
Designing the specific ACE features can be done in a step-by-step process. This process can be
applied to each interface in each context in turn:
376
1. Identify the flows for which the packets from the client will be received on the interface in question. This determines which flows will have connections initiated from this interface.
2. Identify additional connections that might now flow through the ACE because of the topology changes made to deploy the ACE.
3. For each flow, define the criteria that classify traffic as part of the flow.
4. Define the access control lists (ACLs) needed to allow new connections for the flow to be received by the ACE appliance.
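As a sketch of the ACL step (the ACL name is hypothetical; the VIP address 10.0.0.14 is the one used in this design example), ACLs permitting the classified flows are applied to the ingress interface:

```
access-list VLAN100-IN line 10 extended permit tcp any host 10.0.0.14 eq www
access-list VLAN100-IN line 20 extended permit tcp any host 10.0.0.14 eq https
access-list VLAN100-IN line 30 extended permit tcp any host 10.0.0.14 eq ftp

interface vlan 100
  access-group input VLAN100-IN
```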
Figure: Flows received on VLAN 100

Client                     | Server                | Protocols / Processing
Internet or internal users | Web/file servers      | HTTP: LB; HTTPS: SSL offload, LB; FTP: LB, no delete
Internet mail servers      | Mail server           | SMTP
Internal users             | Mail server           | POP, SMTP
Server admin               | Servers               |
Management users           | ACE                   |
The network analysis continues with identifying the client flows received on VLAN 100, which
is the VLAN between the ACE and the router. Most of the table in the figure is a subset of the
master flow table created during the network analysis. The flow documented in the bottom row
is a new flow that accounts for management traffic. This flow is added to VLAN 100 because
that is the ingress interface for new management connections.
Figure: Classification criteria for the VLAN 100 flows (ACEAP v1.01-18)

Client                     | Server                | Classification Criteria
Internet or internal users | Web/file servers      | match virtual-address 10.0.0.14 tcp eq http; match virtual-address 10.0.0.14 tcp eq https; match virtual-address 10.0.0.14 tcp eq ftp; match FTP request-method dele or rmd
Internet mail servers      | Mail server           | n/a
Internal users             | Mail server           | n/a
Server admin               | Servers               | n/a
Management users           | ACE                   |
Criteria are now created for each classification that will be used to identify the flows. At this point in the process, 10.0.0.14 has been selected as the VIP address. Notice that some flows have no classification criteria: these flows must pass through the ACE, but no ACE-specific processing needs to be applied to them.
Figure: Processing actions for each VLAN 100 classification; flows that merely pass through the ACE (Internet mail servers to mail server, internal users to mail server, server admin to servers) are marked n/a
You now add to the analysis by designing the processing actions for each classification. At this
point you also name some resources, such as the server farms.
Figure: ACL entries for the VLAN 100 flows, beginning with permit tcp any host 10.0.0.14 eq http (ACEAP v1.01-20)
Here you need the IP address of the mail server. It is 10.0.1.45. Notice that you do not have a
specific ACL entry for internal users to reach the mail server for outgoing mail; this is because
these connections are already allowed by the ACL entry for the Internet mail servers to mail
server flow. You do not need to define an additional ACL entry for management users because
the management policy map builds this implicitly.
Figure: Flows received on VLAN 10 (ACEAP v1.01-21)

Client           | Server                | Protocols / Processing
Mail server      | Internet mail servers | SMTP
Internal servers | Internet servers      | HTTP, FTP (outbound)
You now repeat the same analysis steps for the other interface, VLAN 10. Again, you have an
additional flow. This time, you need to account for HTTP and FTP connections used by system
administrators to retrieve resources from the Internet while they are logged into the servers for
maintenance. This flow must be accounted for, because it will now transit through the ACE.
Figure: Classification criteria for the VLAN 10 flows (ACEAP v1.01-22)

Client           | Server                | Classification Criteria
Mail server      | Internet mail servers | n/a
Internal servers | Internet servers      | n/a
There is no ACE-specific processing to be done on connections from the servers, and therefore
no classification criteria or processing actions.
Figure: ACL entries for traffic from VLAN 10 (ACEAP v1.01-23)

Client           | Server                | ACL Entries
Mail server      | Internet mail servers | permit tcp any any eq smtp
Internal servers | Internet servers      | permit tcp any any eq http; permit tcp any any eq ftp
The figure shows the ACL entries needed for traffic from VLAN 10.
Real Servers (ACEAP v1.01-25)

rserver host WEBFILE1
  ip address 10.0.1.50
  inservice
rserver host WEBFILE2
  ip address 10.0.1.51
  inservice
Implementing the sample design begins with defining the real servers. For this task you need
the IP addresses, which are 10.0.1.50 and 10.0.1.51, as shown in the figure.
Server Farms (ACEAP v1.01-26)
You have two different server farms that use the same real servers but direct traffic to two different ports. Because you are performing Secure Sockets Layer (SSL) offload for some, but not all, of your load-balanced HTTP traffic, you need to direct both types of incoming connections to port 80 on your servers. You also define a separate server farm that uses the FTP port to load balance your incoming FTP traffic.
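The two farms might be sketched as follows. The farm names WEB-PORT80 and FTP-FARM are illustrative; the real-server names and the port assignments follow the design above, with both HTTP and decrypted HTTPS traffic sent to port 80 and FTP traffic sent to port 21:

serverfarm host WEB-PORT80
  rserver WEBFILE1 80
    inservice
  rserver WEBFILE2 80
    inservice
serverfarm host FTP-FARM
  rserver WEBFILE1 21
    inservice
  rserver WEBFILE2 21
    inservice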
Class Maps (ACEAP v1.01-27)
The classification list for each VLAN can be translated into a class map, as shown in the figure.
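A sketch of that translation for the VLAN 100 flows, using the match criteria from the classification table (the class-map names are illustrative; the match lines mirror the notation used in the tables above):

class-map match-all VIP-HTTP
  2 match virtual-address 10.0.0.14 tcp eq http
class-map match-all VIP-HTTPS
  2 match virtual-address 10.0.0.14 tcp eq https
class-map match-all VIP-FTP
  2 match virtual-address 10.0.0.14 tcp eq ftp
class-map type ftp inspect match-any FTP-DELETES
  2 match request-method dele
  3 match request-method rmd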
Layer 7 Processing (ACEAP v1.01-28)
Next in your implementation, you configure the Layer 7 processing. This includes the SSL
proxies, inspection policies, and load-balancing policies.
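Under assumed names, the Layer 7 pieces might be sketched as follows. The SSL proxy service, key and certificate file names, policy names, the server-farm names, and the FTP-DELETES inspection class (assumed here to match the dele and rmd request methods) are all illustrative:

ssl-proxy service WEB-SSL
  key webfile-key.pem
  cert webfile-cert.pem
policy-map type loadbalance first-match WEB-LB
  class class-default
    serverfarm WEB-PORT80
policy-map type loadbalance first-match FTP-LB
  class class-default
    serverfarm FTP-FARM
policy-map type inspect ftp first-match FTP-NO-DELETE
  class FTP-DELETES
    deny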
Layer 4 Processing (ACEAP v1.01-29)
Layer 7 processing is activated by your Layer 3 and 4 policy map, which is now defined.
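A sketch of that Layer 3 and 4 policy map follows. WEBFILE-VIPS is the service-policy name applied to VLAN 100 later in this lesson; the class names and the Layer 7 policy and SSL proxy names are illustrative assumptions:

policy-map multi-match WEBFILE-VIPS
  class VIP-HTTP
    loadbalance vip inservice
    loadbalance policy WEB-LB
  class VIP-HTTPS
    loadbalance vip inservice
    loadbalance policy WEB-LB
    ssl-proxy server WEB-SSL
  class VIP-FTP
    loadbalance vip inservice
    loadbalance policy FTP-LB
    inspect ftp strict policy FTP-NO-DELETE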
Management Access (ACEAP v1.01-30)
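The management-access configuration can be sketched along these lines. The class name and the protocol list are assumptions; MANAGEMENT-ACCESS matches the service-policy name applied to VLAN 100 later in this lesson:

class-map type management match-any MGMT-TRAFFIC
  2 match protocol ssh any
  3 match protocol https any
  4 match protocol icmp any
policy-map type management first-match MANAGEMENT-ACCESS
  class MGMT-TRAFFIC
    permit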
ACLs (ACEAP v1.01-31)

access-list from-outside extended permit tcp any host 10.0.0.14 eq http
access-list from-outside extended permit tcp any host 10.0.0.14 eq https
access-list from-outside extended permit tcp any host 10.0.0.14 eq ftp
access-list from-outside extended permit tcp any host 10.0.1.45 eq smtp
access-list from-outside extended permit tcp 10.0.0.0 255.0.0.0 host 10.0.1.45 eq pop
You define ACL entries for all the traffic that is allowed to start a new connection through the
ACE appliance.
Interfaces (ACEAP v1.01-32)
interface vlan10
  ip address 10.0.1.1 255.255.255.0
  access-group input from-servers
interface vlan100
  ip address 10.0.0.2 255.255.255.0
  access-group input from-outside
  service-policy input WEBFILE-VIPS
  service-policy input MANAGEMENT-ACCESS
Finally, you associate the ACLs and policy maps with the appropriate interfaces.
Summary

This topic summarizes the key points that were discussed in this lesson.

- Various types of requirements must be collected to design a multifeature ACE implementation.
- ACE context design allows functionality to be partitioned based on topological and functional requirements.
- ACE feature design selects the characteristics of the ACE features needed to fulfill network requirements.
- Configuring multiple integrated features builds a comprehensive network solution.
Lesson 13

Summary

This topic summarizes the key points that were discussed in this lesson.

The Cisco 4710 Application Control Engine (ACE) appliance provides intelligent network services through load-balancing, Secure Sockets Layer (SSL) processing, and application firewall features. The features of the ACE appliance, design and operational considerations, and configuration activities were discussed in this module.

- The ACE appliance has four times the capacity of the CSS and adds SSL, application firewall, and WAA functionality.
- The ACE appliance attaches to the network via physical Gigabit Ethernet links, and the contexts attach to the network using VLANs.
- The Modular Policy CLI is used to configure most of the traffic-handling functions of the ACE appliance.
- The ACE Device Manager GUI automatically sets up the management class map, policy map, and service policy. SNMP can be used to manage the ACE appliance.
- ACE security features can be used to protect application hosting resources from network attacks.
- The ACE appliance load balances by analyzing Layer 4 attributes of the client request.
- The ACE appliance can track the state of real servers and server farms using health monitoring probes.
- Layer 7 load balancing, inspection, and modification are available for several application protocols.
- The ACE appliance supports encryption and decryption of SSL transactions.
- The ACE appliance supports 1 Gb/s of compression and web application acceleration features.
- ACE appliances deployed in pairs provide redundancy.
- Multiple features can be integrated to provide several types of processing at the same time.
Lesson 14
Self-Check
Use the questions here to review what you learned in this lesson. The correct answers and
solutions are found in the Lesson Self-Check Answer Key.
Q1) What are the three types of network functions that are performed by the ACE module? (Choose three.) (Source: Introducing ACE)
A) Application firewall
B) Intrusion detection
C) Dynamic IP routing
D) Load balancing
E) Perimeter firewall
F) SSL encryption and decryption
G) VPN termination

Q2)
A) true
B) false

Q3) Which two of the following policy-map types are associated with an interface with the service-policy input command? (Choose two.) (Source: Modular Policy CLI)
A) FTP inspection
B) HTTP load balancing
C) HTTP inspection
D) Layer 3 and 4
E) Management access

Q4) Which interface is the inside interface for Network Address Translation? (Source: Security Features)
A) Client interface
B) Depends on the NAT configuration
C) DMZ
D) Server interface

Q5) How many server farms are used in Layer 4 load balancing? (Choose two.) (Source: Layer 4 Load Balancing)
A) one
B) two
C) three
D) four

Q6) What is the first step in designing an ACE deployment? (Source: Integrating Multiple Features)
A) Analyzing Requirements
B) Designing Topology changes
C) Determining number of contexts
D) Documenting Flows

Q7) How many server farms are used in Layer 7 load balancing? (Choose two.) (Source: Layer 7 Protocol Processing)
A) one
B) two
C) three
D) any number

Q8) How many ACE modules are deployed for a fault-tolerant environment? (Source: High Availability)
A) one
B) two
C) three
D) any number

Q9) Which two of the following SSL/TLS versions are supported by the ACE module? (Choose two.) (Source: Processing Secure Connections)
A) SSL v2
B) SSL v3
C) TLS v1

Q10)
A) FTP
B) HTTP
C) NNTP
D) SNMP
E) SMTP

Lesson Self-Check Answer Key
Q1) A, D, F
Q2)
Q3) D, E
Q4)
Q5)
Q6)
Q7) A, B
Q8) B, C
Q9)
Q10)