
Riverbed Certified Solutions Professional (RCSP) Study Guide

Exam 199-01 for RiOS v5.0

August 2009, Version 2.0.2


COPYRIGHT 2007-2009 Riverbed Technology, Inc. ALL RIGHTS RESERVED
All content in this manual, including text, graphics, logos, icons, and images, is the exclusive property of Riverbed Technology, Inc. (Riverbed) and is protected by U.S. and international copyright laws. The compilation (meaning the collection, arrangement, and assembly) of all content in this manual is the exclusive property of Riverbed and is also protected by U.S. and international copyright laws. The content in this manual may be used as a resource. Any other use, including the reproduction, modification, distribution, transmission, republication, display, or performance, of the content in this manual is strictly prohibited.

TRADEMARKS
RIVERBED TECHNOLOGY, RIVERBED, STEELHEAD, RiOS, INTERCEPTOR, and the Riverbed logo are trademarks or registered trademarks of Riverbed. All other trademarks mentioned in this manual are the property of their respective owners. The trademarks and logos displayed in this manual may not be used without the prior written consent of Riverbed or their respective owners.

PATENTS
Portions, features and/or functionality of Riverbed's products are protected under Riverbed patents, as well as patents pending.

DISCLAIMER
THIS MANUAL IS PROVIDED BY RIVERBED ON AN "AS IS" BASIS. RIVERBED MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED, AS TO THE INFORMATION, CONTENT, MATERIALS, OR PRODUCTS INCLUDED OR REFERENCED IN THE MANUAL. TO THE FULL EXTENT PERMISSIBLE BY APPLICABLE LAW, RIVERBED DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. Although Riverbed has attempted to provide accurate information in this manual, Riverbed assumes no responsibility for the accuracy or completeness of the information. Riverbed may change the programs or products mentioned in this manual at any time without notice, but Riverbed makes no commitment to update the programs or products mentioned in this manual in any respect. Mention of non-Riverbed products or services is for information purposes only and constitutes neither an endorsement nor a recommendation. RIVERBED WILL NOT BE LIABLE UNDER ANY THEORY OF LAW, FOR ANY INDIRECT, INCIDENTAL, PUNITIVE OR CONSEQUENTIAL DAMAGES, INCLUDING, BUT NOT LIMITED TO, LOSS OF PROFITS, BUSINESS INTERRUPTION, LOSS OF INFORMATION OR DATA OR COSTS OF REPLACEMENT GOODS, ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL OR ANY RIVERBED PRODUCT OR RESULTING FROM USE OF OR RELIANCE ON THE INFORMATION PRESENT, EVEN IF RIVERBED MAY HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

CONFIDENTIAL INFORMATION
The information in this manual is considered Confidential Information (as defined in the Reseller Agreement entered with Riverbed or in the Riverbed License Agreement currently available at www.riverbed.com/license, as applicable).

© 2007-2009 Riverbed Technology, Inc. All rights reserved.


Table of Contents
Preface
  Certification Overview
  Benefits of Certification
  Exam Information
  Certification Checklist
  Recommended Resources for Study
RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE
I. General Knowledge
  Optimizations Performed by RiOS
  TCP/IP
  Common Ports
  RiOS Auto-discovery Process
  Enhanced Auto-Discovery Process
  Connection Pooling
  In-path Rules
  Peering Rules
  Steelhead Appliance Models and Capabilities
II. Deployment
  In-path
  Out-of-Band (OOB) Splice
  Virtual In-path
  Policy-Based Routing (PBR)
  WCCP Deployments
  Advanced WCCP Configuration
  Server-Side Out-of-Path Deployments
  Asymmetric Route Detection
  Connection Forwarding
  Simplified Routing (SR)
  Data Store Synchronization
  CIFS Prepopulation
  Authentication and Authorization
  SSL
  Central Management Console (CMC)
  Steelhead Mobile Solution (Steelhead Mobile Controller & Steelhead Mobile Client)
  Interceptor Appliance
III. Features
  Feature Licensing
  HighSpeed TCP (HSTCP)
  MX-TCP
  Quality of Service
  PFS (Proxy File Service) Deployments
  NetFlow
  IPSec
  Operation on VLAN Tagged Links
IV. Troubleshooting
  Common Deployment Issues
  Reporting and Monitoring
  Troubleshooting Best Practices
V. Exam Questions
  Types of Questions
  Sample Questions
VI. Appendix
  Acronyms and Abbreviations

Preface
This Riverbed Certification Study Guide is intended for anyone who wants to become certified in the Riverbed Steelhead products and Riverbed Optimization System (RiOS). The Riverbed Certified Solutions Professional (RCSP) program is designed to validate the skills required of technical professionals who work in the implementation of Riverbed products. This study guide provides a combination of theory and practical experience needed for a general understanding of the subject matter. It also provides sample questions that will help in the evaluation of personal progress and provide familiarity with the types of questions that will be encountered in the exam. This publication does not replace practical experience, nor is it designed to be a stand-alone guide for any subject. Instead, it is an effective tool that, when combined with education activities and experience, can be a very useful preparation guide for the exam.

Certification Overview
The Riverbed Certified Solutions Professional certificate is granted to individuals who demonstrate advanced knowledge and experience with the RiOS product suite. The typical RCSP will have taken a Riverbed approved training class such as the Steelhead Appliance Deployment & Management course in addition to having hands-on experience in performing deployment, troubleshooting, and maintenance of RiOS products in small, medium, and large organizations. While there are no set requirements prior to taking the exam, candidates who have taken a Riverbed authorized training class and have at least six months of hands-on experience with RiOS products have a significantly higher chance of receiving the certification. We would like to emphasize that solely taking the class will not adequately prepare you for the exam. To obtain the RCSP certification, you are required to pass a computerized exam available at any Pearson VUE testing center worldwide.

Benefits of Certification
1. Establishes your credibility as a knowledgeable and capable individual in regard to Riverbed's products and services.
2. Helps improve your career advancement potential.
3. Qualifies you for discounts and/or benefits for Riverbed sponsored events and training.
4. Entitles you to use the RCSP certification logo on your business card.

Exam Information
Exam Specifications
Exam Number: 199-01
Exam Name: Riverbed Certified Solutions Professional
Version of RiOS: Up to RiOS version 5.0 for the Steelhead appliances and the Central Management Console, plus Interceptor 2.0 and Steelhead Mobile 2.0
Number of Questions: 65
Total Time: 75 minutes for the exam, 15 minutes for the survey and tutorial (90 minutes total)
Exam Provider: Pearson VUE
Exam Language: English only. Riverbed allows a 30-minute time extension for English exams taken in non-English-speaking countries for students that request it. English-speaking countries are Australia, Bermuda, Canada, Great Britain, Ireland, New Zealand, Scotland, South Africa, and the United States. A form will need to be completed by the candidate and submitted to Pearson VUE.
Special Accommodations: Yes (must submit a written request to Pearson VUE for ESL or ADA accommodations; includes time extensions and/or a reader)
Offered Locations: Worldwide (over 5,000 test centers in 165 countries)
Pre-requisites: None (although taking a Riverbed training class is highly recommended)
Available to: Everyone (partners, customers, employees, etc.)
Passing Score: 700 out of 1000 (70%)
Certification Expires: Every 2 years (must recertify every 2 years, no grace period)
Wait Between Failed Attempts: 72 hours. No retakes allowed on passed exams.
Cost: $150.00 (USD)
Number of Attempts Allowed: Unlimited (though statistics are kept)

Certification Checklist
As the RCSP exam is geared towards individuals who have both theoretical knowledge of and hands-on experience with the RiOS product suite, proficiency in both areas is crucial to passing the exam. For individuals starting out with the process, we recommend the following steps to guide you along the way:

1. Building Theoretical Knowledge
The easiest way to become knowledgeable in deploying, maintaining, and troubleshooting the RiOS product suite is to take a Riverbed authorized training class. To ensure the greatest possibility of passing the exam, it is recommended that you review the RCSP Study Guide and ensure your familiarity with all topics listed, prior to any examination attempts.

2. Gaining Hands-on Experience
While theoretical knowledge will get you partway there, it is hands-on knowledge that can get you over the top and enable you to pass the exam. Since all deployments are different, providing an exact amount of required experience is difficult. Generally, we recommend that resellers and partners perform at least five deployments in a variety of technologies prior to attempting the exam. For customers, and alternatively for resellers and partners, starting from the design and deployment phase and having at least six months of experience in a production environment would be beneficial.

3. Taking the Exam
The final step in becoming an RCSP is to take the exam at a Pearson VUE authorized testing center. To register for any Riverbed Certification exam, please visit http://www.pearsonvue.com/riverbed.

Recommended Resources for Study


Riverbed Training Courses
Information on Riverbed Training can be found at: http://www.riverbed.com/services/training/.
- Steelhead Appliance Deployment & Management
- Steelhead Appliance Operations & L1/L2 Troubleshooting
- Steelhead Mobile Installation & Configuration
- Central Management Console Configuration & Operations
- Interceptor Appliance Installation & Configuration
- Steelhead Appliance Advanced Deployment & Troubleshooting

Publications
Recommended Reading (In No Particular Order)
- This study guide
- Riverbed documentation
  o Steelhead Management Console User's Guide
  o Steelhead Command-Line Interface Reference Guide
  o Steelhead Appliance Deployment Guide
  o Steelhead Appliance Installation Guide
  o Bypass Card Installation Guide
  o Steelhead Mobile Controller User's Guide
  o Steelhead Mobile Controller Installation Guide
  o Central Management Console User's Guide
  o Central Management Console Installation Guide
  o Interceptor Appliance User's Guide
  o Interceptor Appliance Installation Guide
Other Reading (URLs Subject to Change)
- http://www.ietf.org/rfc.html
  o RFC 793 (original TCP specification)
  o RFC 1323 (TCP Extensions for High Performance)
  o RFC 3649 (HighSpeed TCP for Large Congestion Windows)
  o RFC 3742 (Limited Slow-Start for TCP with Large Congestion Windows)
  o RFC 2474 (Differentiated Services Code Point)
- http://www.caida.org/tools/utilities/flowscan/arch.xml (NetFlow Protocol and Record Headers)
- http://ubiqx.org/cifs/Intro.html (CIFS)
- Microsoft Windows 2000 Server Administrator's Companion by Charlie Russell and Sharon Crawford (Microsoft Press, 2000)
- Common Internet File System (CIFS) Technical Reference by the Storage Networking Industry Association (Storage Networking Industry Association, 2002)
- TCP/IP Illustrated, Volume 1: The Protocols by W. R. Stevens (Addison-Wesley, 1994)
- Internet Routing Architectures (2nd Edition) by Bassam Halabi (Cisco Press, 2000)

RIVERBED CERTIFIED SOLUTIONS PROFESSIONAL STUDY GUIDE


The Riverbed Certified Solutions Professional exam, and therefore this study guide, covers the Riverbed products and technologies through RiOS version 5.0 only (Interceptor 2.0 and Steelhead Mobile 2.0 as well).

I. General Knowledge
Optimizations Performed by RiOS
Optimization is the process of increasing data throughput and network performance over the WAN using Steelhead appliances. An optimized connection exhibits bandwidth reduction as it traverses the WAN. The optimization techniques RiOS utilizes are:
- Data Streamlining
- Transport Streamlining
- Application Streamlining
- Management Streamlining

You should be familiar with the differences in these streamlining techniques for the RCSP test. This information can be found in the Steelhead Appliance Deployment Guide.

Transaction Acceleration (TA)
TA is composed of the following optimization mechanisms:
- A connection bandwidth-reducing mechanism called Scalable Data Referencing (SDR)
- A Virtual TCP Window Expansion (VWE) mechanism that repacks TCP payloads with references that represent arbitrary amounts of data, thus increasing the client data per WAN TCP window
- A latency reduction and avoidance mechanism called Transaction Prediction

SDR and Transaction Prediction can work independently or in conjunction with one another, depending on the characteristics and workload of the data sent across the network. The results of the optimization vary, but often include throughput improvements in the range of 10 to 100 times over unaccelerated links.

Scalable Data Referencing (SDR)
Bandwidth optimization is delivered through SDR. SDR uses a proprietary algorithm to break up TCP data streams into data chunks that are stored on the hard disk (data store) of the Steelhead appliances. Each data chunk is assigned a unique integer label (reference) before it is sent to the peer Steelhead appliance across the WAN. If the same byte sequence is seen again in the TCP data stream, the reference is sent across the WAN instead of the raw data chunk. The peer Steelhead appliance uses this reference to reconstruct the original data in the TCP data stream. Data and references are maintained in persistent storage in the data store within each Steelhead appliance. Because SDR checks data chunks byte by byte, there are no consistency issues, even in the presence of replicated data.

How Does SDR Work?
When data is sent for the first time across a network (no commonality with any file ever sent before), all data and references are new and are sent to the Steelhead appliance on the other side of the network. This new data and the accompanying references are compressed using conventional algorithms so as to improve performance, even on the first transfer.
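The reference-substitution idea behind SDR can be illustrated with a minimal conceptual sketch, covering both the cold first transfer and the warm transfers described next. The fixed chunk size, hash-based labels, and data structures below are illustrative assumptions, not the actual RiOS algorithm:

import hashlib

CHUNK_SIZE = 64  # illustrative fixed-size chunks; RiOS uses its own proprietary segmentation

class SdrPeer:
    """Toy model of a peer that replaces previously seen chunks with short references."""
    def __init__(self):
        self.data_store = {}  # label (reference) -> raw chunk bytes

    def encode(self, stream: bytes):
        """Sender side: emit ('ref', label) for known chunks, ('raw', label, chunk) for new ones."""
        out = []
        for i in range(0, len(stream), CHUNK_SIZE):
            chunk = stream[i:i + CHUNK_SIZE]
            label = hashlib.sha1(chunk).hexdigest()[:8]   # stand-in for a unique integer label
            if label in self.data_store:
                out.append(("ref", label))                # seen before: send only the reference
            else:
                self.data_store[label] = chunk
                out.append(("raw", label, chunk))         # first time: send data plus reference
        return out

    def decode(self, messages):
        """Receiver side: rebuild the original byte stream from references and raw chunks."""
        stream = b""
        for msg in messages:
            if msg[0] == "raw":
                _, label, chunk = msg
                self.data_store[label] = chunk
                stream += chunk
            else:
                stream += self.data_store[msg[1]]
        return stream

sender, receiver = SdrPeer(), SdrPeer()
doc_v1 = b"quarterly report " * 20
doc_v2 = doc_v1 + b"with one small edit"
receiver.decode(sender.encode(doc_v1))   # cold transfer: mostly raw chunks plus references
warm = sender.encode(doc_v2)             # warm transfer: mostly references
assert receiver.decode(warm) == doc_v2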

Over time, more data crosses the network (revisions of a document, for example). Thereafter, when these new requests are sent across the network, the data is compared with references that already exist in the local data store. Any data that the Steelhead appliance determines already exists on the far side of the network is not sent; only the references are sent across the network. As files are copied, edited, renamed, and otherwise changed or moved (as well as web pages being viewed or email sent), the Steelhead appliance continually builds the data store to include more and more data and references. References can be shared by different files and by files in different applications if the underlying bits are common to both. Since SDR can operate on all TCP-based protocols, data commonality across protocols can be leveraged so long as the binary representation of that data does not change between the protocols. For example, when a file transferred via FTP is then transferred using WFS (Windows File System), the binary representation of the file is basically the same and thus references can be sent for that file.

Lempel-Ziv (LZ) Compression
SDR and compression are two different features and can be controlled separately. However, LZ compression is the primary form of data reduction for cold transfers. The Lempel-Ziv compression methods are among the most popular algorithms for lossless storage. Compression is turned on by default. In-path rules can be used to define which optimization features will be used for which set of traffic flowing through the Steelhead appliance.

TCP Optimizations & Virtual Window Expansion (VWE)
As Steelhead appliances are designed to optimize data transfers across wide area networks, they make extensive use of standards-based enhancements to the TCP protocol that may not be present in the TCP stack of many desktop and server operating systems. This includes improved transport capability for networks with high bandwidth-delay products via the use of HighSpeed TCP, MX-TCP, or TCP Vegas for lower-bandwidth links, partial acknowledgements, and other more obscure but throughput-enhancing and latency-reducing features. VWE allows Steelhead appliances to repack TCP payloads with references that represent arbitrary amounts of data. This is possible because Steelhead appliances operate at the Application Layer and terminate TCP, which gives them more flexibility in the way they optimize WAN traffic. Essentially, the TCP payload is increased from its normal window size to an arbitrarily large amount, dependent on the compression ratio for the connection. Because of this increased payload, a given application that relies on TCP performance (for example, HTTP or FTP) takes fewer trips across the WAN to accomplish the same task. For example, consider a client-to-server connection that may have a 64KB TCP window. If there is 256KB of data to transfer, it would take several TCP windows to accomplish this in a network with high latency. With SDR, however, that 256KB of data can potentially be reduced to fit inside a single TCP window, removing the need to wait for acknowledgements before sending the next window, and thus speeding the transfer.

Transaction Prediction
Application-level latency optimization is delivered through the Transaction Prediction module. Transaction Prediction leverages an intimate understanding of protocol semantics to reduce the chattiness that would normally occur over the WAN.
By acting on foreknowledge of specific protocol request-response mechanisms, Steelhead appliances streamline the delivery of data that
would normally be delivered in small increments through large numbers of interactions between the client and server over the WAN. As transactions are executed between the client and server, the Steelhead appliance intercepts each transaction, compares it to its database of past transactions, and makes decisions about the probability of future events. Based on this model, if a Steelhead appliance determines there is a high likelihood of a future transaction occurring, it performs that transaction rather than waiting for the response from the server to propagate back to the client and then back to the server. Dramatic performance improvements result from the time saved by not waiting for each serial transaction to arrive prior to making the next request; instead, the transactions are pipelined one right after the other. Of course, transactions are executed by Steelhead appliances ahead of the client only when it is safe to do so. To ensure data integrity, Steelhead appliances are designed with knowledge of the underlying protocols so they can determine when that is the case. Fortunately, a wide range of common applications have very predictable behaviors and, consequently, Transaction Prediction can enhance WAN performance significantly. When combined with SDR, Transaction Prediction can improve WAN performance up to 100 times.

Common Internet File System (CIFS) Optimization
CIFS is a proposed standard protocol that lets programs make requests for files and services on remote computers over the Internet. CIFS uses the client/server programming model. A client program makes a request of a server program (usually in another computer) for access to a file or to pass a message to a program that runs in the server computer. The server takes the requested action and returns a response. CIFS is a public or open variation of the Server Message Block (SMB) protocol developed and used by Microsoft. In the Steelhead appliance, CIFS optimization is enabled by default. Typically, you would only disable CIFS optimization to troubleshoot the system.

Overlapping Opens
Due to the way certain applications handle the opening of files, file locks are not properly granted to the application in a way that would allow a Steelhead appliance to optimize access to that file using Transaction Prediction. To prevent any compromise to data integrity, the Steelhead appliance only optimizes data to which exclusive access is available (in other words, when locks are granted). When an opportunistic lock (oplock) is not available, the Steelhead appliance does not perform application-level latency optimizations but still performs SDR and compression on the data as well as TCP optimizations. The CIFS overlapping opens feature remedies this problem by having the server-side Steelhead appliance handle file locking operations on behalf of the requesting application. If you disable this feature, the Steelhead appliance will still increase WAN performance, but not as effectively. Enabling this feature for applications that perform multiple opens of the same file to complete an operation (for example, CAD applications) will result in a performance improvement.

NOTE: For the Steelhead appliance to handle the locking properly, all transactions on the file must be optimized by that Steelhead appliance. Therefore, if a remote user opens a file that is optimized using the overlapping opens feature, and a second user opens the same file, the second user might receive an error if the open fails to go through a Steelhead appliance or does not go through the same Steelhead appliance (for example, with certain applications that are sent over the LAN). If this occurs, you should disable overlapping opens optimization for those applications.

Messaging Application Programming Interface (MAPI) Optimization
MAPI optimization is enabled by default. Only uncheck this box if you want to disable MAPI optimization. Typically, you disable MAPI optimization to troubleshoot problems with the system. For example, if you are experiencing problems with Microsoft Outlook clients connecting to Exchange, you can disable MAPI latency acceleration (while continuing to optimize with SDR for MAPI).
- Read ahead on attachments
- Read ahead on large emails
- Write behind on attachments
- Write behind on large emails
- Fails if user authentication is set too high (downgrades to SDR/TCP acceleration only, no Transaction Prediction)

MAPI Prepopulation
Without MAPI prepopulation, if a user closes Microsoft Outlook or switches off the workstation, the TCP sessions are broken. With MAPI prepopulation, the Steelhead appliance can start acting as if it were the mail client. If the client closes the connection, the client-side Steelhead appliance will keep an open connection to the server-side Steelhead appliance, and the server-side Steelhead appliance will keep the connection open to the server. This allows data to be pushed through the data store before the user logs on to the server again. The default timer is set to 96 hours; after that, the connection is reset.
- Optimized MAPI connections are held open after client exit (as if the client left the PC on); think of it as a virtual client
- Keeps reading mail until the timeout expires
- No one is ever reconnected to the prepopulation session (including the original user)
- No need for more Client Access Licenses (CALs); no agents to deploy
- The frequency check and timeout can be configured, or the feature can be disabled
- Enables transmission during off hours, even in consolidated environments
- The feature can be disabled independently from other MAPI optimizations

HTTP Optimization
A typical web page is not a single file that is downloaded all at once. Instead, web pages are composed of dozens of separate objects, including .jpg and .gif images, JavaScript code, cascading style sheets, and more, each of which must be requested and retrieved separately, one after the other. Given the presence of latency, this behavior is highly detrimental to the performance of web-based applications over the WAN. The higher the latency, the longer it takes to fetch each individual object and, ultimately, to display the entire page. RiOS v5.0 and later optimizes web applications using:
- Parsing and Prefetching of Dynamic Content
- URL Learning
- Removal of Unfetchable Objects
- HTTP Metadata Responses
- Persistent Connections
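A rough sketch of the URL Learning idea from the list above; the data structure and trigger points are hypothetical simplifications, not the actual RiOS implementation:

from collections import defaultdict

class UrlLearner:
    """Toy model: remember which embedded objects were fetched after a base page,
    so a later request for the same base page can trigger prefetching across the WAN."""
    def __init__(self):
        self.learned = defaultdict(set)  # base page URL -> embedded object URLs seen afterwards

    def observe(self, base_page: str, object_url: str):
        self.learned[base_page].add(object_url)

    def prefetch_list(self, base_page: str):
        return sorted(self.learned[base_page])

learner = UrlLearner()
# First visit: objects are fetched one at a time and recorded against the base page.
for obj in ("/logo.gif", "/style.css", "/app.js"):
    learner.observe("/index.html", obj)
# A later visit to /index.html: these objects can be requested ahead of the browser
# instead of paying one WAN round trip per object.
print(learner.prefetch_list("/index.html"))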

More information can be found in the Steelhead Appliance Management Console User's Guide.

NFS Optimization
You can configure Steelhead appliances to use Transaction Prediction to perform application-level latency optimization on NFS. Application-level latency optimization improves NFS performance over high-latency WANs. NFS latency optimization optimizes TCP connections and is only supported for NFS v3. You can configure NFS settings globally for all servers and volumes, or you can configure NFS settings that are specific to particular servers or volumes. When you configure NFS settings for a server, the settings are applied to all volumes on that server unless you override settings for specific volumes.
- Read-ahead and read caching (checks freshness with modify date)
- Write-behind
- Metadata prefetching and caching
- Conversion of multiple requests into one larger request
- Special symbolic link handling

Microsoft SQL Optimization
Steelhead appliance MS SQL protocol support includes the ability to perform prefetching and synthetic pre-acknowledgement of queries on database applications. By default, rules that increase optimization for Microsoft Project Enterprise Edition ship with the unit. This optimization is not enabled by default, and enabling MS SQL optimization without adding specific rules will rarely have an effect on any other applications. MS SQL packets must be carried in TDS (Tabular Data Stream) format for a Steelhead appliance to be able to perform optimization. You can also use MS SQL protocol optimization to optimize other database applications, but you must define SQL rules to obtain maximum optimization. If you are interested in enabling the MS SQL feature for other database applications, contact Riverbed Professional Services.

Oracle Forms Optimization
The Oracle Java Initiator (JInitiator) for Oracle Forms is a browser plug-in program that accesses Oracle E-Business application content and Oracle Forms applications directly within a web browser. The Steelhead appliance decrypts, optimizes, and then re-encrypts Oracle Forms native and HTTP mode traffic. Use Oracle Forms optimization to improve Oracle Forms traffic performance. Oracle Forms optimization does not need a separate license and is enabled by default. However, you must also set an in-path rule to enable this feature.

TCP/IP
General Operation
Steelhead appliances are typically placed at the two ends of the WAN, as close to the client and server as possible (with no additional WAN links between the end node and the Steelhead appliance). By placing Steelhead appliances in the network, the TCP session between client and server can be intercepted, and therefore a level of control over the TCP session can be obtained. TCP sessions have to be intercepted in order to be optimized; therefore the Steelhead appliances must see all traffic from source to destination and back. For any given optimized session, there are three distinct TCP connections: one between the client and the client-side Steelhead appliance, one between the server and the server-side Steelhead appliance, and finally one between the two Steelhead appliances.
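The three sessions can be summarized with a small sketch (the addresses are illustrative, and 7800 is the default in-path service port discussed later in this guide):

from collections import namedtuple

Session = namedtuple("Session", "name a b notes")

# Illustrative addresses only.
client, client_sh, server_sh, server = "10.1.1.10", "10.1.1.5", "10.2.2.5", "10.2.2.20"

optimized_connection = [
    Session("outer, client side (LAN)", client, client_sh, "plain TCP, original ports"),
    Session("inner (WAN)", client_sh, server_sh, "optimized TCP, service port 7800 by default"),
    Session("outer, server side (LAN)", server_sh, server, "plain TCP, original ports"),
]

for s in optimized_connection:
    print(f"{s.name}: {s.a} <-> {s.b} ({s.notes})")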

Common Ports
Ports Used by RiOS
Port   Type
7744   Data store sync port
7800   In-path port
7801   NAT port
7810   Out-of-path port
7820   Failover port for redundant appliances
7830   Exchange traffic port
7840   Exchange Director NSPI traffic port
7850   Connection Forwarding (neighbor) port
7860   Interceptor Appliance
7870   Steelhead Mobile

Interactive Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List)
Port         Type
7            TCP ECHO
23           Telnet
37           UDP/Time
107          Remote Telnet Service
179          Border Gateway Protocol
513          Remote Login
514          Shell
1494, 2598   Citrix
3389         MS WBT, TS/Remote Desktop
5631         PC Anywhere
5900 - 5903  VNC
6000         X11

Secure Ports Commonly Passed Through by Default on Steelhead Appliances (Partial List)
Port       Type
22/TCP     ssh
49/TCP     tacacs
443/TCP    https
465/TCP    smtps
563/TCP    nntps
585/TCP    imap4-ssl
614/TCP    sshell
636/TCP    ldaps
989/TCP    ftps-data
990/TCP    ftps
992/TCP    telnets
993/TCP    imaps
995/TCP    pop3s
1701/TCP   l2tp
1723/TCP   pptp
3713/TCP   tftp over tls

RiOS Auto-discovery Process


Auto-discovery is the process by which the Steelhead appliance automatically intercepts and optimizes traffic on all IP addresses and ports. By default, auto-discovery is applied to all IP addresses and to the ports that are not secure, interactive, or Riverbed well-known ports.

Packet Flow
The following diagram shows the first-connection packet flow for traffic that is classified to be optimized by the original auto-discovery protocol. The TCP SYN sent by the client is intercepted by the Steelhead appliance. A TCP option is attached in the TCP header; this allows the remote Steelhead appliance to recognize that there is a Steelhead appliance on the other side of the network. When the server-side Steelhead appliance sees the option (also known as a TCP probe), it responds by sending a TCP SYN/ACK back. After auto-discovery has taken place, the Steelhead appliances continue to set up the TCP inner session and the TCP outer sessions.


[Figure: original auto-discovery packet flow between Client, SH1, SH2, and Server. The client SYN (IP(C) to IP(S)) is intercepted by SH1 and forwarded with a probe attached; SH2 answers with a SYN/ACK plus probe response, and the probe result is cached for 10 seconds. SH1 and SH2 then complete the inner three-way handshake, exchange setup information, announce the service port (default TCP port 7800), and maintain a connection pool of 20. SH2 completes the outer handshake with the server and SH1 completes it with the client; the connect result is cached until failure.]

TCP Option
The TCP option used for auto-discovery is 0x4C, which is 76 in decimal format. The client-side Steelhead appliance attaches a 10-byte option to the TCP header; the server-side Steelhead appliance attaches a 14-byte option in return. Note that this is only done in the initial discovery process and not during connection setup between the Steelhead appliances or on the outer TCP sessions.
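A TCP option is carried on the wire as kind, length, and data. As a rough illustration, the sketch below builds options of kind 0x4C with 10-byte and 14-byte total lengths; the payload bytes are placeholders, since the actual probe contents are not publicly documented:

def build_tcp_option(kind: int, payload: bytes) -> bytes:
    """Encode a TCP option as kind (1 byte) + length (1 byte, counts kind and length) + payload."""
    length = 2 + len(payload)
    return bytes([kind, length]) + payload

PROBE_KIND = 0x4C                                            # 76 decimal
client_probe = build_tcp_option(PROBE_KIND, b"\x00" * 8)     # 10 bytes total, on the client-side SYN
server_probe = build_tcp_option(PROBE_KIND, b"\x00" * 12)    # 14 bytes total, on the SYN/ACK response

print(client_probe.hex(), len(client_probe))   # 4c0a... 10
print(server_probe.hex(), len(server_probe))   # 4c0e... 14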

Enhanced Auto-Discovery Process


In RiOS v4.0.x or later, enhanced auto-discovery (EAD) is available. Enhanced auto-discovery automatically discovers the last Steelhead appliance in the network path of the TCP connection. In contrast, the original auto-discovery protocol automatically discovers the first Steelhead appliance in the path. The difference is only seen in environments where there are three or more Steelhead appliances in the network path for connections to be optimized. Enhanced auto-discovery works with Steelhead appliances running the original auto-discovery protocol. Enhanced auto-discovery ensures that a Steelhead appliance only optimizes TCP connections that are being initiated or terminated at its local site, and that a Steelhead appliance does not optimize traffic that is transiting through its site.

[Figure: enhanced auto-discovery packet flow between Client, SH1, SH2, and Server. The probe still uses TCP option 0x4C, but two probes are now used back-to-back. The client SYN (SEQ1) is intercepted by SH1 and forwarded with a probe; an intermediate Steelhead that is not the last one in the path sends a notification back to SH1, and SH1 re-probes (SYN SEQ2 plus probe). The probe result is cached for 10 seconds. The last Steelhead (S-SH) answers with a SYN/ACK plus probe response, the peers complete the inner handshake and exchange setup information (listening on service port 7800, connection pool of 20), the outer handshakes complete, and the connect result is cached until failure.]

Connection Pooling
General Operation
By default, all auto-discovered Steelhead appliance peers have a default connection pool of 20. The pool size is a user-configurable value that can be set for each Steelhead appliance peer. The purpose of connection pooling is to avoid the TCP handshake for the inner session between the Steelhead appliances across the high-latency WAN. Because these sessions are pre-created between peer Steelhead appliances, when a new connection request is made by a client, the client-side Steelhead appliance can simply use a connection from the pool. Once a connection is pulled from the pool, a new connection is created to take its place so as to maintain the specified number of connections.
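A minimal sketch of the pooling behavior described above; the pool size of 20 is the default mentioned in the text, while make_inner_connection is a hypothetical stand-in for the real inner-session setup:

from collections import deque

POOL_SIZE = 20  # default pool per auto-discovered peer

def make_inner_connection(peer: str) -> str:
    """Placeholder for pre-establishing an inner TCP session to a peer Steelhead appliance."""
    return f"inner-session-to-{peer}"

class ConnectionPool:
    def __init__(self, peer: str, size: int = POOL_SIZE):
        self.peer = peer
        self.pool = deque(make_inner_connection(peer) for _ in range(size))

    def get(self) -> str:
        """Hand out a pre-built inner session and immediately top the pool back up,
        so a new client connection never waits for a WAN three-way handshake."""
        conn = self.pool.popleft()
        self.pool.append(make_inner_connection(self.peer))
        return conn

pool = ConnectionPool("10.2.2.5")
inner = pool.get()                  # used for a newly intercepted client connection
assert len(pool.pool) == POOL_SIZE  # the pool is kept at its configured size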

In-path Rules
General Operation
In-path rules allow a client-side Steelhead appliance to determine what action to perform when intercepting a new client connection (the first TCP SYN packet for a connection). The action taken depends on the type of in-path rule selected and is outlined in detail below. It is important to note that the rules are matched based on source/destination IP information, destination port, and/or VLAN, and are processed from the first rule in the list to the last (top down). Rule processing stops when the first rule matching the specified parameters is reached, at which point the action selected by that rule is taken. Steelhead appliances have three pass-through rules by default, and a fourth implicit rule to auto-discover remote Steelhead appliances; they attempt to optimize traffic that does not match the first three rules. The three default pass-through rules cover port groupings matching interactive traffic (e.g., Telnet, VNC, RDP), encrypted traffic (i.e., server-side Steelhead), and Riverbed-related ports (e.g., 7800, 7810).

Different Types and Their Function
Pass Through. Pass-through rules identify traffic that is passed through the network unoptimized. For example, you may define pass-through rules to exclude subnets from
optimization. Traffic is also passed through when the Steelhead appliance is in bypass mode. (Pass-through might occur because of in-path rules, because the connection was established before the Steelhead appliance was put in place, or because the connection was established before the Steelhead service was enabled.)

Fixed-Target. Fixed-target rules specify out-of-path Steelhead appliances near the target server that you want to optimize. Determine which servers you want the Steelhead appliance to optimize (and, optionally, which ports), and add rules to specify the network of servers, ports, port labels, and out-of-path Steelhead appliances to use. Fixed-target rules can also be used for in-path deployments with Steelhead appliances that are not using enhanced auto-discovery (EAD).

Auto Discover. Auto-discovery is the process by which the Steelhead appliance automatically intercepts and optimizes traffic on all IP addresses and ports. By default, auto-discovery is applied to all IP addresses and to the ports that are not secure, interactive, or default Riverbed ports. Defining in-path rules modifies this default setting.

Discard. Packets for connections that match the rule are dropped silently. The Steelhead appliance filters out traffic that matches the discard rules. This process is similar to how routers and firewalls drop disallowed packets: the connection-initiating device has no knowledge of the fact that its packets were dropped until the connection times out.

Deny. When packets for a connection match the deny rule, the Steelhead appliance actively tries to reset the TCP connection being attempted. Using an active reset process rather than a silent discard allows the connection initiator to know that its connection is disallowed.
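The top-down, first-match evaluation can be sketched as follows; the rule list, subnets, and port sets are illustrative, not the actual default configuration:

import ipaddress

# Each rule: (action, destination subnet, destination ports or None for any port).
# Rules are evaluated top down; the first match wins, mirroring the in-path rule list.
RULES = [
    ("pass-through", "0.0.0.0/0", {23, 3389, 5900}),   # interactive traffic (illustrative ports)
    ("pass-through", "0.0.0.0/0", {22, 443}),          # secure traffic (illustrative ports)
    ("pass-through", "0.0.0.0/0", {7800, 7810}),       # Riverbed protocol ports
    ("auto-discover", "0.0.0.0/0", None),              # implicit final rule: try to optimize
]

def classify_syn(dst_ip: str, dst_port: int) -> str:
    for action, subnet, ports in RULES:
        if ipaddress.ip_address(dst_ip) in ipaddress.ip_network(subnet) and \
           (ports is None or dst_port in ports):
            return action
    return "pass-through"   # nothing matched (cannot happen here because of the catch-all rule)

print(classify_syn("10.2.2.20", 445))   # auto-discover: candidate for optimization
print(classify_syn("10.2.2.20", 443))   # pass-through: matches the secure-port rule first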

Peering Rules
Applicability and Conditions of Use

Peering Rules
Configuring peering rules defines what to do when a Steelhead appliance receives an auto-discovery probe from another Steelhead appliance. As such, the scope of a peering rule is limited to a server-side Steelhead appliance (the one receiving the probe). Note that peering rules on an intermediary (or server-side) Steelhead appliance will have no effect in preventing optimization with a client-side Steelhead appliance if that appliance is using a fixed-target rule designating the intermediary Steelhead appliance as its destination (since there is no auto-discovery probe with a fixed-target rule). The following example shows where you might wish to use peering rules:
[Figure: Site A contains the client and Steelhead1 and connects across WAN 1 to Site B, which contains Steelhead2 and Server1; Site B connects across WAN 2 to Site C, which contains Steelhead3 and Server2.]

Server1 is on the same LAN as Steelhead2 so connections from the client to Server1 should be optimized between Steelhead1 and Steelhead2. Concurrently, Server2 is on the same LAN as Steelhead3 and connections from the client to Server2 should be optimized between Steelhead1 and Steelhead3.

- You do not need to define any rules on Steelhead1 or Steelhead3.
- Add peering rules on Steelhead2 to process connections going to Server1 normally and to pass through all other connections, so that connections to Server2 are not optimized by Steelhead2.
- A rule to pass through inner connections between Steelhead1 and Steelhead3 is already in place by default (by default, connections to destination port 7800 are covered by the port label RBT-Proto).

This configuration causes connections going to Server1 to be intercepted by Steelhead2, and connections going anywhere else to be intercepted by another Steelhead appliance (for example, Steelhead3 for Server2); see the sketch after the list below.

Overcoming Peering Issues Using Fixed-Target Rules
If you do not enable automatic peering or define peering rules as described in the previous sections, you must define:
- A fixed-target rule on Steelhead1 to go to Steelhead3 for connections to Server2
- A fixed-target rule on Steelhead3 to go to Steelhead1 for connections to servers in the same site as Steelhead1
- If you have multiple branches that go through Steelhead2, a fixed-target rule for each of them on Steelhead1 and Steelhead3
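A sketch of the peering decision on Steelhead2 for the scenario above; the subnet is an illustrative assumption, and real peering rules can also match on source and peer in-path addresses:

import ipaddress

SERVER1_SUBNET = ipaddress.ip_network("10.20.0.0/24")   # assumed Site B server subnet

def steelhead2_peering_action(probe_dst_ip: str) -> str:
    """Accept probes destined for Server1's subnet; pass everything else through so the
    probe continues across WAN 2 and is answered by Steelhead3 instead."""
    if ipaddress.ip_address(probe_dst_ip) in SERVER1_SUBNET:
        return "accept"         # Steelhead2 peers with Steelhead1 and optimizes the connection
    return "pass-through"       # probe travels on toward Site C and is answered by Steelhead3

print(steelhead2_peering_action("10.20.0.15"))   # accept (a Server1 connection)
print(steelhead2_peering_action("10.30.0.42"))   # pass-through (Server2, handled by Steelhead3)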

Steelhead Appliance Models and Capabilities


Model Specifications (subject to change)

Steelhead Appliance Ports
A Steelhead appliance has Console, AUX, Primary, WAN, and LAN ports.
- The Primary and AUX ports cannot share the same network subnet
- The Primary and In-path interfaces can share the same network subnet
- You must use the Primary port on the server side for an out-of-path deployment
- You cannot use the Auxiliary port for anything other than management
- If the Steelhead appliance is deployed between two switches, both the LAN and WAN ports must be connected with straight-through cables

Interface Naming Conventions
The interface names for the bypass cards are a combination of the slot number and the port pairs (<slot>_<pair>, <slot>_<pair>). For example, if a four-port bypass card is located in slot 0 of your appliance, the interface names are lan0_0, wan0_0, lan0_1, and wan0_1 respectively. Alternatively, if the bypass card is located in slot 1 of your appliance, the interface names are lan1_0, wan1_0, lan1_1, and wan1_1 respectively. The maximum number of copper LAN-WAN pairs (total paths) is ten; two built-in with a four-port card, six with two six-port cards, and then two for a four-port card for a maximum of ten pairs.
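The naming convention can be expressed as a short sketch (purely illustrative; it simply reproduces the <slot>_<pair> pattern described above):

def interface_names(slot: int, pairs: int):
    """Return the LAN/WAN port names and the logical in-path interface name
    for a bypass card with the given number of LAN-WAN pairs in the given slot."""
    return [
        {"lan": f"lan{slot}_{pair}",
         "wan": f"wan{slot}_{pair}",
         "inpath": f"inpath{slot}_{pair}"}   # logical L3 interface for the pair
        for pair in range(pairs)
    ]

# Four-port card (two LAN-WAN pairs) in slot 0:
for entry in interface_names(slot=0, pairs=2):
    print(entry)   # lan0_0/wan0_0 -> inpath0_0, then lan0_1/wan0_1 -> inpath0_1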


II. Deployment
Deployment Methods

Physical In-path
In a physical in-path deployment, the Steelhead appliance is physically in the direct path that network traffic takes between clients and servers. The clients and servers continue to see client and server IP addresses, and the Steelhead appliance bridges unoptimized traffic from its LAN-facing side to its WAN-facing side (and vice versa). Physical in-path configurations are suitable for any location where the total bandwidth is within the limits of the installed Steelhead appliance or serial cluster of Steelhead appliances. It is generally one of the simplest deployment options and among the easiest to maintain.

Logical In-path
In a logical in-path deployment, the Steelhead appliance is logically in the path between clients and servers, and clients and servers continue to see client and server IP addresses. This deployment differs from a physical in-path deployment in that a packet redirection mechanism is used to direct packets to Steelhead appliances that are not in the physical path of the client or server. Commonly used technologies for redirection are Layer-4 switches, Web Cache Communication Protocol (WCCP), and Policy-Based Routing (PBR).

Server-Side Out-of-Path
A server-side out-of-path deployment is a network configuration in which the Steelhead appliance is not in the direct or logical path between the client and the server. Instead, the server-side Steelhead appliance is connected through the Primary interface and listens on port 7810 for connections coming from client-side Steelhead appliances. In an out-of-path deployment, the Steelhead appliance acts as a proxy. It does not preserve the client's IP address the way in-path deployments do (which allow the server to see the original client IP address); instead, it source-NATs connections to the Primary interface address of the server-side out-of-path Steelhead appliance. A server-side out-of-path configuration is suitable for data center locations when physical in-path or logical in-path configurations are not possible. With server-side out-of-path, client IP visibility is no longer available to the server (due to the NAT), and optimization initiated from the server side is not possible (since there is no redirection of the outbound connection packets to the Steelhead appliance).

Physical Device Cabling
Steelhead appliances have multiple physical and virtual interfaces. The Primary interface is typically used for management purposes, data store synchronization (if applicable), and for server-side out-of-path configurations. The Primary interface can be assigned an IP address and connected to a switch; you would use a straight-through cable for this connection. The LAN and WAN interfaces are purely L1/L2, and no IP addresses can be assigned to them. Instead, a logical L3 interface is created. This is the In-path interface, and it is designated a name on a per-slot and per-port basis (in LAN/WAN pairs). A bypass card (or in-path card) in slot 0 with just one LAN and one WAN interface will have a logical interface called inpath0_0. In-path interfaces for a 4-port card in slot 1 will be inpath1_0 and inpath1_1, representing the pairs of LAN/WAN ports respectively: inpath1_0 represents lan1_0 and wan1_0, and inpath1_1 represents lan1_1 and wan1_1.

For a physical in-path deployment, when connecting the LAN and WAN interfaces to the network, treat each of them as you would a router interface: when connecting to a router, host, or firewall, use a crossover cable; when connecting to a switch, use a straight-through cable. The Steelhead appliance supports auto-MDIX (medium dependent interface crossover); however, using the wrong cables risks breaking the connection between the components the Steelhead appliance is placed between, especially while in bypass, because those components may not support auto-MDIX. For a virtual in-path deployment, the WAN interface needs to be connected; the LAN interface does not need to be connected and will be shut down automatically as soon as the virtual in-path option is enabled in the Steelhead appliance's configuration. For server-side out-of-path deployments, only the Primary interface needs to be connected.

In-path
In-path Networks
Physical in-path configurations are suitable for locations where the total bandwidth is within the limits of the installed Steelhead appliance or serial cluster of Steelhead appliances. The Steelhead appliance can be physically connected to both access ports and trunks. When the Steelhead appliance is placed on a trunk, the In-path interface has to be able to tag its traffic with the correct VLAN number. The supported trunking protocol is 802.1q (Dot1Q). A tag can be assigned via the GUI or the CLI. The CLI command for this is:
HOSTNAME (config) # in-path interface inpathx_x vlan <id>

Inter-Steelhead appliance traffic will use this VLAN (except for Full Transparency connections, as explained below). There are several variations of the in-path deployment. Steelhead appliances can be placed in series for redundancy. Peering rules based on the peer IP address have to be applied to both Steelhead appliances to prevent them from peering with each other. When using 4-port cards, and thus multiple in-path IP addresses, all addresses have to be defined to avoid peering. A serial cluster is a failover design that can be used to mitigate the risk of network instabilities and outages caused by a single Steelhead appliance failure (typically caused by excessive bandwidth once data reduction is no longer occurring). When the maximum number of TCP connections for a Steelhead appliance is reached, that appliance stops intercepting new connections. This gives the next Steelhead appliance in the cluster the opportunity to intercept the new connections, if it has not reached its own maximum number of connections. In-path peering rules and in-path rules are used so that the Steelhead appliances in the cluster know not to intercept connections between themselves. Appliances in a failover deployment process the peering rules you specify in a spill-over fashion. A keepalive mechanism is used between two Steelhead appliances to monitor each other's status and to set a master and backup state for both Steelhead appliances. It is recommended to assign the LAN-side Steelhead appliance to be the master because of the amount of pass-through traffic between the Steelhead appliance and the client or server. Optionally, data stores can be synchronized to ensure warm performance in case of a failure. If Steelhead appliances are deployed in parallel with each other, measures need to be taken to prevent asymmetric traffic from being passed through without optimization. This usually occurs when two or more routing points exist in the network and traffic is spread over the links simultaneously. Connection Forwarding can be used to exchange flow information between
the Steelhead appliances in the parallel deployment. Multiple Steelhead appliances can be bundled together.

WAN Visibility Modes
WAN visibility pertains to how packets traversing the WAN are addressed. RiOS v5.0 offers three types of WAN visibility modes: correct addressing, port transparency, and full address transparency. You configure WAN visibility on the client-side Steelhead appliance (where the connection is initiated). The server-side Steelhead appliance must also support multiple WAN visibility modes (RiOS v5.0 or later).

Correct Addressing
Correct addressing uses Steelhead appliance IP addresses and port numbers in the TCP/IP packet header fields for optimized traffic in both directions across the WAN. This is the default setting. It is "correct" in the sense that the devices that are actually communicating (the TCP endpoints) are the Steelhead appliances, so their IP addresses and ports are reflected in the connection.

Port Transparency
Port transparency preserves your server port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. Traffic is optimized while the server port number in the TCP/IP header field appears to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating Steelhead appliances can view these preserved fields. Use port transparency if you want to manage and enforce QoS policies that are based on destination ports. If your WAN router follows traffic classification rules written in terms of client and network addresses, port transparency enables your routers to use existing rules to classify the traffic without any changes. Port transparency also enables network analyzers deployed within the WAN (between the Steelhead appliances) to monitor network activity and to capture statistics for reporting by inspecting traffic according to its original TCP port number. Port transparency does not require dedicated port configurations on your Steelhead appliances. NOTE: Port transparency only provides server port visibility. It does not provide client and server IP address visibility, nor does it provide client port visibility.

Full Transparency
Full address transparency preserves your client and server IP addresses and port numbers in the TCP/IP header fields for optimized traffic in both directions across the WAN. It also preserves VLAN tags. Traffic is optimized while these TCP/IP header fields appear to be unchanged. Routers and network monitoring devices deployed in the WAN segment between the communicating Steelhead appliances can view these preserved fields. If both port transparency and full address transparency are acceptable solutions, port transparency is preferable because it avoids potential networking risks that are inherent to enabling full address transparency. For details, see the Steelhead Appliance Deployment Guide. However, if you must see your client or server IP addresses across the WAN, full transparency is your only configuration option.
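WAN visibility is selected per in-path rule on the client-side Steelhead appliance. As a rough sketch only (the wan-visibility parameter name, the subnet, and the rule number shown are illustrative and should be verified against the RiOS v5.0 CLI guide), a rule that enables full transparency for traffic destined to a data center subnet might look like:

HOSTNAME (config) # in-path rule auto-discover dstaddr 10.2.0.0/16 wan-visibility full rulenum 1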

Out-of-Band (OOB) Splice


What is the OOB Splice?
An OOB splice is an independent TCP connection, set up when the first connection between two peer Steelhead appliances is made, that is used to transfer version, licensing, and other OOB data between the peers. An OOB connection must exist between two peers for
connections between these peers to be optimized. If the OOB splice dies, all optimized connections on the peer Steelhead appliances are terminated. The OOB connection is a single connection between two Steelhead appliances regardless of the direction of flow: if you open one or more connections in one direction and then initiate a connection from the other direction, there is still only one OOB splice. This connection is made on the first connection between two peer Steelhead appliances using their in-path IP addresses and port 7800 by default. The OOB splice is rarely of any concern except in full transparency deployments.
Case Study

In the example below, the Client is trying to establish a connection to Server-1:


[Figure: Client (10.1.0.10) sits behind client-side Steelhead CFE-1 (in-path 10.1.0.2) and firewall FW-1 (WAN address 1.1.1.1). Across the WAN, firewall FW-2 (WAN address 2.2.2.2) fronts server-side Steelhead SFE-1 (in-path 10.2.0.2) and Server-1 (10.2.0.10) on the 10.2.0.0 network; a second server-side Steelhead SFE-2 (10.3.0.2) and Server-2 (10.3.0.10) sit on the 10.3.0.0 network.]

Issue 1: After establishing the inner connection, the client side will try to establish an OOB connection to the server-side Steelhead appliance (SFE-1). It will address it by the IP address reported in SFE-1's probe response (10.2.0.2). Clearly, the connection to this address will fail, since 10.2.x.x addresses are not valid outside of the firewall (FW-2).
Resolution 1: In the above example, there is one combination of address and port (IP:port) we know about: the connection's destination, Server-1. The client should be able to connect to Server-1. Therefore, the OOB splice creation code in sport can be changed to create a transparent OOB connection from the Client to Server-1 if the corresponding inner connection is transparent.
How to Configure
There are three options for how the OOB splice connection described in Issue 1 above can be established. In a default configuration the out-of-band connection uses the IP addresses of the client-side Steelhead and server-side Steelhead. This is known as correct addressing and is the default behavior. This configuration works for the majority of networks but fails in the network topology described above. The command below is the default setting in a Steelhead appliance's configuration.
in-path peering oobtransparency mode none

In the network topology discussed in Issue 1, the default configuration does not work. There are two oobtransparency modes that may work in establishing the peer connection: destination and full. When destination mode is used, the OOB splice is addressed to the IP address and port of the first server whose connection goes through the Steelhead appliances, while the source remains the client-side Steelhead appliance's in-path IP address and a port number chosen by the client-side Steelhead appliance. To change to this configuration use the following CLI command:
in-path peering oobtransparency mode destination

In oobtransparency full mode, the IP address of the first client is used as the source, together with a port pre-configured on the client-side Steelhead appliance (port 708 by default). The destination IP address and port are the same as in destination mode, i.e., those of the server. This is the recommended configuration when VLAN transparency is required. To change to this configuration use the following CLI command:
in-path peering oobtransparency mode full

To change the default port used by the client-side Steelhead appliance when oobtransparency mode full is configured, use the following CLI command:
in-path peering oobtransparency port <port>

It is important to note that these oobtransparency options are only used with full transparency. If the first inner-connection to a Steelhead was not transparent, the OOB will always use correct addressing.

Virtual In-path
Introduction to Virtual In-path Deployments
In a virtual in-path deployment, the Steelhead appliance is virtually in the path between clients and servers. Traffic moves in and out of the same WAN interface. This deployment differs from a physical in-path deployment in that a packet redirection mechanism is used to direct packets to Steelhead appliances that are not in the physical path of the client or server. Redirection mechanisms:
Layer-4 Switch. You enable Layer-4 switch (or server load-balancer) support when you have multiple Steelhead appliances in your network to manage large bandwidth requirements.
PBR (Policy-Based Routing). PBR enables you to redirect traffic to a Steelhead appliance that is configured as a virtual in-path device. PBR allows you to define policies to redirect packets instead of relying on routing protocols. You define policies to redirect traffic to the Steelhead appliance and policies to avoid loop-back.
WCCP (Web Cache Communication Protocol). WCCP was originally implemented on Cisco routers, multi-layer switches, and web caches to redirect HTTP requests to local web caches (version 1). Version 2, which is supported on Steelhead appliances, can redirect any type of connection from multiple routers or web caches and different ports.

Policy-Based Routing (PBR)


Introduction to PBR
PBR is a router configuration that allows you to define policies to route packets instead of relying on routing protocols. It is enabled on a per-interface basis, and packets coming into a PBR-enabled interface are checked to see if they match the defined policies. If they match, the packets are routed according to the rule defined for the policy. If they do not match, packets are routed based on the usual routing table. The rules can redirect the packets to a specific IP address. To avoid an infinite loop, PBR must be enabled on the interfaces where the client traffic is arriving and disabled on the interfaces corresponding to the Steelhead appliance. The common best practice is to place the Steelhead appliance on a separate subnet. One of the major issues with PBR is that it can black hole traffic (drop all TCP connections to a destination) if the device it is redirecting to fails. To avoid black holing traffic, PBR must have a
way of tracking whether the PBR next hop is available. You can enable this tracking feature in a route map with the following Cisco router command:
set ip next-hop verify-availability

With this command, PBR attempts to verify the availability of the next hop using information from CDP. If that next hop is unavailable, it skips the actions specified in the route map. PBR checks availability in the following manner:
1. When PBR first attempts to send to a PBR next hop, it checks the CDP neighbor table to see if the IP address of the next hop appears to be available. If so, it sends an Address Resolution Protocol (ARP) request for the address, resolves it, and begins redirecting traffic to the next hop (the Steelhead appliance).
2. After PBR has verified the next hop, it continues to send to the next hop as long as it obtains answers to the ARP requests for the next hop IP address. If an ARP request fails to obtain an answer, it then rechecks the CDP table. If there is no entry in the CDP table, it no longer uses the route map to send traffic.
This verification provides a failover mechanism. In more recent versions of the Cisco IOS software, there is a feature called PBR with Multiple Tracking Options. In addition to the old method of using CDP information, it allows methods such as HTTP and ping to be used to determine whether the PBR next hop is available. Using CDP allows you to run with older IOS 12.x versions. A complete PBR configuration sketch appears below.
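The following is a minimal PBR configuration sketch for a Cisco router; the client subnet (10.1.0.0/16), the Steelhead in-path address (10.1.5.2), the route-map name, and the interface are hypothetical and must be adapted to your network.

Router(config)# access-list 100 permit tcp 10.1.0.0 0.0.255.255 any
Router(config)# route-map STEELHEAD permit 10
Router(config-route-map)# match ip address 100
Router(config-route-map)# set ip next-hop verify-availability
Router(config-route-map)# set ip next-hop 10.1.5.2
Router(config-route-map)# exit
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip policy route-map STEELHEAD

The ip policy statement is applied only to the interface where client traffic arrives; applying it to the interface facing the Steelhead appliance would create a redirection loop.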

WCCP Deployments
Introduction to WCCP
WCCP is a stateful protocol that the router and Steelhead appliance use so that traffic can be redirected to the Steelhead appliance for optimization. Several functions have to be covered to make it stateful and scalable: failover, load distribution, and negotiation of connection parameters all have to be communicated throughout the cluster that the Steelhead appliance and router form upon successful negotiation. The protocol has four messages to cover these functions:
HERE_I_AM. Sent by Steelhead appliances to announce themselves.
I_SEE_YOU. Sent by WCCP-enabled routers to respond to announcements.
REDIRECT_ASSIGN. Sent by the designated Steelhead appliance to determine flow distribution.
REMOVAL_QUERY. Sent by the router to check a Steelhead appliance after missed HERE_I_AM messages.
When you configure WCCP on a Steelhead appliance: routers and Steelhead appliances are added to the same service group; Steelhead appliances announce themselves to the routers; routers respond with their view of the service group; and one Steelhead appliance becomes the designated CE (caching engine) and tells the routers how to redirect traffic among the Steelhead appliances in the service group. A minimal unicast configuration is sketched below.
How Steelhead Appliances Communicate with Routers
Steelhead appliances can use one of the following methods to communicate with routers:
Unicast UDP. The Steelhead appliance is configured with the IP address of each router. If additional routers are added to the service group, they must be added on each Steelhead appliance.
Multicast UDP. The Steelhead appliance is configured with a multicast group. If additional routers are added, you do not need to add or change configuration settings on the Steelhead appliances.
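The following sketch shows a basic unicast WCCP setup, mirroring the multicast example later in this section; the service group number (90), the interface, and the router IP address (10.1.1.1) are illustrative only.

On the router:
Router(config)# ip wccp 90
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip wccp 90 redirect in

On the Steelhead appliance:
WCCP Steelhead (config) # wccp enable
WCCP Steelhead (config) # wccp service-group 90 routers 10.1.1.1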
Redirection
By default, all TCP traffic is redirected; optionally, a redirect-list can be defined so that only the traffic matching the redirect-list is redirected. A redirect-list in a WCCP configuration refers to an ACL that is configured on the router to select the traffic that will be redirected. Traffic is redirected using one of the following schemes:
GRE (Generic Routing Encapsulation). Each data packet is encapsulated in a GRE packet with the Steelhead appliance IP address configured as the destination. This scheme is applicable to any network.
L2 (Layer 2). Each packet's MAC address is rewritten with a Steelhead appliance MAC address. This scheme is possible only if the Steelhead appliance is connected to a router at Layer 2.
Either. The either value uses L2 first; if Layer 2 is not supported, GRE is used. This is the default setting.
You can configure your Steelhead appliance to not encapsulate return packets. This allows your WCCP Steelhead appliance to negotiate with the router or switch as if it were going to send gre-return packets, but to actually send l2-return packets. This configuration is optional but recommended when connected directly at L2. The command to override WCCP packet return negotiation is wccp l2-return enable. Be sure the network design permits this.
Load Balancing and Failover
WCCP supports unequal load balancing. Traffic is redirected based on a hashing scheme and the weight of the Steelhead appliances. Each router uses a 256-bucket Redirection Hash Table to distribute traffic for a Service Group across the member Steelhead appliances. It is the responsibility of the Service Group's designated Steelhead appliance to assign each router's Redirection Hash Table. The designated Steelhead appliance uses a WCCP2_REDIRECT_ASSIGNMENT message to assign the routers' Redirection Hash Tables. This message is generated following a change in Service Group membership and is sent to the same set of addresses to which the Steelhead appliance sends WCCP2_HERE_I_AM messages. A router will flush its Redirection Hash Table if a WCCP2_REDIRECT_ASSIGNMENT is not received within five HERE_I_AM_T seconds of a Service Group membership change. The hash algorithm can use several different input fields to come up with an 8-bit output (which is the bucket value). The default input fields are the source and destination IP addresses of the packet that is redirected; the source and destination TCP ports, or any combination, can also be used. The weight determines the percentage of traffic a Steelhead appliance in a cluster gets, and the hashing algorithm determines which flow is redirected to which Steelhead appliance. The default weight is based on the Steelhead appliance model number; the weight is heavier for models that support more connections. You can modify the default weight if desired. With the use of weight you can also create an active/passive cluster by assigning a weight of 0 to the passive Steelhead appliance. This Steelhead appliance will only get traffic when the active Steelhead appliance fails.
Assignment and Redirection Methods
The assignment method refers to how a router chooses which Steelhead appliance in a WCCP service group to redirect packets to. There are two assignment methods, Hash assignment and Mask assignment, and Steelhead appliances support both.
HASH
Redirection using Hash assignment is a two-stage process. In the first stage a primary key, defined by the Service Group, is formed from the packet and is hashed to yield an index into the Redirection Hash Table. The entry at that index is either an unflagged web-cache index, an unassigned bucket, or a flagged bucket. If the entry is an unflagged web-cache index, the packet is redirected to that web-cache. If the bucket is unassigned, the packet is forwarded normally. If the bucket is flagged, indicating a secondary hash, then a secondary key is formed (as defined by the Service Group description). This key is hashed to yield an index number which in turn is looked up in the Redirection Hash Table. If this secondary entry contains a web-cache index then the packet is directed to that web-cache. If the entry is unassigned the packet is forwarded normally.
MASK
The first phase of Mask assignment is defining the mask itself. The mask can be up to seven bits and can be applied to the source TCP port, destination TCP port, source IP address or destination IP address, or a combination of the four attributes, but may not exceed seven bits in total. Depending on the number of bits selected, a different number of buckets is created and assigned to the different Steelhead appliances in the service group. As traffic traverses the router, a bitwise AND operation is performed between the mask and the IP address/TCP port, depending on the mask defined. The traffic is assigned to the different buckets based on the results of the AND operation. Masked IP address/TCP port pairs are processed in the order they are received and in turn are compared against the seven bits. From the Internet-Draft WCCP version 2 (http://www.wrec.org/Drafts/draft-wilson-wrec-wccp-v200.txt): Note that in all of the mask fields of this element a zero means "Don't care."
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                      Source Address Mask                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Destination Address Mask                   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|        Source Port Mask       |     Destination Port Mask     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Source Address Mask. The 32-bit mask to be applied to the source IP address of the packet.
Destination Address Mask. The 32-bit mask to be applied to the destination IP address of the packet.
Source Port Mask. The 16-bit mask to be applied to the TCP/UDP source port field of the packet.
Destination Port Mask. The 16-bit mask to be applied to the TCP/UDP destination port field of the packet.

It may not be obvious from the details here, but there is a priority order to the bits when using Mask assignment. The above diagram reads from most significant to least significant, bottom left to top; in other words, the priority is source port, then destination port, then destination address, then source address. Knowing this is helpful when troubleshooting which bucket a specific resource is allocated to. A small worked example follows.
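As a worked illustration (the mask value and addresses here are hypothetical), suppose a two-bit mask of 0x3 is applied to the source IP address, creating four buckets (0 through 3):

Client 10.1.1.6: last octet 6 = binary 0110; 0110 AND 0011 = 0010, so the flow falls into bucket 2.
Client 10.1.1.9: last octet 9 = binary 1001; 1001 AND 0011 = 0001, so the flow falls into bucket 1.

Whichever Steelhead appliance owns that bucket in the assignment table receives the redirected traffic.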

For more information regarding Hash or Mask assignment, refer to the Steelhead Appliance Deployment Guide and the whitepaper WCCP Mask Assignment provided on the Riverbed Partner Portal and/or Riverbed Technical Support site.

Advanced WCCP Configuration


Using Multicast Groups
If you add multiple routers and Steelhead appliances to a service group, you can configure them to exchange WCCP protocol messages through a multicast group. Configuring a multicast group is advantageous because if a new router is added, it does not need to be explicitly added on each Steelhead appliance. Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
Configuring Multicast Groups on the Router
On the router, at the system prompt, enter the following set of commands:
Router> enable
Router# configure terminal
Router(config)# ip wccp 90 group-address 224.0.0.3
Router(config)# interface fastEthernet 0/0
Router(config-if)# ip wccp 90 redirect in
Router(config-if)# ip wccp 90 group-listen
Router(config-if)# end
Router# write memory

NOTE: Multicast addresses must be between 224.0.0.0 and 239.255.255.255.
Configuring Multicast Groups on the Steelhead Appliance
On the WCCP Steelhead appliance, at the system prompt, enter the following set of commands:
WCCP Steelhead > enable
WCCP Steelhead # configure terminal
WCCP Steelhead (config) # wccp enable
WCCP Steelhead (config) # wccp mcast-ttl 10
WCCP Steelhead (config) # wccp service-group 90 routers 224.0.0.3
WCCP Steelhead (config) # write memory
WCCP Steelhead (config) # exit

Limiting Redirection by TCP Port
By default all TCP ports are redirected, but you can configure the WCCP Steelhead appliance to tell the router to redirect only certain TCP source or destination ports. You can specify up to a maximum of seven ports per service group.
Using Access Lists for Specific Traffic Redirection
If redirection is based on traffic characteristics other than ports, you can use ACLs on the router to define what traffic is redirected. ACL considerations: ACLs are processed in order, from top to bottom. As soon as a particular packet matches a statement, it is processed according to that statement and the packet is not evaluated against subsequent statements. Therefore, the order of your access-list statements is very important.
If no port information is explicitly defined, all ports are assumed. By default all lists include an implied deny all entry at the end, which ensures that traffic that is not explicitly included is denied. You cannot change or delete this implied entry.
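For example, a router-side redirect-list that limits redirection to TCP traffic from a branch subnet might look like the following sketch; the ACL number, the subnet, and the service group number are hypothetical and should be adapted to your design.

Router(config)# access-list 120 permit tcp 10.1.0.0 0.0.255.255 any
Router(config)# ip wccp 90 redirect-list 120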

Access Lists: Best Practice
To avoid requiring the router to do extra work, Riverbed recommends that you create an ACL that routes only TCP traffic to the Steelhead appliance. When a WCCP-configured Steelhead appliance receives UDP, GRE, ICMP, and other non-TCP traffic, it returns the traffic to the router.
Verifying and Troubleshooting WCCP Configuration
Checking the Router Configuration
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show ip wccp
Router#show ip wccp 90 detail
Router#show ip wccp 90 view

Verifying WCCP Configuration on an Interface
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show ip interface

Look for WCCP status messages near the end of the output. You can also trace WCCP packets and events on the router.
Checking the Access List Configuration
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#show access-lists <access_list_number>

Tracing WCCP Packets and Events on the Router
On the router, at the system prompt, enter the following set of commands:
Router>en
Router#debug ip wccp events
WCCP events debugging is on
Router#debug ip wccp packets
WCCP packet info debugging is on
Router#term mon

Server-Side Out-of-Path Deployments


Out-of-path Networks
An out-of-path deployment is a network configuration in which the Steelhead appliance is not in the direct physical or logical path between the client and the server. In an out-of-path deployment, the Steelhead appliance acts as a proxy. An out-of-path configuration is suitable for data center locations where physical in-path or virtual in-path configurations are not possible.

In an out-of-path deployment, the client-side Steelhead appliance is configured as an in-path device, and the server-side Steelhead appliance is configured as an out-of-path device. The command to enable server-side out-of-path is:
HOSTNAME (config) # out-of-path enable

[Figure: server-side out-of-path deployment. The client-side Steelhead appliance sits in-path (LAN and WAN interfaces) and uses a fixed-target rule across the WAN to reach the Primary (PRI) interface of the server-side Steelhead appliance, which NATs the connection so the server sees the server-side Steelhead appliance as the source (IP SRC=S-SH).]
A fixed-target rule is applied on the client-side Steelhead appliance to make sure the TCP session is intercepted and statically sent to the out-of-path Steelhead appliance on the server side. When out-of-path is enabled on the server-side Steelhead appliance, it starts listening on port 7810 for incoming connections from a client-side Steelhead appliance. The Steelhead appliance can perform NAT: the server sees the IP address of the Steelhead appliance as the source of the connection, so the packets are returned to the Steelhead appliance instead of the client. This is necessary to make sure that the bidirectional traffic is seen by the Steelhead appliance. Also keep in mind that optimization will only occur when the TCP connection is initiated by the client.
Out-of-Path, Failover Deployment
An out-of-path, failover deployment serves networks where an in-path deployment is not an option. This deployment is cost effective, simple to manage, and provides redundancy. When both Steelhead appliances are functioning properly, the connections traverse the master appliance. If the master Steelhead appliance fails, subsequent connections traverse the backup Steelhead appliance. When the master Steelhead appliance is restored, the next connection traverses the master Steelhead appliance. If both Steelhead appliances fail, the connection is passed through unoptimized to the server. The way to do this is to specify multiple target appliances in the fixed-target in-path rule on the client-side Steelhead appliance, as sketched below.
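A minimal sketch of such a rule on the client-side Steelhead appliance follows; the option names should be verified against the RiOS CLI guide, and the in-path addresses of the primary (10.2.0.2) and backup (10.2.0.3) server-side appliances, the destination subnet, and the rule number are hypothetical.

HOSTNAME (config) # in-path rule fixed-target target-addr 10.2.0.2 target-port 7810 backup-addr 10.2.0.3 backup-port 7810 dstaddr 10.2.0.0/16 rulenum 1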

[Figure: out-of-path failover deployment. A data center LAN with a switch and a router to the WAN; the server and two out-of-path appliances, Steelhead A and Steelhead B, connect to the data center switch.]
Hybrid Mode: In-Path and Server-Side Out-of-Path Deployment
A hybrid mode deployment serves offices that have one WAN routing point and local users, and where the Steelhead appliance must be referenced from remote sites as an out-of-path device (for example, to avoid mistaken auto-discovery or to bypass intermediary Steelhead appliances). The following figure illustrates the client side of the network where the Steelhead appliance is configured as both an in-path and a server-side out-of-path device.
[Figure: hybrid deployment at the client side. Clients and an FTP server connect to a switch; the Steelhead appliance sits in-path between the switch and the firewall/VPN that leads to the WAN, and its Primary (PRI) interface reaches a DMZ web server behind the firewall.]

In this hybrid design, a client-side Steelhead appliance (not shown) would use the typical auto-discovery process to optimize any data going to or coming from the clients shown. If, however, a remote user would like optimization to the DMZ shown above, the standard auto-discovery process would not function properly, since the packet flow would prevent the auto-discovery probe from ever reaching the Steelhead appliance. To remedy this, a fixed-target rule matching the destination address of the DMZ and targeted to the Primary (PRI) interface of the Steelhead appliance above ensures that the traffic reaches the Steelhead appliance and, due to the server-side out-of-path NAT process, that it returns to the Steelhead appliance for optimization on the return path.

Asymmetric Route Detection


Asymmetric auto-detection enables Steelhead appliances to detect the presence of asymmetry within the network. Asymmetry is detected by the client-side Steelhead appliances. Once detected, the Steelhead appliance passes asymmetric traffic through unoptimized, allowing the TCP connections to continue to work. The first TCP connection for a pair of addresses might be

dropped because during the detection process the Steelhead appliances have no way of knowing that the connection is asymmetric. If asymmetric routing is detected, an entry is placed in the asymmetric routing table and any subsequent connections from that IP address pair will be passed through unoptimized. Further connections between these hosts are not optimized until that particular asymmetric routing cache entry times out.

Complete Asymmetry
Description: Packets traverse both Steelhead appliances going from the client to the server but bypass both Steelhead appliances on the return path.
Asymmetric Routing Table entry: bad RST
Log: Sep 5 11:16:38 gen-sh102 kernel: [intercept.WARN] asymmetric routing between 10.11.111.19 and 10.11.25.23 detected (bad RST)

Server-side Asymmetry
Description: Packets traverse both Steelhead appliances going from the client to the server but bypass the server-side Steelhead appliance on the return path.
Asymmetric Routing Table entry: bad SYN/ACK
Log: Sep 7 16:17:25 gen-sh102 kernel: [intercept.WARN] asymmetric routing between 10.11.25.23:5001 and 10.11.111.19:33261 detected (bad SYN/ACK)

Client-side Asymmetry
Description: Packets traverse both Steelhead appliances going from the client to the server but bypass the client-side Steelhead appliance on the return path.
Asymmetric Routing Table entry: no SYN/ACK
Log: Sep 7 16:41:45 gen-sh102 kernel: [intercept.WARN] asymmetric routing between 10.11.111.19:33262 and 10.11.25.23:5001 detected (no SYN/ACK)

Multi-SYN Retransmit
Description: There are two types of Multi-SYN Retransmit. Probe-filtered occurs when the client-side Steelhead appliance sends out multiple SYN+ frames and does not get a response. SYN-rexmit occurs when the client-side Steelhead appliance receives multiple SYN retransmits from a client and does not see a SYN/ACK packet from the destination server.
Asymmetric Routing Table entry: probe-filtered(not-AR)
Log: Sep 13 20:59:16 gen-sh102 kernel: [intercept.WARN] it appears as though probes from 10.11.111.19 to 10.11.25.23 are being filtered. Passing through connections between these two hosts.

Connection Forwarding
In asymmetric networks, a client request traverses a different network path from the server response. Although the packets traverse different paths, to optimize a connection, packets traveling in both directions must pass through the same client and server Steelhead appliances. If you have one path (through Steelhead-2) from the client to the server and a different path (through Steelhead-3) from the server to the client, you need to enable in-path Connection
Forwarding and configure the Steelhead appliances to communicate with each other. These Steelhead appliances are called neighbors and exchange connection information to redirect packets to each other. You can configure multiple neighbors for a Steelhead appliance. Neighbors can be placed in the same physical site or in different sites, but the latency between them should be small because the packets traveling between them are not optimized. When a SYN arrives on Steelhead-2, it sends a message to Steelhead-3 on port 7850 telling it that it is expecting packets for that connection. Steelhead-3 acknowledges, and once Steelhead-2 gets the confirmation from Steelhead-3 it continues with the SYN+ out to the WAN. When the SYN/ACK+ comes back, if it arrives at Steelhead-3, Steelhead-3 encapsulates that packet and forwards it back to Steelhead-2. Once the connection has been established, there is no more encapsulation between the two Steelhead appliances for that flow. If a subsequent packet arrives on Steelhead-3, it performs a destination IP/port rewrite: the Steelhead appliance simply changes the destination IP of the packet to that of the neighbor Steelhead appliance. No encapsulation is involved later on in the flow. In WCCP deployments, Connection Forwarding can also be used to prevent outages whenever the cluster and the redirection table change. The default behavior of Connection Forwarding is that when a neighbor is lost, the Steelhead appliance that lost the neighbor also passes through the connection, since it assumes asymmetric routing of traffic. In WCCP deployments this is not the case and this behavior has to be avoided. The command in-path neighbor allow-failure overrides the default behavior and allows the Steelhead appliances to continue optimizing. Understanding the implications of this command prior to configuring it in a production environment is recommended. Commands to enable Connection Forwarding:
in-path neighbor enable
in-path neighbor ip address <addr> [port <port>]
in-path neighbor allow-failure (optional)

For a neighbor with multiple in-path interfaces, only the IP address of its first in-path interface has to be specified.

Simplified Routing (SR)


Simplified routing collects IP address to next-hop MAC address mappings from the packets the Steelhead appliance receives and uses them when addressing traffic. Enabling simplified routing eliminates the need to add static routes when the Steelhead appliance is in a different subnet from the client and the server. Without simplified routing, if a Steelhead appliance is installed in a different subnet from the client or server, you must define one router as the default gateway and optionally define static routes for the other subnets. Without static routes or other forms of routing intelligence, packets can end up flowing through the Steelhead appliance twice, causing packet ricochet. This could potentially lead to broken QoS models, firewalls blocking packets, and a performance decrease. Enabling simplified routing eliminates these issues. It is recommended to use destination only (dest-only) in certain asymmetric environments. Hot Standby Router Protocol (HSRP), for example, introduces asymmetry into the network. Normally this is not a problem, but when using WAN accelerators it is crucial for all return traffic to reach the
corresponding Steelhead appliance doing the optimization. With source or all enabled, the logical IP address being used by the router is not bound to a physical interface or MAC address. By using all or source-based simplified routing, the MAC address the Steelhead appliances learn is that of the actual IP, which could cause confusion in the route that the packet takes.
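As a sketch, simplified routing is typically enabled with a single CLI command; the exact keyword (dest-only is shown here) should be confirmed against the CLI guide for your RiOS version:

HOSTNAME (config) # in-path simplified routing dest-only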

Data Store Synchronization


In a serial failover scenario the data stores are not synchronized by default. When the master Steelhead appliance fails, the backup Steelhead appliance will take over, but users will experience cold performance again. Data store synchronization can be turned on to exchange data store content. This can be done via either the Primary or the AUX interface. The synchronization process runs on port 7744, and the reconnect timer is set to 30 seconds. Data store synchronization can only occur between the same Steelhead appliance models and can only be used in pairs. The commands to enable automatic data store synchronization are:
HOSTNAME (config) # datastore sync peer-ip "x.x.x.x"
HOSTNAME (config) # datastore sync port "7744"
HOSTNAME (config) # datastore sync reconnect "30"
HOSTNAME (config) # datastore sync master
HOSTNAME (config) # datastore sync enable

If you have not deployed data store synchronization, it is also possible to manually send the data from one Steelhead appliance to another. The receiving Steelhead appliance has to start a listening process on its Primary/AUX interface, and the sending Steelhead appliance then pushes the data to the IP address of that Primary interface. Something to note about the Primary and AUX interfaces: if a connection is created from the Steelhead appliance to some external machine (a non-Steelhead device), the traffic will only go out the Primary or AUX interfaces. Therefore, TACACS+ and RADIUS traffic will only go out the Primary or AUX interface, since it originates from the Steelhead appliance. The commands to start this are:
HOSTNAME (config) # datastore receive port <port>
HOSTNAME (config) # datastore send addr <addr> port <port>

CIFS Prepopulation
The prepopulation operation effectively performs the first Steelhead read of the data on the prepopulation share. Subsequently, the Steelhead appliance handles read and write requests as effectively as with a warm data transfer. With warm transfers, only new or modified data is sent, dramatically increasing the rate of data transfer over the WAN.

Authentication and Authorization


Authentication
The Steelhead appliance can use a RADIUS or TACACS+ authentication system for logging in administrative and monitor users. The following methods for user authentication are provided with the Steelhead appliance:
Local
RADIUS
TACACS+

The order in which authentication is attempted is based on the order specified in the AAA method list. The local value must always be specified in the method list. The authentication methods list provides backup methods should a method fail to authenticate a user. If a method denies a user or is not reachable, the next method in the list is tried. If multiple servers are configured within a method (assuming the method contacts authentication servers) and a server time-out is encountered, the next server in the list is tried. If the current server being contacted issues an authentication reject, no other servers for that method are tried and the next authentication method in the list is attempted. If no methods validate a user, the user is not allowed access to the box. The Steelhead appliance does not have the ability to set a per-interface authentication policy; the same default authentication method list is used for all interfaces. You cannot configure authentication methods with subsets of the RADIUS or TACACS+ servers specified (that is, there are no server groups). When configuring the authentication server, it is important to specify the service rbt-exec along with the appropriate custom attributes for authorization. Authorization can be based on either the admin account or the monitor user account by using local-user-name=admin or local-user-name=monitor, respectively. Refer to the CLI Guide for the available RADIUS and TACACS+ authentication commands.
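On the authentication server side, the rbt-exec service and local-user-name attribute mentioned above might be expressed as in the following sketch for an open-source tac_plus daemon; the user name is hypothetical and the exact syntax should be checked against your AAA server's documentation.

user = jdoe {
    service = rbt-exec {
        local-user-name = "admin"
    }
}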

SSL
With Riverbed SSL, Steelhead appliances are configured to have a trust relationship so they can exchange information securely over an SSL connection. SSL clients and servers communicate with each other exactly as they do without Steelhead appliances; no changes are required to the client and server applications, nor to the configuration of proxies. Riverbed splits up the SSL handshake, the sequence of message exchanges at the start of an SSL connection. This is called split termination. In an ordinary SSL handshake, the client and server first establish identity using public-key cryptography, then negotiate a symmetric session key to be used for data transfer. With Riverbed SSL acceleration, the initial SSL message exchanges take place between the client and the server-side Steelhead appliance. Then the server-side Steelhead appliance sets up a connection to the server, to ensure that the service requested by the client is available. In the last part of the handshake sequence, a Steelhead-to-Steelhead process ensures that both appliances (client-side and server-side) know the session key. The client SSL connection logically terminates at the server but physically terminates at the client-side Steelhead appliance, just as is true for logical versus physical unencrypted TCP connections. And just as the Steelhead-to-Steelhead TCP connection over the WAN may use a better TCP implementation than the ones used by the client or server, the Steelhead-to-Steelhead connection may be configured to use better ciphers and protocols than the client and server would normally use. The Steelhead appliance also contains a secure vault which stores all SSL server settings, other certificates (that is, the CA, peering trusts, and peering certificates), and the peering private key. The secure vault protects your SSL private keys and certificates when the Steelhead appliance is not powered on. You set a password for the secure vault which is used to unlock it when the Steelhead appliance is powered on. After rebooting the Steelhead appliance, SSL traffic is not optimized until the secure vault is unlocked with the correct password.

Refer to the Steelhead Appliance Management Console Users Guide for information on configuring SSL.

Central Management Console (CMC)


Introduction
The CMC facilitates the essential administration tasks for the Riverbed system:
Configuration. The CMC enables you to automatically configure new Steelhead appliances or to send configuration settings to appliances in remote offices. The CMC utilizes policies and groups to facilitate centralized configuration and reporting.
Monitoring. The CMC provides both high-level status and detailed statistics of the performance of Steelhead appliances and enables you to configure event notification for managed Steelhead appliances.
Management. The CMC enables you to start, stop, restart, and reboot remote Steelhead appliances. You can also schedule jobs to send software upgrades and configuration changes to remote appliances or to collect logs from remote Steelhead appliances.

CMC Configuration Objects
The CMC utilizes appliance policies and appliance groups to facilitate centralized configuration and reporting of remote Steelhead appliances. Groups are comprised of Steelhead appliances or sub-groups of Steelhead appliances; all groups and Steelhead appliances are contained in the root default Global group. Policies are sets of common configuration options that can be shared among different Steelhead appliances independently or via group membership. You should be familiar with the policies and the features that each policy manages. The following policy types are available:
Optimization Policy. Use optimization policies to manage optimization features such as the data store, in-path rules, and SSL settings, in addition to many others.
Networking Policy. Use networking policies to manage networking features such as asymmetric routing, DNS settings, host settings, QoS settings, and others.
Security Policy. Use security policies to manage appliances in which security is a key component.
System Settings Policy. Use system settings policies to organize and manage system setting features such as alarms, announcements, email notifications, log settings, and others.

Each policy type is made up of particular RiOS features. For example, system settings policies contain feature sets for common system administration settings such as alarm settings, announcements, email notification settings, among others, while security policies contain feature sets for encryption, authentication methods, and user permissions. Each group or Steelhead appliance can be assigned one of each type of policy. Because the Global group serves as the root group, or parent, to all subsequent groups and appliances, any policies assigned to the Global group provide the default values for all groups and Steelhead appliances. The Override Parent feature can override the inheritance of values from the policies applied to the parent group. It is off by default.

Steelhead Appliance Auto-Registration
Steelhead appliances must be registered with the CMC so that you can monitor and manage them with the CMC. Steelhead appliances are designed to send a registration request periodically to the CMC, either to an IP address or hostname you specify when you run the Steelhead appliance installation wizard, or to a default CMC hostname. In order for auto-registration with the default hostname to work, you must configure your DNS server to map the hostname riverbedcmc to the IP address of the CMC (a DNS record sketch appears at the end of this section). The steps you take to register Steelhead appliances with the CMC depend on the order in which you install the products. You can alternatively add Steelhead appliances manually to be managed by the CMC.
Secure Vault on the CMC
Initially the secure vault is keyed with a default password known only to the RiOS software. This allows the system to automatically unlock the vault during system start up. You can change the password, but the secure vault then does not automatically unlock on start up. To optimize SSL connections or to use data store encryption, the secure vault must be unlocked. Please see the Central Management Console Users Guide for more information.
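A minimal BIND-style zone entry implementing the riverbedcmc mapping might look like the following; the domain and IP address are placeholders for your own environment:

riverbedcmc.example.com.    IN    A    192.0.2.10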

Steelhead Mobile Solution (Steelhead Mobile Controller & Steelhead Mobile Client)
The Steelhead Mobile Controller (SMC) enables centralized management of Steelhead Mobile clients that deliver wide area data services for the entire mobile workforce. The Steelhead Mobile software is deployed to laptops or desktops for mobile workers, home users, and small branch office users. A Steelhead Mobile Controller, located in the data center, is required for Steelhead Mobile deployment, management, and licensing control. Once deployed and connected, the Steelhead Mobile clients connect directly with a Steelhead appliance in order to accelerate data and applications. The Mobile Controller facilitates the essential administration tasks for your Mobile Clients:
Configuration. The Mobile Controller enables you to install, configure, and update Mobile Clients in groups. The Mobile Controller utilizes Endpoint policies, Acceleration policies, MSI packages, and deployment IDs (DIDs) to facilitate centralized configuration and reporting.
Monitoring. The Mobile Controller provides both high-level status and detailed statistics on Mobile Client performance, and enables you to configure alerts for managed Mobile Clients.
Management. The Mobile Controller enables you to schedule software upgrades and configuration changes to groups of Mobile Clients, or to collect logs from Mobile Clients.

Endpoint Policies
Endpoint policies are used as configuration templates to configure groups of Mobile Clients that have the same configuration requirements. For example, you might use the default endpoint policy for the majority of your Mobile Clients and create another one for a group of users who need to connect to a different Mobile Controller.
Acceleration Policies
Acceleration policies are used as configuration templates to configure groups of Mobile Clients that have the same performance requirements. For example, you might use the default
acceleration policy for the majority of your Mobile Clients and create another acceleration policy for a group of Mobile Clients that need to pass through a specific type of traffic. Mobile Clients must have both an endpoint policy and an acceleration policy for optimization to occur.
MSI Packages
You use Microsoft Software Installer (MSI) packages to install and update the Steelhead Mobile Client software on each of your endpoint clients. The MSI package contains information necessary for Mobile Clients to communicate with the Mobile Controller.
Deployment IDs
The Mobile Controller utilizes deployment IDs (DIDs) to link Endpoint and Acceleration policies to your Mobile Clients. The DID governs which policies and MSI packages the Mobile Controller provides to your clients. You define the DIDs when you create MSI packages and then assign policies to the DIDs. When you deploy an MSI package, the DID becomes associated with the endpoint client. The Mobile Controller subsequently uses the DID to identify the client and automatically provide policy and software updates.
Firewall Requirements
If you deploy the Mobile Controller in the DMZ next to a VPN concentrator with firewalls on each side, the client-side network firewall must have port 7801 available. The server-side firewall must have ports 22, 80, 443, 7800, and 7870 open. If you are using application control, you need to allow rbtdebug.exe, rbtmon.exe, rbtsport.exe, and shmobile.exe. Please see the Steelhead Mobile Controller Users Guide for more information.

Interceptor Appliance
The Interceptor appliance extends the performance capabilities of Steelhead appliances to meet the requirements of very large data center environments. Working with Steelhead appliances, an Interceptor appliance can support up to 1,000,000 concurrent connections, running up to 4 gigabits per second.
Interceptor Deployment Terminology

Peer Neighbors. Steelhead 1, Steelhead 2, Steelhead 3, and Steelhead 4 are the pool of LAN-side Steelhead appliances that are load balanced by the Interceptor appliances. In relation to the Interceptor appliances, these Steelhead appliances are called peer neighbors.
Peer Interceptor Appliances. Interceptor 1 and Interceptor 3 are peers to each other, connected virtually, in parallel.
Failover Buddies. Interceptor 1 and Interceptor 2 are failover buddies to each other, connected with cables, in serial. If either Interceptor appliance goes down or requires maintenance, its buddy handles redirection for its connections.

In-Path Rules
When the Interceptor appliance intercepts a SYN request to a server, the in-path rules you configure determine the subnets and ports for traffic that will be optimized. You can specify in-path rules to pass through, discard, or deny traffic, or to redirect and optimize it. In the case of a data center, the Interceptor appliance intercepts SYN requests when a data center server establishes a connection with a client that resides outside the data center. In the connection-processing decision tree, in-path rules are processed before load-balancing rules. Only traffic selected for redirection proceeds to load-balancing rules processing.
Load Balancing
For connections selected by an in-path redirect rule, the Interceptor appliance distributes the connection to the most appropriate Steelhead appliance based on rules you configure, intelligence from monitoring peer neighbor Steelhead appliances, and the Riverbed connection distribution algorithm.
Failover
You can configure a pair of Interceptor appliances as failover buddies. In the event one Interceptor appliance goes down or requires maintenance, the failover buddy ensures uninterrupted service.
Peer Interceptor Monitoring
Peer Interceptor appliances include both failover buddies deployed in a serial configuration and Interceptor appliances deployed in a parallel configuration to handle asymmetric routes. Asymmetric routing can cause the response from the server to be routed along a different physical network path from the original request, and a different Steelhead appliance may be on each of these paths. When you deploy peer Interceptor appliances in parallel, the first Interceptor appliance that receives a packet delays forwarding it. It requests that the other Interceptor appliances redirect packets for the connection to it. When the other Interceptor appliances have confirmed that they have received and accepted this request, the first Interceptor appliance begins to redirect the connection.
Peer Neighbor Monitoring
Peer neighbor Steelhead appliances are the pool of Steelhead appliances for which the Interceptor appliance monitors capacity and balances load. To assist in deployment tuning and troubleshooting, you can monitor the state of neighbor Steelhead appliances.
Link State Detection and Link State Propagation
The Interceptor appliance monitors the link state of devices in its path, including routers, switches, interfaces, and in-path interfaces. When the link state changes (for example, the link goes down or it resumes), the Interceptor appliance propagates the change to the dynamic routing
table. Link state propagation ensures accurate and timely triggers for failover or redundancy scenarios.
EtherChannel Deployment
The Interceptor appliance can operate within an EtherChannel. In an EtherChannel deployment, all of the links in the channel must pass through the same Interceptor appliance.
VLAN Tagging
The Interceptor appliance supports VLAN tagged connections on VLAN trunked links. The Interceptor appliance supports VLAN 802.1q.

III. Features
Feature Licensing
Certain features on Steelhead appliances require a license for operation. Licenses for all features, including platform-specific licenses, are included with the purchase of a Steelhead appliance, apart from the SSL license, which you must request separately. These licenses are factory installed; however, licenses can also be installed by the user via the CLI or Management Console. Licenses are required for the base system to function, as well as for application acceleration for CIFS and MAPI. This includes the Scalable Data Referencing license (base), the Windows File Servers license (CIFS), and the Microsoft Exchange (EXCH) license. Additional licensed features that are automatically included upon activating the base license, and do not require a separate license key, are the Microsoft SQL, NFS, HTTP, and Oracle Forms optimization modules. All licensed features, with the exception of the Microsoft SQL optimization module, are enabled by default.

HighSpeed TCP (HSTCP)


Applicability and Considerations
To better utilize links that have high bandwidth and high latency, such as GigE WANs, OCx/STMx, or any other link that may be classified as a large BDP (bandwidth delay product) link, enabling HSTCP should be considered. HSTCP is a feature you can enable on Steelhead appliances to help reduce the WAN data transfer inefficiencies that are caused by limitations of regular TCP. Enabling the HSTCP feature allows for more complete utilization of these long fat pipes. HSTCP is an IETF-defined RFC standard (defined in RFC 3649 and RFC 3742) and has been shown to provide significant performance improvements in networks with high BDP values. As a basis for determining the applicability of HSTCP for a given network, the following formulas and their interpretation are provided below. For any given TCP Cwnd (congestion window) size and network latency, the maximum throughput can be calculated by dividing the window size by the latency (64KB/.1s=640KB/s). End nodes that are limited to window sizes of 64KB or less (nodes that do not support TCP window scaling as defined in RFC 1323) will prove inefficient in transferring data across links with bandwidth exceeding the Cwnd/RTT limitation. While HSTCP did not introduce TCP window scaling, it typically makes use of it, as links with high BDP values imply that a large TCP window size is needed. For a given transfer, the TCP window size should be no less than the BDP in order to ensure that the full bandwidth of the link is used by that session. By the same token, having a TCP window that exceeds the BDP may cause the receiving host, or devices in between, to exhaust their resources and potentially cause severe bandwidth degradation. Additional considerations with HSTCP relate to how the Cwnd changes in size during a transfer. For most non-HSTCP implementations, after a short period of exponential Cwnd growth (slow start), the window size continues to grow at a rate of 1 MSS/RTT. Most operating systems use a value of 1460 bytes as their MSS, meaning that for each successful round trip (ACK received) the window increases by 1460 bytes. In the case of small BDP and thus small Cwnd sizes, 1460 bytes per RTT represents a moderate growth rate that can peak within a few short seconds. In the case of a large BDP value, however, 1460 bytes per RTT represents a significant amount of time before the Cwnd would extend to the full BDP value. The problem of increasing the
Cwnd size at the rate prescribed by standard TCP is further compounded by the fact that a packet loss event causes TCP to back off by reducing the current Cwnd size by half. This reduction is vital in allowing TCP to play nicely with other sessions sharing link bandwidth; however, in the case of high BDP links, the time to recover from such a loss event at standard Cwnd growth rates represents a very ineffective use of the available bandwidth. For example, for a standard TCP connection with 1500-byte packets and a 100ms round-trip time, achieving a steady-state throughput of 10Gbps would require an average congestion window of 83,333 segments, and a packet drop rate of at most one congestion event every 5,000,000,000 packets (or equivalently, at most one congestion event every 1 2/3 hours). Clearly this is not a likely possibility in real world networks, and this is the basis on which HSTCP was developed. HSTCP solves the problems related to the rate at which to grow the Cwnd, as well as how to respond when loss events occur and the Cwnd needs to be reduced. Further information as to how this is achieved is explained in the RFCs referenced above. The following table and graph show the effect of filling a long fat network (OC-12) with and without Steelhead appliances.
Test Scenario                 Bandwidth   RTT Latency   Throughput
Baseline A                    622 Mbps    15 ms         36 Mbps
With Steelhead Appliances     622 Mbps    15 ms         600+ Mbps
Baseline B                    622 Mbps    100 ms        5 Mbps
With Steelhead Appliances     622 Mbps    100 ms        600+ Mbps

[Graph: Sample FTP Transfers (3 GB file). WAN utilization (millions of bits per second) plotted against time (seconds), comparing a transfer with Steelhead HSTCP at 15 ms RTT against a baseline transfer at 15 ms RTT.]
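The window-versus-throughput arithmetic above is easy to check with a few lines of Python. The following sketch is illustrative only (it is not Riverbed tooling); it reproduces the 64 KB window example and the 10 Gbps congestion window figure quoted in the text.

def max_throughput_bps(window_bytes, rtt_seconds):
    """Maximum throughput achievable with a given window: window / RTT."""
    return window_bytes * 8 / rtt_seconds

def required_window_segments(throughput_bps, rtt_seconds, segment_bytes=1500):
    """Congestion window (in segments) needed to sustain a target throughput."""
    return throughput_bps * rtt_seconds / (segment_bytes * 8)

# 64 KB window at 100 ms RTT -> 640 KB/s
print(max_throughput_bps(64 * 1024, 0.1) / 8 / 1024, "KB/s")

# 10 Gbps at 100 ms RTT with 1500-byte segments -> approximately 83,333 segments
print(round(required_window_segments(10e9, 0.1)), "segments")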

Operation and Configuration

To display HSTCP settings, use the CLI command show tcp highspeed, or navigate to the Configure > Optimization > Performance page in the Steelhead Management Console. HSTCP can be configured via the CLI or the Management Console; the key steps are enabling HSTCP and configuring the appropriate buffer sizes for the LAN and WAN interfaces. When adjusting the buffer sizes, configure them in accordance with the specification of the link. More information about calculating the correct buffer values can be found in the BDP Calculations and Buffer Adjustments section below. To enable HSTCP, use the CLI command tcp highspeed enable. Alternatively, enable HSTCP in the Management Console by selecting the Enable High Speed TCP checkbox and clicking Apply. Note that a service restart is required with either method.

BDP Calculations and Buffer Adjustments
In order to achieve the maximum throughput possible for a given link with TCP, it is important to set the send and receive buffers to a proper size. Buffers that are too small may not allow the Cwnd to open fully, while buffers that are too large may overrun the receiver and break the flow control process. When configuring the send and receive WAN buffers on a Steelhead, it is recommended that they be set to two times the Bandwidth Delay Product. As an example, a 45 Mb/s point-to-point connection with 100 ms of latency should have a buffer size of 1,125,000 bytes set on the WAN send side (for the sending Steelhead) and the same number on the WAN receive side of the receiving Steelhead ((45,000,000 bits/8 * 0.1 s) * 2). For a point-to-point connection such as this one, the send and receive buffers would typically be the same value. Additionally, it is recommended that buffers on WAN routers be set to accommodate the packet influx by allocating at least one times the BDP worth of packets. Considering the case of the 45 Mb/s connection above with 100 ms of latency, and given that a packet is 1500 bytes in size, the router buffer should be 375 packets deep [(45,000,000/8 * 0.1)/1500].
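The buffer arithmetic above can be scripted for study purposes. The following Python sketch is illustrative only; the link values are the 45 Mb/s example from the text. It computes the BDP, the recommended Steelhead WAN buffer of two times the BDP, and the suggested router queue depth in packets.

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth Delay Product in bytes."""
    return bandwidth_bps / 8 * rtt_seconds

link_bps = 45_000_000   # 45 Mb/s point-to-point link
rtt = 0.1               # 100 ms of latency
packet_size = 1500      # bytes per packet

bdp = bdp_bytes(link_bps, rtt)
print("BDP:", int(bdp), "bytes")                                   # 562500
print("Steelhead WAN buffer (2 x BDP):", int(bdp * 2), "bytes")    # 1125000
print("Router queue depth:", int(bdp / packet_size), "packets")    # 375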

MX-TCP
MX-TCP optimizes high-loss links where regular TCP would cause underutilization. With MX-TCP, the TCP congestion control algorithm is removed on the inner connections. This allows the link to be saturated in a much faster time frame and eliminates the possibility of underutilizing the link. Any class that is defined on the Steelhead appliance can be MX-TCP enabled. You can use MX-TCP to achieve high throughput rates even when the physical medium carrying the traffic has high loss rates. For example, a common usage of MX-TCP is for ensuring high throughput on satellite connections where no lower layer loss recovery technique is in use. Another usage of MX-TCP is to achieve high throughput over high-bandwidth, high-latency links, especially when intermediate routers do not have properly tuned interface buffers. Improperly tuned router buffers cause TCP to perceive congestion in the network, resulting in unnecessarily dropped packets, even when the network can support high throughput rates.

Quality of Service
QoS Concepts

You can configure QoS on Steelhead appliances to control the prioritization of different types of network traffic and to ensure that Steelhead appliances give certain network traffic (for instance, VoIP) higher priority than other network traffic. RiOS 5.0 provides two types of QoS structures: Flat and Hierarchical.

Flat QoS. All classes are created at the same level. When all classes are on the same level, the types of QoS policies that can be represented are limited.

Hierarchical QoS (H-QoS). Provides a way to create a hierarchical QoS structure that supports parent and child classes. You can use a parent/child structure to segregate traffic for remote offices based on flow source or destination. This is a way to effectively manage and support remote sites with different bandwidth characteristics.

QoS allows you to specify priorities for various classes of traffic and properly distributes excess bandwidth among classes. NOTE: QoS enforcement is available only in physical in-path deployments.

Steelhead appliances use HFSC (Hierarchical Fair Service Curve) QoS operations to simultaneously control bandwidth and latency for each QoS class. For each class, you can set a:

Priority level.

Minimum guaranteed bandwidth level, which specifies the minimum amount of bandwidth a QoS class is guaranteed to receive when there is bandwidth contention. If unused bandwidth is available, a QoS class receives more than its minimum guaranteed bandwidth level. The percentage of excess bandwidth each QoS class receives is relative to the percentage of minimum guaranteed bandwidth it has been allocated. The total minimum guaranteed bandwidth level of all QoS classes must be less than or equal to 100%.

Upper bandwidth level, which specifies the maximum amount of bandwidth a QoS class is allowed to use, regardless of the available excess bandwidth.

Connection limit, which specifies a maximum number of connections the specified QoS class will optimize. Connections over this limit are passed through.
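To make the excess-bandwidth behavior concrete, the following Python sketch models the proportional sharing described above. It is a simplified conceptual model, not the HFSC scheduler itself; the class names, link speed, and percentages are made-up example values, and bandwidth freed by an upper-limit cap is not redistributed in this simple model.

def distribute_bandwidth(link_kbps, classes):
    """classes: dict of name -> (min_pct, upper_kbps or None).
    Each active class gets its minimum guarantee, then excess bandwidth is
    shared in proportion to the minimum percentages, capped at any upper limit."""
    alloc = {name: link_kbps * min_pct / 100 for name, (min_pct, _) in classes.items()}
    excess = link_kbps - sum(alloc.values())
    total_pct = sum(min_pct for min_pct, _ in classes.values())
    for name, (min_pct, upper) in classes.items():
        share = alloc[name] + excess * min_pct / total_pct
        alloc[name] = min(share, upper) if upper else share
    return alloc

# Hypothetical 1544 kbps (T1) link with three active classes.
print(distribute_bandwidth(1544, {
    "VoIP":    (20, 400),    # 20% minimum, capped at 400 kbps
    "Citrix":  (30, None),   # 30% minimum, no upper limit
    "Default": (10, None),   # 10% minimum, no upper limit
}))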

Once you have defined a QoS class, you can create one or more QoS rules to apply traffic to it. QoS rules define source subnet or port, destination subnet or port, protocol, traffic type, and VLAN and DSCP filters for a QoS class. IMPORTANT: Familiarity with QoS classes and rules from the CLI is required for the exam.

About QoS Class Priorities

There are five QoS priorities for Steelhead appliances. You assign a class priority when you create a QoS class. Once you have created a QoS class, you can modify its class priority. In descending order, class priorities are:

Realtime
Interactive
Business Critical
Normal Priority
Low Priority

Priority levels are minimum priority guarantees. If higher priority service is available, a QoS class will receive it even if the class has been assigned a lower priority level. For example, if a QoS class is assigned the priority level Low Priority, and QoS classes that are assigned higher priority levels are not active, the low priority QoS class adjusts to the highest possible priority for the current traffic patterns.

Maximum Allowable QoS Classes and Rules

The number of QoS classes and rules you can create on a Steelhead appliance depends on the appliance model number.

Steelhead Appliance Model   Maximum Allowable QoS Classes   Maximum Allowable QoS Rules
2xx and lower               20                              60
5x0, 1xx0                   60                              180
2xx0                        80                              240
3xx0                        120                             360
5xx0 and higher             200                             600
Service Ports

Service ports are the ports used for inner connections between Steelhead appliances. You can configure multiple service ports on the server-side of the network for multiple QoS mappings. You define a new service port and then map destination ports to that port, so that QoS configuration settings on the router are applied to that service port. The default service ports are 7800 and 7810.

Riverbed QoS Implementation

Steelhead appliances make use of the HFSC QoS scheduling algorithm. Most traditional algorithms allow you to define either the priority of a packet or the amount of bandwidth that should be allocated for specific packet types (priority queuing, custom queuing). These methods each suffer from problems such as starvation of lower priority queues, or they do not allow low bandwidth queues with latency-sensitive traffic to leave the device sooner than larger packets with more bandwidth allocated to them. Newer scheduling methods allow for a blend of a priority queue for latency-sensitive traffic, while other traffic is placed into a general-purpose queue with bandwidth allocations specified by the administrator for each traffic type (LLQ (Low Latency Queuing) uses this method). The problem with having a single priority queue, or even multiple priority queues (of the same priority, as is the case with LLQ), stems from the fact that most networks today carry traffic types that cannot be classified with such a binary system (priority queue or general queue). VoIP traffic, which is typically very latency sensitive, should clearly be placed in a queue of high priority. However, traffic such as stock quotes, video conferencing, and remote PC control (for example, Remote Desktop Protocol or PC Anywhere) is also latency sensitive, and placing it into either the same priority queue or a separate priority queue with a different bandwidth allocation still causes the same problem: two or more queues of the same priority will give latency preference to packets in the queue that has more bandwidth allocated to it. As an example, consider a case of LLQ where two priority queues are created, one for voice traffic and one for video traffic. The voice queue is allocated 10% of the bandwidth, and the video queue, which is also latency sensitive, is allocated 40% of the bandwidth. Since the router has no ability to recognize that the small voice packets should generally be allowed out before the larger video packets (up to the bandwidth limit), you will experience cases where small voice packets get stuck behind several larger video packets despite the voice queue not fully utilizing its 10% bandwidth allocation. HFSC solves these problems by logically separating the latency element of queuing from the bandwidth element. As such, you can define multiple queues, each of a different priority relative to the other queues, and be assured that even though more bandwidth may be allocated to lower priority queues, the higher priority queues will still be serviced preferentially from a latency perspective, up to the amount of bandwidth specified for each queue. Steelhead appliances implement five queues, from Realtime down to Low Priority, with each successive queue having lower latency priority than the one above it (Realtime having the highest). The strategy imposed by HFSC lends itself particularly well to bursty traffic, as is the case with most networks.
Enforcing QoS for Active/Passive FTP

Active/Passive FTP Operation

To configure optimization policies for the FTP data channel, define an in-path rule with the destination port 20 and set its optimization policy. Setting QoS for destination port 20 on the client-side Steelhead appliance affects passive FTP, while setting QoS for destination port 20 on the server-side Steelhead appliance affects active FTP. In an active FTP session, data connections originate on the server, sourced from port 20 and destined to a random port specified by the client. As such, specifying a QoS rule on the server-side Steelhead with a destination port of 20 is appropriate. With passive FTP, however, data connections initiate on the client from a random port and are destined to a random port on the server; as such, there is no simple way to apply a QoS rule based on the Layer 4 port information. To solve this problem, the Steelhead allows you to define a client-side QoS rule with a destination port of 20 to indicate that you would like the rule applied to passive FTP data connections. The Steelhead intelligently identifies the actual ports used for the passive FTP data transfer and applies the QoS logic of the class where the rule has been applied.

Converting between DSCP, IP Precedence, and ToS

For the RCSP exam, you are expected to know how to convert between the various packet marking types. This is important because Steelhead appliances only understand DSCP (Differentiated Services Code Point) values, while other network devices may support a different method of marking or matching traffic. The methods of converting to and from DSCP values are defined by RFC 2474.

Interpreting and Converting Common Router Policies

In addition to being able to convert to and from DSCP values for proper marking and matching between Steelhead appliances and other network nodes, the RCSP exam requires an understanding of how to convert simple QoS configurations from Cisco and other popular routing platforms. Generally, some familiarity with QoS configuration on routers and an understanding of how Steelhead appliances implement QoS (see the Riverbed QoS Implementation section) should make converting configurations a simpler task.
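As a study aid for the conversions, the following Python sketch shows the bit arithmetic: the DSCP value occupies the upper six bits of the ToS byte, and IP Precedence is the upper three bits of the DSCP field. This is an illustrative sketch based on RFC 2474, not a Riverbed utility.

def dscp_to_tos(dscp):
    """ToS byte value carrying a given DSCP (lower two ECN bits left at zero)."""
    return dscp << 2

def tos_to_dscp(tos):
    return tos >> 2

def dscp_to_precedence(dscp):
    """IP Precedence is the top three bits of the DSCP field."""
    return dscp >> 3

# EF is DSCP 46 -> ToS byte 184 (0xB8), IP Precedence 5
print(dscp_to_tos(46), dscp_to_precedence(46))
# AF22 is DSCP 20 -> ToS byte 80 (0x50), IP Precedence 2
print(dscp_to_tos(20), dscp_to_precedence(20))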

PFS (Proxy File Service) Deployments


Introduction to PFS

PFS is an integrated virtual file server that allows you to store copies of files on the Steelhead appliance with Windows file access, creating several options for transmitting data between remote offices and centralized locations with improved performance. Data is configured into file shares and the shares are periodically synchronized transparently in the background over the optimized connection of the Steelhead appliance. PFS leverages the integrated disk capacity of the Steelhead appliance to store file-based data in a format that allows it to be retrieved by NAS clients.

PFS Terms

Proxy File Server - A virtual file server resident on the Steelhead appliance, providing Windows file access (with ACLs) capability at a branch office on the LAN network, populated over an optimized WAN connection with data from the origin server.

Origin File Server - The server located in the data center which hosts the origin data volumes. IMPORTANT: The PFS share and the origin-server share name cannot contain Unicode characters. The Management Console does not support Unicode characters.

Domain Mode - In Domain mode you join the Windows domain for which the Steelhead appliance will be a member. Typically, this is the same domain as your company's domain.

Domain Controller - Specifies the domain controller name, the host that provides user login service in the domain. (Typically, with Windows Active Directory Service domains, given a domain name, the system automatically retrieves the domain controller name.)

Local Workgroup Mode - In Local Workgroup mode you define a workgroup and add individual users that will have access to the PFS shares on the Steelhead appliance.

Global Share - The data volume exported from the origin server to the remote Steelhead appliance.

Local Name - The name that you assign to a share on the Steelhead appliance. This is the name by which users identify and map a share.

Remote Path - The path to the data on the origin server, or the Universal Naming Convention (UNC) path of a share that you want to make available to PFS.

Share Synchronization - Synchronization runs periodically in the background, ensuring that the data on the proxy file server is synchronized with the origin server. You can have the Steelhead appliance refresh the data automatically by setting an interval in minutes, or manually at any time.

When to Use PFS

Before you configure PFS, evaluate whether it is suitable for your network needs. Advantages of using PFS are:

LAN access to data residing across the WAN. File access performance is improved between central and remote locations. PFS creates an integrated file server, enabling clients to access data directly from the proxy filer on the LAN as opposed to the WAN. Transparently in the background, data on the proxy filer is synchronized with data from the origin file server over the WAN.

Continuous access to files in the event of WAN disruption. PFS provides support for disconnected operations. In the event of a network disruption that prevents access over the WAN to the origin server, files can still be accessed on the local Steelhead appliance.

Simple branch infrastructure and backup architectures. PFS consolidates file servers and local tape backup from the branch into the data center. PFS enables a reduction in number and size of backup windows running in complex backup architectures.

Automatic content distribution. PFS provides a means for automatically distributing new and changed content throughout a network.

If any of these advantages can benefit your environment, then enabling PFS in the Steelhead appliance is appropriate. However, PFS requires pre-identification of files and is not appropriate in environments in which there is concurrent read-write access to data from multiple sites.
Pre-identification of PFS files. PFS requires that files accessed over the WAN are identified in advance. If the data set accessed by the remote users is larger than the specified capacity of your Steelhead appliance model, or if it cannot be identified in advance, then you should have end users access the origin server directly through the Steelhead appliance without PFS. (This configuration is also known as Global mode.)

Concurrent read-write data access from multiple sites. In a network environment where users from multiple branch offices update a common set of centralized files and records over the WAN, the Steelhead appliance without PFS is the most appropriate solution, because file locking is handled directly between the client and the server. The Steelhead appliance always consults the origin server in response to a client request; it never provides a proxy response or data from its data store without consulting the origin server.

Prerequisites and Tips

This section describes prerequisites and tips for using PFS:

Before you enable PFS, configure the Steelhead appliance to use NTP to synchronize the time. To use PFS, the Steelhead appliance and DC clocks must be synchronized.

The PFS Steelhead appliance must run the same version of the Steelhead appliance software as the server-side Steelhead appliance.

PFS traffic to and from the Steelhead appliance travels through the Primary interface. PFS requires that the traffic originating from the Primary interface flows through both Steelhead appliances. For physical in-path deployments the traffic from the Primary interface has to flow through the LAN interface of the same Steelhead appliance. When logically in-path, this traffic has to be redirected to the same Steelhead appliance.

The PFS share and origin-server share names cannot contain Unicode characters because the Management Console does not support Unicode characters.

Enabling PFS does not reduce the amount of data store allocated for the SDR process performed by a Steelhead appliance.

Version 2 vs Version 3 Setup

Version 2. Specify the server name and remote path for the share folder on the origin file server. With v2.x, you must have the RCU service running on a Windows server; this can be the origin file server or a separate server. Riverbed recommends that you upgrade your v2.x shares to v3.x shares so that you do not have to run the RCU on a server.

Version 3. Specify the login, password, and remote path used to access the share folder on the origin file server. With Version 3, the RCU runs on the Steelhead appliance; you do not need to install the RCU service on a Windows server.

Upgrading V2.x PFS Shares

By default, when you configure PFS shares with Steelhead appliance software versions 3.x and higher, you create v3.x PFS shares. PFS shares configured with Steelhead appliance software v2.x are v2.x shares. V2.x shares are not upgraded when you upgrade the Steelhead appliance software. If you have shares created with v2.x software, Riverbed recommends that you upgrade them to v3.x shares in the Management Console. If you upgrade any v2.x shares, you must upgrade all of them. Once you have upgraded shares to v3.x, you should only create v3.x shares.
If you do not upgrade your v2.x shares:

You should not create v3.x shares.

You must install and start the RCU on the origin server or on a separate Windows host with write access to the data PFS uses. The account that starts the RCU must have write permissions to the folder on the origin file server that contains the data PFS uses. NOTE: In Steelhead appliance software version 3.x and higher, you do not need to install the RCU service on the server for synchronization purposes. All RCU functionality has been moved to the Steelhead appliance.

You must configure domain, not workgroup, settings. Domain mode supports v2.x PFS shares but Workgroup mode does not.

Domain and Local Workgroup Settings

If using your Steelhead appliance for PFS, configure either the domain or local workgroup settings.

Domain Mode

In Domain mode, you configure the PFS Steelhead appliance to join a Windows domain (typically, your company's domain). When you configure the Steelhead appliance to join a Windows domain, you do not have to manage local accounts in the branch office as you do in Local Workgroup mode. Domain mode allows a DC to authenticate users accessing its file shares. The DC can be located at the remote site or over the WAN at the main data center. The Steelhead appliance must be configured as a Member Server in the Windows 2000 or later ADS domain. Domain users are allowed to access the PFS shares based on the access permission settings provided for each user. Data volumes at the data center are configured explicitly on the proxy file server and are served locally by the Steelhead appliance. As part of the configuration, the data volume and ACLs from the origin server are copied to the Steelhead appliance. PFS allocates a portion of the Steelhead appliance data store for users to access as a network file system.

Before you enable Domain mode in PFS, make sure you:

Configure the Steelhead appliance to use NTP to synchronize the time.

Configure the DNS server correctly.

Set the owner of all files and folders in all remote paths to a domain account and not a local account.

A DNS entry should also exist for the Steelhead appliance Primary interface when using Domain mode.

NOTE: PFS only supports domain accounts on the origin file server; PFS does not support local accounts on the origin file server. During an initial copy from the origin file server to the PFS Steelhead appliance, if PFS encounters a file or folder with permissions for both domain and local accounts, only the domain account permissions are preserved on the Steelhead appliance.

Local Workgroup Mode

In Local Workgroup mode you define a workgroup and add individual users that will have access to the PFS shares on the Steelhead appliance.

Use Local Workgroup mode in environments where you do not want the Steelhead appliance to be a part of a Windows domain. Creating a workgroup eliminates the need to join a Windows domain and vastly simplifies the PFS configuration process. NOTE: If you use Local Workgroup mode, you must manage the accounts and permissions for the branch office on the Steelhead appliance. The local workgroup account permissions might not match the permissions on the origin file server.

PFS Share Operating Modes

PFS provides Windows file service in the Steelhead appliance at a remote site. When you configure PFS, you specify an operating mode for each individual file share on the Steelhead appliance. The proxy file server can export data volumes in Local mode, Broadcast mode, and Stand-Alone mode. After the Steelhead appliance receives the initial copy of the data and ACLs, shares can be made available to local clients. In Broadcast and Local mode only, shares on the Steelhead appliance are periodically synchronized with the origin server at intervals you specify, or manually if you choose. During the synchronization process the Steelhead appliance optimizes this traffic across the WAN.

Broadcast Mode. Use Broadcast mode for environments seeking to broadcast a set of read-only files to many users at different sites. Broadcast mode quickly transmits a read-only copy of the files from the origin server to your remote offices. The PFS share on the Steelhead appliance contains read-only copies of files on the origin server. The PFS share is synchronized from the origin server according to parameters you specify when you configure it. However, files deleted on the origin server are not deleted on the Steelhead appliance until you perform a full synchronization. Additionally, if you regularly perform directory moves on the origin server (for example, move .\dir1\dir2 to .\dir3\dir2), incremental synchronization will not reflect these directory changes. You must perform a full synchronization frequently to keep the PFS shares in synchronization with the origin server.

Local Mode. Use Local mode for environments that need to efficiently and transparently copy data created at a remote site to a central data center, perhaps where tape archival resources are available to back up the data. Local mode enables read-write access at remote offices to update files on the origin file server. After the PFS share on the Steelhead appliance receives the initial copy from the origin server, the PFS share copy of the data becomes the master copy. New data generated by clients is synchronized from the Steelhead appliance copy to the origin server based on parameters you specify when you configure the share. The folder on the origin server essentially becomes a backup folder of the share on the Steelhead appliance. If you use Local mode, users must not write directly to the corresponding folder on the origin server.

NOTE: In Local mode, the Steelhead appliance copy of the data is the master copy; do not make changes to the shared files from the origin server while in Local mode. Changes are propagated from the remote office hosting the share to the origin server. Riverbed recommends that you do not use Windows file shortcuts if you use PFS.

Stand-Alone Mode. Use Stand-Alone mode for network environments where it is more effective to maintain a separate copy of files that are accessed locally by the clients at the remote site. The PFS share also creates additional storage space. The PFS share on the Steelhead appliance is a one-time, working copy of data mapped from the origin server. You can specify a remote path to a directory on the origin server, creating a copy at the branch office. Users at the branch office can read from or write to stand-alone shares, but there is no synchronization back to the origin server, since a stand-alone share is an initial, one-time-only synchronization.

Lock Files

When you configure a v3.x Local mode share or any v2.x share (except a Stand-Alone share in which you do not specify a remote path to a directory on the origin server), a text file (._rbt_share_lock.txt) that keeps track of which Steelhead appliance owns the share is created on the origin server. Do not remove this file. If you remove the ._rbt_share_lock.txt file from the origin file server, PFS will not function properly. (V3.x Broadcast and Stand-Alone shares do not create these files.)

Notes:

To join a domain, the Windows domain account must have the correct privileges to perform a join domain operation.

The PFS share and the origin-server share name cannot contain Unicode characters. The Management Console does not support Unicode characters.

If you have shares that were created with RiOS v2.x, the account that starts the RCU must have write permissions to the folder on the origin file server. Also, the logon user for the RCU server must be a member of the Administrators group, either locally on the file server or globally in the domain. Make sure the users are members of the Administrators group on the remote share server, either locally on the file server (the local Administrators group) or globally in the domain (the Domain Administrator group).

Riverbed recommends that you do not run a mixed system of PFS shares, that is, v2.x shares and v3.x shares.

NetFlow
Operation and Implementation

Steelhead appliances support the export of NetFlow v5 data. NetFlow can play an important role in an organization's network by providing detailed accounting between hosts. This information can then be used for various purposes such as billing, identifying top talkers, and capacity planning, to name a few. It can also assist in troubleshooting denial-of-service attacks. It is common to configure NetFlow on the WAN routers in order to monitor the traffic traversing the WAN. However, when Steelhead appliances are in place, the WAN routers see only the inner Steelhead TCP session traffic and not the real IP addresses and ports of the client and server. By supporting NetFlow v5 on the Steelhead appliance, this becomes a non-issue altogether. In fact, it is possible to have only the Steelhead export the NetFlow data instead of the router without compromising any functionality. By doing so, the router can spend more CPU cycles on its core functionality: routing and switching of packets. As with configuring NetFlow on routers, NetFlow statistics are collected on the ingress interfaces of the Steelhead appliance. Therefore, to see a complete flow or conversation between the server and client, it is necessary to configure NetFlow on both the client-side Steelhead appliance and the server-side Steelhead appliance. For example, to determine the amount of CIFS traffic on the LAN between a server and client, configure NetFlow to collect on the following interfaces:

Client-side Steelhead LAN interface (this will show pre-optimized traffic going from client to server).

Server-side Steelhead LAN interface (this will show pre-optimized traffic going from server to client).

Similarly, to determine the amount of CIFS traffic on the WAN between a server and client, configure NetFlow to collect on the following interfaces:

Client-side Steelhead WAN interface (this will show optimized traffic going from server to client).

Server-side Steelhead WAN interface (this will show optimized traffic going from client to server).

NetFlow Protocol Header and Record Header

NetFlow version 5 supports the ordering of NetFlow packets by way of a sequence number transmitted in each packet. The information that can be obtained from a NetFlow packet can be seen in the supported fields of the flow entry packet, and includes common information such as IP addresses, interfaces, packet counts, and other data related to the data transfer. Flow information is available for both optimized and passthrough traffic.

NetFlow Version 5 Flow Header:

NetFlow Version 5 Flow Entry:
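Because the exam references the NetFlow v5 layout, the following Python sketch shows how the 24-byte v5 header and its 48-byte flow records could be unpacked from an export datagram. It is based on the publicly documented NetFlow v5 format rather than on any Riverbed-specific code, and the field order should be verified against the flow header and flow entry layouts referenced above.

import struct

def parse_netflow_v5(datagram: bytes):
    """Unpack the NetFlow v5 header and its flow records."""
    # 24-byte header: version, count, sys_uptime, unix_secs, unix_nsecs,
    # flow_sequence, engine_type, engine_id, sampling_interval
    (version, count, sys_uptime, unix_secs, unix_nsecs,
     flow_sequence, engine_type, engine_id, sampling) = struct.unpack("!HHIIIIBBH", datagram[:24])
    records = []
    offset = 24
    for _ in range(count):
        # 48-byte record: src/dst/nexthop addresses, in/out ifIndex, packets,
        # octets, first/last timestamps, src/dst ports, pad, TCP flags,
        # protocol, ToS, src/dst AS, src/dst mask, pad
        fields = struct.unpack("!IIIHHIIIIHHBBBBHHBBH", datagram[offset:offset + 48])
        records.append(fields)
        offset += 48
    return version, count, records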

Adjusting NetFlow Timers

By default, the Steelhead appliance exports active flows every 30 minutes and inactive flows every 15 seconds. An inactive flow is defined as a flow in which no traffic has been sent in the last 15 seconds. Terminated flows (ended with either a FIN or RST packet) are exported immediately. Some NetFlow collectors provide real-time reporting, and the 30-minute active export interval may be too long. In this case, you can use the following CLI commands to change the timeouts:

ip flow-setting active_to <seconds>
ip flow-setting inactive_to 60

However, bear in mind that more frequent exports could impact the performance of the Steelhead appliance and more network bandwidth will be required to transmit the extra data.
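For example, to export active flows every 60 seconds (an illustrative value; choose an interval your collector and appliance can comfortably handle), you might enter:

ip flow-setting active_to 60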
IPSec
You configure IPSec encryption to allow data to be communicated securely between peer Steelhead appliances. Enabling IPSec encryption makes it difficult for a third party to view your data or to pose as a machine from which you expect to receive data. To enable IPSec, you must specify at least one encryption algorithm (DES or NULL) and one authentication algorithm (MD5 or SHA-1). Only optimized data is protected; passthrough traffic is not. IMPORTANT: You must set IPSec support on each peer Steelhead appliance in your network for which you want to establish a secure connection. You must also specify a shared secret on each peer Steelhead appliance. NOTE: If you NAT traffic between Steelhead appliances, you cannot use the IPSec channel between the Steelhead appliances, because NAT changes the packet headers, causing IPSec to reject them.

Operation on VLAN Tagged Links


A Steelhead appliance can be placed on trunk (802.1q) links. The difference between a trunk and a non-trunk link is that multiple VLANs flow over a single link. To ensure traffic can be processed correctly, the traffic is tagged with a VLAN number, with the exception of traffic in the native VLAN. When a packet enters a trunk, a tag is attached, and when a packet exits the trunk the tag is removed again. Without the tag, there is no way of knowing which VLAN the packet belongs to. With a Steelhead appliance physically on the trunk, the Steelhead appliance has to be able to read the tags attached by the trunking devices. Since the Steelhead appliance is intercepting and originating traffic, it needs IP connectivity to the network and therefore also needs the ability to write tags for the traffic it originates on the In-path interface. The command to write tags is:
in-path interface inpathx_x vlan [nr]

When you specify the VLAN Tag ID for the In-path interface, all packets originating from the Steelhead appliance (In-path interface) are tagged with that VLAN number. The subnet specified by the VLAN for an In-path interface is the one that the appliance uses to set up its inner channel with other Steelhead appliances in your network. For passthrough traffic, the same VLAN tag is applied to the packet when it exits the opposite interface from the one it entered on. As an example, if a passthrough packet enters the LAN interface on VLAN 10, it will leave the WAN interface on VLAN 10 as well. For optimized traffic, however, the packet may enter the LAN interface on VLAN 10, but after the auto-discovery process the inner connection uses the VLAN that the In-path interface is configured on. Traffic returned to the Steelhead appliance from another appliance via the inner TCP session is placed back on the correct VLAN. The VLAN Tag ID might be the same value as, or a different value than, the VLAN tag used on the client. A zero (0) value specifies the non-tagged (native) VLAN. When considering the use of a Steelhead appliance on a trunk link, routing is often a point of concern due to the potentially large number of networks. While static in-path routes can be used, simplified routing commonly allows for an easier deployment. NOTE: When the Steelhead appliance communicates with a client or a server it uses the same VLAN tag as the client or the server. If the Steelhead appliance cannot determine which VLAN the client or server is in, it uses its own VLAN until it is able to determine that information.
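As an illustration of the command shown above (the interface name and VLAN ID are example values only), tagging traffic originated by inpath0_0 with VLAN 10 would look like this:

in-path interface inpath0_0 vlan 10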

IV. Troubleshooting
Common Deployment Issues
Speed and Duplex

Some symptoms of speed and duplex problems are:

Access does not speed up.

Interface counters show errors (sometimes the counters on a Steelhead appliance stay low while they increase on the surrounding network gear). There should be alarm/log messages about error counts.

Packet traces show lots of retransmissions. In Ethereal, use the following display filters:

tcp.analysis.retransmission
tcp.analysis.fast_retransmission
tcp.analysis.lost_segment
tcp.analysis.duplicate_ack

A likely problem is that the router is set to 100/Full (fixed) whereas the Steelhead appliance is set to Auto. In this case, test with a flood ping, ping -f -I {in-path-ip} -s 1400 {clientIP}, or test from the server-side Steelhead appliance to the server. Do not perform the flood ping across the WAN. Change the interface speed/duplex settings to match. NOTE: Ideally the WAN and LAN interfaces have the same duplex settings; otherwise the devices around the Steelhead appliance will have a duplex mismatch when it is in bypass.

SMB (Server Message Block) Signing

SMB signing is a protocol add-on to protect permission distribution. It adds a cryptographic signature to CIFS packets and authenticates endpoints to prevent man-in-the-middle attacks (or optimization). A symptom could be that file access either does not speed up or does not speed up as much as expected. You should see a log message about signed connections. Check the logs for error=SMB_SHUTDOWN_ERR_SEC_SIG_REQUIRED messages. A likely problem is that either the server and the client both have SMB signing enabled (1.X only), or the server has SMB signing required and the client has it enabled. In this case, change the server to not require signing:
(if enable:enable) protocol cifs secure-sig-opt enable

Packet Ricochet

If network connections fail on their first attempt but succeed on subsequent attempts, the cause could be packet ricochet. You should suspect packet ricochet if:

The Steelhead appliance on one or both sides of the network has an In-path interface that is different from that of the local host.

There are no in-path routes defined in your network but they are needed.

You experience packet ricochet symptoms.

Symptoms of packet ricochet are:

Connections between the Steelhead appliance and the clients or server are routed through the WAN interface to a WAN gateway, and then they are routed through a Steelhead appliance to the next-hop LAN gateway.

The WAN router drops SYN packets from the Steelhead appliance before it issues an ICMP redirect. Note that some routers might not be able to send ICMP redirect packets, or might be configured not to send them. ICMP redirects are on by default on most routers and are sent whenever the router has to send a packet out the same interface it arrived on in order to route it towards the destination, and the next hop is on the same subnet as the source IP address. ICMP redirect information is stored for five minutes on the Steelhead appliance.

Opportunistic Locks (Oplocks)

Windows (CIFS) uses opportunistic locking (oplock) to determine the level of safety the OS/application has in working with a file.

Types of Oplocks

The following list describes the types of oplock that a client may hold:

Level II oplock. Informs a client that there are multiple concurrent clients of a file, and none have yet modified it. It allows the client to perform read operations and file attribute fetches using cached or read-ahead local information, but all other requests have to be sent to the server.

Exclusive oplock. Informs a client that it is the only one to have a file open. It allows the client to perform all file operations using cached or read-ahead local information until it closes the file, at which time the server has to be updated with any changes made to the state of the file (contents and attributes).

Batch oplock. Informs a client that it is the only one to have a file open. It allows the client to perform all file operations on cached or read-ahead local information (including open and close operations).

Losing an oplock may pose a problem for several reasons, including interaction with anti-virus programs. The oplock controls the consistency of optimizations such as read-ahead. Oplock levels are reduced when conflicting opens are made to a file. The Steelhead appliance maintains this safety: it reduces optimization when a client has shared access to a file instead of exclusive access, in order to preserve correctness.

Asymmetric Routing (AR)

AR occurs when the transmit path is different from the return path for packets. For a Steelhead appliance to optimize traffic it must see the flow bi-directionally. Traffic can flow asymmetrically everywhere else in the network (between Steelhead appliances).

Detecting Asymmetric Routing

To detect AR, a client-side Steelhead looks for indicators such as:

A RST packet from the client with an invalid SYN number while the connection is in the SYN_SENT state.

A SYN/ACK packet from the server with an invalid ACK number while the connection is in the SYN_SENT state.

An unusually high number of SYN retransmits from the client.

An ACK packet from the client while the connection is in the SYN_SENT state.

Asymmetric Route Passthrough

Asymmetric route passthrough allows connections to be passed through and an entry to be placed into the AR table. The entry is placed in the table for a default of 24 hours. For SYN retransmissions, an entry is first placed in the AR table for 10 seconds. If AR is confirmed, the timeout is increased to the default (24 hours) with the reason code updated to SYN Rexmit (confirmed AR). If a SYN/ACK is received after probing stops, the entry is placed in the table for 5 minutes with a reason code of probe filtered (not AR). If AR passthrough is disabled and AR is detected, the connection is not passed through. A warning message is still placed in the log; however, the alarm is not raised and no email notifications are sent. Normal functionality would be to send a probe each time; however, adding a passthrough entry for 24 hours is a better approach, since it saves the overhead of re-transmitting the probe.

Reporting and Monitoring


Logging

There are eight logging levels, from most to least severe:

Emergency
Alert
Critical
Error
Warning
Notice
Info
Debug

logging local <log level>
logging trap <log level>
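For example, to log messages at the notice level and above to the local log (an illustrative choice of level), you might enter:

logging local notice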

Alarm Definitions and Resolution

Viewing Alarm Status Reports

The Health-Alarm Status report provides status for the Steelhead appliance alarms. The Health-Alarm Status report contains a table of statistics that summarize traffic activity by application. To refresh your report every 15 seconds, click 15 Seconds. To refresh your report every 30 seconds, click 30 Seconds. To turn off refresh, click Off.

Alarms

Admission Control - Whether the system connection limit has been reached. The appliance is optimizing traffic beyond its rated capability and is unable to handle the amount of traffic passing through the WAN link. During this event, the appliance will continue to optimize existing connections, but new connections are passed through without optimization. The alarm clears when the Steelhead appliance moves out of this condition.

Asymmetric Routing - Indicates OK if the system is not experiencing asymmetric traffic. If the system does experience asymmetric traffic, this condition is detected and reported here. In addition, the traffic is passed through, and the route appears in the Asymmetric Routing table.
Central Processing Unit (CPU) Utilization - Whether the system has reached the CPU threshold for any of the CPUs in the Steelhead appliance. If the system has reached the CPU threshold, check your settings. If your alarm thresholds are correct, reboot the Steelhead appliance. NOTE: If more than 100 MB of data is moved through a Steelhead appliance while performing PFS synchronization, the CPU utilization might become high and result in a CPU alarm. This CPU alarm should not be cause for concern.

Data Store - Whether the data store is corrupt. To clear the data store of data, restart the Steelhead service and select Clear Data Store on Next Restart.

Fan Error - Whether the system has detected a problem with the fans. Fans in 3U systems can be replaced.

IPMI - Whether the system has encountered an Intelligent Platform Management Interface (IPMI) error. The system will display a blinking amber LED. To clear the alarm, run the clear hardware error-log command.

Licensing - Whether your licenses are current.

Link State - Whether the system has detected a link that is down. You are notified via SNMP traps, email, and alarm status.

Memory Error - Whether the system has encountered a memory error.

Memory Paging - Whether the system has reached the memory paging threshold. If 100 pages are swapped approximately every two hours, the Steelhead appliance is functioning properly. If thousands of pages are swapped every few minutes, reboot the Steelhead appliance. If rebooting does not solve the problem, contact Riverbed Technical Support.

Neighbor Incompatibility - Whether the system has encountered an error in reaching a Steelhead appliance configured for Connection Forwarding.

Network Bypass - Whether the system is in bypass mode. If the Steelhead appliance is in bypass mode, restart the Steelhead service. If restarting the service does not resolve the problem, reboot the Steelhead appliance. If rebooting does not resolve the problem, shut down and restart the Steelhead appliance.

NFS V2/V4 Alarm (if NFS is enabled and V2/V4 is used) - Whether the system has triggered a v2 or v4 NFS alarm.

Optimization Service - Whether the system has detected a software error in the Steelhead service. The Steelhead service continues to function, but an error message appears in the logs that you should investigate.

Prepopulation or Proxy File Service Configuration Error - Whether there has been a PFS or prepopulation operation error. If an operation error is detected, restart the Steelhead service and PFS.

Prepopulation or Proxy File Service Operation Failed - Whether a synchronization operation has failed. If an operation failure is detected, attempt the operation again.

Proxy File Service Partition Full - Whether the PFS partition is full.

RAID - Whether the system has encountered RAID errors (for example, missing drives, pulled drives, drive failures, and drive rebuilds). For drive rebuilds, if a drive is removed and then reinserted, the alarm continues to be triggered until the rebuild is complete.
IMPORTANT: Rebuilding a disk drive can take 4-6 hours.

Software Version Mismatch - Whether there is a mismatch between software versions in your network. If a software mismatch is detected, resolve the mismatch by upgrading or reverting to a previous version of the software.

SSL Alarms - Whether an error has been detected in your SSL configuration.

System Disk Full - Whether the system partitions (not the data store) are almost full, for example /var, which is used to hold logs, statistics, system dumps, TCP dumps, and so forth.

Temperature - Whether the CPU temperature has exceeded the critical threshold. The default value for the rising threshold temperature is 70 C; the default reset threshold temperature is 67 C.

System Dumps

A system dump file contains data that can help Riverbed Technical Support diagnose problems. From the CLI:
debug generate dump

To view system dump files:


show files debug-dump

To upload the file to a remote host:


file debug-dump upload <filename> <URL>

TCPDump
tcpdump <options>

Tcpdump options: The tcpdump command takes the standard Linux options:

-a Attempt to convert network and broadcast addresses to names.
-c Exit after receiving count packets.
-e Print the link-level header on each dump line.
-E Use algo:secret for decrypting IPSec ESP packets.
-f Print foreign internet addresses numerically rather than symbolically.
-i Listen on interface. If unspecified, tcpdump searches the system interface list for the lowest numbered, configured up interface.
-n Do not convert addresses (that is, host addresses, port numbers, and so forth) to names.
-m Load SMI MIB module definitions from file module. This option can be used several times to load several MIB modules into tcpdump.
-q Quiet output. Print less protocol information so output lines are shorter.
-r Read packets from file (which was created with the -w option).
-S Print absolute, rather than relative, TCP sequence numbers.
-s Snarf snaplen bytes of data from each packet rather than the default of 68. 68 bytes is adequate for IP, ICMP, TCP and UDP but may truncate protocol information from name server and NFS packets. Packets truncated because of a limited snapshot are indicated in the output with [|proto], where proto is the name of the protocol level at which the truncation has occurred.
-v (Slightly more) verbose output. For example, the time to live, identification, total length and options in an IP packet are printed. Also enables additional packet integrity checks such as verifying the IP and ICMP header checksum.
-w Write the raw packets to file rather than parsing and printing them out. They can later be printed with the -r option. Standard output is used if file is -.
-x Print each packet (minus its link level header) in hex. The smaller of the entire packet or snaplen bytes will be printed.
-X When printing hex, print ASCII too. Thus if -x is also set, the packet is printed in hex/ascii. This option enables you to analyze new protocols.
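As a hypothetical usage example (the interface name lan0_0, the host address, and the file name are placeholder values), capturing full packets (-s 0) for a single host to a file for later analysis might look like:

tcpdump -i lan0_0 -s 0 -w capture.pcap host 10.1.1.10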

To delete or upload a tcpdump file from the CLI type:


file tcpdump {delete <filename> | upload <filename> <URL or scp://username:password@hostname/path/filename>}

Troubleshooting Best Practices


Physical Environment

Cables. Make sure you have connected your cables properly.
Straight-through cables: Primary and LAN ports on the appliance to the LAN switch.
Crossover cable: WAN port on the appliance to the WAN router.

Speed and duplex settings. Do not assume network auto-sensing is functioning properly. Make sure your speed and duplex settings match on the Steelhead appliance and the router or switch. Use a ping flood to test duplex settings.

WAN/LAN connections. Ensure the WAN interface is connected to traffic egress and the LAN interface is connected to traffic ingress.

Appliance Configuration

IP addresses. To verify the IP address has been configured correctly:

Ensure the Steelhead appliances are reachable via the IP address. For instance, use the Steelhead CLI command ping.

Verify that the server-side Steelhead appliance is visible to the client-side Steelhead appliance. For example, at the system prompt, enter the CLI command:
tproxytrace -i inpath0_0 server:port

Verify that the client-side Steelhead appliance is visible to the server-side Steelhead appliance. For example, at the system prompt, enter the CLI command:
tproxytrace -i inpath0_0 client:port

In-path rules. Verify that in-path rules are configured correctly. For example, at the system prompt, enter the CLI command:
show in-path rules

In-path routes. Verify that in-path routes are configured correctly. For example, at the system prompt, enter the CLI command:
sh ip in-path route <interface-name>
Steelhead service. If necessary, enable the Steelhead service. For example, at the system prompt, enter the CLI command:
service enable

In-path support. If necessary, enable in-path support. For example, at the system prompt, enter the CLI command:
in-path enable

In-path client out-of-path support. If necessary, disable in-path client out-of-path support. For example, at the system prompt, enter the CLI command:
no in-path oop all-port enable

Network (LAN/WAN) Topology

Packet traversal. Physically draw out both sides of the entire network and make sure that packets traverse the same client and server Steelhead appliances in both directions (from the client to the server and from the server to the client). Verify packet traversal by running a traceroute from the client to the server and from the server to the client.

Bi-directional continuity. Make sure there is bi-directional continuity between the client and the client-side Steelhead appliance, and between the server-side Steelhead appliance and the network server.

Auto-discovery. If the auto-discovery mechanism is failing, try implementing a fixed-target rule. You can define fixed-target rules using the Management Console or the CLI. Auto-discovery can fail due to devices dropping TCP options, which sometimes occurs with certain satellite links and firewalls. To fix this problem, create fixed-target rules that point to the remote Steelhead appliance's In-path interface on port 7800.

LAN/WAN bandwidth and reliability. Check whether there are any client or server duplex issues, or VoIP traffic that may be clogging the T1 lines.

Protocol optimization. Are all protocols that you expect to optimize actually optimized in both directions? If no protocols are optimized, only some of the expected protocols are optimized, or expected protocols are not optimized in both directions, check:

That connections have been successfully established.
That the Steelhead appliances on the other side of a connection are turned on.
For secure or interactive ports that are preventing protocol optimization.
For any passthrough rules that could be causing some protocols to pass through Steelhead appliances unoptimized.
That the LAN and WAN cables are not inadvertently swapped.

V. Exam Questions
Types of Questions
The RCSP exam includes a variety of question types, including single-answer multiple choice, multiple-answer multiple choice, and fill-in-the-blank. The question distribution is heavily weighted towards multiple choice; however, fill-in-the-blank questions are used in situations where the command is believed to be an important part of everyday Steelhead appliance operation. Regardless of the type of question, selecting the best answer(s) in response to the questions will yield the best score.

Sample Questions
1. How do you view the full configuration in the CLI?
a. > show con
b. > show configuration
c. > show config all
d. # show config full
e. (config) # show con

2. Under what circumstances will the NetFlow cache entries flush (be sent to the collector)? (Select 3)
a. When inactive flows have remained for 15 seconds
b. When inactive flows have remained for 30 minutes
c. When active flows have remained for 30 minutes
d. When the TCP URG bit is set
e. When the TCP FIN bit is set

3. The auto-discovery probe uses which TCP option number?
a. 0x4e (76 decimal)
b. 0x4c (76 decimal)
c. 0x42 (66 decimal)
d. Auto-discovery does not use TCP options

4. In order to achieve optimization using auto-discovery for traffic coming from site C and destined to site A in the exhibit, which configuration below would be required?
a. In-path fixed-target rule on site B Steelhead pointing to Site A Steelhead
b. Peering rule on site B Steelhead passing through probes from site C
c. Peering rule on site B Steelhead passing through probe responses from site A
d. Both A and C
e. Both B and C

5. You are configuring HighSpeed TCP in an environment with an OC-12 (622 Mbit/s) and 60 milliseconds of round-trip latency. The WAN router queue length is set to the BDP for the link. Assuming 1500-byte packets, the queue length for this link would be closest to:
a. 3,110 packets
b. 6,220 packets
c. 775 packets
d. 150 packets
e. 10,000 packets

6. Which of the following correctly describe the combination of cable types used in a fail-to-wire scenario for the interconnected devices shown in the accompanying figure? Assume Auto-MDIX is not enabled on any device.
a. Cable 1: Crossover, Cable 2: Crossover
b. Cable 1: Straight-through, Cable 2: Straight-through
c. Cable 1: Crossover, Cable 2: Straight-through
d. Cable 1: Straight-through, Cable 2: Crossover

7. In the accompanying figure, on which interfaces would you capture the NetFlow export data for active FTP data packets when a client performs a GET operation? (Assume you are not interested in client response packets such as acknowledgements.) (Select the best answer)
a. A and B
b. B and D
c. C and D
d. B and C
e. A and C
[Exhibit for question 7: network diagram with an FTP Server, an L3 Switch, Steelhead SH3, the WAN, Steelhead SH4, an L2 Switch, and an FTP Client, with capture points (including B, C, and D) marked on the Steelhead lan/wan and router interfaces.]

8. Which of the following control messages are NOT used by WCCP?
a. HERE_I_AM
b. I_SEE_YOU
c. REDIRECT_ASSIGN
d. REMOVAL_QUERY
e. KEEPALIVE

9. A customer wants to mark the DSCP value for active FTP data connections as AF22. Which of the following are true? (Select the best answer)
a. Specify qos dscp rule at client-side Steelhead with a dest-port of 21 and with a DSCP value of 22
b. Specify qos dscp rule at client-side Steelhead with a src-port of 20 and with a DSCP value of 20
c. Specify qos dscp rule at server-side Steelhead with a dest-port of 21 and with a DSCP value of 22
d. Specify qos dscp rule at client-side Steelhead with a dest-port of 20 and with a DSCP value of 22
e. Specify qos dscp rule at server-side Steelhead with a dest-port of 20 and with a DSCP value of 20

10. Type in the command used to show information regarding the current health (status) of a Steelhead, the current version, the uptime, and the model number. (fill in the blank) _______________

Answers: 1d, 2ace, 3b, 4b, 5a, 6c, 7e, 8e, 9e, 10 show info
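The queue length in question 5 follows directly from the BDP arithmetic covered earlier in this guide; the following Python check is illustrative only and simply reproduces answer (a):

link_bps = 622_000_000   # OC-12
rtt = 0.06               # 60 ms round-trip latency
packet_bytes = 1500

queue_packets = link_bps * rtt / (packet_bytes * 8)
print(round(queue_packets))   # approximately 3110 packets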

VI. Appendix
Acronyms and Abbreviations
Acronym/Abbreviation - Definition
AAA - Authentication, Authorization, and Accounting
ACL - Access Control List
ADS - Active Directory Services
AR - Asymmetric Routing
ARP - Address Resolution Protocol
BDP - Bandwidth-Delay Product
BW - Bandwidth
CA - Certificate Authority
CAD - Computer Aided Design
CDP - Cisco Discovery Protocol
CIFS - Common Internet File System
CLI - Command-Line Interface
CMC - Central Management Console
CPU - Central Processing Unit
CSV - Comma-Separated Value
DC - Domain Controller
DHCP - Dynamic Host Configuration Protocol
DID - Deployment ID (for Steelhead Mobile)
DNS - Domain Name Service
DSCP - Differentiated Services Code Point
EAD - Enhanced Auto-Discovery
FIFO - First in First Out
FTP - File Transfer Protocol
GB - Gigabytes
GRE - Generic Routing Encapsulation
GUI - Graphical User Interface
HFSC - Hierarchical Fair Service Curve
HSRP - Hot Standby Routing Protocol
HSTCP - High-Speed Transmission Control Protocol
HTTP - HyperText Transport Protocol
HTTPS - HyperText Transport Protocol Secure
ICMP - Internet Control Message Protocol
ID - Identification number
IOS - (Cisco) Internetwork Operating System
IP - Internet Protocol
IPSec - Internet Protocol Security Protocol
L2 - Layer 2
L4 - Layer 4
LAN - Local Area Network
LED - Light-Emitting Diode
LZ - Lempel-Ziv
MAC - Media Access Control
MAPI - Messaging Application Protocol Interface
MDIX - Medium Dependent Interface Crossover
MIB - Management Information Base
MOTD - Message of the Day
MS SQL - Microsoft Structured Query Language
MSI - Microsoft Software Installer
MTU - Maximum Transmission Unit
MX-TCP - Max-Speed TCP
NAS - Network Attached Storage
NAT - Network Address Translation
NFS - Network File System
NSPI - Name Service Provider Interface
NTP - Network Time Protocol
OSI - Open System Interconnection
PBR - Policy-Based Routing
PCI - Peripheral Component Interconnect
PFS - Proxy File Service
QoS - Quality of Service
RADIUS - Remote Authentication Dial-In User Service
RAID - Redundant Array of Independent Disks
RCU - Riverbed Copy Utility
SA - Security Association
SDR - Scalable Data Referencing
SFQ - Stochastic Fairness Queuing
SH - Riverbed Steelhead Appliance
SMB - Server Message Block
SMC - Steelhead Mobile Controller
SMI - Structure of Management Information
SMTP - Simple Mail Transfer Protocol
SNMP - Simple Network Management Protocol
SQL - Structured Query Language
SSH - Secure Shell or server-side Steelhead
SSL - Secure Sockets Layer
TA - Transaction Acceleration
TACACS+ - Terminal Access Controller Access Control System
TCP - Transmission Control Protocol
TCP/IP - Transmission Control Protocol/Internet Protocol
TTL - Time to Live
ToS - Type of Service
UDP - User Datagram Protocol
UNC - Universal Naming Convention
URL - Uniform Resource Locator
VLAN - Virtual Local Area Network
VoIP - Voice over IP
VWE - Virtual Window Expansion
WAN - Wide Area Network
WCCP - Web Cache Communication Protocol
WFS - Windows File System
