
NetApp Test Plan for Evaluation

NetApp Test Plan for Customer Name


Ver. 2.0

Page 1 of 15 March 9 2013


1 Index

This test plan contains the following sections:

1 Index
  1.1 Introduction
  1.2 Objectives
  1.3 Success Criteria
  1.4 Pre-requisites
  1.5 Contacts
  1.6 NetApp Equipment Provided
  1.7 Proposed Test Architecture
  1.8 Duration and Timing of the Evaluation
  1.9 Support During the Evaluation
2 Functional Tests of NetApp Fabric-Attached Storage (FAS) Systems
  2.1 Hardware and Installation Tests
  2.2 Cluster Tests
  2.3 UNIX and NFS Testing
  2.4 Filename and File Access Tests
  2.5 Management and Security Tests
  2.6 SnapShot and SnapRestore Tests
  2.7 SnapMirror, DR, Reversal and Resync Tests
  2.8 Networks and Performance
  2.9 Applications
  2.10 Support
3 Performance Tests
  3.1 Hardware and Installation Tests
  3.2 Basic Performance Testing Loop


1.1 Introduction

The Customer (at address) has agreed to evaluate and test a NetApp solution applicable to the environment currently under review.

1.2 Objectives

The Customer's associated business issue may be summarized as follows: explanation of business problem. At a high level, the objectives of this evaluation and test are as follows:

- Verify that the NetApp solution will meet or exceed The Customer's requirements for a solution to the business issue outlined above.
- Demonstrate the advantages of the proposed NetApp solution with regard to disaster recovery, reliability, and resilience to component failure.
- Show that the performance of the proposed solution meets or exceeds that required by The Customer.
- Define and test relevant scalability, availability, and performance metrics for this evaluation.

1.3 Success Criteria

Any evaluation process must have clearly defined success criteria against which the exit process can be tested and judged. These may be time-sensitive milestones, functionality proofs, performance metrics, or the like. In each case, both NetApp and The Customer should agree a priori to the individual criteria.

Criterion: Time Frame
Measurement:
- All testing must be completed by DATE.
- Installation/setup must be completed by DATE.
- Breakdown and shipment from the customer site must occur by DATE.
- Etc.

Criterion: Functionality
Measurement:
- Site-to-site disaster recovery of the Oracle dataset volume must be demonstrated.
- DR as above must occur within 10 minutes of primary filer interruption.
- Etc.

Criterion: Performance
Measurement:
- 2000 IOPS sustained, measured via perfstat under a representative Oracle load.
- Aggregate throughput of 200 MB/sec via NFSv3 to/from 4 SunFire V480 hosts, sustained for 30 minutes.
- Etc.

NetApp Signature: ______________    Customer Signature: ______________

1.4 Pre-requisites

In order for The Customer to evaluate and test the NetApp solution, certain tasks must be completed prior to the beginning of the tests. These may be divided as follows:

NetApp:
- Distribution of Evaluation Agreement to customer
- Distribution of System Installation documentation
- Development (with customer) of the Evaluation Test Plan
- Contact information for all involved parties
- Other items as necessary

The Customer:
- Signed and exercised Evaluation Agreement
- Completed NetApp Pre-Install Checklist (available from NetApp SE)
- Contact information for all involved parties
- Other items as necessary

1.5 Contacts

The evaluation of the NetApp solution is assumed to be a joint process undertaken and supported by both NetApp and representatives of The Customer. In particular, on-site and dedicated resources from both parties are vital to the success of the endeavour. The personnel below are assigned to the task(s) discussed above. Leads should be involved in the scheduling and tracking of all evaluation and test items, and should consistently steer the accomplishment of all tests toward the Success Criteria mutually agreed upon above.

NetApp Contacts (record Title, Name, Address, Phone, Cell, Email/Pager, and Signature for each):
- Etc.

Customer Contacts (record Title, Name, Address, Phone, Cell, Email, and Signature for each):
- Customer Technical Lead
- Customer Business Unit Lead
- Etc.

1.6 NetApp Equipment Provided

The following is a high-level listing of the components and equipment NetApp is providing for the purpose of this test. Unless stated below, no other major equipment is to be provided by NetApp. Specifically, NetApp is not responsible for providing such non-NetApp hardware as IP switches, cables to IP switches, or servers, nor the personnel to configure, install, or service such infrastructure for the duration of the test. The Customer is assumed to be responsible for infrastructure items not listed below which may be required to perform the evaluation, including third-party software licenses for evaluation purposes, power and receptacles, etc.

NetApp Equipment List:
- Model:
- Number of Racks:
- NetApp Software:


1.7 Proposed Test Architecture

Please see the attached sketch (Figure 1) for details of the test architecture.

1.8 Duration and Timing of the Evaluation

Evaluation and testing of the NetApp solution will be subject to the following timelines and durations. The entire test and evaluation process will begin on Date and run until Date. Individual sections of this test plan may be executed on particular dates; each test below contains a space where the date may be recorded. After the tests are completed, The Customer and representatives from NetApp will meet to discuss the completed plan. This meeting will happen no later than Date.

In addition to the post-test discussion, The Customer will internally submit a completed report detailing the results of the entire test/evaluation process. A similar or otherwise appropriate version of the report (approved for external distribution) shall be released to NetApp by Date. Such a completed report shall be considered the official results documentation and, as such, signals the end of the evaluation process. Breakdown, disconnection of the systems, blanking of disk media, crating, and shipment of the system(s) shall be negotiated and planned in a mutually beneficial timeframe and manner between The Customer and NetApp.

1.9 Support During the Evaluation

NetApp provides close customer support for evaluations of its solutions by prospective customers. At the center of the NetApp support infrastructure is the Systems Engineer (SE) listed in the Contacts section above. The SE is responsible for supporting the testing and evaluation process, and for solving any issues arising from the test process itself. As part of the testing process, the SE will likely consult with internal resources such as the Global Support Center (GSC), other Systems Engineers and Professional Services Engineers, and engineering/product development personnel within NetApp.

Calls to 1-888-4NETAPP or opening a ticket on the NOW site should only be undertaken with consultation and direction from the NetApp SE. For example, the acknowledgment of AutoSupport messages, or the navigation and knowledge transfer of the NOW site, may be structured as part of the evaluation/testing process of the solution as a whole. In such a case, the SE guides the contact process to the GSC and manages the call, the information transfer involved, and the eventual closure of the issue/ticket. In short, the support process for any evaluation system in the field begins with the NetApp SE, and should proceed with that person's guidance and full knowledge at all stages of the test cycle.



2 Functional Tests of NetApp Fabric-Attached Storage (FAS) Systems


Specific functional tests are categorized below. Corresponding performance tests may also be added. Note that functional tests (which reveal whether a certain feature or capability works correctly) do not imply load, and are thus necessary but not sufficient conditions for judging a system's run-time operational capabilities. Actual operation under significant load is necessary for full results. See Section 3 for performance-oriented test guidelines.

2.1 Hardware and Installation Tests

The tests in this section are designed to show the resilience of the hardware platform to typical events. An unplanned interruption of power can cause more downtime for users than a simple planned shutdown. (Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

2.1.1  Confirm cold-boot time (halt, then boot).
2.1.2  Confirm warm-boot time (reboot).
2.1.3  Run through system setup to install the filer on the network and supply it with a hostname. Time how long it takes to complete the install process.
2.1.4  Define an aggregate.
2.1.5  Confirm an aggregate can be dynamically extended by a single disk without corrupting the file system, to demonstrate system scalability.
2.1.6  Confirm an aggregate can be dynamically extended by a group of disks without corrupting the file system, to demonstrate system scalability.
2.1.7  Hot-add a new disk shelf while the system is running, to demonstrate system scalability.
2.1.8  Confirm operation of redundant power supplies on filer shelves and controllers.
2.1.9  Confirm operation of redundant fans on filer shelves and controllers.
2.1.10 Remove a data disk while it is being accessed, and determine whether any data corruption has occurred.
2.1.11 Break a RAID set by removing a parity disk to ascertain the time required for it to be rebuilt.
2.1.12 Perform an at-the-wall power failure as described below.* Upon restart, ensure no data is corrupted. Time how long before users are able to access the system again.
2.1.13 Create a flexible volume, increase its size, then decrease it. Record the effects.
2.1.14 Upgrade the system software. Record the effects on users and the length of the outage.

* Note: Cutting the power off at the wall plug/switch is an important test. Some devices feature soft power buttons which shut the system down in an orderly manner (de-staging cache, for example) and only then signal the power supply to cut power to the hardware, so simply pressing a button on the front of the unit is not an adequate simulation of power failure. Pull the plug or cut AC current suddenly instead.
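Tasks 2.1.4 through 2.1.6 can be sketched with Data ONTAP 7-mode console commands along the following lines. The aggregate name and disk counts are placeholders, and exact syntax varies by ONTAP release, so treat this as an illustrative sketch rather than a prescribed procedure:

```
# Create an aggregate from 16 spare disks (name and count are placeholders)
aggr create aggr1 16

# Extend the aggregate by a single disk, then by a group of four disks
aggr add aggr1 1
aggr add aggr1 4

# Verify the RAID group layout and remaining spares afterward
aggr status -r aggr1
```

Running `aggr status -r` before and after each extension makes it easy to record in the Notes column exactly which disks joined which RAID group.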

2.2 Cluster Tests

NetApp systems are commonly configured and installed as clusters, where multiple independent controllers or individual units intercommunicate to provide seamless failover of services should one operational unit fail. (Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

2.2.1 Manually initiate failover between cluster nodes. Time the failover. Check the virtual names/aliases for correctness.
2.2.2 Initiate failover by powering off one cluster node. Time the failover.
2.2.3 Manually initiate failback between cluster nodes. Time the failback.
2.2.4 Initiate failback by powering off the live cluster node. Time the failback.
2.2.5 Force failover by switching a filer off.
2.2.6 Once takeover is complete, switch off the partner and reboot the first filer. Can this filer see the disks?
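A manual failover/failback cycle such as tasks 2.2.1 and 2.2.3 might look like the following on a 7-mode active/active pair; the command names are standard, but confirm the exact behaviour for your ONTAP version with your NetApp SE:

```
# On the node that will take over for its partner:
cf status      # confirm the cluster is enabled before starting
cf takeover    # partner's services migrate to this node; start timing here

# After verifying client access during takeover:
cf giveback    # return services to the partner; time the failback
cf status      # confirm both nodes are serving normally again
```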

2.3 UNIX and NFS Testing

These tests are intended to show the level of integration with a standard UNIX environment. (Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

2.3.1 Add the filer into the NIS domain.
2.3.2 Add NFS exports to share filer volumes.
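Task 2.3.2 can be exercised with the 7-mode `exportfs` command; hostnames and paths below are placeholders:

```
# Add a persistent export, writable by two hosts, with root access for one
exportfs -p rw=host1:host2,root=host1 /vol/vol1

# List the active exports to confirm
exportfs
```

From a UNIX client, a simple `mount filer:/vol/vol1 /mnt/vol1` followed by creating a file verifies the export end to end.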



2.4 Filename and File Access Tests

These tests are intended to test migration of users to the filer from an existing user environment. (Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

2.4.1 Copy at least 500 files and directories with embedded spaces to and from the filer, and determine whether any corruption of filenames occurs.
2.4.2 Configure a user account to use the filer as a home directory. Log on to the user account from a workstation. Confirm correct functionality.
2.4.3 Add disks and increase the volume size; note the impact on users.
2.4.4 Simulate a network failure and note the impact on users' access to data.

2.5 Management and Security Tests

These tests are intended to demonstrate the level of administration required and that the filer is a secure storage system. (Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

2.5.1 Test use of SSH for remote access and configuration.
2.5.2 Test use of telnet for remote access and configuration.
2.5.3 Test use of FilerView for remote access and configuration.
2.5.4 Can SSL be used to provide management, if required?
2.5.5 Use the syslog messages file to monitor the system.
2.5.6 Export an area of the filesystem only to root users.
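The remote-access and export-restriction tasks above map roughly to 7-mode console commands as follows (the option names are standard 7-mode options, but verify them against your release; the export path and admin host are placeholders):

```
# Enable SSH and, once verified, disable cleartext telnet
secureadmin setup ssh
options ssh.enable on
options telnet.enable off

# Require SSL for administrative HTTP (FilerView) access
secureadmin setup ssl
options httpd.admin.ssl.enable on

# Export an area of the filesystem with root access for a named admin host only
exportfs -p rw=adminhost,root=adminhost /vol/secure
```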


2.6 SnapShot and SnapRestore Tests
Recovering single files is a substantial burden at many sites. Just one restore from tape per five users per year can tie up entire system-administration staff at large sites. If NetApp's Snapshots are enabled for user volumes (recommended) and can be browsed by users, then users can recover their own lost or damaged data without operator/admin assistance. (Note that Snapshot directories may be hidden from user view by the filer's administrator.) (Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

2.6.1  Create a Snapshot schedule to automatically create Snapshots at regular periods during the day.
2.6.2  Create a Snapshot. Time how long this takes and how long before you are able to recover a deleted file.
2.6.3  Update a file from the Snapshot version (copy/paste).
2.6.4  Open a file in a Snapshot and take another Snapshot while it is open.
2.6.5  Recover a large deleted file from Snapshot hourly.0. Take a new Snapshot during that file recovery.
2.6.6  Configure the Snapshot share to allow administrator file restores when Snapshots are not visible to users.
2.6.7  Change the Snapshot reserve to make more space for new Snapshots. Ensure that this is dynamic and that old Snapshots are retained.
2.6.8  Simulate recovering a whole filesystem/volume: configure a large filesystem (100 GB+); copy in data until the filesystem is approximately 80% full; take a Snapshot; overwrite at least 10% of the data; recover to the previous filesystem state using SnapRestore.
2.6.9  Restrict access to Snapshot directories and verify that access is indeed restricted.
2.6.10 Show single-file SnapRestore capabilities.
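The scheduling, reserve, and SnapRestore tasks above can be sketched as 7-mode console commands; volume, Snapshot, and file names are placeholders:

```
# Schedule automatic Snapshots: 0 weekly, 2 nightly, 6 hourly (at 8, 12, 16, 20)
snap sched vol1 0 2 6@8,12,16,20

# Create a Snapshot on demand and list what exists
snap create vol1 evaltest
snap list vol1

# Change the Snapshot reserve (percentage of the volume)
snap reserve vol1 15

# Revert an entire volume, or a single file, with SnapRestore
snap restore -s evaltest vol1
snap restore -t file -s hourly.0 /vol/vol1/home/bigfile
```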



2.7 SnapMirror, DR, Reversal and Resync Tests

Simulate mirroring a Remote clustered filer to a Core clustered filer via SnapMirror in order to test Disaster Recovery (COB) functionality from a remote site:

- Configure to allow the Core filer to take over fileserving activities if the Remote filer is lost.
- Simulate failure of the Remote filer and ensure that the Core filer takes over.
  - Perform failover with open files and check that operations are uninterrupted and that there is no data loss or corruption.
- Configure reversal of the SnapMirror relationship to allow recovery of the Remote filer. Perform resync, then suspend and fail back.
  - Perform failback with open files and check that operations are uninterrupted and that there is no data loss or corruption.

(Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

2.7.1  Establish a SnapMirror relationship between the Remote filer(s) and Core filer(s).
2.7.2  Pre-load the Remote filer(s) with file data.
2.7.3  Perform SnapMirror initialization between the Remote filer(s) and Core filer(s). Time the initial transfer.
2.7.4  Simulate failure of Remote filer A and ensure that Remote filer B takes over. (Perform failover with open files and check that operations are uninterrupted and that there is no data loss or corruption.)
2.7.5  Simulate failure of the Remote filer and assume catastrophic volume loss: delete the source volume on the Remote filer which is the source of the SnapMirror transfer to the Core filer; break the SnapMirror relationship with the Core filer; re-point the Remote filer's clients to the Core filer and continue; re-add a new (empty) volume to the Remote filer which is at least as large as the previous one.
2.7.6  Configure reversal of the SnapMirror direction to allow data movement/recovery back from the Core filer to the Remote filer.
2.7.7  Re-initialize SnapMirror, this time FROM the Core filer TO the Remote filer. Time the transfer.
2.7.8  Resync/update the SnapMirror transfer as needed.
2.7.9  After data movement is complete, break the SnapMirror relationship and bring the new volume on the Remote filer online.
2.7.10 Check that operations are uninterrupted and that there is no data loss or corruption.
2.7.11 Re-establish the SnapMirror relationship from the Remote filer to the Core filer.
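The initialization, break, and reversal steps above correspond to 7-mode SnapMirror commands along these lines. Filer and volume names are placeholders, and the destination volume must already exist and be restricted before initialization; confirm the exact procedure for your release with your NetApp SE:

```
# On the Core (destination) filer: perform the initial baseline transfer
snapmirror initialize -S remotefiler:vol1 corefiler:vol1_mirror
snapmirror status

# On DR declaration: make the Core copy writable and re-point clients
snapmirror break vol1_mirror

# When the Remote filer returns, reverse direction (run on the Remote filer)
snapmirror resync -S corefiler:vol1_mirror remotefiler:vol1
snapmirror update remotefiler:vol1

# Cut over, then re-establish the original Remote-to-Core direction
snapmirror break vol1
snapmirror resync -S remotefiler:vol1 corefiler:vol1_mirror
```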

2.8 Networks and Performance

These tests demonstrate the ability of the filer to cope with a network failure, and the level of interruption visible to users. (Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

2.8.1 Confirm operation of AutoSupport email notification.
2.8.2 Change the IP address of a network interface. Is a reboot required to make this change take effect?
2.8.3 Create a VIF on one controller and test operation when one wire is pulled.
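The AutoSupport and VIF tasks above can be exercised with 7-mode commands like these (interface names and the IP address are placeholders):

```
# Trigger a test AutoSupport message and watch for the notification email
options autosupport.doit "evaluation test"

# Create a multi-mode VIF over two links, then configure and verify it
vif create multi vif1 e0a e0b
ifconfig vif1 192.168.1.50 netmask 255.255.255.0 up
vif status vif1
```

Pulling one cable while a client copy runs against the VIF address shows whether the interruption is visible to users.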

2.9 Applications

This section is set aside for any customer applications which are part of the evaluation process. For example, a customer may elect to test the operation of a custom application while FCP-connected to LUNs on the NetApp filer, or a Microsoft application which previously ran FCP-attached may need to be tested over iSCSI. Another possible area is application-aware failover testing, in which a single storage system or cluster is artificially failed to test the resiliency of the application itself in withstanding catastrophes. In general, the functional testing of such customer-specific applications requires support directly from the customer organization, with NetApp resources advising the customer of their options in placing and serving the data from the NetApp systems. Often, operation and performance testing can be automated for this procedure. Note that third-party software licensing of the specific application for evaluation purposes is usually the responsibility of the customer. (Record Task, Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each entry.)

2.9.1
2.9.2
2.9.3
2.9.4
2.9.5
2.9.6

2.10 Support
Evaluations often do not examine the background support infrastructure a company is able to provide. Ultimately, any solution is likely to have some issues that require technical resolution or help, and how a vendor steps up is what delineates the difference. (Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

2.10.1 Contact your Systems Engineer to arrange for a temporary NOW account.
2.10.2 Log into the NOW site (http://now.netapp.com) using the temporary username and password.
2.10.3 Access the Knowledgebase section.
2.10.4 Access the Software section.
2.10.5 Check the technical library for useful papers: http://www.netapp.com/tech_library/
2.10.6 Configure AutoSupport.
2.10.7 Submit a test issue/case via the NOW site.
2.10.8 Submit a test issue/case via 1-888-4NETAPP.



3 Performance Tests

Functional tests, which reveal whether a certain feature or capability works correctly, do not imply load; they are thus necessary but not sufficient conditions for judging a system's run-time operational capabilities. Actual operation under significant load is necessary for full results. This section is designed to load a system properly and repetitively, and may require the use of NetApp tools such as SIO or perfstat in order to measure or record statistics for storage operations. (Both of these tools may be found at http://now.netapp.com/eservice/toolchest.) In addition, certain tests may require third-party tools to perform some data collection or analysis. The most popular of these is Perfmon, the Windows counter-collection tool. NetApp may advise customers to set Perfmon counters on the server side when running Exchange, MS SQL, SharePoint, or any load-simulation software from a Windows host. Outside of Perfmon, which is free and readily available to customers on Microsoft Server platforms, NetApp does not require any other third-party measurement tool, except as amended below for application-specific testing.
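As a rough sketch, a baseline capture with perfstat and a load run with SIO might look like this from a UNIX admin host. The flags and argument order shown are illustrative; check the documentation bundled with the versions you download from the NOW Toolchest:

```
# Capture filer statistics: 6 iterations of 5 minutes each, saved for analysis
perfstat -f filer1 -t 5 -i 6 > perfstat_baseline.out

# Drive load with SIO: 100% read, 100% random, 4 KB blocks,
# a 2 GB working file, 300 seconds, 8 concurrent threads
sio 100 100 4k 2g 300 8 /mnt/vol1/testfile
```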

3.1 Hardware and Installation Tests

(Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

3.1.1  Server stand-up; cabling to IP switch.
3.1.2  Server O/S build and certification by customer.
3.1.3  Networking configuration on servers.
3.1.4  Control/monitoring workstation setup, networking enabled.
3.1.5  Switch configuration and certification by customer.
3.1.6  Create aggregates on filers.
3.1.7  Create volumes on filers.
3.1.8  Create LUNs on filers.
3.1.9  Configure VIFs.
3.1.10 Export the filesystems and mount them from the servers (NFS), and/or share out the filesystems and access them from the servers (CIFS), and/or mount LUNs from the servers (iSCSI/FCP).
3.1.11 Application installation on all necessary servers.
3.1.12 Application configuration/tuning.
3.1.13 Application certification by customer.
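The filer-side provisioning steps above (aggregates, volumes, LUNs) reduce to a handful of 7-mode commands; all names, sizes, and the initiator IQN below are placeholders for illustration only:

```
# Create an aggregate and a flexible volume within it
aggr create aggr1 16
vol create vol1 aggr1 500g

# Create a LUN, an initiator group, and map one to the other (iSCSI example)
lun create -s 100g -t linux /vol/vol1/lun0
igroup create -i -t linux ig_host1 iqn.1994-05.com.redhat:host1
lun map /vol/vol1/lun0 ig_host1
```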


3.2 Basic Performance Testing Loop
True evaluation of performance depends upon the logical application of changes in variables, the correct recording of baseline activity, and excellent note-taking while the evaluation is proceeding. The loop below defines the basics for testing an application running disk activity against a NetApp filer. Further detail and additional steps are encouraged; the repetitive block is only a suggested outline. (Record Date or Time, Assignee, Notes, Pass/Fail, and Cert. Initials for each task.)

3.2.1  Start host, network, and NetApp performance-gathering tools.
3.2.2  Run for 15 minutes. Gather performance data in real time and for post-test analysis of the BASELINE (no-load) state.
3.2.3  Stop all tools.
3.2.4  Reset servers/switch/filers to a pristine state if necessary.
3.2.5  Prepare the application for a test run (load datasets, prepare scripts, adjust variables, etc.). Note the time/date.
3.2.6  Start host, network, and NetApp performance-gathering tools.
3.2.7  Start the application.
3.2.8  Wait until steady-state performance is reached.
3.2.9  Stop the application. Wait until all activity dies down.
3.2.10 Stop all tools.
3.2.11 Reset servers/switch/filers to a pristine state if necessary.
3.2.12 Parse and analyze the performance data. Make recommendations for changes/options/tuning.
3.2.13 Note changes in variables in a runbook or logbook.
3.2.14 Change run/test parameters on the application, host, network, or filer as appropriate.
3.2.15 Reboot or restart systems if required. Delete data and zero drives if necessary.
3.2.16 Return to 3.2.4 and repeat.

Tips:

- Create various test cases which vary by run-time length and activity.
- Devote the initial tests to estimating the effects of global variable changes (i.e., changes to the filer or server configurations). Devote later tests to evaluating the effects of altering application inputs.
- Devote the initial test cycles to the tests with the shortest run times. Be ready to document any test cases or environments which result in abnormal behaviour or failure, so that these are not repeated when longer or larger tests are run later.



- Estimate that the longest test case should be run the fewest number of times; such cases are therefore usually run last in a given cycle.
- Change only one variable at a time, and document every small change.
- Save as many test results as possible to a secondary data repository, such as a secondary archive, laptop, memory stick, CD-ROM, or detachable hard drive. Document and name the files in an explicit manner (no a.out files, for example).

