
EMC Global Education IMPACT

Home

IMPACT modules consist of focused, in-depth training content that can be consumed in about 1-2 hours.

Welcome to Symmetrix Foundations

Course Description
Start Training: Run/download the PowerPoint presentation
Student Resource Guide: Training slides with notes
Assessment: Must be completed online
(Note: Completed assessments will be reflected online within 24-48 hrs.)

Complete Course: Directions on how to update your online transcript to reflect a complete status for this course.

For questions or support, please contact Global Education.

2004 EMC Corporation. All rights reserved.


Course Completion

Link to KnowledgeLink to update your transcript and indicate that you have completed the course.

Symmetrix Foundations
Course Completion Steps:

1. Log on to KnowledgeLink (the EMC learning management system).
2. Click on 'My Development'.
3. Locate the entry for the learning event you wish to complete.
4. Click on the complete icon [ ].

Note: The Mark Complete button does not apply to items with the type Class, Downloadable (AICC Compliant), or Assessment Test. Any item you cancel from your Enrollments will automatically be deleted from your Development Plan. Click here to link to KnowledgeLink.

For questions or support, please contact Global Education. Back to Home




Symmetrix Foundations - IMPACT


Course Description
This foundation-level course provides participants with an understanding of the Symmetrix architecture and how it is an integral component of EMC's offering. This course is part of the EMC Technology Foundations curriculum and is a prerequisite to other learning paths.

Audience
This course is intended for anyone who presently does, or plans to, do any of the following:
- Educate partners and/or customers on the value of EMC's Symmetrix-based storage infrastructure
- Provide technical consulting skills and support for EMC products
- Analyze a customer's business technology requirements
- Qualify the value of EMC's products
- Collaborate with customers as a storage solutions advisor
Course Number: MR-5WP-SYMMFD
Method: IMPACT e-Learning
Duration: 3 hours

Prerequisites
Prior to taking this course, participants should have a strong understanding of IT concepts and a basic knowledge of storage concepts.

Course Objectives
Upon successful completion of this course, participants should be able to:
- Draw and describe the basic architecture of a Symmetrix Integrated Cached Disk Array (ICDA)
- Write a detailed list of host connectivity options for Symmetrix
- Explain how Symmetrix functionally handles I/O requests from the host environment
- Illustrate the relationship between Symmetrix physical disk drives and Symmetrix Logical Volumes
- Describe the media protection options available on the Symmetrix
- Referencing a diagram, explain some of the high availability features of Symmetrix and how they potentially impact data availability
- Describe the front-end, back-end, cache, and physical drive configurations of the DMX and other Symmetrix models

Modules Covered
This course includes a single module on Symmetrix Architecture.

Labs
Labs reinforce the information you have been taught. The labs for this course include: None.

Assessments
Assessments validate that you have learned the knowledge or skills presented during a learning experience. This course includes a self-assessment quiz, to be conducted online via KnowledgeLink, EMC's Learning Management System.

If you have any questions, please contact us by email at GlobalEd@emc.com


Symmetrix Foundations




Welcome to Symmetrix Foundations. EMC offers a full range of storage platforms, from the CLARiiON CX200 at the low end to the unsurpassed DMX 3000 at the high end. This training provides an architectural introduction to the Symmetrix family of products. The focus will be on DMX, but prior generations of Symmetrix will also be discussed.

Copyright 2004 EMC Corporation. All rights reserved. These materials may not be copied without EMC's written consent. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.


Symmetrix Foundations
After completing this course, you will be able to:
- Draw and describe the basic architecture of a Symmetrix Integrated Cached Disk Array (ICDA)
- Write a detailed list of host connectivity options for Symmetrix
- Explain how Symmetrix functionally handles I/O requests from the host environment
- Illustrate the relationship between Symmetrix physical disk drives and Symmetrix Logical Volumes
- Describe the media protection options available on the Symmetrix
- Explain some of the high availability features of Symmetrix and how they impact data availability
- Describe the front-end, back-end, cache, and physical drive configurations of various Symmetrix models

These are the learning objectives for this training. Please take a moment to read them.


High-End Storage: The New Definition


High-End Then:
- Simple redundancy; automated fail-over
- Benchmark performance (IOPs and MB/s); single and/or simple workloads
- Basic local and remote data replication; backup windows, testing, and disaster recovery
- Scalability: capacity
- Manage the storage array: easy configuration, simple operation, minimal tuning

High-End Today:
- Non-disruptive everything: upgrades, operation, and service
- Predictable performance in an unpredictable world; complex, dynamic workloads
- Replicate any amount of data, any time, across any distance, without impact to service levels
- Flexibility: capacity, performance, connectivity, workloads, etc.
- Manage service levels: centralized management of the storage environment
- Both Open Systems and Mainframe connectivity

Both of EMC's storage platforms, the CLARiiON and the DMX Symmetrix, raised the bar. What was once considered high-end is provided in the CLARiiON today. The Symmetrix has provided higher levels of capabilities that were never before available.

Availability - It used to be that high-end availability meant simple redundancy: use two of everything. Two buses, mirrored cache boards, dual power supplies; use the second one if the first one breaks. But today, that's a mid-tier feature. High-end needs to be always online, which means non-disruptive everything: non-disruptive upgrades, non-disruptive reconfigurations, and non-disruptive serviceability.

Performance - It used to be all about low-level benchmarks: how many, and how fast. IOPs and megabytes per second. Today, simple benchmarks are used to measure mid-tier arrays, not high-end. High-end customers want predictable performance in an unpredictable world. High service levels mean being able to guarantee great application response, even if there's a surprise like an unpredictable workload. And you can't measure that with a simple benchmark.

Replication - Today, just about every mid-tier array can do replication. High-end means being able to copy any amount of data, at any time during the day, and send it any distance if need be, delivering high application performance, all at the same time.

Scalability - In today's world, SANs give you lots of ports. If you want large capacities, a 50 TB CLARiiON is the better deal, or Centera, which can handle up to a petabyte.

Flexibility - Being able to handle requirements with just the right mix of performance and capacity. It means supporting the right connections, like iSCSI and GigE. And it means being able to handle different requirements cost-effectively if things change. One of the things that sets high-end apart is its ability to handle change.

Management - It wasn't so long ago that management at the high end meant a nice, easy-to-use GUI that helped you configure the array. But today, that's what mid-tier arrays do. There's a new requirement. It's not just the array. It's the switch, and the server, and the applications; it's the whole end-to-end stack.

Copyright 2004 EMC Corporation. All Rights Reserved.

Symmetrix Foundations, 4

Symmetrix Integrated Cached Disk Array


Highest level of performance and availability in the industry

Consolidation
- Capacities to 84 TB
- Up to 64 host ports
- SAN or NAS

Advanced functionality
- Parallel processing architecture
- Intelligent prefetch
- Auto cache destage
- Dynamic mirror service policy
- Multi-region internal memory
- Predictive failure analysis and call home
- Back-end optimization

Enginuity Operating Environment
- Base services for data integrity, optimization, security, and Quality of Service
- Core services for data mobility, sharing, repurposing, and recovery

There are basically three categories of storage architectures: cache-centric, storage-processor-centric, and JBOD (Just a Bunch Of Disks). The Symmetrix falls under the category of cache-centric storage. We call it an ICDA, or Integrated Cached Disk Array. It is not a RAID box; it is an Integrated Cached Disk Array! As we go through this presentation, you will understand the differences.


Enginuity Operating Environment

Software stack (top to bottom):
- Symmetrix-based Applications / Host-based Management Software / ISV Software
- WideSky Management Middleware
- Enginuity Operating Environment
- Symmetrix Hardware

Enginuity Operating Environment is the Symmetrix software that:
- Manages all operations
- Ensures data integrity
- Optimizes performance

Enginuity is often referred to as "the microcode."
WideSky middleware provides a common API and CLI interface for managing Symmetrix and the entire storage infrastructure.
EMC and ISVs develop management software supporting heterogeneous platforms using the WideSky API and CLIs.

Before we get into the hardware, let's briefly introduce the software components, as most functionality is based in software and supported by the hardware.

Enginuity is the operating environment for the Symmetrix storage systems. Enginuity manages all Symmetrix operations, from monitoring and optimizing internal data flow, to ensuring the fastest response to the user's requests for information, to protecting and replicating data. Enginuity is often referred to as "the microcode."

WideSky is storage management middleware that provides a common access mechanism for managing multivendor environments, including the Symmetrix, storage, switches, and host storage resources. It enables the creation of powerful storage management applications that don't have to understand the management details of each piece within an EMC user's environment. In addition to being middleware, WideSky is a development initiative (that is, a program available to ISVs and developers through the EMC Developers Program) and provides a set of storage application programming interfaces (APIs) that shield the management applications from the details beneath. It provides a common set of interfaces to manage all aspects of storage. With WideSky providing building blocks for integrating layered software applications, ISVs and third-party software developers (through the EMC Developers Program) and EMC software developers are given wide-scale access to Enginuity functionality.


Symmetrix Architecture

Front End: Channel Director
Shared Global Memory: Cache
Back End: Disk Director

All Symmetrix share the same basic architecture.



All members of the Symmetrix family share the same fundamental architecture. This architecture was initially called MOSAIC 2000 and is the architecture that continues to drive the Symmetrix through the year 2000 and beyond. This modular hardware framework allows rapid development of new storage technology, while supporting existing configurations. There are three functional areas:

- Shared Global Memory: provides cache memory and the link between the independent front end and back end (intelligent boards comprised of memory chips)
- Front End: how the Symmetrix connects to the host (server) environment; referred to as Channel Directors (multi-processor circuit boards)
- Back End: how the Symmetrix controls and manages its physical disk drives; referred to as Disk Directors or Disk Adapters (multi-processor circuit boards)

What differentiates the different generations and models is the number, type, and speed of the various processors, and the technology used to interconnect the front-end and back-end with cache.


Symmetrix 4.8 Architecture


Slide diagram: models 3930/5930 (OS/MF), 3830/5830 (OS/MF), and 3630/5630 (OS/MF), in 3-bay and 1-bay cabinets. Dual X and Y buses connect the front-end Channel Directors, shared global memory (cache), and back-end Disk Directors; each internal bus runs at 360 MB/s (720 MB/s total). Each director card carries an "a" and a "b" processor (Motorola 68060 at 75 MHz) with ports C and D; the back end uses a 40 MB/s UWD SCSI bus. Caption: "Dual X and Y Buses."

The Symm 4.x architecture includes:
- Dual X and Y buses: odd-numbered directors connect to the X bus, even-numbered directors to the Y bus; memory boards connect to both
- Motorola processors
- 40 MB/s Ultra SCSI back end

The Symm 4.X family is based on a dual system bus design. Each director is connected to either the X bus (odd-numbered directors) or the Y bus (even-numbered directors). Each director card has two sides: the b processor (top half) and the a processor (bottom half). The processors for the Symm 4.X are Motorola 68000 series (Symm 4 core frequency of 66 MHz; Symm 4.8 = 75 MHz). The a and b processors have their own dedicated circuitry, except for SDRAM (the Control Store, where the microcode lives) and the logic to arbitrate for and control the internal system buses.

Data is transferred throughout the Symm (from Channel Director to memory to Disk Director) in a serial fashion along the system buses. For every 64 bits of data, the Symm creates a 72-bit Memory Word (64 bits of data + 8 bits of parity). These Memory Words are then sent serially across the internal buses, from director to cache or from cache to director.
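The 72-bit Memory Word described above can be sketched in a few lines. The notes say only "64 bits of data + 8 bits of parity"; assuming one even-parity bit per byte (a common scheme, but not stated in the source), the word could be formed like this:

```python
def memory_word(data64: int) -> int:
    """Build a 72-bit memory word: 64 data bits plus 8 parity bits.

    One even-parity bit per data byte is an assumption for illustration;
    the source says only '64 bits of data + 8 bits of parity'.
    """
    assert 0 <= data64 < 1 << 64
    parity = 0
    for i in range(8):                                # one parity bit per byte
        byte = (data64 >> (8 * i)) & 0xFF
        parity |= (bin(byte).count("1") & 1) << i     # even parity over the byte
    return (data64 << 8) | parity                     # 64 data bits, then 8 parity bits

word = memory_word(0x0123456789ABCDEF)
assert word.bit_length() <= 72                        # fits in a 72-bit memory word
```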


Symmetrix 5.0 Architecture


Slide diagram: models 8730 (3-bay cabinet) and 8430 (1-bay cabinet). Four internal buses (Top High, Top Low, Bottom High, Bottom Low) at 360 MB/s each connect the front-end Channel Directors, shared global memory (cache), and back-end Disk Directors. Each director carries an "a" and a "b" processor (PowerPC 750 at 266 MHz); memory boards are split into High Memory and Low Memory; the back end uses a 40 MB/s SCSI bus. Caption: "New Memory and Quad Bus Architecture."

The Symmetrix 5 is a prime example of MOSAIC 2000. The basic architecture has not changed from Symm 4 to Symm 5, but it has been enhanced. Here is what has changed:

- Addition of 2 internal system buses (total of 4); each bus is still 360 MB/s, for an aggregate of 1440 MB/s.
- Odd-numbered directors connect to both the Top High and Bottom Low buses; even-numbered directors connect to both the Top Low and Bottom High buses. Memory boards connect to either Top High and Bottom High (High Memory) or Top Low and Bottom Low (Low Memory).
- The director processors are IBM/Motorola (jointly developed) PowerPC 750 RISC processors. This processor switch required the Symm microcode to be translated from Motorola assembly language to C++. To further enable the processor swap, each director has an additional chip (called the Gumba) that makes the PowerPC look like a 68060 to the CPU Control Gate Array, handles Control Store mirroring functions, and is responsible for SDRAM control.
- Each director connects to 2 internal system buses (Top High & Bottom Low for odd directors; Top Low & Bottom High for even directors).
- The M3 generation of memory boards introduced the concept of 4 addressable regions per board (High Memory = board connected to Top High & Bottom High; Low Memory = board connected to Top Low & Bottom Low).


Symmetrix 5.X LVD Architecture


Slide diagram: models 8830 (3-bay cabinet), 8530, and 8230 (1-bay cabinets). Four internal buses (Top High, Top Low, Bottom High, Bottom Low) at 400 MB/s each connect the Channel Directors, cache, and Disk Directors. Each director carries an "a" and a "b" processor (PowerPC 750 at 333 MHz); High and Low Memory boards; the back end uses an 80 MB/s SCSI LVD bus. Caption: "Faster Processors, Faster Bus, Faster Back-end."

Again, here is another example of the MOSAIC 2000 architecture. The basic architecture hasn't changed, but it has been enhanced to improve performance by eliminating bottlenecks. Here is what has changed for the Symm 5.X LVD architecture:

- Increased bus speed to 400 MB/s, for an aggregate of 1600 MB/s
- Back-end directors and drives support Ultra2 SCSI LVD (Low Voltage Differential); the bus speed has increased to 80 MB/s
- The director processors are now 333 MHz; ESCON directors are 400 MHz
- Each director connects to 2 internal system buses (Top High & Bottom Low for odd directors; Bottom High & Top Low for even directors)
- The M4 generation of memory boards supports LVD (Low Voltage Differential, or Ultra2 SCSI)
- Requires Enginuity 5567 or greater


Symmetrix DMX Architecture


Slide diagram: front-end Channel Directors and back-end Disk Directors each carry four processors (a, b, c, d; PowerPC at 500 MHz). Shared global memory holds the cache slots, the track table, and the status and communications mailboxes. Direct Matrix: each director gets its own 500 MB/s point-to-point connection to each cache board, and the back end uses 2 Gb Fibre Channel. Caption: "Direct Matrix, Quad Processor Directors, Faster Processors, 2 Gb Fibre Channel Back-end, and Communications Matrix."

A testimonial to EMC's Symmetrix architecture is the DMX. While Symmetrix Direct Matrix (DMX) is a radical redesign, it contains the same functional blocks, with a significant advantage beyond yesterday's bus and switch architectures. The result is even greater performance and availability.

Performance - The Symmetrix DMX dramatically reset performance expectations in a broad range of demanding transactional, decision support, and consolidated environments. More importantly, when coupled with the Enginuity storage operating system, Symmetrix DMX has a unique ability to react effectively to bursts of unexpected activity, while continuing to deliver high service levels.

Availability - The Symmetrix DMX goes beyond yesterday's designs to set a new standard in availability, including the elimination of buses and switches, and the incorporation of triple-module voting for key components. Power systems, and the ability to do online upgrades, have been dramatically improved.


Symmetrix DMX Architecture


Slide diagram: servers connect through the front end, disks through the back end; each board gets its own direct connection to cache.

The Symmetrix DMX features a high-performance Direct Matrix Architecture (DMX) supporting up to 128 point-to-point serial connections in the DMX2000/3000 (up to 64 in the DMX1000). Symmetrix DMX technology is distributed across all channel directors, disk directors, and global memory directors in Symmetrix DMX systems. Enhanced global memory technology supports multiple regions and 16 connections on each global memory director.

In the Direct Matrix Architecture, contention is minimized because control information and commands are transferred across a separate and dedicated message matrix. The major components of Symmetrix DMX architecture are the front-end channel directors (and their interface adapters), global memory directors, and back-end disk directors (and their interface adapters).

In a fully configured Symmetrix DMX1000 system, each of the eight director ports on the eight directors connects to one of the sixteen memory ports on each of the four global memory directors. That is, there are two connections between each director and each global memory director. These 64 individual point-to-point connections facilitate up to 64 concurrent global memory operations in the system.

In a fully configured Symmetrix DMX2000/3000 system, each of the eight director ports on the sixteen directors connects to one of the sixteen memory ports on each of the eight global memory directors. These 128 individual point-to-point connections facilitate up to 128 concurrent global memory operations in the system.
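The connection counts quoted above are simple arithmetic: every director port gets its own serial link into global memory. A tiny sketch (the function name is ours; the port and director counts come from the text) to check the numbers:

```python
def dmx_connections(directors: int, ports_per_director: int = 8) -> int:
    """Total point-to-point links: each director port has its own path to a
    global memory port, so connections = directors x ports per director."""
    return directors * ports_per_director

# DMX1000: 8 directors x 8 ports = 64 links (into 4 memory directors x 16 ports)
assert dmx_connections(directors=8) == 64
# DMX2000/3000: 16 directors x 8 ports = 128 links (into 8 x 16 memory ports)
assert dmx_connections(directors=16) == 128
```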


Symmetrix DMX Architecture


Slide diagram: servers on the front end, disks on the back end, with a separate control and communications message matrix connecting the directors.

Another major performance improvement with the DMX is the separate control and communications matrix. This enables communication between the directors without consuming cache bandwidth. This will become more apparent when we discuss read and write operations and the information flow through the Symmetrix later in this module.


Read Operation - Cache Hit


Slide steps:
1. Host sends READ request.
2. Channel Director checks the Track Table.
3. Requested data is located in cache: a Cache Hit!
4. CD retrieves the data and sends it to the host.

Read operation completed at memory speed!

The host sends a read request (asking to read some number of blocks from a physical disk). The host sees storage on the Symm as an entire physical drive (actually a logical volume on the Symmetrix, a piece of a physical drive). Through the configuration file (bin file), logical volumes are given a channel address for the 1) Channel Director, 2) Processor, and 3) Port that will be accessing that volume (for example, logical volume 001 gets channel address (1,0) on SA #3, processor a, port A). Open systems hosts view disk drives using the SCSI target and LUN addressing scheme (targets ranging from 0-16; LUNs ranging from 0-16).

The Channel Director receives the request to read some number of blocks for target 1, LUN 0 (continuing the previous example). By looking in the bin file (stored within the director's EPROM), it translates the blocks requested for (1,0) as blocks requested for logical volume 001. The Channel Director then scans the track table to discover whether the requested blocks on 001 are already resident in cache.

In this case (read hit), the requested data is resident in cache. The Channel Director reads the requested data from cache. At this point, the Age-Link-Chain is updated to reflect the access (the data moves to the top of the LRU queue; it is now most recently used). The data is sent from the Channel Director back to the host. Total I/O response time would be something on the order of 1 millisecond.
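The read-hit flow above can be modeled in a few lines: translate the (director, target, LUN) address to a logical volume through the bin file, check whether the track is cached, and on a hit promote it to most recently used. This is a toy sketch; the names and structures are illustrative, not the real Enginuity internals:

```python
from collections import OrderedDict

class CacheSim:
    """Toy model of the read-hit path; all names are illustrative."""
    def __init__(self):
        # OrderedDict stands in for the Age-Link-Chain: oldest entries first.
        self.lru = OrderedDict()                          # (volume, track) -> data
        # bin file stand-in: (director, target, lun) -> logical volume
        self.bin_file = {("SA3a", 1, 0): "vol001"}

    def read(self, director, target, lun, track):
        volume = self.bin_file[(director, target, lun)]   # address translation
        key = (volume, track)
        if key in self.lru:                               # track table says: in cache
            self.lru.move_to_end(key)                     # promote to most recently used
            return self.lru[key]                          # served at memory speed
        return None                                       # miss: DA must stage the track

cache = CacheSim()
cache.lru[("vol001", 0)] = b"track-data"
assert cache.read("SA3a", 1, 0, 0) == b"track-data"       # cache hit
```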


Read Operation - Cache Miss


Slide steps:
1. Host sends READ request.
2. CD checks the Track Table: data not in cache.
3. CD notifies the DA using the Message Matrix.
4. DA retrieves the data from disk (and updates the track table).
5. CD is notified that the data is in cache.
6. CD retrieves the data and sends it to the host.

The host sends a read request. If the data being requested is not in the process of prefetch, the Channel Director will disconnect from the channel (known as a long miss). This enables the host to perform other operations. If the requested data is in the process of prefetch (known as a short miss), the Channel Director will not disconnect from the channel.

On the Symm 4 and Symm 5 architectures, directors communicate through mailboxes: all directors monitor the mailbox area in cache to see if there is work for them. With the DMX, directors communicate with each other through the communications matrix. This eliminates the added burden on cache of continuously polling the mailbox.

The DA retrieves the data from the physical disk and places it in an available cache slot. The Channel Director is notified by the Disk Director (via the Status & Communications Mailboxes, or in a DMX through the communications matrix) to check the track table once more. From this point, the operation is exactly the same as a read cache hit. If the Channel Director has disconnected from the channel, it must now reconnect.

It may seem that when a read request is not in cache, it would simply be faster to bypass cache and retrieve the information directly from the physical storage on the back end. While this is certainly true, the important thing to keep in mind is that if cache is bypassed, the requested read would not be placed in cache for future access. Additionally, you would lose the integrity checking that occurs as data is placed within cache. Again, with the DMX architecture and the efficiency gained through director communication via the communications matrix, the faster quad-processor directors, up to 128 GB of cache, and the 2 Gb Fibre Channel back end, the impact of a cache miss is greatly reduced.
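The miss path adds a director-to-director handshake: the CD posts a request (mailbox on Symm 4/5, message matrix on DMX), the DA stages the track into a cache slot and updates the track table, and the CD re-checks the table and completes the read. A toy sketch with illustrative names:

```python
def read_miss(cache, track_table, disk, key, prefetch_in_progress=False):
    """Toy model of the read-miss flow; all names are illustrative."""
    # Long miss: CD disconnects from the channel so the host can do other work.
    # Short miss (track already being prefetched): CD stays connected.
    disconnected = not prefetch_in_progress
    # "CD -> DA" message; the DA stages the track from physical disk into cache
    cache[key] = disk[key]
    track_table[key] = "in_cache"            # DA updates the track table
    # "DA -> CD" notification; CD re-checks the track table (and reconnects if needed)
    assert track_table[key] == "in_cache"
    return cache[key], disconnected

cache, table = {}, {}
disk = {("vol001", 7): b"staged-track"}
data, disconnected = read_miss(cache, table, disk, ("vol001", 7))
assert data == b"staged-track" and disconnected   # long miss: CD had disconnected
```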


Write Operation - Cache Hit


Slide steps:
1. Host sends WRITE request to the CD.
2. CD places the data in an available cache slot.
3. Write Complete is sent to the host.
4. Tracks are marked as Write Pending; the DA will de-stage at its earliest convenience.

The data remains in cache until replaced by the LRU algorithm.

The host sends a write request to the Channel Director. The Channel Director locates an available cache slot and places the data in cache. If the track(s) already exist in cache as write pending (waiting to be written to disk), the Channel Director will write the data in question to the existing slot in cache. For example, I/O #1 consists of a write to the last block on the first track of the first cylinder of logical volume 001. I/O #2 then consists of a write to the first block on that same track. When the Channel Director checks the track table for an available slot in cache, it will see that the track in question is already flagged as write pending. Therefore, the Channel Director will write the first block (I/O #2) to the same slot in cache where the last block (I/O #1) on that track already resides.

The host is notified that the write is complete. As soon as the Disk Director(s) managing the physical copy(ies) of the data are available, the data is read from cache and written to a buffer on the Disk Director, then written to the physical disk. Note: even if the host only writes/updates one block, the entire track (8 blocks in a sector, 8 sectors in a track) is marked as write pending. When the track is marked as write pending, all four mirrored positions are flagged write pending for that track.

Remember that the data remains in cache until 1) it is committed to disk, AND 2) it becomes the least recently used data in cache. Write pending tracks are not subject to the LRU algorithm. When the data is destaged to disk (removal of the write pending flag), it then enters the LRU Age-Link-Chain as the most recently used data. If the data is frequently accessed, it will remain toward the front of the chain. If it is not accessed, it will move to the end of the chain and subsequently be cycled out of cache (the slot is made available for other use).

The effect of a write cache hit is that the host is immediately freed up to process more I/O as soon as the write is received in cache. This greatly enhances the performance of the host itself. Fewer cycles are spent awaiting acknowledgement, freeing the host to process application data.
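The write-pending bookkeeping above can be sketched as follows: a write reuses an existing write-pending slot, flags the whole track, and write-pending slots are exempt from LRU replacement until the DA destages them, at which point they re-enter the chain as most recently used. Illustrative names only:

```python
from collections import OrderedDict

class WriteCache:
    """Toy model of write-pending tracks vs. the LRU chain."""
    def __init__(self):
        self.slots = OrderedDict()       # LRU chain stand-in: oldest entries first
        self.write_pending = set()

    def write(self, track, data):
        self.slots[track] = data         # reuses the slot if the track is cached
        self.slots.move_to_end(track)
        self.write_pending.add(track)    # entire track flagged write pending
        return "write complete"          # host is acknowledged immediately

    def evict_lru(self):
        for track in self.slots:                      # scan oldest first
            if track not in self.write_pending:       # WP slots are not evictable
                del self.slots[track]
                return track
        return None                                   # everything is write pending

    def destage(self, track):
        self.write_pending.discard(track)             # committed to disk by the DA
        self.slots.move_to_end(track)                 # re-enters LRU as most recently used

wc = WriteCache()
wc.write("t1", b"x")
assert wc.evict_lru() is None            # write-pending track cannot be evicted
wc.destage("t1")
assert wc.evict_lru() == "t1"            # after destage it is subject to LRU again
```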


Fast Write Ceiling


Cache algorithms are designed to optimize cache utilization and fairness for all Symmetrix volumes.

Cache allocation is dynamically adjusted based on current usage:
- Symmetrix constantly monitors system utilization (including individual volume activity)
- More active volumes are dynamically allocated additional cache resources from relatively less active volumes
- Each volume has a minimum and maximum number of cache slots for write operations based on configuration (known as the Fast Write Ceiling or Write Pending Ceiling)

During a write operation, a Delayed Write occurs when the Write Pending Ceiling is reached:
- Logical volume level (not a fixed percentage; dynamically determined by Symmetrix)
- Symmetrix system level (80% of Symmetrix cache slots contain write pendings)

When a Symmetrix is IMPLed (Initial Microcode Program Load), the available cache resources are automatically distributed to all of the logical volumes in the configuration. For example, if a Symmetrix were configured with 100 logical volumes of the same size and emulation, then at IMPL each one would receive 1% of the available cache resources. As soon as reads and writes to volumes begin, the Symmetrix operating environment (Enginuity) dynamically adjusts the allocation of cache. If only 1 of the 100 volumes was active, it would get incrementally more cache, and the remaining amount would be redistributed among the other 99 volumes.

It is important to remember that there will always be cache resources available for reads. By default, the 80% fast write ceiling ensures that at least 20% of cache resources will be free for read requests. Managing each individual volume's write activity (via the dynamic fast write ceiling) enables Enginuity to typically prevent system-wide delayed write situations.
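The two ceiling checks described above (a per-volume dynamic ceiling and the system-wide 80% limit) can be sketched as a single test. The 80% figure comes from the text; the other values and names here are illustrative:

```python
def is_delayed_write(volume_wp, volume_ceiling, system_wp, total_slots):
    """True if a write must wait for a forced destage (a delayed fast write)."""
    if volume_wp >= volume_ceiling:               # per-volume fast write ceiling
        return True
    return system_wp >= 0.80 * total_slots        # system-wide 80% ceiling

# Neither ceiling reached: the write completes as a fast write.
assert not is_delayed_write(volume_wp=10, volume_ceiling=50,
                            system_wp=700, total_slots=1000)
# Volume ceiling reached: only that volume's writes are delayed.
assert is_delayed_write(volume_wp=50, volume_ceiling=50,
                        system_wp=0, total_slots=1000)
# System ceiling reached: the whole Symm's writes are delayed.
assert is_delayed_write(volume_wp=0, volume_ceiling=50,
                        system_wp=800, total_slots=1000)
```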


Write Operation - Delayed Fast Write


[Diagram: host write flowing through the Channel Director (front end), Shared Global Memory cache slots and track table, and the Disk Director (back end)]

1. Host sends WRITE request to the Channel Director (CD)
2. CD cannot locate a free cache slot and signals the DA to destage
3. DA performs a forced destage of Write Pendings to free cache slots
4. DA signals CD of available slots
5. CD places data in an available cache slot
6. Write complete sent to host

The host sends a write request. The Channel Director does not find available cache slots for writing because the volume has reached its Fast Write Ceiling, or because 80% of the entire Symm's cache slots contain write pendings. When the volume Fast Write Ceiling is reached, only that volume's performance is impacted; when the Symm system Fast Write Ceiling is reached, the entire Symm's performance is impacted. The Disk Director frees up cache slots and signals the Channel Director through the Mailbox (or the Communication Matrix on DMX). The rest of the operation is similar to a fast write. Again, this operation takes significantly longer than a fast write but ensures that the I/O flows through cache: it is likely that information just written by a host will be read in the near future. If cache were bypassed and the data written directly to disk, the data would not be available directly from cache for the next request.
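As a rough model of the six steps, the following sketch (all names and quantities hypothetical) shows why a delayed fast write completes only after a forced destage:

```python
# Toy model of the delayed fast write sequence described above: when no
# cache slot is free, the DA must destage write pendings before the CD
# can complete the host write through cache.

def delayed_fast_write(free_slots, write_pendings):
    """Return the ordered steps taken for one host write."""
    steps = ["host sends WRITE to CD"]
    if free_slots == 0:                      # CD finds no free slot
        steps.append("CD signals DA to destage")
        destaged = min(write_pendings, 1)    # DA force-destages a pending track
        write_pendings -= destaged
        free_slots += destaged
        steps.append("DA destages write pendings, signals CD")
    steps.append("CD places data in cache slot")
    free_slots -= 1
    steps.append("write complete sent to host")
    return steps

print(delayed_fast_write(free_slots=0, write_pendings=5))
```

The extra two steps in the zero-free-slot path are what make this write "delayed"; with a free slot available, the same call degenerates to a normal fast write.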


Symmetrix Front End


[Diagram: Channel Director card with Ports A and B]

- Channel Directors allow Symmetrix to connect to the host environment
  - Minimum of 2 directors per frame (redundancy)
  - Maximum of 4, 6, or 8 directors per frame (depending upon model and configuration)
- The type(s) of Channel Director cards are determined by the type of host and the selected protocol for communication with Symmetrix
- Cards are Field Replaceable Units (FRUs) and hot swappable
- Open Systems and Windows hosts connect to Symmetrix using either:
  - SCSI (Small Computer System Interface)
  - Fibre Channel (allows the SCSI protocol to be sent over greater distances via the Fibre Channel protocol and fiber optic cable)
- Mainframe hosts will typically connect to Symmetrix using ESCON or FICON (IBM-based protocols that allow mainframe hosts to connect to storage using fiber optic cables)

Normally, Channel Directors are installed in pairs, providing redundancy and continuous availability in the event of repair or replacement to any one Channel Director. Each Channel Director has multiple microprocessors and supports multiple independent data paths to the global memory to and from the host system.


Open Systems Connectivity Options


- DMX supports eight-port, four-processor Fibre Channel Directors
  - 2 Gb/sec (can be configured for 1 Gb/sec)
  - Single-mode and multi-mode configurations:
    - Eight multi-mode ports
    - Seven multi-mode ports and one single-mode port
    - Six multi-mode ports and two single-mode ports
  - 8,192 Logical Volumes per director (2,048 per port)
- SCSI Channel Directors supported on Symm 8000
  - 4 ports, 4 concurrent I/Os (Ultra, 40 MB/sec)
  - 4 ports, 4 concurrent I/Os (Ultra LVD, 80 MB/sec)
- iSCSI support using the Multi-Protocol Channel Director
  - Low-cost connectivity using existing IP network infrastructure

Depending upon the model, from 2 to 8 front-end Channel Directors are supported per system. Today, networked storage (SAN or NAS) is the preferred method to connect hosts with storage. For SAN connectivity, Fibre Channel is the interface of choice. Legacy systems often use parallel SCSI, and SCSI front-end directors are supported on non-DMX systems.

Fibre Channel: The DMX supports an eight-port, four-processor Fibre Channel Director. Earlier Symmetrix offered 2-port and 4-port Fibre Channel directors, and a 12-port director with an embedded switch. The standard Fibre Channel connection uses short-wave laser optics and multi-mode fiber optic cables for distances of up to 500 meters over a 50 micron cable. The optional long-wave laser uses 9 micron single-mode optics for distances of 10 km and greater. Both switched fabric and arbitrated loop SANs are supported.

SCSI: SCSI Channel Directors support HVD and LVD and speeds up to 80 MB/sec. SCSI Channel Directors are not supported in DMX.

iSCSI: iSCSI allows block-level access over IP networks. It is supported on the DMX using the new Multi-Protocol Channel Director. This director can be configured to support FICON, 1 Gb Ethernet for SRDF attach, and 1 Gb Ethernet for iSCSI host attach. iSCSI is ideal for storage and server consolidation environments that require low-cost connectivity leveraging existing IP networks.

Note: The 4-port Multi-Protocol Channel Director is supported on the DMX1000, DMX2000, and DMX3000. The 2-port director is supported on all DMX systems.


Mainframe Connectivity Options


- ESCON eight-port, four-processor Director
  - Supports data transfer rates up to 17 MB/s per port
  - Single-mode and multi-mode configurations:
    - Eight multi-mode ports
    - Seven multi-mode ports and one single-mode port
    - Six multi-mode ports and two single-mode ports
  - 8,192 Logical Volumes per director (2,048 per port)
- FICON support using the Multi-Protocol Channel Director
  - 2 Gb/sec
  - Point-to-point: single FICON Fibre Channel Director between server and storage
  - Switched point-to-point: no mixing FICON and FC Open Systems on the same switch

Today, mainframe connectivity is through either ESCON or FICON serial channels. The original mainframe connectivity was through parallel interfaces with bus-and-tag cables. Except for a few legacy systems, bus-and-tag has been replaced with ESCON because of increased speed and flexibility. ESCON uses multi-mode fiber optics and supports distances of up to 3 kilometers; greater distances are supported using media converters.

FICON is Fibre Channel for mainframes. It offers superior performance and extended distance compared to its predecessor, ESCON. As such, most mainframe customers will adopt FICON as their primary mainframe channel connectivity over the next few years. FICON uses multi-mode fiber optics and supports distances of up to 500 meters. FICON may also use single-mode fiber optics for distances of 10 km and beyond.


Symmetrix Back End


[Diagram: Disk Director card with two processors (a, b), each with Ports C and D]

- The Disk Director (also called Disk Adapter or DA) writes and reads data to/from the physical disk drives
- The DA is also responsible for disk and cache scrubbing and assists in parity-based data rebuilding
- DAs are Field Replaceable Units (FRUs) and are hot swappable
- DAs are installed in pairs in adjacent slots within the card cage of the Symmetrix
- Symm 4 and 5 architectures use 40/80 MB/s SCSI to connect physical drives, with a maximum of 12 drives per port
- DMX architecture uses 2 Gb Fibre Channel drives
  - Eight ports per DA
  - Maximum 18 dual-ported drives per port
- In addition to the Direct Matrix connections to cache, each director has a separate message matrix for the transfer of control information

[Diagram: DMX Disk Director card with four 500 MHz PowerPC processors (a through d)]

The primary purpose of the Back End director is to read and write data to the physical disks. However, when it is not staging data into cache or destaging data to disk, the disk director is responsible for proactive monitoring of physical drives and cache memory. This is referred to as disk and cache scrubbing.

Disk Scrubbing (Disk Error Correction and Error Verification): The disk directors use idle time to read data and check the polynomial correction bits for validity. If a disk read error occurs, the disk director reads all data on that track to Symmetrix cache memory. The disk director writes several worst-case patterns to that track searching for media errors. When the test completes, the disk director rewrites the data from cache to the disk device, verifying the write operation. The disk microprocessor maps around any bad block (or blocks) detected during the worst-case write operation, thus skipping defects in the media. If necessary, the disk microprocessor can reallocate up to 32 blocks of data on that track. To further safeguard the data, each disk device has several spare cylinders available; if the number of bad blocks per track exceeds 32, the disk director rewrites the data to an available spare cylinder. This entire process is called error verification. The disk director increments a soft error counter with each bad block detected. When the internal soft error threshold is reached, the Symmetrix service processor automatically dials the EMC Customer Support Center and notifies the host system of errors via sense data. It also invokes dynamic sparing (if the Dynamic Sparing option is enabled). This feature maximizes data availability by diagnosing marginal media errors before data becomes unreadable.

Cache Scrubbing (Cache Error Correction and Error Verification): The disk directors use idle time to periodically read cache, correct errors, and write the corrected data back to cache. This process is also called error verification or scrubbing.
When the directors detect an uncorrectable error in cache, Symmetrix reads the data from disk and takes the defective cache memory block offline until an EMC Customer Engineer can repair it. Error verification maximizes data availability by significantly reducing the probability of encountering an uncorrectable error by preventing bit errors from accumulating in cache.
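The per-track repair decision from the disk-scrubbing description can be sketched as a simple rule set. The 32-block reallocation limit and the spare-cylinder fallback come from the text; the function name and return strings are illustrative:

```python
# Hedged sketch of the disk-scrubbing repair decision described above.
# Only the 32-block per-track reallocation limit is from the course text.

MAX_REALLOC_BLOCKS = 32  # blocks that can be mapped around on one track

def handle_track_errors(bad_blocks):
    """Decide how a scrubbed track with media errors is repaired."""
    if bad_blocks == 0:
        return "rewrite data in place"
    if bad_blocks <= MAX_REALLOC_BLOCKS:
        return "map around bad blocks on the track"
    return "rewrite track to a spare cylinder"

print(handle_track_errors(0))    # rewrite data in place
print(handle_track_errors(8))    # map around bad blocks on the track
print(handle_track_errors(40))   # rewrite track to a spare cylinder
```

In the real process, each detected bad block would also increment the soft error counter that eventually triggers the dial-home and dynamic sparing described in the notes.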


Disk Performance Basics


[Diagram: disk I/O timeline showing position actuator, rotational delay, and transfer data phases]

- Three components of disk performance:
  - Seek time: time to reposition the actuator
  - Rotational latency (rotational delay)
  - Transfer rate
- Disk I/O time = seek time + rotational delay + transfer time
- With a Symmetrix, I/Os are serviced from cache, not from the physical HDA
  - Minimizes the inherent latencies of physical disk I/O
  - Disk I/O at memory speeds



When you look at a physical disk drive, a read or write operation has three components that add up to the overall response time.

Actuator positioning (seek time) is the time it takes to move the read/write heads over the desired cylinder. This is mechanical movement and is typically measured in milliseconds. The actual time it takes to reposition depends on how far the heads have to move, but this contributes the greatest share of the overall response time.

Rotational delay is the time it takes for the desired information to come under the read/write head. This time is a function of the revolutions per second, or drive RPM: the faster the drive turns, the lower the rotational latency. A 10,000 RPM drive has an average rotational latency of approximately 3.00 milliseconds, which is half the time it takes to make one revolution.

Transfer rate is the smallest time component and consists of the time it takes to actually read/write the data. This is a function of drive RPM and data density. It is often measured as internal transfer rate or external transfer rate. The external rate is the speed at which the drive transfers data to the controller; it is limited by the internal transfer rate, but buffers on the drive modules themselves allow faster burst transfer rates.

The design objective of a Symmetrix is not to limit the performance of host applications based on the performance limitations of the physical disk. This is accomplished using cache. Write operations go to cache and are asynchronously destaged to disk. Read operations come from cache, using the Least Recently Used algorithm and prefetching to keep the information most likely to be accessed in memory.
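Using the formula above, a quick back-of-the-envelope calculation is possible. The 3 ms average latency for a 10,000 RPM drive matches the notes; the seek time and transfer values are illustrative, not specs for any particular drive:

```python
# Disk service time = seek + rotational latency + transfer, per the slide.
# Average rotational latency is half a revolution: 0.5 * 60000 / RPM (ms).

def disk_io_ms(rpm, avg_seek_ms, transfer_kb, transfer_mb_per_s):
    rotational_latency_ms = 0.5 * 60_000 / rpm               # half a revolution
    transfer_ms = transfer_kb / 1024 / transfer_mb_per_s * 1000
    return avg_seek_ms + rotational_latency_ms + transfer_ms

# 10,000 RPM -> 3.0 ms average latency; one 32 KB track at 40 MB/s adds
# under 1 ms, so seek and rotation dominate, as the notes say.
print(round(disk_io_ms(rpm=10_000, avg_seek_ms=5.0,
                       transfer_kb=32, transfer_mb_per_s=40), 2))  # 8.78
```

This is exactly the latency that cache hits avoid: a read satisfied from global memory skips all three mechanical components.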


Symmetrix Disk Comparisons

Drive     Spindle Speed   Symmetrix      Interface       Formatted Capacity      Formatted Capacity
          (RPM)           Architecture                   (Mkt GB)                (Eng GB)
36 GB     7,200           Sym 4.8        Ultra SCSI      35.80 MF / 36.20 OS     33.72 OS
18 GB     10,000          Sym 5          Ultra SCSI      17.90 MF / 18.10 OS     16.86 OS
36 GB     10,000          Sym 5          Ultra SCSI      35.8 MF / 36 OS         33.72 OS
73 GB     10,000          Sym 5          Ultra SCSI      72.17 MF / 73.10 OS     68.38 OS
146 GB    10,000          Sym 5          Ultra SCSI      136 MF / 146 OS         135.97 OS
181 GB    7,200           Sym 5          Ultra SCSI      178.7 MF / 181 OS       169.31 OS
73 GB     10,000          DMX            Fibre Channel   72.17 MF / 73.10 OS     68.38 OS
146 GB    10,000          DMX            Fibre Channel   136 MF / 146 OS         135.97 OS


Symmetrix physical drives are manufactured by our suppliers (Seagate) to meet EMC's rigorous quality standards and unique product specifications. These specifications include dedicated microprocessors (that can be XOR capable), the most functionally robust microcode available, and large onboard buffer memory (4 MB to 32 MB). Again, while the physical speed of disk drives does contribute to overall performance, the design intent is for most read and write operations to be handled from cache. Note: Marketing defines a GB as 1000 x 1000 x 1000 bytes, while Engineering defines a GB as 1024 x 1024 x 1024 bytes.
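The two GB definitions in the note convert between each other with a single ratio; the 146 GB DMX drive in the table lands at ~135.97 engineering GB, matching the table:

```python
# Marketing GB vs. engineering GB, per the note above: marketing counts
# 1000^3 bytes per GB, engineering counts 1024^3 bytes per GB.

MKT_GB = 1000**3
ENG_GB = 1024**3

def mkt_to_eng_gb(mkt_gb):
    """Convert a marketing-GB capacity to engineering GB."""
    return mkt_gb * MKT_GB / ENG_GB

print(round(mkt_to_eng_gb(146), 2))  # 135.97, as in the comparison table
```

The same ~7% gap explains why every drive's engineering capacity in the table is smaller than its marketing capacity.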


Symmetrix Global Cache Directors


- Memory boards are now referred to as Global Cache Directors and contain global shared memory
- Symmetrix has a minimum of 2 memory boards and a maximum of 8
- Individual cache directors are available in 2 GB, 4 GB, 8 GB, and 16 GB sizes
- Boards are comprised of memory chips and divided into four addressable regions
- Generally installed in pairs
- Memory boards are FRUs and hot swappable (does not require Symm power down or reboot)

Model      Number of Cache Boards   Maximum Cache Size
DMX800     2                        32 GB
DMX1000    4                        64 GB
DMX2000    8                        128 GB
DMX3000    8                        128 GB
8830       4                        64 GB
8530       2/4                      32 GB / 64 GB
8230       2                        32 GB

Cache boards are designed for each family of Symm. Symm 4.8 uses the M2 generation of memory boards, Symm 5 uses the M3/M4 generation, and the DMX uses M5. Because these boards have different designs, they cannot be swapped between families of Symm. Memory boards in the DMX are referred to as Global Cache Directors with CacheStorm technology.

On Symm 5, memory boards that connect to the Top High and Bottom High internal system busses are referred to as High Memory; conversely, boards that connect to Top Low and Bottom Low are known as Low Memory. It is important to note that even on the Symm 4.x, cache connects to both the X and Y internal busses. DMX uses direct connections between directors and cache.

Hot swappable means that a Customer Engineer, following documented procedure, can remove and replace the board without powering down the Symm. The CE procedure includes destaging all remaining data in cache and fencing off the board in order to prevent loss of data.

When configuring cache for the Symmetrix DMX systems, follow these guidelines:

- A minimum of four and a maximum of eight cache director boards is required for the DMX2000 system configuration; a minimum of two and a maximum of four cache director boards is required for the DMX1000 system configuration.
- Two-board cache director configurations require boards of equal size.
- Cache directors can be added one at a time to configurations of two boards and greater.
- A maximum of two different cache director sizes is supported, and the smallest cache director must be at least one-half the size of the largest cache director.
- In cache director configurations with more than two boards, no more than one-half of the boards can be smaller than the largest cache director.
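The sizing guidelines above can be expressed as a small validator. The rules are taken from the list; the function name and the example board mixes are hypothetical:

```python
# Sketch validating the DMX cache-director sizing guidelines listed above.
# Board sizes are in GB (2, 4, 8, or 16 per the slide).

def valid_cache_config(sizes_gb):
    n = len(sizes_gb)
    if n < 2:
        return False                       # minimum of two boards
    largest = max(sizes_gb)
    if n == 2 and sizes_gb[0] != sizes_gb[1]:
        return False                       # two-board configs must be equal size
    if len(set(sizes_gb)) > 2:
        return False                       # at most two different sizes
    if min(sizes_gb) * 2 < largest:
        return False                       # smallest >= half the largest
    smaller = sum(1 for s in sizes_gb if s < largest)
    if n > 2 and smaller > n // 2:
        return False                       # <= half the boards smaller
    return True

print(valid_cache_config([8, 8]))           # True
print(valid_cache_config([8, 4]))           # False: two boards must match
print(valid_cache_config([16, 16, 8, 8]))   # True
print(valid_cache_config([16, 4, 4, 4]))    # False
```

Model-specific minimums and maximums (e.g. four to eight boards for the DMX2000) would sit on top of these generic checks.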

Symmetrix Shared Global Memory


- Shared Global Memory contains three types of information:
  - Cache Slots: temporary repository for frequently accessed data (staging area between host and physical drive)
  - Track Table: directory of the data residing in cache and of the location and condition of the data residing on the Symmetrix physical disk(s)
  - Communications and Mailboxes: contain performance and diagnostic information concerning the Symmetrix and allow the independent front end and back end to communicate
- DMX also uses a message matrix for control and communications

The actual size requirement for cache depends on the configuration. The general rule that more is better also applies to cache, but again, the actual requirement is a function of the configuration and application access patterns. The CQS system provides sizing guidelines based on the actual configuration.

The primary use for cache is staging and destaging data between the host and the disk drives. Cache is allocated in tracks and referred to as cache slots, which are 32 KB in size (47 or 57 KB for mainframe). If the Symm is supporting both FBA and CKD emulation within the same frame, the cache slots will be the size of the largest track size: either the 47 KB (3380) or 57 KB (3390) track size.

The Track Table is used to keep track of the status of each track of each logical volume. Approximately 16 bytes of cache space is used per track, so a 2 GB volume would use approximately 1 MB of cache for track table space ((2 GB / 32 KB) x 16 B). You can see that cache requirements depend on the actual configuration.

Cache is also used to maintain all diagnostic and short-term performance information, as well as to provide the facility for Channel Directors to communicate with Disk Directors. The Symm maintains diagnostic information for every component within the architecture. Performance data includes I/Os per second, cache hit rate, and read/write percentage for the entire system, individual directors, and individual devices (logical volumes). This information is accumulated and stored as part of the Symm's normal operations, whether or not someone (CE or customer) is referencing it.

The Mailbox is used for communications between the directors. With DMX, while the Mailboxes still exist, a Communications and Control Matrix allows direct communication between directors.
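The track-table arithmetic above can be checked directly. The 16-byte entry size and 32 KB open-systems track are the figures given in the notes:

```python
# Track-table sizing rule of thumb from the notes: ~16 bytes of cache
# per 32 KB track. A 2 GB volume therefore needs about 1 MB of
# track-table space. Exact overhead varies by Enginuity level.

TRACK_BYTES = 32 * 1024   # open systems track size
ENTRY_BYTES = 16          # approximate track-table entry size

def track_table_bytes(volume_bytes):
    tracks = volume_bytes // TRACK_BYTES
    return tracks * ENTRY_BYTES

two_gb = 2 * 1024**3
print(track_table_bytes(two_gb))  # 1048576 bytes, i.e. 1 MB
```

Scaling this across thousands of logical volumes shows why the track table is a material consumer of cache in large configurations.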


Symmetrix Cache Management


- Symmetrix cache management is based upon the following principles:
  - Locality of Reference: if a data block has been recently used, adjacent data will be needed soon
    - Data is staged from disk to cache at a minimum of 4 KB blocks, to the end of the track, or the full track
    - The prefetch algorithm detects sequential data access patterns
  - Data re-use: accessed data will probably be used again
  - Least Recently Used (LRU): LRU data is flushed from the cache first
    - Only keep active data in the cache
    - Free up cache slots that are inactive to make room for more active data


Prefetching: once sequential access is detected, prefetch is turned on for that logical volume. Prefetch is initiated by 2 sequential accesses to a volume. Once turned on, for every sequential access the Symm pulls the next two successive tracks into cache (access to track 1 on cylinder 1 will prompt the prefetch of tracks 2 and 3 on cylinder 1). After 100 sequential accesses to that volume, the next sequential access initiates the prefetching of the next 5 tracks (access to track 1 on cylinder 10 will prompt the prefetch of tracks 2 through 6 on cylinder 10). After the next 100 sequential accesses, the prefetch track value is increased to 8 (access to track 1 on cylinder 100 will prompt the prefetch of tracks 2 through 9 on cylinder 100). Any non-sequential access to that volume turns the prefetch capability off.

As data is placed into cache or accessed within cache, it is given a pseudo-timestamp. This allows the Symm to keep only the most frequently accessed data in cache memory. The data residing in cache is ordered through an Age-Link-Chain; as data is touched (by a read operation, for example), it moves to the top of the Age-Link-Chain. Every time a director performs a cache operation, it must take control of the LRU algorithm. This forces the director to mark the least recently used data in cache as available (to be overwritten by the next cache operation).
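The escalation schedule above can be sketched as a small state machine. The 2/5/8 depths and the 100-access steps come from the notes; the exact boundary counting and all names here are our interpretation:

```python
# Sketch of the prefetch escalation described above: two sequential
# accesses turn prefetch on at 2 tracks; depth steps up (2 -> 5 -> 8)
# after further runs of sequential accesses; any non-sequential access
# resets the state and turns prefetch off.

class PrefetchState:
    def __init__(self):
        self.sequential = 0  # length of the current sequential run

    def access(self, is_sequential):
        """Return the number of tracks to prefetch for this access."""
        if not is_sequential:
            self.sequential = 0          # prefetch turned off
            return 0
        self.sequential += 1
        if self.sequential < 2:
            return 0                     # not yet detected as sequential
        if self.sequential < 100:
            return 2
        if self.sequential < 200:
            return 5
        return 8
```

A long sequential scan thus gets progressively deeper read-ahead, while a single random access drops the volume back to no prefetch.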


Symmetrix Card Cage

[Diagram: Symmetrix card cage layouts for the DMX800, DMX1000, DMX2000, and DMX3000, showing director slot positions (1-5, 12-16) and memory slots (M1, M2). Example slot order: DA DA SA FA EA MM MM EA FA SA DA DA]

Model      Max Front-End   Max Back-End   Max Cache   Maximum   Maximum
           Directors       Directors      Directors   Cache     Disk Drives
DMX800     2               2              2           32 GB     120
DMX1000    6               2              4           128 GB    144
DMX1000P   4               4              4           64 GB     144
DMX2000    12              4              8           256 GB    288
DMX2000P   8               8              8           256 GB    288
DMX3000    8               8              8           256 GB    576
8830       8               8              4           64 GB     384
8530       4               4              4           64 GB     96
8230       2               2              2           32 GB     48


Though we logically divide the architecture of the Symm into Front End, Back End, and Shared Global Memory, physically these director and memory cards reside side-by-side within the card cage of the Symm. The DMX P models are configured for maximum performance rather than connectivity.


Enginuity Overview
- Enginuity is the Operating Environment for Symmetrix
- Each processor in each director is loaded with Enginuity
  - Downloaded from the service processor to the directors over the internal LAN
  - Zipped code loaded from EEPROM to SDRAM (the control store of the director)
- Enginuity is what allows the independent director processors to act as one Integrated Cached Disk Array
  - Also provides the framework (coding) for advanced functionality like SRDF, TimeFinder, etc.
- All DMX shipped with the latest Enginuity, 5670, as of Sept. 2003

Microcode version numbering (example: 5568.34.22):
- First two digits: Symmetrix hardware supported (50 = Symm3, 52 = Symm4, 55 = Symm5, 56 = DMX)
- Next two digits: Microcode Family (Major Release Level)
- First minor field (.34): Field Release Level of Symmetrix Microcode (Minor Release Level)
- Second minor field (.22): Field Release Level of Service Processor Code (Minor Release Level)
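Assuming the field breakdown shown above, a version string like 5568.34.22 can be pulled apart mechanically (the parser and dictionary names here are ours, not an EMC tool):

```python
# Hypothetical parser for the microcode version format shown above:
# two digits of hardware family, two of microcode family, then minor
# release levels for the microcode and the service processor code.

HARDWARE = {"50": "Symm3", "52": "Symm4", "55": "Symm5", "56": "DMX"}

def parse_microcode(version):
    base, ucode_field, sp_field = version.split(".")
    return {
        "hardware": HARDWARE.get(base[:2], "unknown"),
        "family": base[2:],              # major release level
        "microcode_field_release": ucode_field,
        "sp_field_release": sp_field,
    }

print(parse_microcode("5568.34.22"))
```

For example, 5568.x identifies Symm 5 hardware, while the 5670 release shipping on DMX starts with 56.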


Enginuity automatically reserves 12 GB (raw) for internal use as a Symmetrix File System (SFS). This space is automatically allocated while initially loading the Enginuity Operating Environment on Symmetrix systems and is not visible to the host environment. The 12 GB of raw SFS space is translated into 6 GB of usable space (mirrored configuration) and is spread equally across two 3 GB volumes. The SFS stores statistical data that is generated and used to provide a number of benefits:

- Dynamically adjusting performance algorithms
- Enhancement of the dynamic mirror service policy
- Enhancement of Symmetrix Optimizer
- More rapid recovery from problems
- Enhanced system audit and investigation

Enginuity also allows Quality of Service (QoS), giving the ability to set varying priority levels to applications residing within a Symmetrix to meet varying customer needs or agreements.


Symmetrix Configuration Information


[Diagram: the configuration (IMPL.BIN) file is edited in service processor PC memory, loaded from the PC hard disk or read from a director]

- Symmetrix configuration information includes the following:
  - Physical hardware that is installed: number and type of directors, memory, and physical drives
  - Mapping of physical disks to logical volumes
  - Mapping of SCSI addresses to volumes, and volumes to front-end directors
  - Operational parameters for front-end directors
- Configuration information is referred to as the IMPL.bin file, or simply the bin file
- Stored in two places:
  - On the hard disk of the Symmetrix Service Processor
  - In the EEPROM of each Symmetrix Director
- Configuration changes can also be made using the EMC ControlCenter Configuration Manager GUI and the WideSky CLI

Two very important concepts: First, each director (both Channel and Disk) has a local copy (stored in EEPROM) of the configuration file. This enables Channel Directors to be aware of the Disk Directors that are managing the physical copy(ies) of Symmetrix Logical Volumes, and vice versa. The bin file also allows Channel Directors to map host requests, by channel address or target and LUN, to the Symmetrix Logical Volume. Second, changes made to the bin file (non-SDR changes) must first be made to the IMPL.BIN on the Service Processor and then downloaded to the directors over the internal Ethernet LAN. Though Customer Service has the capability to do remote bin file updates (using the SymmRemote application), its standard operating procedure mandates that the CE be physically present for all configuration changes. In addition, CS requires that all CEs do a comparison analysis prior to committing changes (read out the existing IMPL.BIN and compare it to the proposed IMPL.BIN).


Mapping Physical Volumes to Logical Volumes


- Symmetrix physical drives are split into Hyper Volume Extensions
  - Example: an 18 GB physical drive split into four 4.5 GB logical volumes
- Hyper Volume Extensions (disk slices) are then defined as Symmetrix Logical Volumes
- Symmetrix Logical Volumes are internally labeled with a hexadecimal identifier (0000-FFFF)
- Maximum number of Logical Volumes per Symmetrix configuration = 8,192


While hyper-volume and split refer to the same thing (a portion of a Symmetrix physical drive), a logical volume is a slightly different concept. A logical volume is the disk entity presented to a host via a Symmetrix channel director port. As far as the host is concerned, the Symmetrix Logical Volume (SLV) is a physical drive. As we will see, an SLV physically resides on at least one hyper-volume, but may be mirrored to more than one hyper-volume on the back end. Do not confuse Symmetrix Logical Volumes with host-based logical volumes: Symmetrix Logical Volumes are defined by the Symmetrix configuration (BIN file), while host-based logical volumes are configured (by customers) through Logical Volume Manager software (Veritas LVM, NT Disk Administrator, etc.). Note: This is a very simplistic example of hyper-volume extensions on a physical drive. In actuality, the true usable capacity of the drive would be less than 18 GB due to disk formatting and overhead (track tables, etc.). This would result in each of the 4 splits in this example being approximately 4.21 GB in size (open systems).


Symmetrix Logical Volume Specifications


- Enginuity allows up to 128 Hyper Volumes to be configured from a single physical drive
- The size of volumes is defined as a number of cylinders (FBA cylinder = 15 x 32 KB), with a maximum size of ~16 GB
- All Hyper Volumes on a physical disk do not have to be the same size; however, a consistent size makes planning and ongoing management easier
- Hyper Volume(s) are the physical disk partitions that comprise Symmetrix Logical Volumes
  - One mirrored Symmetrix Logical Volume = two Hyper Volumes

Volume specifications are illustrated here.


Defining Symmetrix Logical Volumes


[Diagram: Symmetrix Service Processor running the SymmWin application, connected to the physical disks]

- Symmetrix Logical Volumes are configured using the service processor and the SymmWin interface/application
- The EMC Configuration Group uses information gathered during the pre-site survey to create the initial configuration
- Subsequent changes to the configuration must be approved by the Configuration Group through their standard change control process (expected turnaround is 5 days)
- SymmWin generates the configuration file (IMPL.BIN), which is downloaded from the service processor to each director
- Most configuration changes can be performed online at the discretion of the EMC Customer Engineer
- Configuration changes can also be performed online using the EMC ControlCenter Configuration Manager and the WideSky Command Line Interface

The C4 group (Configuration and Change Control Committee) is the division of Global Services responsible for initial Symm configuration and any subsequent changes to the configuration. They use time-honored and extensive best practices and tools to configure Symms, and much manual review is done to ensure that BIN files are valid. For planning purposes, allow at least 5 days to produce a BIN file or make major changes to a configuration. An important misperception to correct is that only the CE can change the bin file. While this might have been true at one time, today the customer may make configuration changes using the EMC ControlCenter GUI or the WideSky CLI. Prior to 5x66 Enginuity, BIN file configuration was performed using a DOS-based program called AnatMain.


Symmetrix Logical Volume Types


- Open Systems hosts use Fixed Block Architecture (FBA)
  - Each block is a fixed size of 512 bytes
  - Sector = 8 blocks (4,096 bytes)
  - Track = 8 sectors (32,768 bytes)
  - Cylinder = 15 tracks (491,520 bytes)
  - Volume size is referred to by the number of cylinders
- Mainframes use Count Key Data (CKD)
  - Variable block size, specified in the count field
  - Emulates standard IBM volumes:
    - 3380 D, E, K, K+, K++ (max. track size 47,476 bytes)
    - 3390 -1, -2, -3, -9 (max. track size ~56,664 bytes)
  - Volume size defined as a number of cylinders
- Symmetrix stores data in cache in FBA or CKD format and on physical disk in FBA format (32 KB tracks)
  - Emulates the expected disk geometry to the host OS through the Channel Directors

CKD and FBA physicals can be mixed in a Symmetrix if the ESP license is purchased for that Symm. ESP allows the Symmetrix to deal with the two fundamentally different types of low-level formats.

A notable exception to the 512-byte Open Systems rule is the AS/400, which uses 520 bytes per block; the extra 8 bytes are for host system overhead. Enginuity prior to 5566 on the Symmetrix 5 only supports a single type of FBA format on Open Systems drives. If you connect an AS/400 to a pre-5566 Symmetrix, all FBA devices must be formatted 520, and Open Systems hosts other than the AS/400 must be configured to use 520-formatted volumes. BE AWARE THAT CHANGING THE LOW-LEVEL FORMAT OF PHYSICAL DEVICES TYPICALLY REQUIRES SYMMETRIX DOWNTIME. Also, reformatting existing 512 devices will erase them, requiring a potentially complex backup and restore of all Open Systems data (VTOC the drives).

With 5566+ on Symm 5 and later, Enginuity has SLLF (Selective Low-Level Format) capabilities. This allows some drives to be formatted 512 and others 520, avoiding the complications mentioned above.
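The FBA geometry from the slide multiplies out as follows; the 4,600-cylinder example volume is illustrative:

```python
# FBA geometry per the slide: 512-byte blocks, 8 blocks per sector,
# 8 sectors per track, 15 tracks per cylinder. Volume sizes are quoted
# in cylinders, so converting to bytes is straight multiplication.

BLOCK = 512
SECTOR = 8 * BLOCK       # 4,096 bytes
TRACK = 8 * SECTOR       # 32,768 bytes
CYLINDER = 15 * TRACK    # 491,520 bytes

def fba_volume_bytes(cylinders):
    return cylinders * CYLINDER

print(TRACK, CYLINDER)                     # 32768 491520
print(fba_volume_bytes(4_600) / 1024**3)   # an example ~2.1 GB volume
```

The 32,768-byte track is also the open-systems cache slot size mentioned earlier, which is why staging works naturally in track-sized units.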


Media Protection
Data protection options are configured at the volume level and the same system can employ a variety of protection schemes
Mirroring (RAID 1)
  Highest performance, availability, and functionality
  Two mirrors of one Symmetrix Logical Volume located on separate physical drives
Parity RAID
  3+1 (3 data and 1 parity volume) or 7+1 (7 data and 1 parity volume)
  Known as RAID S or RAID R in Symm 5 and earlier
RAID 1/0
  Mirrored, striped mainframe volumes
Dynamic Sparing
  One or more HDAs used when the Symmetrix detects a potentially failing (or failed) device
  Can be utilized to augment the data protection scheme
  Minimizes exposure after a drive failure and before drive replacement
SRDF (Symmetrix Remote Data Facility)
  Mirror of a Symmetrix Logical Volume maintained in a separate Symmetrix frame

The RAID Advisory Board has rated configurations with both SRDF and Parity RAID or RAID 1 Mirroring with the highest availability and protection classification: Disaster Tolerant Disk System Plus (DTDS+)

RAID = Redundant Array of Independent Disks. See http://www.raid-advisory.com/emc.html for the ratings.


Mirror Positions
Internally, each Symmetrix Logical Volume is represented by four mirror positions: M1, M2, M3, M4
Mirror positions are actually data structures that point to the physical location of a mirror of the data and the status of each track
Each mirror position represents a mirror copy of the volume or is unused
(Diagram: Symmetrix Logical Volume 001 with mirror positions M1-M4; positions may hold the single copy of an unprotected volume, a local replica, a remote replica, or be unused)

Before getting too far into volume configuration, it is important to understand the concept of mirror positions. Within the Symmetrix, each logical volume is represented by four mirror positions: M1, M2, M3, and M4. These mirror positions are actually data structures that point to the physical location of a data mirror and the status of each track. Each position either represents a mirror or is unused. For example, an unprotected volume will use only the M1 position, pointing to the single copy of the data. A RAID-1 protected volume will use the M1 and M2 positions. If this volume were also protected with SRDF, three mirror positions would be used, and if we add a BCV to this SRDF-protected RAID-1 volume, all four mirror positions would be used. Note that the order in which mirror positions are assigned is not significant: a BCV or SRDF mirror simply takes the next available unused mirror position. For example, if a BCV were established to a RAID-1 protected volume, it would assume the M3 mirror position. Another thing to keep in mind is that mirror positions are logical pointers. With local mirrors, the pointer is to the physical hyper volume (Disk Director, drive, and split). In the case of SRDF, the mirror position actually points to a logical volume in the remote Symmetrix.
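The bookkeeping described above can be sketched as a tiny model (illustrative Python, not Enginuity internals): a volume's four positions are slots, and each new mirror claims the next free one.

```python
class LogicalVolume:
    """Illustrative sketch of a Symmetrix Logical Volume with four mirror
    positions M1-M4, each either unused (None) or pointing to a mirror
    (a local hyper volume or a remote SRDF volume)."""

    def __init__(self, ident: str):
        self.ident = ident
        self.mirrors = [None, None, None, None]   # M1..M4

    def attach(self, mirror: str) -> str:
        """Assign the mirror to the next available unused position."""
        for i, m in enumerate(self.mirrors):
            if m is None:
                self.mirrors[i] = mirror
                return f"M{i + 1}"
        raise RuntimeError("all four mirror positions are in use")

lv = LogicalVolume("001")
lv.attach("local hyper")          # takes M1
lv.attach("RAID-1 mirror")        # takes M2
print(lv.attach("BCV mirror"))    # BCV takes the next free slot: M3
```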


Mirroring: RAID-1
Two physical copies or mirrors of the data Host is unaware of data protection being applied
(Diagram: Logical Volume 001 presented to the host at Target = 1, LUN = 0, with LV 001 M1 on a physical drive behind Disk Director 2 and LV 001 M2 on a physical drive behind Disk Director 15)

Mirroring provides the highest level of performance and availability for all applications. Mirroring maintains a duplicate copy of a logical volume on two physical drives, and the Symmetrix maintains these copies internally by writing all modified data to both devices. The mirroring function is transparent to attached hosts, as the host views the mirrored volumes as a single logical volume. In the example shown, hyper 3 on physical drive 0 on DA 2 is the M1 for Logical Volume 001, and hyper 0 on physical drive 0 on DA 15 is the M2. Two physical mirrors of one logical volume are presented to the host (using SCSI address 1,0) as if they were an entire physical drive. Notice that if the director numbers of the DAs are added together (2 + 15), they equal 17. This is known as the rule of 17. Because of where the DA pairs reside within the card cage (1/2, 3/4, 13/14, 15/16), as long as the sum of the DA director numbers equals 17 (1/16, 2/15, 3/14, 4/13), the mirrors will always be on different internal system busses (for the highest availability and maximum use of Symmetrix resources).
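The rule of 17 is simple to verify with a throwaway check, using the pairings listed above:

```python
# DA pairings whose director numbers sum to 17, placing the two mirrors
# of a volume on different internal system busses
pairs = [(1, 16), (2, 15), (3, 14), (4, 13)]
assert all(a + b == 17 for a, b in pairs)

def rule_of_17_partner(da: int) -> int:
    """Given one DA director number (1-16), return its rule-of-17 partner."""
    return 17 - da

print(rule_of_17_partner(2))   # 15
```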


Mirrored Service Policy


(Diagram: one physical drive holding LV 000 M1, LV 004 M1, LV 008 M1, and LV 00C M1; a second physical drive holding the corresponding M2 hypers)

Symmetrix leverages either or both mirrors of a Logical Volume to fulfill read requests as quickly and efficiently as possible Two options for mirror reads: Interleave and Split
Interleave maximizes throughput by alternating reads across both hyper volumes
Split minimizes head movement by targeting all reads for a given volume to either the M1 or the M2 mirror

With Dynamic Mirror Service Policy (DMSP), the policy is dynamically adjusted based on I/O patterns
Adjusted approximately every 5 minutes
Set at the logical volume level

During a read operation, if data is not available in cache memory, the Symmetrix reads the data from the volume chosen for best overall system performance. Performance algorithms within Enginuity track path-busy information, as well as the actuator location, and which sector is currently under the disk head in each device. Symmetrix performance algorithms for a read operation choose the best volume in the mirrored pair based upon these service policies.

Interleave Service Policy: shares the read operations of a mirrored pair by reading tracks from both logical volumes in an alternating fashion: a number of tracks from the primary volume (M1), then a number of tracks from the secondary volume (M2). The Interleave Service Policy is designed to achieve maximum throughput.

Split Service Policy: differs from the Interleave Service Policy in that read operations are assigned to either the M1 or the M2 logical volume, but not both. The Split Service Policy is designed to minimize head movement.

Dynamic Mirror Service Policy (DMSP): dynamically chooses between the Interleave and Split policies at the logical volume level, based on current performance and environmental variables, to maximize throughput and minimize head movement. DMSP adjusts each logical volume dynamically based on recent access patterns and is the default mode. The Symmetrix system tracks the I/O performance of logical volumes (including BCV volumes), physical disks, and disk directors and, based on these measurements, directs read operations for mirrored data to the appropriate mirror. As access patterns and workloads change, the DMSP algorithm analyzes the new workload and adjusts the service policy to optimize performance.
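A toy model makes the difference between the two static policies concrete. This is illustrative only; the real Enginuity algorithm also weighs path load, actuator position, and sector timing, and the 4-track group size here is an assumed value:

```python
def read_mirror(track: int, policy: str, group: int = 4) -> str:
    """Pick which mirror serves a read under each service policy.

    interleave: alternate groups of tracks between M1 and M2 (throughput)
    split:      pin all reads for the volume to one mirror (head movement)
    """
    if policy == "interleave":
        return "M1" if (track // group) % 2 == 0 else "M2"
    return "M1"    # split: this volume's reads pinned to one mirror

reads = [read_mirror(t, "interleave") for t in range(12)]
print(reads)   # four reads from M1, four from M2, four from M1
```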


Parity RAID
RAID Rank 0: LV 001 Volume A | LV 002 Volume B | LV 003 Volume C | Parity for ABC
RAID Rank 1: LV 004 Volume D | LV 005 Volume E | Parity for DEF | LV 006 Volume F
RAID Rank 2: LV 007 Volume G | Parity for GHI | LV 008 Volume H | LV 009 Volume I
RAID Rank 3: Parity for JKL | LV 00a Volume J | LV 00b Volume K | LV 00c Volume L
(each column is one member drive; the parity volume rotates across the drives)

Parity RAID is also referred to as RAID-S in Symm 5 and earlier architectures
3+1 (3 data volumes and 1 parity volume) or 7+1 configurations
Parity calculated by the Symmetrix disk drives using the Exclusive-OR (XOR) function
Parity and difference data (the result of XOR calculations) passed between drives by the DAs
Member drives must be on different DA ports (ideally on different DAs)

Parity volumes distributed across member drives in the RAID group
Unlike RAID-5, the data is not striped (Volume A in the diagram above is an entire Logical Volume, related to Volume B and Volume C only via parity calculations)

Symmetrix Parity RAID technology is a combination of hardware and software functionality that improves data availability by using a portion of the array to store redundancy information. This redundancy information, called parity, can be used to regenerate data if the data on a disk drive becomes unavailable. Parity RAID is also referred to as RAID-S in Symm 5 and earlier architectures and resembles RAID-5; however, EMC's Parity RAID DOES NOT STRIPE DATA, although the parity volumes are distributed across the disks in the rank. Compared to a mirrored Symmetrix system containing the same number of disk drives, Parity RAID offers more usable capacity. Like the Mirroring and Dynamic Sparing options, Symmetrix Parity RAID protection can be dynamically added or removed; for example, for higher performance requirements and high availability, a Parity RAID group of volumes can be reconfigured as multiple mirrored pairs. Within the same Symmetrix system, data can be protected through Parity RAID, mirroring, and SRDF. Two configurations are supported: 3+1 and 7+1. Parity RAID employs the same technique for generating parity information as many other commercially available RAID solutions, the Boolean EXCLUSIVE OR (XOR) operation. However, EMC's implementation reduces the overhead associated with parity computation by moving the operation from controller microcode to the hardware on the XOR-capable disk drives.
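The XOR regeneration described in the notes can be demonstrated in a few lines. This is a sketch of the mathematical principle, not of the drive-level implementation:

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together (the Parity RAID parity op)."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# 3+1 rank: one parity block protects the corresponding blocks of
# three data volumes
d1, d2, d3 = b"volA", b"volB", b"volC"
parity = xor_blocks(d1, d2, d3)

# Lose d2: regenerate it from the surviving members plus parity
assert xor_blocks(d1, d3, parity) == d2
```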


Parity RAID Considerations


While Symmetrix Parity RAID minimizes some of the hardware and software overhead associated with typical RAID-5, it is not offered as a performance solution

For high data availability environments where cost and performance must be balanced
Fixed 3+1 configuration means 25% of disk space is used for protection
Avoid using in application environments that are 25% or more write intensive
Every write to a data volume requires an update (write) to the parity volume within that rank
Write activity to the parity volume equals the total writes to the 3 data volumes within that rank
In write-intensive environments, the parity volume is likely to reach its Fast Write Ceiling, sending the entire rank into delayed write mode

Spread high write volumes across Parity RAID groups (avoid spindle contention)
In some configurations, Parity RAID in a DMX environment may perform as well as RAID 1 protection on a Symmetrix 8000

Some of the inefficiencies associated with RAID 5 have been eliminated with EMC's Parity RAID in a DMX system; however, RAID-1 mirroring continues to provide the highest availability and performance and should be positioned as such. If customer requirements dictate using Parity RAID, planning and careful attention to layout are required to ensure optimal performance.
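The per-write parity cost this slide warns about follows from the classic read-modify-write XOR update. This is an illustrative sketch; on Symmetrix the XOR work is pushed down to the drives as the "difference data" mentioned earlier:

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def small_write(old_data: bytes, new_data: bytes, old_parity: bytes):
    """Update one data block and the rank's parity block.

    difference = old_data XOR new_data
    new_parity = old_parity XOR difference
    Every data write therefore implies a parity write in the same rank.
    """
    difference = xor(old_data, new_data)
    return new_data, xor(old_parity, difference)

# Check against full recomputation over a 3+1 rank
d1, d2, d3 = b"\x01\x01", b"\x02\x02", b"\x04\x04"
parity = xor(xor(d1, d2), d3)
new_d2 = b"\x0f\x0f"
_, new_parity = small_write(d2, new_d2, parity)
assert new_parity == xor(xor(d1, new_d2), d3)
```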


Dynamic Sparing
Dynamic Spare

Dedicated spare(s) protect storage
Disk errors are detected during I/O operations or through the DAs' disk scrubbing
Data from a failed disk is copied to the Dynamic Spare
When the failed disk is replaced, data is automatically restored and the Dynamic Spare resumes its role as standby


Every Symmetrix logical volume has four mirror positions, and there is no priority associated with any of them; they simply point to potential physical locations (on the back end of the Symmetrix) for the logical volume. When sparing is necessary, hyper volumes on the spare disk take the next available mirror position for each logical volume present on the failing drive. All of these dynamic spare hyper volumes are marked as having all tracks invalid in the respective mirror positions, and it is then the responsibility of the Symmetrix to copy all tracks over to the Dynamic Spare.

Dynamic sparing occurs at the physical drive level, since a physical drive is the FRU (Field Replaceable Unit) in the Symmetrix; in other words, you can't replace a failed hyper volume, only the disk it resides on. The actual data migration from the volumes on the failed drive to the dynamic spare, however, occurs at the logical volume level.

Dynamic Sparing is also supported with Parity RAID, where a minimum of 3 spares is suggested. If a drive fails, a dynamic spare will copy the data volumes onto itself, rebuilding them from parity and reading from the remaining uncorrupted data. If at least 3 spares are available, the first spare will also start copying data from uncorrupted drives in the group, and the other two spares will copy the contents of the remaining data volumes on the unaffected drives. This results in the formerly parity-protected volumes being temporarily mirrored. Since parity can't be calculated with a drive lost, and mirroring is a faster way to make the data redundant again, mirroring the entire RAID group is the best way to protect against data loss until the problem drive can be replaced.


Meta Volumes
Between 2 and 255* Symmetrix Logical Volumes can be grouped into a Meta Volume configuration and presented to Open Systems hosts as a single disk
Assigned one SCSI address (e.g. Target = 1, LUN = 0)
(Diagram: Logical Volumes 001, 002, 003, and 00F grouped into one Meta Volume)

Allows volumes larger than the current maximum hyper volume size of 16GB
Satisfies requirements for environments where there is a limited number of SCSI addresses or volume labels available

Data is striped or concatenated within the Meta Volume
Stripe size is configurable
A 2-cylinder stripe is the default and appropriate for most environments

*Note: Symmetrix Engineering recommends Meta Volumes no larger than 512GB



Meta Volumes are useful in several environments. First, where channel addresses are at a premium: Meta Volumes allow customers to present larger Symmetrix Logical Volumes to the host, presenting more GBs with fewer channel addresses. For example, the maximum number of devices that can be presented on a Symm 5 FA port is 256 (128 for Symm 4.x). If the customer has multipathing software (such as PowerPath), devices will be presented down multiple Symmetrix ports; four paths to 64 volumes exhausts the 256 devices for those four ports. Second, there are limitations on the number of volumes a host can manage. With NT, for example, drive lettering limits the number of volumes, and Meta Volumes help avoid running out of drive letters by presenting larger volumes to NT hosts (Engineering has successfully presented a 1 TB volume to NT).
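Striped Meta Volume addressing can be sketched with a short round-robin mapping. This is illustrative, using the FBA geometry from earlier and the default 2-cylinder stripe; the real member layout is defined in the BIN file:

```python
# Blocks per cylinder from the FBA geometry: 15 tracks x 8 sectors x 8 blocks
BLOCKS_PER_CYLINDER = 15 * 8 * 8                          # 960 blocks
STRIPE_CYLINDERS = 2                                      # default stripe
STRIPE_BLOCKS = STRIPE_CYLINDERS * BLOCKS_PER_CYLINDER    # 1,920 blocks

def meta_member(block: int, members: int) -> int:
    """Index of the member volume holding a given host block (round-robin)."""
    return (block // STRIPE_BLOCKS) % members

# A 4-member striped meta rotates members every 1,920 blocks
print([meta_member(b, 4) for b in (0, 1920, 3840, 5760, 7680)])   # [0, 1, 2, 3, 0]
```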


TimeFinder Introduction
TimeFinder allows local replication of Symmetrix Logical Volumes for business continuance operations
Utilizes a special Symmetrix Logical Volume called a BCV, or Business Continuance Volume
A BCV can be dynamically attached to another volume, synchronized, and split off
The host can access a split BCV as an independent volume for business continuance operations
Full volume copy

(Diagram: STD and BCV volume pairs shown first in the Established state, then Split)
1. Establish BCV
2. Synchronize
3. Split BCV
4. Execute BC operations using the BCV

TimeFinder uses Business Continuance Volumes (BCVs) to create copies of a volume for parallel processing. Basic TimeFinder operations include:

Establish: creates a mirror relationship between any standard volume and a BCV. The BCV assumes the next available mirror position of the source volume. While a BCV is established, it is hidden from view and cannot be accessed.

Synchronize: copies data from the source to the BCV. Synchronization takes place while production continues on the source volume. TimeFinder supports incremental establish by default, where only the data changed since the last establish is synchronized.

Split: allows the BCV to be accessed as an independent volume for parallel processing.

Restore: allows the BCV to be established as a mirror to either the original source or a different volume, with the data on the BCV synchronized back.
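The Establish/Split lifecycle above can be summarized as a small state machine. This is an illustrative sketch, not the SYMCLI command set, and the state names are assumptions:

```python
# Allowed operations per BCV state, following the notes above
VALID_OPS = {
    "unattached": {"establish"},
    "established": {"split"},              # hidden, synchronizing mirror
    "split": {"establish", "restore"},     # independent, host-accessible
}
NEXT_STATE = {"establish": "established",
              "split": "split",
              "restore": "established"}

def transition(state: str, op: str) -> str:
    if op not in VALID_OPS[state]:
        raise ValueError(f"cannot {op} a BCV in state {state!r}")
    return NEXT_STATE[op]

state = "unattached"
for op in ("establish", "split", "establish", "split"):   # incremental cycle
    state = transition(state, op)
print(state)   # split
```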


EMC SNAP Introduction


EMC SNAP uses Snapshot techniques to create logical point-in-time images of a source volume
A Snapshot is a virtual abstraction of a volume
Multiple Snapshots can be created from the same source
Snapshots are available immediately
Volume A Production view of volume

EMC SNAP does a Copy-on-Write


Writes to the production volume are first copied to the Save Area
Uses only a fraction of the source volume's capacity (~20-30%)

Snapshot of Volume A (VDEV) Save Area

Snapshot view of volume

Snapshots can be used for both read and write processing


Reads of unchanged data are served from the production volume
Changed data is read from the Save Area
Writes to the Snapshot are saved in the Save Area

New writes copied to Save Area


EMC Snap creates space-saving, logical point-in-time images, or snapshots. The snapshots are not full copies of the data; they are logical images of the original information as of the time the snapshot was created. A snapshot is simply a view into the data: a set of pointers to the source volume's data tracks is created instantly upon activation of the snapshot. This set of pointers is addressed as a logical volume and made accessible to a secondary host, which uses the point-in-time image of the underlying data.
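The copy-on-write behavior described above can be modeled in a few lines. This is a minimal sketch of the technique, not the EMC Snap API; tracks are modeled as dictionary entries:

```python
class Snapshot:
    """Minimal copy-on-write sketch: reads of unchanged tracks come from
    the source volume; tracks overwritten on the source after activation
    are served from the save area."""

    def __init__(self, source: dict):
        self.source = source          # track number -> track data
        self.save_area = {}

    def source_write(self, track, data):
        # copy-on-first-write: preserve the old track before overwriting
        if track not in self.save_area:
            self.save_area[track] = self.source[track]
        self.source[track] = data

    def snap_write(self, track, data):
        # writes to the snapshot view also land in the save area
        self.save_area[track] = data

    def read(self, track):
        return self.save_area.get(track, self.source[track])

vol = {0: "old0", 1: "old1"}
snap = Snapshot(vol)
snap.source_write(0, "new0")
print(vol[0], snap.read(0), snap.read(1))   # new0 old0 old1
```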


SRDF Introduction
Symmetrix Remote Data Facility (SRDF)
Maintains a real-time or near real-time copy of data at a remote location
Similar in concept to RAID-1, except the mirror is located in a different Symmetrix
The primary copy is called the Source; the remote copy is called the Target
Link options between the local and remote Symmetrix are based on distance and performance requirements
ESCON
Fibre Channel
Gigabit Ethernet

Three different options to meet recovery objectives

(Diagram: Source Symmetrix replicating to Target Symmetrix over SRDF links)

SRDF is an online, host-independent, mirrored data storage solution that duplicates production site data (the source) to a secondary site (the target). If the production site becomes inoperable, SRDF enables rapid manual failover to the secondary site, allowing critical data to be available to the business operation in minutes. While it is easy to see this as a disaster recovery solution, the remote copy can also be used for business continuance during planned outages, as well as for backups, testing, and decision support applications. EMC offers a complete set of replication solutions to meet a wide range of service level requirements; when implementing a remote replication solution, users must balance application response time, recovery point objectives, and communications and infrastructure costs.

SRDF in synchronous mode provides the highest level of data integrity with minimal impact to the application. Performance depends on the distance between the source and target Symmetrix: the greater the distance, the more overhead to complete each write operation. The communications connections must be sized to handle peak processing workloads without impacting performance.

SRDF/AR (formerly SAR) has no impact on host server performance but requires BCVs (Business Continuance Volumes) so that point-in-time copies can be periodically split off from the source. Because the copy is periodic, the communication bandwidth requirement is lower than for synchronous operation. The target copy, however, is no longer synchronous with the source, meaning that in the event of a source failure, the data on the target is current only to the last resync from the BCV.

SRDF/A bridges the gap between SRDF and SRDF/AR by balancing response time, infrastructure costs, communication requirements, and recovery point objectives. SRDF/A has no impact on the host servers and requires only communication links sized for the average I/O workload (vs. peak for synchronous SRDF), though it needs some additional cache to operate, adding slightly to infrastructure costs. SRDF/A provides an improved recovery point objective (vs. SRDF/AR) and allows customers to deploy remote replication over extended distances.
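The distance sensitivity of synchronous mode can be put in rough numbers. This is a back-of-envelope sketch: the ~5 microseconds per km one-way propagation delay in fiber is an assumed figure, and real links add equipment and protocol latency on top:

```python
def sync_rtt_ms(distance_km: float, us_per_km_one_way: float = 5.0) -> float:
    """Approximate propagation round trip added to every synchronous write."""
    return 2 * distance_km * us_per_km_one_way / 1000.0

# Roughly 1 ms of added write latency per 100 km of separation
for km in (10, 100, 1000):
    print(km, "km ->", sync_rtt_ms(km), "ms")
```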


Physical and Logical Volumes


Symmetrix physical drives are divided into Hyper Volumes (disk slices)
One or more Hyper Volumes comprise a Symmetrix Logical Volume
Mirroring would require 2 Hyper Volumes for every 1 Symmetrix Logical Volume (M1 & M2)

Symmetrix Logical Volumes are made available to hosts through Channel Directors
The BIN file must map a Logical Volume's channel address to the Channel Director / processor / port in order for it to be discovered and used by hosts
(Diagram: Channel Directors, Cache, and Disk Directors in front of physical disk drives partitioned into logical volumes)
Host sees Symmetrix Logical Volumes as if they were entire physical drives


From the Symmetrix perspective, physical disk drives are partitioned into disk slices called Hyper Volumes. A Hyper Volume can be used as an unprotected Symmetrix Logical Volume, a mirror of a Symmetrix Logical Volume, a Business Continuance Volume (BCV), a parity volume for RAID-S, a remote mirror using SRDF, a Disk Reallocation Volume (DRV), and so on. Within the Symmetrix BIN file, the emulation type, size in cylinders, count, number of mirrors, and special flags (such as BCV, DRV, and Dynamic Spare) are defined, and each Symmetrix Logical Volume is assigned a hexadecimal identifier. The BIN file also tells the Channel Director which volumes are presented on which port and the address used to access each. When more than one host is connected to a port, LUN masking (using Volume Logix) further restricts which host has access to which volume.

From the host's perspective, when device discovery occurs, the information returned to the OS appears to reference a series of SCSI disk drives. To an Open Systems host, the Symmetrix looks like JBOD (Just a Bunch Of Disks): the host is unaware of the BIN file, RAID protection, remote mirroring, BCV mirrors, dynamic sparing, etc. In other words, the host thinks it is getting an entire physical drive.


Configuration Considerations
Understand the applications on the host connected to the Symmetrix system
Capacity requirements
I/O rates
Read/write ratios
Locality of reference
Sequential or random access

Understand special host considerations
  Maximum drive and file system sizes supported
  Consider Logical Volume Manager (LVM) on the host and the use of data striping
  Device sharing requirements (clustering)

Determine volume size and appropriate level of protection
  Symmetrix provides flexibility for different sizes and protection within a system
  Standard sizes make it easier to manage

Determine connectivity requirements
  Number of channels available from each host

Distribute workloads from the busiest to the least busy



The best advice for configuring a Symmetrix storage subsystem for maximum performance is Go wide before deep!. This means the best possible performance will only be achieved if all the resources within the system are being equally utilized. This is much easier said than done, but through careful planning, you will have a better chance for success. Planning starts with understanding the host and application requirements.


Symmetrix Availability: Phone-Home and Dial-In


EMC Phone-Home capability
Service Processor connects to an external modem (can fit in existing telco racks)
Communicates error and diagnostic information to EMC Customer Service
Provides problem resolution

Dial-In capability
Product Support Engineer (PSE) or Customer Engineer (CE) dials in
Allows full control of the service processor through a proprietary and secure interface
Allows for proactive and reactive maintenance
Can be disabled by the customer through the external modem


Every Symmetrix unit has an integrated service processor that continuously monitors the Symmetrix environment. The service processor communicates with the EMC Customer Support Center through a customer-supplied, direct phone line, and automatically dials the Customer Support Center whenever the Symmetrix detects a component failure or environmental violation. An EMC Product Support Engineer at the Customer Support Center can also run diagnostics remotely through the service processor to determine the source of a problem and potentially resolve it before it becomes critical. Most call-home incidents are software-related and can be resolved remotely by dialing back into the Symmetrix. When required, a Customer Engineer is dispatched to the Symmetrix to replace hardware or perform other maintenance.


Symmetrix Availability: Hardware Redundancy


Symmetrix Architecture based on the concept of N + 1 redundancy
One more component than is necessary for operation

Continuous operation, even if failures occur to any major component:


Global Memory Director boards
Channel Director boards
Disk Director boards
Disk drives
Communications Control Module
Cooling fan modules
Power modules
Batteries
Service Processor

Non-disruptive Microcode Upgrades and Loads



The Symmetrix undergoes the most rigorous pre-ship testing in the industry. Component, environmental, and operational testing all but guarantee the elimination of defective or substandard components.

Non-disruptive microcode upgrade and load capabilities are currently available for the Symmetrix, which takes advantage of its multiprocessing and redundant architecture to allow hot loading of similar microcode platforms. Within a code family, release levels can be loaded non-disruptively, without interruption to user access. During a non-disruptive microcode upgrade, the Product Support Engineer downloads the new microcode to the service processor. The new microcode loads into the EEPROM areas within the channel and disk directors and remains idle until requested for hot load into control storage. The process requires no manual intervention on the customer's part: all channel and disk directors remain online to the host processor, maintaining application access. The Symmetrix loads executable code at selected windows of opportunity within each director hardware resource until all directors have been loaded. Once the executable code is loaded, internal processing is synchronized and the new code becomes operational. This capability can be used to upgrade, or to back down from, a release level within a family. NOTE: During a non-disruptive microcode load within a code family, the full microcode is loaded, consisting of the same base code plus additional patches that reside in the patch area.


Advanced Availability: PowerPath


(Diagram: two Channel Directors, each with four processors, connected to cache)

PowerPath from EMC is host-based software that supports multiple paths to a Symmetrix Volume
Open Systems only (not needed for OS/390)
GUI or CLI management capabilities


Symmetrix is configured so that volumes can be accessed through multiple directors/ports
Eliminates the HBA, cable, switch, and director as single points of failure
Load balancing across paths also improves performance

While Channel Directors are redundant, it is important to remember that there is no automatic failover on the front end. EMC PowerPath, along with properly architected connectivity from hosts to storage, ensures continuous availability on the front end. PowerPath is Open Systems, host-based software that allows UNIX and Windows hosts to have multiple paths to the same Symmetrix Logical Volume (a disk from the host's perspective). For the highest availability, the physical connections from the HBAs should be to separate channel directors located on different internal system busses; the easiest way to achieve this is to ensure that one Channel Director is odd numbered and one is even numbered. This is not an issue with the DMX, in which all directors have a direct path to cache. An important note: the more paths that exist to one Symmetrix Logical Volume, the more SCSI addresses are consumed within the Symmetrix. Though PowerPath can accommodate up to 32 paths to one Logical Volume, this could quickly exhaust available addresses. For example, with 4 Symmetrix ports and 100 volumes, it would be impossible to present all 100 volumes on all 4 ports (paths) because of the 256-device maximum on any one FA port.


DMX: Dual-ported Disk and Redundant Directors


Directors are always configured in pairs to facilitate secondary paths to drives
Each disk module has two fully independent Fibre Channel ports
Each drive port connects to its Director by a separate loop
Each port connects to a different Director in the Director pair
Star-hub topology
Port bypass cards prevent a drive failure or replacement from affecting the other drives on the loop
(Diagram: Disk Director 1 and Disk Director 16, each drive loop having a primary (P) and a secondary (S) connection)

Directors have four primary loops for normal drive communication, and four secondary loops to provide an alternate path if the other director fails
(Diagram legend: P = primary connection to the drive; S = secondary connection for redundancy)

The Symmetrix DMX back end employs an arbitrated loop design and dual-ported disk drives. Each drive connects to two Disk Directors through separate Fibre Channel loops. The loops are configured in a star-hub topology with gated hub ports and bypass switches that allow individual Fibre Channel disk drives to be dynamically inserted or removed.


Symm 5: Dual-Initiator Disk Director


Disk Directors are installed in pairs to facilitate secondary paths to drives
In the unlikely event of a disk director processor failure, the adjacent director continues servicing the attached drives through the secondary path
In this example, DA 1 processor b would see ports C & D of DA 2 processor b as its A & B ports in a failover scenario

(Diagram: DA 1 and DA 2 across the midplane, each with processor b and ports C and D)
Protects against DA processor card failure
Physical drives are not dual-ported but are connected via a dual-initiator SCSI bus
Volumes are typically mirrored across directors

(Legend: solid line = primary path; dotted line = secondary path)

Symm 4 and 5 architectures utilize a dual-initiator back-end architecture that ensures continuous availability of data in the unlikely event of a Disk Director failure. This feature works by having two disk directors shadow each other's functions: each disk director is capable of servicing any or all of the disk devices of the disk director it is paired with. Under normal conditions, each disk director services its own disk devices. If Symmetrix detects a disk director hardware failure, Symmetrix calls home but continues to read from or write to the disk devices through the director's partner. When the source of the failure is corrected, Symmetrix returns the I/O servicing of the two disk directors to their normal state. Note: On the 4.x family, dual-initiator operation is achieved by physically connecting one disk director's port card to the port card of the adjacent disk director.
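The takeover behavior described above can be sketched as a small routing function: each director serves its own drives until it fails, at which point its paired partner takes over. This is an illustrative sketch; the director names and the function itself are hypothetical, not Enginuity internals.

```python
# Sketch of dual-initiator takeover: which director services a drive?
def serving_director(drive_owner: str, pairs: dict[str, str], failed: set[str]) -> str:
    """Return the director currently servicing drives owned by drive_owner."""
    if drive_owner not in failed:
        return drive_owner           # normal case: owner serves its own drives
    partner = pairs[drive_owner]
    if partner in failed:
        raise RuntimeError("both directors in the pair are down")
    return partner                   # failover: partner shadows the owner

pairs = {"DA1": "DA2", "DA2": "DA1"}
print(serving_director("DA1", pairs, failed=set()))    # DA1
print(serving_director("DA1", pairs, failed={"DA1"}))  # DA2
```

When the failed director is repaired, removing it from the failed set restores the normal mapping, mirroring the "returns ... to their normal state" behavior in the text.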


Advanced Availability: Power Subsystem


Each Symmetrix has three power supplies and redundant batteries
Symmetrix can connect to two external power sources (primary / auxiliary)
Three AC/DC and three DC/DC power supply modules operate in a redundant parallel configuration
While one battery acts as the primary standby, the other acts as the secondary standby (they periodically switch roles)

Power modules and batteries are FRUs and hot swappable
Batteries are periodically load tested to ensure their availability in the event of a main power system failure
Batteries power cache and all disks within the ICDA

Upon detection of a main power failure, the Symm continues to accept I/O from the host environment for 90 seconds
If power is not re-established, the Symm stops accepting I/O, destages all write-pending data to its actual location on disk, waits for the battery timer to run down, and begins the graceful shutdown process (spinning down the drives and retracting the heads)
If power returns before the battery timer runs down, the Symm is immediately available to hosts (no IML required)


Power Subsystem: The Symmetrix has a modular power subsystem featuring a redundant architecture that facilitates field replacement without interruption. The Symmetrix power subsystem connects to two dedicated or isolated AC power lines. If AC power fails on one AC line, the power subsystem automatically switches to the other AC line. Three AC/DC power supply modules operate in a parallel configuration; these modules provide 56V power for the DC/DC power distribution system. If any single AC/DC power supply module fails, the remaining power supplies continue to share the load. The DC/DC modules provide 5V and 12V power to the various components in the Symmetrix unit.

System Battery Backup: The Symmetrix backup battery subsystem maintains power to the entire system if AC power is lost. The backup battery subsystem allows Symmetrix to remain online to the host system for three minutes in the event of an AC power loss, allowing the directors to flush cache write data to the disk devices. Symmetrix continually recharges the battery subsystem whenever it is under AC power. When a power failure occurs, power switches immediately to the backup battery and Symmetrix continues to operate normally. When the battery timer window elapses, Symmetrix presents a busy status to prevent the attached hosts from initiating any new I/O, destages any write data still in cache to disk, spins down the disk devices, retracts the heads, and powers down.

Symmetrix Emergency Power Off: The Symmetrix emergency power off sequence allows 20 seconds to destage pending write data. When the EPO switch is set to off, Symmetrix immediately switches to battery backup and initiates writes of cache data. Data directed to mirrored pairs is written to only one device: the first available mirror device receives the data, and the other mirror device's status is set to invalid. Data directed to non-mirrored volumes is written to the first available spare area on any device available for write. The director records that there are pending write operations to complete, and stores the location of all data that has been temporarily redirected. When power is restored, all data is written to its proper volume and mirrored pairs are re-established as part of the initial load sequence.
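The power-loss decision flow on the slide can be summarized as a tiny state function. A minimal sketch, assuming the 90-second ride-through stated on the slide; the function name, outcome labels, and the example battery-timer value are illustrative, not Enginuity behavior codes.

```python
# Sketch of the power-loss sequence: what does the array do for an
# outage of a given length, relative to the battery timer?
RIDE_THROUGH_SECONDS = 90  # host I/O continues on battery for this long

def power_loss_outcome(outage_seconds: float, battery_timer_seconds: float) -> str:
    if outage_seconds <= RIDE_THROUGH_SECONDS:
        # Battery carries the load; hosts keep doing I/O throughout.
        return "continue-io"
    if outage_seconds <= battery_timer_seconds:
        # Power returned before the timer ran down: immediately
        # available to hosts again, no IML required.
        return "resume-no-iml"
    # Timer elapsed: stop accepting I/O, destage write-pending data,
    # spin down the drives, and retract the heads.
    return "graceful-shutdown"

print(power_loss_outcome(30, 300))   # continue-io
print(power_loss_outcome(120, 300))  # resume-no-iml
print(power_loss_outcome(600, 300))  # graceful-shutdown
```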


Advanced Availability: Cache Protection


Why is cache not mirrored? This advanced method of cache protection allows for more usable cache, for optimal I/O performance, and has been proven effective by the Symmetrix install base.

Minimum of 2 memory boards per Symmetrix (redundancy)


Each board is connected to and accessed by multiple busses
Each board has redundant power sources

Memory boards are comprised of multiple chips (chips are proactively monitored through I/O activity and cache scrubbing)
Each chip has redundant paths (A and B port ASICs)
Through Enginuity, each chip has a threshold for correctable errors

When the correctable error threshold is reached or a permanent (uncorrectable) error is detected:
Call-home is initiated and the suspect area within cache is fenced off
Any write-pending data is written to disk
The board is non-disruptively replaced by a Customer Engineer

Data written to cache is re-checked against the data residing within the DA or Channel Director buffer to ensure correctness

Proactive Cache Maintenance: EMC makes every effort to provide the most highly reliable hardware in the industry, and provides unique methods for detecting and preventing failures proactively. This sets it apart from all others in providing continuous data integrity and high availability. Symmetrix actively looks for soft errors before they become permanent, and records them. By tracking these soft (temporary) errors during normal operation, Symmetrix can recognize patterns of error activity and predict a hard failure before it occurs. This proactive error tracking can usually prevent an error in cache by generating a call-home for service or by fencing off a failing memory segment before any hard data errors occur.

Cache Scrubbing: All locations in cache are periodically read and rewritten to detect any increase in single-bit errors. This cache scrubbing technique maintains a record of errors for each memory segment. If the predetermined error threshold for single-bit errors is reached, the service processor generates a call-home for immediate attention. Constant cache scrubbing reduces the potential for multi-bit or hard errors. Should a multi-bit error be detected during the scrubbing process, it is considered a permanent error: the segment is immediately fenced (removed from service), the segment's contents are moved to another area in cache, the service processor calls home to alert EMC, and Customer Service immediately dispatches a customer engineer with the appropriate parts for speedy repair. Even where errors are occurring and are easily corrected, if they exceed a preset level, the call-home is executed. This reflects the EMC Engineering philosophy of not accepting any level of probability for errors.

On-line Maintenance: Every Symmetrix is configured with a minimum of two memory boards to allow for on-line hot replacement of a failing memory board.
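The threshold-and-fence logic above can be sketched in a few lines. This is a minimal Python sketch, assuming a hypothetical threshold of three correctable errors per segment; the class and method names are illustrative, not Enginuity code.

```python
# Sketch of proactive error tracking: count correctable (soft) errors
# per cache segment; fence the segment and call home when a preset
# threshold is crossed, or immediately on an uncorrectable error.
class CacheScrubber:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.error_counts: dict[int, int] = {}  # segment -> soft-error count
        self.fenced: set[int] = set()

    def record_error(self, segment: int, correctable: bool) -> str:
        """Record a scrub result for a segment; return the action taken."""
        if segment in self.fenced:
            return "already-fenced"
        if not correctable:
            # Multi-bit / permanent error: fence immediately.
            self.fenced.add(segment)
            return "fence-and-call-home"
        self.error_counts[segment] = self.error_counts.get(segment, 0) + 1
        if self.error_counts[segment] >= self.threshold:
            self.fenced.add(segment)
            return "fence-and-call-home"
        return "logged"

scrubber = CacheScrubber(threshold=3)
print(scrubber.record_error(7, correctable=True))   # logged
print(scrubber.record_error(7, correctable=True))   # logged
print(scrubber.record_error(7, correctable=True))   # fence-and-call-home
print(scrubber.record_error(9, correctable=False))  # fence-and-call-home
```

In the real system the fenced segment's contents would also be relocated elsewhere in cache, which this sketch omits.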


Advanced Availability: Cache Protection


Cache slots are protected using advanced error detection and correction logic along with data interleaving
ECC is employed by every director to allow for single-bit and non-consecutive double-bit error detection and correction
Data is sent between Directors and Cache as a 72 bit memory word (64 bits of data + 8 bits of parity)
(Diagram: each director reaches cache through redundant A and B Port ASICs; each path carries Data 64+8, Command 10+1, and Address 32+1 bits, supervised by a maintenance processor)
The Port ASIC on the memory board creates an 80-bit package (64 bits of data + 16 bits of parity) from the incoming memory word
These 80 bits are interleaved among 20 different SDRAM chips (a memory bank)
LRC (longitudinal redundancy check) is also employed, XORing accumulated 4 KB sectors within a region

These factors enable consecutive single-nibble (4-bit) error correction and double-nibble error detection
The result is the capability to withstand the failure of an entire SDRAM chip

Symmetrix assures the highest level of data integrity by checking data validity at the various levels of data transfer in and out of cache.

Byte-Level Parity Checking: All data and control paths have parity generating and checking circuitry that verifies hardware integrity at the byte or word level. All data and command words passed on the system bus, and within each director and global memory board, include parity bits used to check integrity at each stage of the data transfer.

Error Checking and Correction (ECC): The directors detect and correct single-bit and non-consecutive double-bit errors, and report uncorrectable errors of 3 bits or more.

Sector-Level Longitudinal Redundancy Code (LRC): The LRC calculation further assures data integrity. The check bytes are the XOR (exclusive OR) value of the accumulated bytes in a 4 KB sector. LRC checking can detect both data errors and wrong-block-access problems.

Nibble-Level Interleaving: Data and storage locations are spread across multiple components to improve error detection and recovery. For example, each memory word and its associated ECC (80 bits) are stored in 20 separate DRAM chips. The failure of a single memory chip, the most common failure, is detected as a correctable error; 4 consecutive bits (a nibble) can be rebuilt using the remaining healthy chips and the associated ECC.
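The LRC calculation lends itself to a short worked example: the check byte is the XOR of every byte in the sector, so any single-byte corruption changes the recomputed value. This sketch uses a 16-byte stand-in for the 4 KB sector; the function name is illustrative.

```python
# Sketch of a sector-level longitudinal redundancy check (LRC):
# XOR-accumulate all bytes into one check byte.
from functools import reduce

def lrc(data: bytes) -> int:
    """XOR all bytes together; a clean sector plus its check byte XORs to 0."""
    return reduce(lambda acc, b: acc ^ b, data, 0)

sector = bytes(range(16))  # tiny stand-in for a 4 KB sector
check = lrc(sector)

# A clean read: data plus its check byte XORs out to zero.
print(lrc(sector + bytes([check])) == 0)   # True

# Flip one byte and the LRC no longer verifies.
corrupted = bytes([sector[0] ^ 0xFF]) + sector[1:]
print(lrc(corrupted + bytes([check])) == 0)  # False
```

Note that XOR-based LRC catches odd numbers of flipped bits in a byte column but not all multi-byte corruptions, which is why it is layered with parity and ECC rather than used alone.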


Symmetrix DMX Series

Rows: Packaging | Drives | Capacity (raw) | Capacity (usable) | Drive channels | Cache Directors | Maximum cache | Connectivity (combinations may be limited by board slots)

DMX800: Modular | 60 / 120 | 8.75 / 17.5 TB | 7.6 / 15.3 TB (parity 7+1) | 8 / 16 x 2 Gb Fibre Channel | 2 | 32 GB | 8/16 x 2 Gb FC, 4 x 2 Gb FICON, 4 x GigE SRDF, 4 x GigE iSCSI
DMX1000: Integrated | 144 | 21 TB | 18.4 TB (parity 7+1) | 16 x 2 Gb Fibre Channel | 24 | 64 GB | 48 x 2 Gb FC, 48 x ESCON, 24 x 2 Gb FICON, 8 x GigE SRDF, 24 x GigE iSCSI
DMX2000: Integrated | 288 | 42 TB | 36.8 TB (parity 7+1) | 32 x 2 Gb Fibre Channel | 48 | 128 GB | 96 x 2 Gb FC, 96 x ESCON, 48 x 2 Gb FICON, 8 x GigE SRDF, 48 x GigE iSCSI
DMX3000: Integrated | 576 | 84 TB | 73.5 TB (parity 7+1) | 64 x 2 Gb Fibre Channel | 48 | 128 GB | 64 x 2 Gb FC, 64 x ESCON, 32 x 2 Gb FICON, 8 x GigE SRDF, 32 x GigE iSCSI

These are the features of the DMX series.


Symmetrix Foundations Summary


Symmetrix basic architecture is comprised of three functional areas (Front End, Back End, and Shared Global Memory), connected by four internal system busses
Hosts connect to Symmetrix using SCSI, Fibre Channel, ESCON, FICON, and today, iSCSI
All I/O must be serviced through cache (read hit, read miss, fast write, delayed write)
Symmetrix physical disk drives are divided into Hyper Volumes, which comprise Symmetrix Logical Volumes that are presented to the host environment as if they were entire physical drives
Mirroring, Parity RAID, SRDF, and Dynamic Sparing are all media protection options available on Symmetrix
Redundancy in the hardware design and intelligence through Enginuity allow Symmetrix to provide the highest levels of data availability

These are some of the main features of the Symmetrix. Please take a moment to read them.


Course Summary
Key points covered in this course:
Draw and describe the basic architecture of a Symmetrix Integrated Cached Disk Array (ICDA)
Write a detailed list of host connectivity options for Symmetrix
Explain how Symmetrix functionally handles I/O requests from the host environment
Illustrate the relationship between Symmetrix physical disk drives and Symmetrix Logical Volumes
Describe the media protection options available on the Symmetrix
Referencing a diagram, explain some of the high availability features of Symmetrix and how this potentially impacts data availability
Describe the front-end, back-end, cache, and physical drive configurations of various Symmetrix models

These are the main points covered in this training. Please take a moment to read them.


Enginuity 5670+ Update


Updates have been made to this course based on Enginuity code 5670+. This section includes new features supported by this code update.


Update Objectives
Upon completing this update, you will be able to list:
Enginuity 5670+ Management Features
Enginuity 5670+ Business Continuity Features
Enginuity 5670+ Performance Features


Upon completion of this update, you will be able to list the features supported by Enginuity 5670+.


Management Features
5670+ Management Features
End User Configuration
User control of volumes and type

Symm Purge
Secure deletion method

Logical Volumes
Increased number of hypers

Volume Expansion
Striped meta expansion


User Configuration - Enginuity v5670+ allows users to un-map CKD volumes, delete CKD volumes, or convert CKD volumes to FBA. These user configuration controls simplify the task of reusing a Symmetrix by not requiring an EMC resource to modify the bin file.

Symm Purge - provides customers a secure method of deleting (electronic shredding) sensitive data. This simplifies the reuse of drive assets.

Logical Volumes - v5670+ supports an increased number of hypers per spindle. The number of hypers depends on the protection scheme.

Volume Expansion - Previous microcode versions supported only the expansion of concatenated meta volumes. v5670+ now supports the expansion of both striped and concatenated meta volumes.


Business Continuity Features


5670+ Business Continuity Features
SRDF/A
multi-session support

Protected Restore
Enhanced restore features

SNAP Persistence
Preserves snap session


SRDF/A - currently (v5670) SRDF/A supports only a single session. With v5670+ code, support is available for multi-session SRDF/A data replication. Multi-session uses host control (Mainframe only); cycle switching is synchronized between the single-session SRDF/A Symmetrix pairs.

Protected Restore - v5670+ provides Protected Restore features. While the restore is in progress, read-miss data comes from the BCV volume, writes to the Standard volume do not propagate to the BCV volume, and the original Standard-to-BCV volume relationship is maintained.

SNAP Persistence - v5670+ allows a protected snap restore and preserves the virtual snap session when the restore terminates.


Performance Features
5670+ Performance Features
RAID 5
Either (3+1) or (7+1) configurations in the same system
Both Parity RAID and RAID 5 can exist in the same system on the same disks
SRDF / BCV protection

Optimizer for RAID 5


Support for swapping individual members
No support for Parity RAID


RAID 5 is available in two configurations: either 3 data drives and 1 parity drive, or 7 data drives and 1 parity drive. Current limitations include: RAID 5 3+1 and 7+1 configurations cannot exist in the same frame. The same restrictions as for Parity RAID must be observed (mixing Parity RAID 3+1 and 7+1 in the same frame), as must any combination of 3+1 and 7+1 Parity RAID and RAID 5 configurations. For example, Parity RAID (3+1) is not supported with RAID 5 (7+1).

A single Parity RAID protection scheme can be configured within a frame with any combination of SRDF, BCV, and mirroring protection. A single RAID 5 protection scheme can be configured within a frame with any combination of SRDF, BCV, and mirroring protection. A single Parity RAID protection scheme and RAID 5 of the same scheme can be configured within a frame with any combination of SRDF, BCV, and mirroring protection (for example, Parity RAID 3+1 is supported with RAID 5 3+1).

Optimizer - v5670+ provides Optimizer support for RAID 5. The microcode supports swapping individual members of a RAID 5 group instead of swapping the entire RAID 5 group. Optimizer does not support Parity RAID.
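The XOR parity underlying these RAID 5 schemes can be shown in a few lines: parity is the XOR of the data members, so any single lost member can be rebuilt from the survivors. A minimal sketch, assuming tiny 4-byte members in a 3+1 group; real implementations rotate parity across members and stripes, which is omitted here.

```python
# Sketch of RAID 5 (3+1) XOR parity: compute parity from three data
# members, then rebuild a lost member from the survivors plus parity.
def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR same-sized blocks together byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]  # three data members of a 3+1 group
parity = xor_blocks(data)

# Lose member 1; rebuild it from the remaining members plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
print(rebuilt == data[1])  # True
```

The same XOR relation is why only one member per group may fail: with two members missing, the single parity equation no longer determines both unknowns.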


Summary
In this update, the following key points were discussed:
Enginuity 5670+ Management Features
Enginuity 5670+ Business Continuity Features
Enginuity 5670+ Performance Features

For additional information: http://powerlink.emc.com


New Enginuity 5670+ features were covered in this Symmetrix Foundations module update. For additional information refer to http://powerlink.emc.com.



Thank you for your attention. This ends our Symmetrix Foundations training.

