
The VMAX3 operating environment is HYPERMAX OS 5977.

%% It is factory pre-configured
%% 100% virtually provisioned
%% Can be configured as all-Flash
%% Rapidly provisions storage
%% Set Service Level Objectives

----------
VMAX 100K
----------

## 1 to 2 Engines
## 1440 Drives

----------
VMAX 200K
----------

## 1 to 4 Engines
## 2880 Drives

----------
VMAX 400K
----------

## 1 to 8 Engines
## 5760 Drives

Features of VMAX3:

** Low latency
** High IOPS
** Can be configured all-Flash
** Storage can be rapidly provisioned with the desired Service Level Objectives
** Management simplicity
** Massive scalability
** EMC Solutions Enabler 8.0 and Unisphere for VMAX 8.0 are used to control VMAX3 arrays

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
VMAX 100K, 200K, 400K
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Feature                     Description
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Maximum Drives Per Engine   @ 720 x 2.5"
                            @ 360 x 3.5"
-----------------------------------------------------------------------------------
Drive Options               @ Hybrid (mixed drive types)
                            @ All-Flash
-----------------------------------------------------------------------------------
DAE Mixing                  @ 60-drive and 120-drive DAEs behind an engine
                            @ Single increments
-----------------------------------------------------------------------------------
Power Options               @ Three-phase Delta (50 amp) or WYE (32 amp)
                            @ Single-phase (32 amp)
-----------------------------------------------------------------------------------
Dispersion                  @ System Bays 2-8, up to 25 meters from System Bay 1
-----------------------------------------------------------------------------------
Vault                       @ Vault to Flash in the engine
-----------------------------------------------------------------------------------
Racking Options             @ Single Engine
                            @ Dual Engine
                            @ Third-party racking
-----------------------------------------------------------------------------------
Service Access              @ Integrated service processor in System Bay 1
                              # Management Module Control Station (MMCS)
                            @ Service tray on additional system bays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                     VMAX 100K               VMAX 200K               VMAX 400K
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Engines              1-2                     1-4                     1-8
-------------------------------------------------------------------------------------------------------
Cache/Engine         512 GB or 1 TB          512 GB, 1 TB, or 2 TB   512 GB, 1 TB, or 2 TB
-------------------------------------------------------------------------------------------------------
Engine Type          2.1 GHz, 24 cores       2.6 GHz, 32 cores       2.7 GHz, 48 cores
-------------------------------------------------------------------------------------------------------
Max 2.5" Drives      1440                    2880                    5760
-------------------------------------------------------------------------------------------------------
Max 3.5" Drives      720                     1440                    2880
-------------------------------------------------------------------------------------------------------
Max Usable Capacity  0.5 PBu                 2.1 PBu                 4.0 PBu
-------------------------------------------------------------------------------------------------------
Max FE Ports         64                      128                     256
-------------------------------------------------------------------------------------------------------
InfiniBand Fabric    Dual 12-port switches   Dual 12-port switches   Dual 18-port switches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

VMAX3 - Dynamic Virtual Matrix

@ Enables hundreds of CPU cores to be pooled together and allocated on demand
  to meet the performance requirements of dynamic workloads.
@ This is the first and only Dynamic Virtual Matrix in the world today.
@ It is architected for agility and efficiency at scale.
@ Resources are dynamically apportioned to host applications, data services,
  and storage pools to meet application service levels.
@ This enables the system to automatically respond to changing workloads and
  optimize itself to deliver the best performance available from the current
  hardware.
@ The Dynamic Virtual Matrix provides a fully redundant architecture along with
  fully shared resources within a dual-controller node and across multiple
  controllers.
@ The dynamic load distribution architecture is essentially the basis of the
  VMAX3 operating software; it provides a truly scalable multi-controller
  architecture that scales and manages from 2 fully redundant controllers up to
  16 fully redundant controllers, all sharing I/O processing and cache
  resources.

There are 3 core allocation policies:

@ Balanced (default)
@ Front-End
@ Back-End

-----------------------------------------------------------------------------------
The load can be dynamically redistributed over time. Currently this is not
user-configurable through the software; an EMC service engineer must be engaged
to make this change.
-----------------------------------------------------------------------------------

VMAX3 KEY FEATURES

## 100% virtually provisioned.
## Arrays are shipped pre-configured.
   @@ Data Devices, Data Pools, Storage Resource Pools, and Service Level
      Objectives.
## FAST (FAST is always enabled on VMAX3)
   @@ Enables Service Level Objective based provisioning.
   @@ All host-related data is managed by FAST.
## Replication
   @@ TimeFinder SnapVX
   @@ SRDF (extended SRDF features)
   @@ ProtectPoint - integration with EMC Data Domain
## eNAS
   @@ Embedded NAS Data Services on virtual Data Movers and Control Stations.

VMAX3 CONFIGURATION TOOLS

@@ Initial Configuration
   -- Configuration is done at the factory.
   -- SymmWin and Simplified SymmWin
      ~ Runs on the Management Module Control Station (MMCS) (is this like the
        service processor?)
      ~ Access restricted to authorized EMC personnel only.

@@ End-user tools for configuration and management (see the version-check
   sketch below)
   -- Solutions Enabler 8.0.x
   -- Unisphere for VMAX 8.0.x
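
As a quick check of what is installed on the management host, the base SYMCLI
commands below report the Solutions Enabler kit version and the versions of the
individual commands (the output of a real install will vary; this is standard
SYMCLI behaviour, not anything specific to one array):

-----------------------------------------------------
symcli              ( installed SYMCLI kit version )
symcli -v | more    ( versions of the individual SYMCLI commands )
-----------------------------------------------------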

SE (Solutions Enabler) integration with HYPERMAX OS

-----------------------------------------------------------------------------------
      Unisphere for VMAX                          SYMCLI
-----------------------------------------------------------------------------------
                                   SYMAPI
-----------------------------------------------------------------------------------
                                     ^
                                     |
                                     |   syscalls over the channel interconnect
                                     |
                                     v
-----------------------------------------------------------------------------------
      MMCS (Management Module Control Station)          SYMMWIN
-----------------------------------------------------------------------------------
                                 HYPERMAX OS
-----------------------------------------------------------------------------------

@@ The EMC Solutions Enabler APIs are the storage management programming
   interfaces that provide an access mechanism for managing VMAX3 storage
   arrays.
@@ They can be used to develop storage management applications.
@@ SYMCLI resides on a management host and is used to manage, monitor, and
   perform control operations on the storage array.
@@ SYMCLI commands are invoked from the management host's operating system
   command line (shell).
@@ SYMCLI commands are built on top of SYMAPI library functions, which use
   system calls that generate low-level SCSI I/O commands to the storage
   arrays.
@@ Unisphere for VMAX is the GUI; it makes API calls to SYMAPI to access the
   storage array.
@@ SymmWin, running on the VMAX3 MMCS, accesses HYPERMAX OS directly.
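
For orientation, two basic SYMCLI invocations from the management host shell
(the SID is a placeholder; both are standard Solutions Enabler commands):

-----------------------------------------------------------------------
symcfg list                    ( list the arrays visible to this host )
symcfg -sid <sid#> list -v     ( verbose configuration details for one array )
-----------------------------------------------------------------------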

Introduction to Solutions Enabler

## Symmetrix Command Line Interface (SYMCLI)
## Comprehensive command set for managing VMAX3 arrays
   -- Invoked from the host OS command line.
   -- Scripts may provide further integration with the OS and applications.
## Security and access controls
   -- Monitor only
   -- Host-based and user-based controls.

Introduction to Unisphere for VMAX

## VMAX3 Arrays
   -- Service Level based management.
## Performance Analyzer
   -- Installed by default
   -- PostgreSQL
## APIs for automation and provisioning.

Unisphere functionality
## Manage eLicenses, users, and roles
## Storage configuration management
   -- SLO-based provisioning
   -- FAST
## Configure and monitor alerts
## Performance monitoring
   -- Real-time, root cause, and historical (real-time analysis and historical
      trending of VMAX performance data).
      We can also see high-frequency metrics in real time, VMAX3 system heat
      maps, and graphs detailing system performance.
      We can also drill down through the data to investigate issues, monitor
      performance over time, execute scheduled and ongoing reports, and export
      data to a file.
   -- Dashboards
      Users can customise their own dashboards in addition to the
      system-provided ones.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

==========================

VMAX3 Storage Provisioning


==========================

Factory Pre-Configuration

## Disk Groups
   @ A collection of physical drives.
     - Each drive in the Disk Group shares the same performance characteristics,
       determined by the rotational speed and technology of the drives
       (15K/10K/7.2K/Flash) and their capacity.

## Data Pools
   @ Collection of Data Devices (TDATs)
     - Preconfigured in each Disk Group.
     - All TDATs in a given Disk Group have the same RAID protection and a
       fixed size.
   @ All Data Devices in a Disk Group are added to a Data Pool.
     - 1-to-1 relationship between Data Pool and Disk Group.
   @ The performance capability of each Data Pool is known.
     - Based on the drive type, speed, capacity, quantity of drives, and RAID
       protection.

## Storage Resource Pool
   @ Collection of Data Pools.

## Service Level Objectives.

-----------------------------------------------------------------------------------
IMP   Disk Groups, Data Pools, Storage Resource Pools, and Service Level
      Objectives cannot be configured or modified with Solutions Enabler or
      Unisphere. These are created during the configuration process at the
      factory.
-----------------------------------------------------------------------------------
Each Data Pool belongs to exactly one Disk Group; there is a one-to-one
relationship between Disk Groups and Data Pools.
A Disk Group must contain disks of the same disk technology, capacity,
rotational speed, and RAID type.
The performance capability of each Data Pool is known and is based on the drive
type, speed, capacity, quantity of drives, and RAID protection.
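
Although these objects cannot be modified from the host, they can be inspected
with standard Solutions Enabler commands; a minimal sketch (assuming these
commands behave the same way against a VMAX3, with the SID as a placeholder):

-------------------------------------------------------------------------
symdisk -sid <sid#> list -dskgrp_summary   ( summary of the disk groups )
symcfg  -sid <sid#> list -pool -thin       ( the preconfigured data pools )
-------------------------------------------------------------------------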

+++++++++++++++++++++++
Data Pools - Protection
+++++++++++++++++++++++

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Option    Characteristics                                 Protection   Performance             Cost
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RAID 1    @ Writes go to two separate physical drives     Higher       Fastest                 Low
          @ Reads are serviced from a single drive
----------------------------------------------------------------------------------------------------
RAID 5    @ Parity-based protection                       High         Fast read, good write   Lower
          @ Striped data and parity
          @ 3+1 and 7+1
----------------------------------------------------------------------------------------------------
RAID 6    @ Two parity drives; 6+2 and 14+2               Highest      Fast read, fair write   Lower
          @ Data availability is the primary consideration
          @ Performance is a secondary consideration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Write penalties

RAID 1: each host write results in 2 writes on the back end
RAID 5: each host write results in 2 reads and 2 writes on the back end
RAID 6: each host write results in 3 reads and 3 writes on the back end
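
A rough worked example of what these penalties mean for back-end load (the
workload numbers are invented for illustration and cache hits are ignored):
for 1000 host IOPS with a 2:1 read/write mix (~667 reads, ~333 writes):

        RAID 1:  667 reads + 333 x 2 writes              ~ 1333 back-end I/Os
        RAID 5:  667 reads + 333 x (2 reads + 2 writes)  ~ 1999 back-end I/Os
        RAID 6:  667 reads + 333 x (3 reads + 3 writes)  ~ 2665 back-end I/Os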

+++++++++++++++++++++++++++
SRP - Storage Resource Pool
+++++++++++++++++++++++++++

@@ Collection of Data Pools
@@ The factory pre-configuration includes one SRP (which contains all the data
   pools in the array)
@@ Not configurable via Solutions Enabler or Unisphere for VMAX
@@ Multiple SRPs may be configured (by qualified EMC personnel)
   -- If there are multiple SRPs, one of them must be marked as the default

Storage Resource Pool

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  ~~~~~~~~~~~~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~    |
|  | Flash - RAID (3+1) |  | SAS 15K - RAID 1  |  | SAS 7.2K - RAID 6 (14+2)  |    |
|  | Data Pool 0        |  | Data Pool 1       |  | Data Pool 2               |    |
|  ~~~~~~~~~~~~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~    |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

++++++++++++++++++++++++++++++++++++++++++++++++
SLO - Service Level Objective Based Provisioning
++++++++++++++++++++++++++++++++++++++++++++++++

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Service Level Objective                      Storage Groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ Defines the ideal performance operating    @ Can be explicitly associated with an SRP
  range of an application                    @ Can be explicitly associated with an SLO
@ Can be combined with a workload type         and a workload type
@ Preconfigured                                -- further refines the performance objective
                                               -- defines the SG as FAST managed
                                             @ SGs are implicitly associated with the
                                               default SRP and the Optimized SLO
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Service Level Objective (SLO) defines the ideal performance operating range
of an application.
## Each SLO contains an expected maximum response time range.
   The response time is measured from the perspective of the front-end adapter.
## The SLO can be combined with a workload type to further refine the
   performance objective.
## SLOs are pre-defined/pre-configured (they are not configurable using
   Solutions Enabler or Unisphere).

--------------
Storage Groups
--------------

** Storage groups in VMAX3 (with HYPERMAX OS 5977) are similar to the storage
   groups used in previous generations of VMAX arrays.
** A storage group is a logical grouping of devices used for FAST, device
   masking, control, and monitoring.
** In HYPERMAX OS 5977 a storage group can be associated with an SRP.
   -- This allows the devices in the SG to allocate storage from any pool in
      the SRP.
** When an SG is associated with an SLO, it is defined as a FAST-managed
   storage group.

$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
SLO                   Performance Behaviour       Expected Average Response Time
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Diamond               Flash                       0.8 ms
-----------------------------------------------------------------------------------
Platinum              Between Flash & 15K RPM     3.0 ms
-----------------------------------------------------------------------------------
Gold                  15K RPM                     5.0 ms
-----------------------------------------------------------------------------------
Silver                10K RPM                     8.0 ms
-----------------------------------------------------------------------------------
Bronze                7.2K RPM                    14.0 ms
-----------------------------------------------------------------------------------
Optimized (default)   System optimized            N/A
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

++ There are 5 available Service Level Objectives (SLOs), with varying expected
   average response time targets.

@@ The Optimized SLO has no explicit response time target.
@@ The Optimized SLO achieves optimal performance by placing the most active
   data on the highest-performing storage and the least active data on the
   most cost-effective storage.

## Diamond emulates Flash drive performance.
## Platinum emulates performance between Flash and 15K RPM drives.
## Gold emulates performance of 15K RPM drives.
## Silver emulates performance of 10K RPM drives.
## Bronze emulates performance of 7.2K RPM drives.

++ The actual response time of an application associated with an SLO varies
   based on the actual workload. It will depend
   -- on the average I/O size,
   -- the read/write ratio,
   -- and the use of local and remote replication.

-----------------------------------------------------------------------------------
IMP   @@ These SLOs are fixed and cannot be modified.
      @@ The end user can associate the desired SLO with a storage group.
      @@ Please also note that certain SLOs may not be available on an array if
         certain drive types are not configured.
-----------------------------------------------------------------------------------

Ex:
      The Diamond SLO is not available if there are no Flash drives present in
      the array.
      The Bronze SLO is not available if there are no 7.2K RPM drives present
      in the array.

+++++++++++++++++++++++++
AVAILABLE WORKLOAD TYPES
+++++++++++++++++++++++++

@@ There are 4 workload types.
@@ A workload type can be specified with the Diamond, Platinum, Gold, Silver,
   or Bronze SLO to further refine response time expectations.
   IMP Note: a workload type cannot be associated with the Optimized SLO.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Workload type           Description
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OLTP                    Small block I/O workload
-----------------------------------------------------------------------------------
OLTP with Replication   Small block I/O workload with local or remote replication
-----------------------------------------------------------------------------------
DSS                     Large block I/O workload
-----------------------------------------------------------------------------------
DSS with Replication    Large block I/O workload with local or remote replication
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++
AUTO PROVISIONING GROUPS
++++++++++++++++++++++++

@@ Auto-provisioning groups are used to allocate VMAX3 storage to hosts.
@@ VMAX3 arrays present thin devices to hosts.
   -- An open systems host sees VMAX3 thin devices as FBA SCSI disk drives.
@@ VMAX3 thin device size metrics:
   -- Sector = 16 blocks (512-byte blocks) = 8 KB
   -- Track size = 16 sectors = 16 x 8 KB = 128 KB
   -- Cylinder size = 15 tracks = 15 x 128 KB = 1920 KB
   -- Maximum device size = 8947848 cylinders = 16384 GB = 16 TB
   -- Device sizes are typically specified in cylinders.

++ An open systems host sees VMAX3 thin devices as FBA SCSI disk drives.
   -- Standard SCSI commands such as SCSI Inquiry and SCSI Read Capacity return
      low-level physical device data (such as vendor and basic configuration),
      but have limited knowledge of the configuration details on the storage
      system.
   -- Knowledge of VMAX3-specific information such as director configuration,
      cache size, number of devices, mapping of physical to logical, port
      status, flags, etc. requires a different set of tools - this is what
      Solutions Enabler and Unisphere provide.

++ Host I/Os are managed by the HYPERMAX OS environment, which runs on the
   VMAX3 array.
++ VMAX3 thin devices are presented to the host with these emulation
   attributes:
   -- Each device has N cylinders (the number is configurable).
   -- Each cylinder has 15 tracks.
   -- Each device track in FBA (Fixed Block Architecture) is 128 KB
      (256 blocks of 512 bytes each).
++ The maximum thin device size that can be configured on a VMAX3 is 16 TB.
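
A quick worked example using the metrics above (the 2048 GB size is only an
illustration): a 2048 GB thin device is 2048 x 1024 x 1024 KB = 2,147,483,648 KB;
at 1920 KB per cylinder that is 2,147,483,648 / 1920 ~ 1,118,482 cylinders
(rounding up), well under the 8,947,848-cylinder (16 TB) maximum.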

+++++++++++++++++++
STORAGE ALLOCATION
+++++++++++++++++++

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Initiator Group   @ Fibre Channel initiators / HBAs / HBA WWNs
                    - An initiator group can have up to a maximum of 64
                      initiators (WWNs) or 64 child initiator group names.
                    - An initiator group cannot contain a mixture of host
                      initiators (WWNs) and child IG names.
                    - An individual initiator belongs to only one initiator
                      group.
                      -- However, once the initiator is in a group, that group
                         can be a member of another initiator group.
                      -- This feature is called cascaded initiator groups
                         (only one level of cascading is allowed, i.e. one
                         level deep).
                  @ Port flags are set on an initiator group basis, with one
                    set of port flags applying to all the initiators in the
                    group.
                    - FCID lockdown is set on a per-initiator basis.
                      (FCID lockdown stops the threat of WWN spoofing.)
                      (WWN spoofing: an attacker gains access to a storage
                      system in order to access/modify/deny data or metadata.)
-----------------------------------------------------------------------------------
Port Group        @ Front-end ports
                    -- Can contain a maximum of 32 FA ports.
                  @ A port can belong to multiple port groups.
                  @ Ports must have the ACLX flag enabled (before a port is
                    added to a port group, ACLX should have been enabled).

                    ** ACLX - Access Control Logix
                       symmaskdb -sid <sid#> list database -v
                    ** What controls the visibility of the VMAX3 ACLX device to
                       a host? The "Show ACLX device" flag set to Enabled.
-----------------------------------------------------------------------------------
Storage Group     @ VMAX3 thin devices
                  @ A device can belong to more than one storage group.
                  @ Can be associated with an SRP, SLO, and workload type.
                    -- A storage group can contain only devices or only other
                       storage groups; no mixing is permitted.
                    -- A storage group with devices may contain up to 4K VMAX3
                       logical volumes/LUNs.
                    -- A logical volume may belong to more than one storage
                       group.
                    -- There is a limit of 16K storage groups per VMAX3 array.
                    -- A parent storage group can have up to 32 child storage
                       groups.
                    -- One of each type of group is associated to form a
                       masking view.

                    ** A storage group is a logical grouping of up to 4096
                       Symmetrix devices.
                    ** LUN addressing is done automatically via the dynamic LUN
                       feature.
-----------------------------------------------------------------------------------
Masking View      @ One of each type of group is associated together to form a
                    masking view.

                    ** It defines an association between one initiator group,
                       one port group, and one storage group.
                    ** When a masking view is created, the devices in the
                       storage group are mapped to the ports in the port group
                       and masked to the initiators in the initiator group.
-----------------------------------------------------------------------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
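
A minimal sketch of building these objects with symaccess (the group names,
WWN, director:port numbers, and device numbers below are placeholders invented
for illustration):

-----------------------------------------------------------------------------------
symaccess -sid <sid#> create -name myapp_ig -type initiator -wwn 10000000c9aabbcc
symaccess -sid <sid#> create -name myapp_pg -type port -dirport 1D:24,2D:24
symaccess -sid <sid#> create -name myapp_sg -type storage devs 00A1:00A4
symaccess -sid <sid#> create view -name myapp_mv -ig myapp_ig -pg myapp_pg -sg myapp_sg
-----------------------------------------------------------------------------------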

+++++++++++++++++++++++++++++++++
MANAGING STORAGE AND PROVISIONING
+++++++++++++++++++++++++++++++++

@@ This can be done using Unisphere for VMAX or SYMCLI
   -- Unisphere for VMAX - various wizards and tasks
   -- SYMCLI
      * symconfigure
      * symaccess
      * symsg

@@ Perform configuration and storage provisioning (see the sketch below)
   -- Thin device management - creation, deletion, attribute modification
   -- Front-end port management - attributes, association
   -- Array metrics
   -- Manage auto-provisioning groups - storage provisioning
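
A brief, hedged sketch of the SYMCLI side of this (the device count, size, and
group name are placeholders; the same operations are available through the
Unisphere wizards):

-----------------------------------------------------------------------------------
symconfigure -sid <sid#> -cmd "create dev count=4, size=100 GB, emulation=fba,
     config=TDEV;" commit
symsg -sid <sid#> create myapp_sg
symsg -sid <sid#> -sg myapp_sg add dev 00A1
-----------------------------------------------------------------------------------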

++++++++++++++++++++++++++
configuration Architecture
++++++++++++++++++++++++++

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   HOST                            LOCAL VMAX3                   REMOTE VMAX3

   SYMCLI      UNISPHERE   ----->  FA (front end)
   SYMAPI                          RA  ---- SRDF link ------->   RA
   SIL                              |                             |
                                    | Ethernet                    | Ethernet
                                    v                             v
                                   MMCS                          MMCS
                                   SYMMWIN /                     SYMMWIN /
                                   SYMMWIN scripts               SYMMWIN scripts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

IMP ## The Configuration Manager architecture allows SymmWin scripts to be run
       on the VMAX3 MMCS.

@@ Configuration change requests are generated either by the symconfigure
   SYMCLI command or by a SYMAPI library call generated by a user making a
   request through the Unisphere for VMAX GUI.
@@ These requests are converted by SYMAPI on the host to VMAX3 syscalls and
   transmitted to the VMAX3 through the channel interconnect.
@@ The VMAX3 front end routes the requests to the MMCS, which invokes SymmWin
   procedures to perform the requested changes on the VMAX3.
@@ In the case of SRDF-connected arrays, configuration requests can be sent to
   the remote array over the SRDF links.

++++++++++++++++++++++++
VMAX3 Gatekeeper Devices
++++++++++++++++++++++++

## Solutions Enabler is an EMC software component used to control the storage
   features of the VMAX3 array.
## It receives user requests via SYMCLI, the GUI, or other means, and generates
   system commands that are transmitted to the VMAX3 array for action.
## Gatekeeper devices are LUNs that act as the target of command requests to
   Enginuity-based functionality.
## These commands arrive in the form of disk I/O requests - the more commands
   that are issued from the host, and the more complex the actions required by
   those commands, the more gatekeepers are required to handle those requests
   in a timely manner.
## When Solutions Enabler successfully obtains a gatekeeper, it locks the
   device and then processes the system commands.
## Once Solutions Enabler has processed the system commands, it closes and
   unlocks the device, freeing it for other processes.
## A gatekeeper is not intended to store data; it is usually configured as a
   small 3-cylinder device (approximately 6 MB).
## Gatekeeper devices should be mapped and masked to a single host only and
   should not be shared across hosts.

IMP ## There is a specific recommendation on the number of gatekeepers required
       for all VMAX3 configurations:
       *** EMC recommends 6 gatekeepers for each HBA.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| %% 3-cylinder thin devices (~6 MB)                                              |
| %% Receive low-level SCSI I/O from SYMCLI/GUI                                   |
| %% Used as the target of SYMCLI/SYMAPI commands                                 |
|    *** Commands are passed through gatekeepers to the VMAX3 for action          |
|    *** Locked during the passing of commands                                    |
|    *** Lots of commands flowing to the VMAX3 from many applications on the      |
|        same host can cause a gatekeeper shortage                                |
| %% Must be accessible from the host executing the commands                      |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++++++++++
CONCURRENT CONFIGURATION SESSIONS ON VMAX3
++++++++++++++++++++++++++++++++++++++++++++

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| @@ Concurrent provisioning                                                  |
|    - Up to four concurrent, non-conflicting configuration change sessions   |
|      *** Changes must not conflict on any of the following:                 |
|          -- Device back-end port                                            |
|          -- Device front-end port                                           |
|          -- Device                                                          |
| @@ The array manages its own device locking                                 |
| @@ A session ID identifies each running configuration session               |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

## VMAX3 allows up to 4 concurrent configuration sessions to run at the same
   time, when they do not conflict with each other.
   This means multiple configuration sessions can run in parallel as long as
   the changes do not include any conflicts on:
   -- the device back-end port,
   -- the device front-end port,
   -- or the device itself.

## The VMAX3 array manages the device locking itself, and each running session
   is identified by a session ID.

++++++++++++++++++++++++++++++++++++++++++++++
CONFIGURATION CHANGES USING UNISPHERE FOR VMAX
++++++++++++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| @@ Multiple ways to invoke configuration changes via Unisphere  |
|    -- Depends on the type of configuration change               |
|    -- Unisphere has many wizards                                |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

## Configuration changes can be invoked from Unisphere for VMAX in many
   different ways.
## The method depends on the type of configuration change.
## A number of wizards are available in Unisphere.
## Configuration tasks can be submitted to a job list.

++++++++++++++++++++++++
Unisphere - SRP Headroom
++++++++++++++++++++++++

@@ The Storage Groups dashboard in Unisphere for VMAX shows all the configured
   Storage Resource Pools and the available headroom for each SLO.
@@ Prior to allocating new storage to a host, it is a good idea to check the
   available headroom.

To navigate to the Storage Groups dashboard, simply click the Storage section
button.

-----------------------
Unisphere - SRP Details
-----------------------

One can also look at the details of the configured Storage Resource Pools to
see the usable, allocated, and free capacity.
To navigate to the Storage Resource Pools, click the Storage Resource Pools
link in the Storage section dropdown.

--------------------
Unisphere - Job List
--------------------

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| @@ List of jobs                                                                |
|    - Yet to be run - can be run on demand or scheduled for later execution     |
|    - Jobs that are running, successfully completed, or failed                  |
| @@ The job list can be accessed by clicking                                    |
|    - the Job List link in the System section dropdown, or                      |
|    - the Job List link in the status bar                                       |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Most of the configuration tasks in Unisphere for VMAX can be added to the job
list for execution at a later time.
The job list shows all the jobs that are yet to be run, jobs that are currently
running, jobs that completed successfully, and jobs that failed.
You can navigate to the job list by clicking the Job List link in the System
section dropdown, or by clicking the Job List link in the status bar.

For example: we can create a job for creating volumes/LUNs, then click the Run
button to create the LUNs now, or click the Schedule button to create them
later.

----------------------------
Configuration changes SYMCLI
----------------------------

## Verify that configuration changes can be made safely:
   - symconfigure verify -sid <sid#>   ...shows whether the Symmetrix is ready
     for configuration changes
## Check usage of the configured Storage Resource Pools:
   - symcfg list -srp -sid <sid#>

## Consider the impact on I/O
   - To make devices Not Ready, use:
     symdev -sid <sid#> not_ready <symdev>
## After allocation/de-allocation of storage to a host, update the host
   operating system environment before attempting I/O.

## Query
   - symconfigure query -sid <sid#>
## Abort
   - A configuration change session can be terminated prematurely using the
     abort command.
   - Premature termination is only possible before the point of no return.
   - symconfigure -sid <sid#> abort -session_id <session-id>
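
Tying these together, a change session typically runs through preview, prepare,
and commit stages (this is the generic symconfigure workflow; the command file
name and its contents are placeholders):

-----------------------------------------------------------------------------------
symconfigure -sid <sid#> -file my_changes.txt preview   ( syntax check only )
symconfigure -sid <sid#> -file my_changes.txt prepare   ( syntax check plus array resource check )
symconfigure -sid <sid#> -file my_changes.txt commit    ( apply the changes )
symconfigure -sid <sid#> query                          ( monitor the running session )
-----------------------------------------------------------------------------------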
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

-------------
FAST with SLO
-------------

One of the major changes with V3 is the way we provision storage. FAST has been
enhanced to work at a more granular level (the 128 KB track level), and a lot
of the internals have been abstracted so that the end user need not be
concerned with the mechanics of the array: they can simply provision capacity
and set a performance expectation, which the array will work to achieve.

In VMAX3, FAST is always on and the majority of the configuration is
pre-configured: the available SLOs are dictated by the disks available in the
array, and the Storage Resource Pools are defined in the bin file.

We are no longer required to create meta devices to support larger devices, and
the SLO model makes provisioning intuitive and easy.

-------------------------
Creation of storage Group
-------------------------

@@ We can create the storage group just as we did for VMAX2.
@@ Assigning an SLO and workload is optional.
@@ If no SLO or workload is specified, FAST will still manage everything, but
   the SLO will be Optimized.
@@ VMAX3 supports 64K (64,000) storage groups (we can create one storage group
   per application).
@@ At present we can create devices up to 16 TB, soon to be increased further.

example of creation of LUNs

-----------------------------------------------------------------------------------
symconfigure -sid 007 -cmd "create dev count=5, config=tdev, emulation=fba,
     size=2048 GB, sg=myapp_sg;" preview
-----------------------------------------------------------------------------------

-------------------------------------------------------
symsg -sid <sid#> create myapp_sg -slo gold -wl oltp
-------------------------------------------------------

@@ Present to the host via a masking view, no change from VMAX here.

++ Here I will highlight a few of the key commands to gather information about
the configuration and interaction with the SRP and SLO.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IMP NOTE: Monitoring and alerting of FAST SLO is built into Unisphere for VMAX.
SLO compliance is reported at every level when looking at storage group
components in Unisphere.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

------------------------------------
Viewing SRPs Configured on the Array
------------------------------------

@@ Most VMAX3 arrays will only have a single SRP.
@@ However, it is also possible to have multiple: if we are using FAST.X or
   ProtectPoint (for Data Domain backups), we may have additional SRPs in the
   config.

----------------------------
symcfg -sid <sid#> list -srp
----------------------------
------------------------------------------------
symcfg -sid <sid#> show -srp <srp-name> -tb/-gb
------------------------------------------------

++ The default SRP is set to be usable by RDFA DSE; this is normal.
++ There is no need to configure a separate pool for DSE on VMAX3.
++ We can reserve and cap some space from the default SRP for this purpose.

----------------------------------
Viewing Available SLOs in the VMAX
----------------------------------

----------------------------
symcfg -sid <sid#> list -slo
----------------------------

++ To get more detail on the SLOs and the workloads that can be associated
   with storage groups, we can run this command:

------------------------------------------------------
symcfg -sid <sid#> list -slo -detail -by_resptime -all
------------------------------------------------------

--------------------------------
Viewing SRP Capacity Consumption
--------------------------------

@@ To get an idea of how your storage is being consumed, from the command line
   we can run this command:

---------------------------------------------
symsg -sid <sid#> list -srp -demand -type slo
---------------------------------------------

++ The command above shows how the SRP is being consumed by each of the SLOs.
++ It will also list how much space is consumed by DSE and snapshots (please
   note that this capacity all comes from the SRP, so it's worth keeping an eye
   on).

------------------------------------------
Listing SLO Associations by Storage Group
------------------------------------------

@@ The previous command (symsg -sid <sid#> list -srp -demand -type slo) gives a
   good idea at a high level.
@@ But if we want to see, at a storage group level, which storage groups are
   associated with each SLO, we can use the command below:

--------------------------------------
symsg -sid <sid#> list -by_SLO -detail
--------------------------------------

++ The command above shows, for each storage group, whether or not it is
   associated with an SLO.
++ We can also get some detail about the number of devices in each storage
   group (but not the capacity).

@@ We can use the following command to get additional information, such as the
   space consumption at the individual device level for an application storage
   group:

-----------------------------------------------
symcfg list -tdev -bound -detail -sg <sg-name>
-----------------------------------------------

## We can see the full breakdown of the SRP, including the drive pools and
   which SLOs are available, as well as TDAT information.
   With this command we also get information such as the thin devices (TDEVs)
   bound to the SRP and how much space each is consuming.

----------------------------------------------------------
symcfg -sid <sid#> show -srp <srp-name> -gb -detail | more
----------------------------------------------------------

-----------------------------------------------------------------------------------
Changing the SLO on Existing Storage Groups (storage groups which are already
associated with a masking view and are in production)
-----------------------------------------------------------------------------------

Ex: changing the SLO (Service Level Objective) to Platinum and the workload to
OLTP_REP for a storage group named "test":

---------------------------------------------------------
symsg -sid <sid#> -sg test set -slo platinum -wl OLTP_REP
---------------------------------------------------------

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Solutions Enabler 8.x also allows for moving devices between storage groups
non-disruptively.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
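
As a hedged sketch only: my understanding is that this is exposed through a
symsg "move" operation roughly along the lines below, with the usual
restriction that the source and target groups sit in compatible masking views;
the exact syntax should be confirmed against the Solutions Enabler 8.x
documentation, and the group and device names here are placeholders:

--------------------------------------------------------
symsg -sid <sid#> -sg source_sg move dev 00A1 target_sg
--------------------------------------------------------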
