
EMC Simple Support Matrix

EMC Unified VNX Series


APRIL 2014
P/N 300-012-396 REV 35

Copyright © 2011 - 2014 EMC Corporation. All Rights Reserved.


EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com).
The information in this ESSM is a configuration summary. For more detailed information, refer to https://elabnavigator.emc.com.

Table 1    EMC E-Lab qualified specific versions or ranges for EMC Unified VNX Series (page 1 of 2)

Columns: Platform Support | Path and Volume Management: PowerPath (Target Rev / Allowed), Native MPIO/LVM, Symantec / Veritas / DMP / VxVM / VxDMP (Target Rev / Allowed) | Oracle | Symantec / Veritas / SFRAC / VxCFS / SF RAC | Virtual Provisioning Host Reclamation f | HA Cluster h

Platforms qualified on this page:
AIX 5300-09-01-0847 g; AIX 6100-02-01-0847 g; AIX 7100-00-02-1041 g; (AIX) VIOS 2.1.1 or later o
HP-UX 11iv1 (11.11) g; HP-UX 11iv2 (11.23) g; HP-UX 11iv3 (11.31)
Linux: AX 2.0 GA - 2.0 SP3 b, n; AX 3; AX 4
Linux: Citrix XenServer 6.x
Linux: OL 4 U4 - 4 U7 b, n; OL 5.0 - 5.2 b; OL 5.3 - 5.5; OL 5.6; OL 5.7 - 5.10; OL 5.6, 6.0 - 6.1 (UEK); OL 5.7 (UEK); OL 5.8 - 5.9, 6.2 - 6.4 (UEK)
Linux: RHEL 4.0 U2 - U4 n, b, e; RHEL 4.5 - 4.9 n, b, e; RHEL 5.0 - 5.1 b; RHEL 5.2 b; RHEL 5.3 - 5.4 i; RHEL 5.5 - 5.6 i; RHEL 5.7 i; RHEL 5.8; RHEL 5.9; RHEL 5.10

Representative qualified ranges on this page include PowerPath 5.1 through 5.7 SP3 streams (with patch levels as noted per cell), Native MPIO/LVM, Pvlinks d on HP-UX, Symantec SF/DMP 4.1 through 6.1.0, Oracle 10g through 11g R2, and PowerHA (HACMP) 5.4 - 7.1 on AIX. Refer to https://elabnavigator.emc.com for the per-platform qualified version ranges behind each cell of this matrix.
Table 1    EMC E-Lab qualified specific versions or ranges for EMC Unified VNX Series (page 2 of 2)

Columns: Platform Support | Path and Volume Management: PowerPath (Target Rev / Allowed), Native MPIO/LVM, Symantec / Veritas / DMP / VxVM / VxDMP (Target Rev / Allowed) | Oracle | Symantec / Veritas / SFRAC / VxCFS / SF RAC | Virtual Provisioning Host Reclamation f | HA Cluster h

Platforms qualified on this page:
Linux: RHEL 6.0; RHEL 6.1, 6.2; RHEL 6.3, 6.4; RHEL 6.5
Linux: SLES 9 SP3 - SLES 9 SP4; SLES 10 - SLES 10 SP3 b; SLES 10 SP4 b; SLES 11 - SLES 11 SP2; SLES 11 SP3
Microsoft Windows 2003 (x86 and x64) SP2 and R2 SP2; Windows 2008 (x86) SP1 and SP2; Windows 2008 (x64) SP1 and SP2 i; Windows 2008 R2 and R2 SP1 i; Windows Server 2012; qualified with native MPIO (MPIO/Y) and MSCS / Failover Cluster
OpenVMS Alpha v7.3-2, 8.2, 8.3, 8.4 g; OpenVMS Integrity v8.2-1, 8.3, 8.3-1H1, 8.4 g
Solaris 8 SPARC e: Refer to the Solaris Letters of Support, located at https://elabnavigator.emc.com, Extended Support tab.
Solaris 9 SPARC; Solaris 10 SPARC; Solaris 10 (x86); Solaris 11 SPARC; Solaris 11.1 SPARC; Solaris 11 x86; Solaris 11.1 x86; qualified with MPxIO (MPxIO/Y) and Sun Cluster SC 3.1 - 4.1
VMware ESX/ESXi 4.0 (vSphere); ESX/ESXi 4.1 (vSphere) i; ESXi 5.0 (vSphere) i, j; ESXi 5.1 (vSphere) i, j; ESXi 5.5 (vSphere) i, j; qualified with PowerPath/VE (PP/VE) 5.4 - 5.9 SP1 releases l, native NMP, and VxDMP 6.0 - 6.1 per cell

Representative qualified ranges on this page include PowerPath 5.0 through 5.7 SP3 streams (including _x64 builds on Solaris x86), Symantec SF/DMP/VxDMP 5.0 through 6.1.0, and Oracle 10g through 12c R1. Refer to https://elabnavigator.emc.com for the per-platform qualified version ranges behind each cell of this matrix.
Target Rev = EMC-recommended versions for new and existing installations. These versions contain the latest features and provide the highest level of reliability.
Allowed = Older EMC-approved versions that are still functional, or newer/interim versions not yet targeted by EMC. These versions may not contain the latest fixes and features, and may require upgrading to resolve issues or to take advantage of newer product features. These versions are suitable for sustaining existing environments but should not be targeted for new installations.
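
When validating an existing host against the Target Rev and Allowed columns, the installed PowerPath version can be read directly from the host. A minimal sketch using the standard powermt utility (host names and output formats vary slightly by PowerPath release):

    # Report the installed PowerPath version on this host
    powermt version

    # Display all PowerPath-managed devices and their path states,
    # confirming the VNX logical devices are under PowerPath control
    powermt display dev=all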


Table 1 Legend, Footnotes, and Components

Legend
Y = Supported
Blank = Not supported
NA = Not applicable

Footnotes
a. MPIO with AX is supported at AX 2.0 SP1 and later. MPIO with RHEL is supported at RHEL 4.0 U3 and later.
b. 8 Gb/s initiator support begins with AX 2.0 SP3, AX 3.0 SP2, OL 4.0 U7, OL 5.2, RHEL 4.7, RHEL 5.2, and SLES 10 SP2.
c. These PowerPath versions have reached End-of-Life (EOL). For extended support, contact PowerPath Engineering or https://support.emc.com/products/PowerPath.
d. Pvlinks is an HP-UX alternate pathing solution that does not support load balancing.
e. Refer to the appropriate Letter of Support on the E-Lab Navigator, Extended Support tab, End-of-Life Configurations heading, for information on end-of-life configurations.
f. Refer to the EMC Virtual Provisioning table (https://elabnavigator.emc.com) for all VP (without host reclamation) qualified configurations.
g. Only server vendor-sourced HBAs are supported.
h. Refer to the Symantec Release Notes (www.symantec.com) for supported VxVM versions, which are dependent on the Veritas Cluster version.
i. For EMC XtremSF and XtremCache software operating system and host server support information, refer to the ESM, located at https://elabnavigator.emc.com, and use VNX, along with the XtremSF model, in the search engine.
j. Refer to the VMware vSphere 5 ESSM, at https://elabnavigator.emc.com, Simple Support Matrix tab, Platform Solutions, for details.
k. Version and kernel specific.
l. Before upgrading to vSphere 5.1, refer to EMC Knowledgebase articles emc302625 and emc305937.
m. All guest operating systems and features that Citrix supports, such as XenMotion, XenHA, and XenDesktop, are supported.
n. This operating system is currently EOL. All corresponding configurations are frozen. It may no longer be publicly supported by the operating system vendor and will require an extended support contract from the operating system vendor in order for the configurations listed in the ESSM to be supported by EMC. It is highly recommended that the customer upgrade the server to an ESSM-supported configuration or install it as a virtual machine on an EMC-approved virtualization hypervisor.
o. Consult the Virtualization Hosting Server (Parent) Solutions table in E-Lab Navigator at https://elabnavigator.emc.com for your specific VIOS version prior to implementation.

Host Servers/Adapters
All Fibre Channel HBAs from Emulex, QLogic (2 Gb/s or greater), and Brocade (8 Gb/s or greater), including vendor-rebranded versions, are supported. Solaris requires an RPQ when using any Brocade HBA. All 10 Gb/s Emulex, QLogic, Brocade, Cisco, Broadcom, and Intel CNAs (10 GbE NICs that support FCoE) are supported, as are all 1 Gb/s or 10 Gb/s NICs for iSCSI connectivity, as supported by the server/OS vendor. Any EMC- or vendor-supplied, OS-approved driver/firmware/BIOS is allowed. EMC recommends using the latest E-Lab Navigator-listed driver/firmware/BIOS versions. Adapters must support a link speed compatible with the switch (fabric) or array (direct connect). All host systems are supported where the host vendor allows the host/OS/adapter combination. If systems meet the criteria in this ESSM, no further ELN connectivity validation and no RPQ are required.

Switches
All FC SAN switches (2 Gb/s or greater) from EMC Connectrix, Brocade, Cisco, and QLogic for host and storage connectivity are supported. Refer to the Switched Fabric Topology Parameters table located on the E-Lab Navigator (ELN) at https://elabnavigator.emc.com for supported switch fabric interoperability firmware and settings.

Operating environment
R30: VNX 5100/5300/5500/5700/7500; EFD Cache, SATA, EFD support added. Minimum FLARE revision 04.30.000.5.xxx.
R30++: VNX 5100/5300/5500/5700/7500; FCoE support added. Minimum FLARE revision 04.30.000.5.5xx.
R31: VNX 5100/5300/5500/5700/7500; Unisphere/Navisphere Manager v1.1; minimum FLARE revision 05.31.
R32: VNX 5100/5300/5500/5700/7500; Unisphere/Navisphere Manager v1.2; minimum FLARE revision 05.32.
R33 (only): VNX 5200/5400/5600/5800/7600/8000; Unisphere/Navisphere Manager v1.3; minimum FLARE revision 05.33.

Table 2    VNX2 Limitations (page 1 of 5)

Guideline/Specification                                          VNX8000  VNX7600  VNX5800  VNX5600  VNX5400  VNX5200

Storage Pool capacity
Maximum number of storage pools                                     60       40       40       20       15       15
Maximum number of disks in a storage pool                          996      996      746      496      246      121
Maximum number of usable disks for all storage pools               996      996      746      496      246      121
Maximum number of disks that can be added to a pool at a time      180      120      120      120       80       80

Pool LUN limits
Maximum number of pool LUNs per storage pool                      4000     3000     2000     1000     1000     1000
Minimum user capacity                                          1 block  1 block  1 block  1 block  1 block  1 block
Maximum user capacity                                           256 TB   256 TB   256 TB   256 TB   256 TB   256 TB
Maximum number of pool LUNs per storage system                    4000     3000     2000     1000     1000     1000

SnapView supported configurations
Clones per storage system                                         2048     2048     2048     1024      256      256
Clones per source LUN                                                8        8        8        8        8        8
Clones per consistent fracture                                      64       64       32       32       32       32
Clone groups per storage system                                   1024     1024     1024      256      128      128
Clone private LUNs a, b per storage system (required)                2        2        2        2        2        2
SnapView Snaps c per storage system                               2048     1024      512      512      256      256
Snapshots per source LUN                                             8        8        8        8        8        8
SnapView sessions per source LUN                                     8        8        8        8        8        8
Reserved LUNs d per storage system                                 512      512      256      256      128      128
Source LUNs with Rollback active                                   300      300      300      300      300      300

SAN Copy storage system limits
Concurrent Executing Sessions                                       32       32       16        8        8        8
Destination LUNs per Session                                       100      100       50       50       50       50
Incremental Source LUNs e                                          512      512      256      256      256      256
Defined Incremental Sessions f                                     512      512      512      512      512      512
Incremental Sessions per Source LUN                                  8        8        8        8        8        8

VNX Snapshots configuration guidelines
Snapshots per storage system                                     32000    24000    16000     8000     8000     8000
Snapshots per Primary LUN                                          256      256      256      256      256      256
Snapshots per consistency group                                     64       64       64       64       64       64
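
The pool and LUN counts above can be compared against a live array with Unisphere CLI. A minimal sketch, assuming naviseccli is installed on a management host and that 10.0.0.1 stands in for a storage processor address:

    # Verify the array model and operating environment the limits apply to
    naviseccli -h 10.0.0.1 getagent

    # List configured storage pools, including disk counts and pool LUNs,
    # for comparison against the per-model maxima in this table
    naviseccli -h 10.0.0.1 storagepool -list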

Table 2    VNX2 Limitations (page 2 of 5)

Guideline/Specification                                          VNX8000  VNX7600  VNX5800  VNX5600  VNX5400  VNX5200

VNX Snapshots configuration guidelines (continued)
Consistency Groups                                                 256      256      128      128      128      128
Snapshot Mount Points                                             4000     3000     2000     1000     1000     1000
Concurrent restore operations                                      512      512      256      128      128      128

MirrorView
Maximum number of mirrors (MV/A)                                   256      256      256      256      256      128
Maximum number of mirrors (MV/S)                                  1024      512      256      128      128      128
Maximum number of consistency groups                                64       64       64       64       64       64
Maximum number of members per consistency group (MV/A)              64       64       32       32       32       32
Maximum number of members per consistency group (MV/S)              64       64       32       32       32       32

Block compressed LUN limitations
Maximum number of concurrent compression operations per
Storage Processor (SP)                                              40       32       20       20       20       20
Maximum number of compressed LUNs involving migration per
storage system                                                      24       24       16       16       16       16
Maximum number of compressed LUNs                                 4000     3000     2000     1000     1000     1000

Block compression operation limitations
Concurrent compression/decompression operations per SP              40       32       20       20       20       20
Concurrent migration operations per system                          24       24       16       16       16       16
LUN migrations                                                       8 maximum (all models)

Block-level deduplication limitations (VNX for Block)
Deduplication pass: Started every 12 hours upon the last completed run for that pool. A pass will only trigger if 64 GB of new/changed data is found. A pass can be paused; resuming causes a check to run shortly after.
Deduplication processes per SP: 3 maximum.
Deduplication run time: 4 hours maximum. After 4 hours, the deduplication process is paused and other queued processes on the SP are allowed to run. If no other processes are queued, deduplication keeps running.

Storage systems used for SAN Copy replication
Columns: Storage System | Fibre Channel: SAN Copy system, Target g | iSCSI: SAN Copy system, Target.
Storage systems covered: VNX8000, VNX7600, VNX5800, VNX5600 h, VNX5400, VNX5200; VNX7500, VNX5700, VNX5500, VNX5300, VNX5100; CX4-960, CX4-480, CX4-240, CX4-120 i; CX3-80, CX3-40, CX3-20, CX3-10; CX3-40C, CX3-20C, CX3-10C; CX700, CX500; CX500i, CX300i; CX600, CX400; CX300 j; CX200; AX100, AX150 j; AX100i, AX150i; AX4-5F; AX4-5SCF; AX4-5i, AX4-5SCi. Footnotes g through j qualify which of these systems can act as a SAN Copy system or as a target over each protocol.
Operating Environment for File 8.1 configuration guidelines

CIFS guidelines

CIFS TCP connection
  Maximum tested value: 64K (default and theoretical max.); 40K (max. tested)
  Comment: Param tcp.maxStreams sets the maximum number of TCP connections a Data Mover can have. The maximum value is 64K (65535). TCP connections (streams) are shared by other components and should be changed in monitored, small increments. With SMB1/SMB2, a TCP connection means a single client (machine) connection. With SMB3 and multichannel, a single client can use several network connections for the same session; this depends on the number of available interfaces on the client machine, and for a high-speed interface such as a 10 Gb link it can go up to 4 TCP connections per link.

Share name length
  Maximum tested value: 80 characters (Unicode)
  Comment: Unicode: The maximum length for a share name with Unicode enabled is 80 characters.

Number of CIFS shares
  Maximum tested value: 40,000 per Data Mover (max. tested limit)
  Comment: A larger number of shares can be created. The maximum tested value is 40K per Data Mover.
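
As a sketch of how tcp.maxStreams might be inspected and raised in the monitored, small increments recommended above (assuming the server_param syntax from the Parameters Guide for VNX for File; server_2 is a placeholder Data Mover name and the value shown is illustrative):

    # Show the current, default, and maximum values of tcp.maxStreams
    server_param server_2 -facility tcp -info maxStreams

    # Raise the limit by a modest increment; streams are shared with other
    # components, so avoid jumping straight to the 64K theoretical maximum
    server_param server_2 -facility tcp -modify maxStreams -value 40960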

Table 2    VNX2 Limitations (page 3 of 5)

CIFS guidelines (continued)

Number of NetBIOS names/compnames per Virtual Data Mover
  Maximum tested value: 509 (max)
  Comment: Limited by the number of network interfaces available on the Virtual Data Mover. From a local group perspective, the number is limited to 509. NetBIOS names and compnames must be associated with at least one unique network interface.

NetBIOS name length
  Maximum tested value: 15 characters
  Comment: NetBIOS names are limited to 15 characters (a Microsoft limit) and cannot begin with an @ (at sign) or a - (dash) character. The name also cannot include white space, tab characters, or the following symbols: / \ : ; , = * + | [ ] ? < >. If using compnames, the NetBIOS form of the name is assigned automatically and is derived from the first 15 characters of the <comp_name>.

Comment (ASCII chars) for NetBIOS name for server
  Maximum tested value: 256
  Comment: Limited to 256 ASCII characters. Restricted characters: You cannot use double quotation ("), semicolon (;), accent (`), or comma (,) characters within the body of a comment. Attempting to use these special characters results in an error message. You can only use an exclamation point (!) if it is preceded by a single quotation mark ('). Default comments: If you do not explicitly add a comment, the system adds a default comment of the form EMC-SNAS:T<x.x.x.x>, where <x.x.x.x> is the version of the NAS software.

Compname length
  Maximum tested value: 63 bytes
  Comment: For integration with Windows environment releases later than Windows 2000, the CIFS server computer name length can be up to 21 characters when UTF-8 (3-byte characters) is used.

Number of domains
  Maximum tested value: 10 tested
  Comment: 509 (theoretical max.). The maximum number of Windows domains a Data Mover can be a member of. To increase the default value of 32, change parameter cifs.lsarpc.maxDomain. The Parameters Guide for VNX for File contains more detailed information about this parameter.

Block size negotiated
  Maximum tested value: 64 KB; 128 KB with SMB2
  Comment: Maximum buffer size that can be negotiated with Microsoft Windows clients. To increase the default value, change param cifs.W95BufSz, cifs.NTBufSz, or cifs.W2KBufSz. The Parameters Guide for VNX for File contains more detailed information about these parameters. Note: With SMB2.1, read and write operations support a 1 MB buffer (this feature is named large MTU).

Number of simultaneous requests per CIFS session (maxMpxCount)
  Maximum tested value: 127 (SMB1); 512 (SMB2)
  Comment: For SMB1, the value is fixed and defines the number of requests a client is able to send to the Data Mover at the same time (for example, a change notification request). To increase this value, change the maxMpxCount parameter. For SMB2 and newer protocol dialects, this notion has been replaced by a credit number, which has a maximum of 512 credits per client but can be adjusted dynamically by the server depending on the load. The Parameters Guide for VNX for File contains more detailed information about this parameter.

Total number of files/directories opened per Data Mover
  Maximum tested value: 500,000
  Comment: A large number of open files could require high memory usage on the Data Mover and potentially lead to out-of-memory issues.

Number of Home Directories supported
  Maximum tested value: 20,000 (max possible limit, not recommended)
  Comment: Because the configuration file containing Home Directories information is read completely at each user connection, the recommendation is to not exceed a few thousand, for easy management.

Number of Windows/UNIX users connected at the same time
  Maximum tested value: 40,000 (limited by the number of TCP connections)
  Comment: Earlier versions of the VNX server relied on a basic database, nameDB, to maintain Usermapper and secmap mapping information. DBMS now replaces the basic database. This solves the inode consumption issue and provides better consistency and recoverability with the support of database transactions. It also provides better atomicity, isolation, and durability in database management.

Number of users per TCP connection
  Maximum tested value: 64K
  Comment: To decrease the default value, change param cifs.listBlocks (default 255, max 255). The value of this parameter times 256 = max number of users. Note: TID/FID/UID share this parameter, and it cannot be changed individually for each ID. Use caution when increasing this value, as it could lead to an out-of-memory condition. Refer to the Parameters Guide for VNX for File for parameter information.

Number of files/directories opened per CIFS connection in SMB1
  Maximum tested value: 64K
  Comment: To decrease the default value, change param cifs.listBlocks (default 255, max 255). The value of this parameter times 256 = max number of files/directories opened per CIFS connection. Note: TID/FID/UID share this parameter, and it cannot be changed individually for each ID. Use caution when increasing this value, as it could lead to an out-of-memory condition. Be sure to follow the recommendation for the total number of files/directories opened per Data Mover. Refer to the System Parameters Guide for VNX for File for parameter information.

Number of files/directories opened per CIFS connection in SMB2
  Maximum tested value: 127K
  Comment: To decrease the default value, change parameter cifs.smb2.listBlocks (default 511, max 511). The value of this parameter times 256 = max number of files/directories opened per CIFS connection.

Number of VDMs per Data Mover
  Maximum tested value: 128
  Comment: The total number of VDMs, file systems, and checkpoints across a whole cabinet cannot exceed 2048.
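
The listBlocks arithmetic above works out as follows: the SMB1 default of 255 x 256 = 65,280, roughly the 64K per-connection limit, and the SMB2 default of 511 x 256 = 130,816, roughly the 127K limit. A sketch of checking both parameters (assuming cifs.listBlocks and cifs.smb2.listBlocks map to the cifs facility in server_param, and server_2 is a placeholder Data Mover name):

    # Current value of the SMB1 list-block parameter (default 255)
    server_param server_2 -facility cifs -info listBlocks

    # Current value of the SMB2 list-block parameter (default 511)
    server_param server_2 -facility cifs -info smb2.listBlocks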
FileMover

Max connections to secondary storage per primary (VNX for File) file system
  Maximum tested value: 1024

Number of HTTP threads for servicing FileMover API requests per Data Mover
  Maximum tested value: 64
  Comment: The number of threads available for recalling data from secondary storage is half the number of whichever is lower, CIFS or NFS threads. The default is 16; it can be increased using the server_http command, and the max (tested) is 64.

File system guidelines

Mount point name length
  Maximum tested value: 255 bytes (ASCII)
  Comment: The "/" is used when creating the mount point and is equal to one character. If exceeded, Error 4105: Server_x: path_name: invalid path specified is returned.

File system name length
  Maximum tested value: 240 bytes (ASCII); 19 characters displayed for the list option
  Comment: For nas_fs list, the name of a file system is truncated if it is more than 19 characters. To display the full file system name, use the info option with a file system ID (nas_fs -i id=<fsid>).

Filename length
  Maximum tested value: 255 bytes (NFS); 255 characters (CIFS)
  Comment: With Unicode enabled in an NFS environment, the number of characters that can be stored depends on the client encoding type, such as latin-1. For example, with Unicode enabled, a Japanese UTF-8 character may require three bytes. With Unicode enabled in a CIFS environment, the maximum number of characters is 255. For filenames shared between NFS and CIFS, CIFS allows 255 characters. NFS truncates these names when they are more than 255 bytes in UTF-8, and manages the file successfully.
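
For example, a file system whose name exceeds 19 characters appears truncated in the list view and in full with the info option cited above (the ID 27 is illustrative):

    # Names longer than 19 characters are truncated in this listing
    nas_fs -list

    # Display the full file system name and attributes by ID
    nas_fs -i id=27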

Table 2    VNX2 Limitations (page 4 of 5)

File system guidelines (continued)

Pathname length
  Maximum tested value: 1,024 bytes
  Comment: Note: Make sure the final path length of restored files is less than 1,024 bytes. For example, if a file is backed up that originally had a path name of 900 bytes, and it is restored to a path of 400 bytes, the final path length would be 1,300 bytes and the file would not be restored.

Directory name length
  Maximum tested value: 255 bytes
  Comment: This is a hard limit, and creation is rejected if the 255 limit is exceeded. The limit is bytes for UNIX names, Unicode characters for CIFS.

Subdirectories (per parent directory)
  Maximum tested value: 64,000
  Comment: This is a hard limit; the code will prevent you from creating more than 64,000 directories.

Number of file systems per VNX
  Maximum tested value: 4096
  Comment: This maximum number includes VDM and checkpoint file systems.

Number of file systems per Data Mover
  Maximum tested value: 2048
  Comment: The mount operation will fail when the number of file systems reaches 2048, with an error indicating the maximum number of file systems has been reached. This maximum number includes VDM and checkpoint file systems.

Maximum disk volume size
  Maximum tested value: Dependent on RAID group (see comments)
  Comment: Unified platforms: Running setup_clariion on the VNX7500 platform will provision the storage to use 1 LUN per RAID group, which might result in LUNs > 2 TB, depending on drive capacity and the number of drives in the RAID group. On all VNX integrated platforms (VNX5300, VNX5500, VNX5700, VNX7500), the 2 TB LUN limitation has been lifted; this might or might not result in a LUN size larger than 2 TB, depending on RAID group size and drive capacity. For all other unified platforms, setup_clariion will continue to function as in the past, breaking large RAID groups into LUNs that are < 2 TB in size. Gateway systems: For gateway systems, users might configure LUNs greater than 2 TB, up to 16 TB or the max size of the RAID group, whichever is less. This is supported for VG2 and VG8 NAS gateways when attached to CX3, CX4, VNX, and Symmetrix DMX-4 and VMAX backends. Multi-Path File Systems (MPFS): MPFS supports LUNs greater than 2 TB. Windows 2000 and 32-bit Windows XP, however, cannot support large LUNs due to a Windows OS limitation. All other MPFS Windows clients support LUN sizes greater than 2 TB if the 5.0.90.900 patch is applied. Use of these 32-bit Windows XP clients on VNX7500 systems requires request for price quotation (RPQ) approval.

Total storage for a Data Mover (Fibre Channel only)
  Maximum tested value: VNX Version 7.0: 200 TB (VNX5300); 256 TB (VNX5500, VNX5700, VNX7500, VG2 & VG8)
  Comment: These total capacity values represent the Fibre Channel disk maximum with no ATA drives. Fibre Channel capacity will change if ATA is used. Notes: On a per-Data-Mover basis, the total size of all file systems, plus the size of all SavVols used by SnapSure, must be less than the total supported capacity. Exceeding these limits can cause an out-of-memory panic. Refer to the VNX for File capacity limits tables for more information, including mixed disk type configurations.

File size
  Maximum tested value: 16 TB
  Comment: This hard limit is enforced and cannot be exceeded.

Number of directories supported per file system
  Comment: Same as the number of inodes in the file system. Each 8 KB of space = 1 inode.

Number of files per directory
  Maximum tested value: 500,000
  Comment: Exceeding this number will cause performance problems.

Maximum number of files and directories per VNX file system
  Maximum tested value: 256 million (default)
  Comment: This is the total number of files and directories that can be in a single VNX file system. This number can be increased to 4 billion at file system creation time, but that should only be done after considering recovery and restore time and the total storage utilized per file. The actual maximum number in a given file system is dependent on a number of factors, including the size of the file system.

Maximum amount of deduplicated data supported
  Maximum tested value: 256 TB
  Comment: When quotas are enabled with a file size policy, the maximum amount of deduplicated data supported is 256 TB. This amount includes other files owned by UID 0 or GID 0.

All other industry-standard caveats, restrictions, policies, and best practices prevail. This includes, but is not limited to, fsck times (now made faster through multi-threading), backup and restore times, number of objects per file system, snapshots, file system replication, VDM replication, performance, availability, extend times, and layout policies. Proper planning and preparation should occur prior to implementing these guidelines.

Naming services guidelines

Number of DNS domains
  Maximum tested value: 3 (WebUI); unlimited (CLI)
  Comment: Three DNS servers per Data Mover is the limit when using the WebUI. There is no limit when using the command line interface (CLI).

Number of NIS servers per Data Mover
  Maximum tested value: 10
  Comment: You can configure up to 10 NIS servers in a single NIS domain on a Data Mover.

NIS record capacity
  Maximum tested value: 1004 bytes
  Comment: A Data Mover can read 1004 bytes of data from a NIS record.

Number of DNS servers per DNS domain
  Maximum tested value: 3

NFS guidelines

Number of NFS exports
  Maximum tested value: 2,048 per Data Mover (tested); unlimited (theoretical max.)
  Comment: You might notice a performance impact when managing a large number of exports using Unisphere.

Number of concurrent NFS clients
  Maximum tested value: 64K with TCP (theoretical); unlimited with UDP (theoretical)
  Comment: Limited by TCP connections.

Netgroup line size
  Maximum tested value: 16383
  Comment: The maximum line length that the Data Mover will accept in the local netgroup file on the Data Mover, or in the netgroup map in the NIS domain that the Data Mover is bound to.

Number of UNIX groups supported
  Maximum tested value: 64K
  Comment: 2 billion is the max value of any GID. The maximum number of GIDs is 64K, but an individual GID can have an ID in the range of 0 - 2147483648.

Networking guidelines

Link aggregation/Ether channel
  Maximum tested value: 8 ports (Ether channel); 12 ports (link aggregation (LACP))
  Comment: Ether channel: the number of ports used must be a power of 2 (2, 4, or 8). Link aggregation: any number of ports can be used. All ports must be the same speed. Mixing different NIC types (that is, copper and fibre) is not recommended.

Number of VLANs supported
  Maximum tested value: 4094
  Comment: IEEE standard.

Number of interfaces per Data Mover
  Maximum tested value: 45 tested
  Comment: Theoretically 509.

Number of FTP connections
  Maximum tested value: Theoretical value 64K
  Comment: By default the value is (in theory) 0xFFFF, but it is also limited by the number of TCP streams that can be opened. To increase the default value, change param tcp.maxStreams (set to 0x00000800 by default). If you increase it to 64K before you start TCP, you will not be able to increase the number of FTP connections. Refer to the Parameters Guide for VNX for File for parameter information.

Table 2    VNX2 Limitations (page 5 of 5)

Quotas guidelines

Number of tree quotas
  Maximum tested value: 8191
  Comment: Per file system.

Max size of tree quotas
  Maximum tested value: 256 TB
  Comment: Includes file size and quota tree size.

Max number of unique groups
  Maximum tested value: 64K
  Comment: Per file system.

Quota path length
  Maximum tested value: 1024

Replicator V2 guidelines

Number of replication sessions per VNX
  Maximum tested value: 1365 (NSX); 1024 (other platforms)

Max number of replication sessions per Data Mover
  Maximum tested value: 682
  Comment: This enforced limit includes all configured file system, VDM, and copy sessions.

Max number of local and remote file system and VDM replication sessions per Data Mover
  Maximum tested value: 682

Max number of loopback file system and VDM replication sessions per Data Mover
  Maximum tested value: 341

Snapsure guidelines

Number of checkpoints per file system
  Maximum tested value: 96 read-only; 16 writeable
  Comment: Up to 96 read-only checkpoints per file system are supported, as well as 16 writeable checkpoints.

On systems with Unicode enabled, a character might require between 1 and 3 bytes, depending on the encoding type or character used. For example, a Japanese character typically uses 3 bytes in UTF-8. ASCII characters require 1 byte.
VNX for File 8.1 capacity limits

                                               VNX5200  VNX5400  VNX5600  VNX5800  VNX7600  VNX8000  VG10     VG20
Usable IP storage capacity limit per blade     256 TB   256 TB   256 TB   256 TB   256 TB   256 TB   256 TB   256 TB
(all disk types/uses) k
Max FS size                                    16 TB    16 TB    16 TB    16 TB    16 TB    16 TB    16 TB    16 TB
Max # FS per DM/blade l                        2048     2048     2048     2048     2048     2048     2048     2048
Max # FS per cabinet                           4096     4096     4096     4096     4096     4096     4096     4096
Max configured replication sessions per        1024     1024     1024     1024     1024     1024     1024     1024
DM/blade for Replicator v2 m
Max # of checkpoints per PFS n                 96       96       96       96       96       96       96       96
Max # of NDMP sessions per DM/blade            4        4        4        4        4        4        4        4
Memory per DM/blade                            6 GB     6 GB     12 GB    12 GB    24 GB    24 GB    6 GB     24 GB

Table 2 Footnotes:

a. Each clone private LUN must be at least 1 GB.


b. A thin or thick LUN may not be used for a clone private LUN.
c. The limits for snapshots and sessions include SnapView snapshots or SnapView sessions as well as reserved snapshots or reserved sessions used in other applications, such as SAN
Copy (incremental sessions) and MirrorView/Asynchronous.
d. A thin LUN cannot be used in the reserved LUN pool.
e. These limits include MirrorView Asynchronous (MirrorView/A) images and SnapView snapshot source LUNs in addition to the incremental SAN Copy LUNs. The maximum number of source
LUNs assumes one reserved LUN assigned to each source LUN.
f. These limits include MirrorView/A images and SnapView sessions in addition to the incremental SAN Copy sessions.
g. Target implies the storage system can be used as the remote system when using SAN Copy on another storage system.
h. The VNX5400 and VNX5600 may not be supported as a target array over Fibre Channel when the source array's Operating Environment is 04.30.000.5.525 or earlier for CX4 storage systems, or 05.32.000.5.207 or earlier for VNX storage systems.
i. The VNX storage systems can support SAN Copy over Fibre Channel or SAN Copy over iSCSI only if the storage systems are configured with I/O ports of that type. Please see the release
notes for VNX Operating Environment version 05.33.000.5.015.
j. The CX300, AX100 and AX150 do not support SAN Copy, but support SAN Copy/E. See the release notes for this related product for details.
k. This is the usable IP storage capacity per blade. For overall platform capacity, consider the type and size of disk drive, the usable Fibre Channel host capacity requirements, RAID group options, and the total number of disks supported.
l. This count includes production file systems, user-defined file system checkpoints, and two checkpoints for each replication session.
m. A maximum of 256 concurrently transferring sessions are supported per DM.
n. PFS (Production File System).
