
White Paper

EMC STORAGE DESIGN AND DATA PROTECTION FOR VMWARE ZIMBRA COLLABORATION SERVER
VMware Zimbra Collaboration Suite 7.2, VMware vSphere 5, and EMC VNX series storage

Efficient storage configuration based on EMC's building-block sizing approach
Instant snapshot-based backup and recovery with EMC Replication Manager and EMC VNX SnapView
Simplified high availability with VMware HA clustering

EMC Solutions Group

Abstract
This white paper describes how to design an enterprise email solution based on
VMware Zimbra Collaboration Server (ZCS), VMware vSphere 5, and
EMC VNX Series storage. This solution offers a high-performing, efficient,
and predictable storage design with full availability and protection by
leveraging the features and strengths of EMC VNX series storage, EMC VNX
SnapView and EMC Replication Manager for snapshot-based backup and
recovery, and VMware vSphere 5 for high availability.

July 2012
Copyright 2012 EMC Corporation. All Rights Reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation
Trademarks on EMC.com.

VMware, ESX, VMware vCenter, and VMware vSphere are registered trademarks
or trademarks of VMware, Inc. in the United States and/or other jurisdictions.

All trademarks used herein are the property of their respective owners.

Part Number H10866.1

Table of contents
Executive summary............................................................................................................................... 6
Business case .................................................................................................................................. 6
Efficiently size and provision storage........................................................................................... 6
Ensure continuous availability..................................................................................................... 6
Ensure nondisruptive backup and near-instantaneous recovery .................................................. 6
Solution overview ............................................................................................................................ 7
Key findings ..................................................................................................................................... 7

Introduction.......................................................................................................................................... 8
Purpose ........................................................................................................................................... 8
Scope .............................................................................................................................................. 8
Not in scope..................................................................................................................................... 8
Audience ......................................................................................................................................... 8

Technology overview ............................................................................................................................ 9


Overview .......................................................................................................................................... 9
VMware Zimbra Collaboration Server ............................................................................................... 9
VMware vSphere .............................................................................................................................. 9
EMC VNX series storage ................................................................................................................... 9
EMC VNX5700 storage array ........................................................................................................... 10
EMC SnapView ............................................................................................................................... 10
EMC Navisphere Admsnap ............................................................................................................. 10
EMC Replication Manager .............................................................................................................. 11
EMC PowerPath/VE ........................................................................................................................ 11
EMC Virtual Storage Integrator (VSI) plug-in for vSphere ................................................................. 11

VMware ZCS architecture ................................................................................................................... 12


Core advantages ............................................................................................................................ 12
Zimbra packages ........................................................................................................................... 12
Zimbra Core............................................................................................................................... 12
Zimbra LDAP.............................................................................................................................. 12
Zimbra Store (Zimbra server) ..................................................................................................... 13
Zimbra MTA ............................................................................................................................... 13
Zimbra SNMP ............................................................................................................................ 13
Zimbra Logger ........................................................................................................................... 13
Zimbra Convertd ........................................................................................................................ 13
Zimbra Spell .............................................................................................................................. 13
Zimbra Proxy ............................................................................................................................. 13
Zimbra Memcached ................................................................................................................... 13
Zimbra Archiving ....................................................................................................................... 13

Zimbra mailbox server architecture .................................................................................................... 14
Disk layout ..................................................................................................................................... 14

Solution architecture and design........................................................................................................ 15


Target ZCS mail user profile............................................................................................................ 15
Architecture overview ..................................................................................................................... 15
Architecture diagram...................................................................................................................... 16
Hardware components ................................................................................................................... 17
Software ........................................................................................................................................ 17

VMware vSphere design ..................................................................................................................... 18


DRS affinity rules............................................................................................................................ 18

Storage sizing and configuration........................................................................................................ 20


Mailbox server building block design methodology ........................................................................ 20
Building block design .................................................................................................................... 20
Phase 1: Collect the requirements ............................................................................................. 20
Phase 2: Design the mailbox server building block .................................................................... 21
Phase 3: Deploy the building block and validate its design ....................................................... 21

Storage design considerations for Zimbra mailbox servers ................................................................ 22


ZCS I/O characteristics................................................................................................................... 22
ZCS mailbox server partitions......................................................................................................... 23
Mount points ................................................................................................................................. 23
Message stores .............................................................................................................................. 24
Redo logs ....................................................................................................................................... 24
ZCS mailbox server building block specifications ........................................................................... 25
Scaling to 10,000 users ................................................................................................................. 25
Mailbox server disk layout ............................................................................................................. 25

Storage provisioning with the VSI plug-in .......................................................................................... 27


Storage provisioning process with the VSI plug-in .......................................................................... 28
Notes ........................................................................................................................................ 30
EMC VSI Storage Viewer plug-in ..................................................................................................... 31

Configuration guidelines and best practices ...................................................................................... 32


VNX storage configuration recommendations................................................................................. 32
vSphere ESXi host configuration recommendations ....................................................................... 33
Guest virtual machine configuration recommendations.................................................................. 33
Linux filesystem alignment ........................................................................................................ 33
Linux filesystem alignment on very large LUNs .......................................................................... 34
Filesystem formatting ................................................................................................................ 34

Protection (backup and recovery) configuration ................................................................................. 35
Preparing ZCS virtual machines for Replication Manager/SnapView snapshots ......................... 35
Special consideration for Linux filesystems.................................................................................... 36
Backup process with Replication Manager ..................................................................................... 37
Recovery process ........................................................................................................................... 39

Performance validation and test results ............................................................................................. 41


Disclaimer...................................................................................................................................... 41
Testing methods ............................................................................................................................ 41
Zimbra Soapgen test tool .......................................................................................................... 41
Test scenarios ................................................................................................................................ 42
Key performance indicators............................................................................................................ 43

Test 1 results: ZCS mailbox server building block performance .......................................................... 44

Test 2 results: ZCS mailbox server building block scalability ............................................................. 47

Test 3 results: Advanced protection for ZCS data using Replication Manager with VNX SnapView
snapshots .......................................................................................................................................... 49
Snapshot space calculations ..................................................................................................... 49

Test 4 results: Benefits of using EMC VNX FAST Cache with ZCS data ................................................. 51

Test 5 results: Performance of NL-SAS disks with ZCS data ................................................................ 53

Conclusion ......................................................................................................................................... 54
Easy scaling with EMC's building-block approach to sizing ............ 54
Benefits of EMC VNX series FAST Cache.......................................................................................... 54
NL-SAS disk performance with ZCS ................................................................................................ 54
VMware HA clustering .................................................................................................................... 54
Advanced protection (backup and recovery) .................................................................................. 55

References.......................................................................................................................................... 56
White papers ................................................................................................................................. 56
Product documentation.................................................................................................................. 56

Appendix A: Example ZCS mailbox server storage space calculation .................................................. 57

Appendix B: Zimbra deployment worksheet ....................................................................................... 58

Executive summary
Business case
Today's full-featured and robust enterprise collaboration platforms, such as VMware
Zimbra Collaboration Server (ZCS), run best on an economical world-class storage
system, such as the EMC VNX series, designed to deliver maximum performance
and scalability for mid-tier enterprises and optimized for virtual applications. It is
essential to guarantee access to information for all end users, all the time, while also
making the same information available to IT for backup, recovery, reporting, and
testing.

Efficiently size and provision storage


Sizing and configuring storage for use with enterprise collaboration platforms can be
a complicated process, driven by many variables and requirements, which vary from
organization to organization. Properly configured storage, combined with optimally
sized server and network infrastructures, can guarantee smooth enterprise
collaboration operation.

One of the methods that can be used to simplify the sizing and configuration of large
amounts of VNX series storage for ZCS is to define a unit of measure: a mailbox
server building block.¹

Ensure continuous availability


A highly available infrastructure, such as VMware High Availability clustering and
VMware Distributed Resource Scheduling, provides uniform high availability across
the entire virtualized IT environment without the cost and complexity of failover
solutions tied to either operating systems or specific applications.

Ensure nondisruptive backup and near-instantaneous recovery


Disk-based replicas, made possible with products such as EMC SnapView and EMC
Replication Manager, address the information access problem by using point-in-time
copies, enabling parallel access to the desired resource in physical and virtual
environments. These point-in-time replicas can be leveraged for backup acceleration,
reporting, and testing with no impact to production, thus eliminating the need to
prioritize access and increasing operational efficiency.

¹ A mailbox server building block represents the amount of storage and server resources required to support a
specific number of ZCS users. The amount of required resources is derived from a specific user profile type, mailbox
size, and disk requirements. Using the building block approach simplifies the design and implementation of ZCS.
Once the initial building block is designed, it can be easily reproduced to support the required number of users in
your enterprise. By using this approach, EMC customers can now create their own building blocks that are based on
their company's specific email environment requirements. This approach is very helpful when future growth is
expected because it makes email environment expansion simple and straightforward. EMC best practices involving
the building block approach for storage design have proven to be very successful in many customer implementations.

Solution overview
ZCS is a complete, open source email and collaboration platform. It features email,
contacts, calendar, documents, file sharing, tasks, and social media, plus
synchronization for desktops and devices.

VNX series storage provides a high-performance, unified storage platform with
unsurpassed simplicity and efficiency, optimized for virtual applications. With the
VNX series, you can achieve new levels of performance, protection, compliance, and
ease of management.

Instant snapshot-based backup and recovery is provided by Replication Manager and
SnapView. Replication Manager automates the creation of snapshots of ZCS mailbox
server volumes using SnapView technology.

The combination of ZCS with VNX series storage provides an optimal collaboration
infrastructure. Storage, compute, and network layers maintain high availability, while
EMC's building-block sizing approach achieves predictable performance and a
repeatable storage design.

In addition, EMC's building-block approach to sizing accelerates your deployment of
ZCS. Once deployed, the performance, management, and protection advantages of
running ZCS on VNX series storage are self-evident.

VMware HA provides uniform high availability across the entire virtualized IT
environment without the cost and complexity of failover solutions tied to either
operating systems or specific applications.

Key findings
Key aspects of this solution include:

Full support for VMware vSphere 5.0 virtualization with all of vSphere's
advanced features
VNX storage is an excellent platform to house ZCS
Efficient storage configuration and scalability based on EMC's building-block
sizing approach
Building block based on 5,000 heavy users per mailbox server validated
Simplified high availability with VMware High Availability (HA) clustering and
Distributed Resource Scheduling (DRS)
Simple, effective, and quick storage provisioning for VMware Zimbra content
with EMC VSI plug-in for VMware vCenter
Instant snapshot-based backup and recovery with Replication Manager and
SnapView
VNX series FAST Cache accelerates performance to address unanticipated
workload spikes

Introduction
Purpose
The purpose of this white paper is to describe the design and validation of an
enterprise email solution, based on VMware ZCS that offers a high-performing,
efficient, and predictable storage design with full availability and protection. The
paper describes how the solution leverages the features and strengths of EMC VNX
series storage, EMC VNX SnapView, and EMC Replication Manager for snapshot-
based backup and recovery, and VMware vSphere 5 for high availability.

Scope
The scope of this paper corresponds to the scope of the solution validation (build,
test, and document) activities performed by EMC engineers in an EMC solutions
laboratory.

What was built and tested is described and, where possible, recommendations and
guidelines are provided for professionals to design a similar solution.

The concepts, instructions, procedures, recommendations, and guidelines presented
in this document are thorough but not all-inclusive.

Not in scope
Neither installation/configuration instructions for ZCS nor detailed application
architecture guidelines fall within the scope of this white paper.

Audience
The target audience for this white paper is business executives, IT directors, and
infrastructure administrators who are responsible for their company's messaging
infrastructure.

The target audience also includes professional services groups, system integration
partners, and EMC teams tasked with deploying messaging systems in a customer
environment.

A high-level understanding of ZCS and VNX series storage is beneficial. Familiarity
with VMware virtualization concepts is also beneficial.

Technology overview
Overview
This section provides an overview of the primary technologies used in this solution.
The tight integration of these products and technologies yields all of the benefits of
the enterprise messaging solution described in this paper.
VMware Zimbra Collaboration Server (ZCS)
VMware vSphere
EMC VNX5700 storage system
EMC Replication Manager
EMC SnapView
EMC Navisphere Admsnap
EMC PowerPath/VE

VMware Zimbra Collaboration Server
ZCS is a next-generation email, calendar, and collaboration solution that is optimized
for VMware. ZCS provides an open platform designed for virtualization and portability
across private and public clouds, making it simpler to manage and more cost-effective
to scale. With the most innovative Web application available, ZCS boosts end-user
productivity on any device or desktop, any time, any place, at dramatically lower costs
compared to other providers. Versions of ZCS include a Network Edition, an
Open Source Edition, and a prepackaged Virtual Appliance.

This solution uses ZCS Network Edition and focuses on ZCS mailbox server design.

VMware vSphere
VMware vSphere uses the power of virtualization to transform data centers into
simplified cloud computing infrastructures and enables IT organizations to deliver
flexible and reliable IT services. vSphere virtualizes and aggregates the underlying
physical hardware resources across multiple systems and provides pools of virtual
resources to the data center. As a cloud operating system, vSphere manages large
collections of infrastructure (such as CPUs, storage, and networking) as a seamless
and dynamic operating environment and also manages the complexity of a data
center.

EMC VNX series storage
EMC VNX series storage is powered by Intel Xeon processors, for intelligent storage
that automatically and efficiently scales in performance, while ensuring data integrity
and security. The VNX series is designed to deliver maximum performance and
scalability for mid-tier enterprises, enabling them to grow, share, and cost-effectively
manage multiprotocol file and block systems. VNX arrays incorporate the
RecoverPoint splitter, which supports unified file and block replication for local data
protection and disaster recovery.

EMC Unisphere is the central management platform for the EMC VNX series,
providing a single combined view of file and block systems, with all features and
functions available through a common interface. Unisphere is optimized for virtual
applications and provides industry-leading VMware integration, automatically
discovering virtual machines and VMware ESXi servers and providing end-to-end,
virtual-to-physical mapping.

EMC VNX5700 storage array
The EMC VNX5700 storage array used in this solution is designed to deliver maximum
performance and scalability for enterprises. It is a converged platform that replaces
EMC CLARiiON and EMC Celerra and enables organizations to dynamically grow,
share, and cost-effectively manage multiprotocol file systems and multiprotocol block
storage access. The VNX Operating Environment enables Microsoft Windows and
Linux/Unix clients to share files in multiprotocol (NFS and CIFS) environments. At the
same time, it supports iSCSI, Fibre Channel (FC), and FCoE access for high-bandwidth
and latency-sensitive block applications. For additional VNX specifications, refer to
the EMC VNX Series Unified Storage Systems Specification Sheet.

EMC SnapView
EMC SnapView is a storage system-based software application that enables you to
create a copy of a LUN by using either clones or snapshots. A clone is an actual copy
of a LUN and takes time to create, depending on the size of the source LUN. A
snapshot is a virtual point-in-time copy of a LUN and takes only seconds to create.
SnapView has the following important benefits:
It allows full access to a point-in-time copy of your production data with modest
impact on performance and without modifying the actual production data.
For decision support or revision testing, it provides a coherent, readable, and
writable copy of real production data.
For backup, it practically eliminates the time that production data spends
offline or in hot backup mode, and it offloads the backup overhead from the
production server to another server.
It enables you to maintain a consistent replica across a set of LUNs. You do this
by performing a consistent fracture, which is a fracture of more than one clone
at the same time, or a fracture that you create when starting a session in
consistent mode.
It provides instantaneous data recovery if the source LUN becomes corrupt. You
can perform a recovery operation on a clone by initiating a reverse synchronization,
or on a snapshot session by initiating a rollback operation.

Depending on your application needs, you can create clones, snapshots, or
snapshots of clones.

This solution uses EMC SnapView snapshots to protect ZCS mailbox server volumes.

EMC Navisphere Admsnap
The EMC Navisphere Admsnap program runs on host systems in conjunction with
SnapView running on the EMC VNX storage processors (SPs), and lets you start,
activate, deactivate, and stop SnapView sessions. All Admsnap commands are sent
to the storage system through the Fibre Channel bus. The Admsnap utility is an
executable program that you can run interactively or with a script.
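As a rough illustration of that session life cycle, the sketch below shows how a SnapView session for a mailbox server volume might be driven from the command line. This is a minimal sketch only: the session name, device paths, and mount point are examples, and the exact Admsnap flags vary by release, so treat the syntax shown here as an assumption to verify against the Admsnap documentation for your environment.

    # Illustrative only; verify flags against the Admsnap documentation for your release.
    # On the production host: flush filesystem buffers for the source volume,
    # then start a SnapView session against it.
    sync
    admsnap flush -o /dev/sdf1                         # device name is an example
    admsnap start -s zcs_store_session -o /dev/sdf1    # session name is an example

    # On the mount (backup) host: activate the session to expose the snapshot
    # device, then mount it read-only for backup.
    admsnap activate -s zcs_store_session
    mount -o ro /dev/sdg1 /mnt/zcs_store_snap          # snapshot device is an example

    # When the backup completes: unmount, deactivate, and stop the session.
    umount /mnt/zcs_store_snap
    admsnap deactivate -s zcs_store_session
    admsnap stop -s zcs_store_session -o /dev/sdf1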

EMC Replication Manager
EMC Replication Manager manages EMC point-in-time replication technologies
through a centralized management console. Replication Manager coordinates the
entire data replication process, from discovery and configuration to the management
of multiple, application-consistent, disk-based replicas. Replication Manager is used
to safeguard and protect your business-critical applications, such as Oracle,
Microsoft Exchange Server, Microsoft SQL Server, and VMware-based virtual
machines.

In this solution, Replication Manager automates the creation of snapshots of ZCS
mailbox server volumes using SnapView technology.

EMC PowerPath/VE
EMC PowerPath/VE provides intelligent, high-performance path management with
path failover and load balancing optimized for EMC and selected third-party storage
systems. PowerPath/VE supports multiple paths between a vSphere host and an
external storage device. Having multiple paths enables the vSphere host to access a
storage device even if a specific path is unavailable. Multiple paths can also share
the I/O traffic to a storage device. PowerPath/VE is particularly beneficial in highly
available environments because it can prevent operational interruptions and
downtime. The PowerPath/VE path failover capability avoids host failure by
maintaining uninterrupted application support on the host in the event of a path
failure (if another path is available).

PowerPath/VE works with VMware ESX/ESXi as a multipath plug-in (MPP) that
provides path management to hosts. It is installed as a kernel module on the vSphere
host. It plugs into the vSphere I/O stack framework to provide the advanced
multipathing capabilities of PowerPath/VE, including dynamic load balancing and
automatic failover to the vSphere hosts.

For this solution, PowerPath/VE is installed on all ESXi hosts that house ZCS virtual
machines.

EMC Virtual Storage Integrator (VSI) plug-in for vSphere
EMC Virtual Storage Integrator (VSI) for VMware vSphere is a free plug-in for VMware
vSphere Client that provides a single management interface for managing EMC
storage within the vSphere environment. Features can be added and removed from
VSI independently, providing flexibility for customizing VSI user environments.
Features are managed by using the VSI Feature Manager. VSI provides a unified user
experience, allowing each of the features to be updated independently and new
features to be introduced rapidly in response to changing customer requirements.

VMware ZCS architecture
VMware ZCS is designed to provide an end-to-end mail solution that is scalable and
highly reliable. The messaging architecture is built with well-known open-system
technology and standards and is composed of a mail server application and a client
interface.

Core advantages
The architecture includes the following core advantages:


Open source integrations: Linux, Jetty, Postfix, MySQL, OpenLDAP
Uses industry standard open protocols: SMTP, LMTP, SOAP, XML, IMAP, POP²
Modern technology design: Java, JavaScript thin client, DHTML
Horizontal and vertical scalability:
Each mailbox server can be scaled horizontally by adding more data stores
Each mailbox server can be scaled vertically by adding more CPU and
memory resources to the virtual machines and by using advanced storage
array technologies such as thin provisioning

High availability support: For cluster integration to provide high availability,
ZCS can use either Linux clustering or VMware HA clustering³
Browser based client interface: Zimbra Web Client gives users easy access to
all the ZCS features
Administration console to manage accounts and servers

Zimbra packages
VMware ZCS includes the following application packages. For more information about
each package, visit http://www.zimbra.com.

Zimbra Core
The Zimbra Core package includes the libraries, utilities, monitoring tools, and basic
configuration files.

Zimbra LDAP
ZCS uses the OpenLDAP software, an open source LDAP directory server. User
authentication is provided through OpenLDAP. Each account on the Zimbra Store
server has a unique mailbox ID that is the primary point of reference to identify the
account. The OpenLDAP schema has been customized for ZCS.
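For illustration, the account-to-mailbox mapping held in OpenLDAP can be inspected from the command line on a ZCS server. A minimal sketch, run as the zimbra user (the account name is an example; verify the zmprov subcommands and attribute names against your ZCS release):

    # Directory attributes for an account, including the mailbox server that owns it.
    zmprov getAccount user1@example.com zimbraId zimbraMailHost

    # Mailbox information, including the numeric mailbox ID used on the store server.
    zmprov getMailboxInfo user1@example.com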

² This solution focuses on the performance of ZCS with the most popular ZCS web client protocol, SOAP.
³ For high availability, this solution uses VMware HA clustering.

Zimbra Store (Zimbra server)
The Zimbra Store includes the components for the mailbox server, including Jetty,
which is the servlet container the Zimbra software runs within. The Zimbra mailbox
server includes the following components:
Data storeThe data store is a MySQL database
Message storeThe message store is where all email messages and file
attachments reside
Index storeIndex and search technology is provided through Lucene; index
files are maintained for each mailbox
Zimbra MTA
Postfix is the open source mail transfer agent (MTA) that receives email via SMTP and
routes each message to the appropriate Zimbra mailbox server using Local Mail
Transfer Protocol (LMTP). The Zimbra MTA also includes the anti-virus and anti-spam
components.
Zimbra SNMP
Installing the Zimbra SNMP package is optional. If you choose to install Zimbra SNMP
for monitoring, run the package on every Zimbra server.
Zimbra Logger
Installing the Zimbra Logger package is optional. It can be installed on one mailbox
server. The Zimbra logger installs tools for syslog aggregation and reporting. If you do
not install the logger, the server statistics section of the administration console will
not display.
Zimbra Convertd
The Zimbra Convertd package is installed on the Zimbra Store server. Only one
Zimbra Convertd package needs to be present in the ZCS environment.
Zimbra Spell
Installing the Zimbra Spell package is optional. Aspell is the open source spell
checker used on the Zimbra Web Client. When Zimbra Spell is installed, the Zimbra
Apache package is also installed.
Zimbra Proxy
Installing the Zimbra Proxy is optional. Use of an IMAP/POP proxy server allows mail
retrieval for a domain to be split across multiple Zimbra servers on a per user basis.
The Zimbra Proxy package can be installed with the Zimbra LDAP, the Zimbra MTA,
the Zimbra mailbox server, or on its own server.
Zimbra Memcached
Memcached is a separate package from Zimbra Proxy and is automatically selected
when the Zimbra Proxy package is installed. One server must run Zimbra Memcached
when the proxy is in use. All installed Zimbra proxies can use a single memcached
server.
Zimbra Archiving
The Zimbra Archiving and Discovery feature is an optional feature for Zimbra Network
Edition. Archiving and Discovery offers the ability to store and search all messages
that were delivered to or sent by Zimbra. This package includes the cross-mailbox
search function, which can be used for both live and archive mailbox searches. Using
Archiving and Discovery can trigger additional mailbox license usage.

Zimbra mailbox server architecture
The Zimbra mailbox server is a dedicated server that manages all of the mailbox
content, including messages, contacts, calendar, briefcase files, and attachments.

Messages are received from the Zimbra MTA server and are then passed through any
filters that have been created. Messages are then indexed and deposited into the
correct mailbox.

In addition to content management, the Zimbra mailbox server has dedicated
volumes for backup and log files. Each Zimbra mailbox server in the system can see
only its own storage volumes. Zimbra mailbox servers cannot see, read, or write to
another server. In a ZCS single-server environment, all services are on one server, and
during installation the computer is configured to partition the disk to accommodate
each of the services.

In a ZCS multi-server environment, the LDAP and MTA services can be installed on
separate servers.

Disk layout
The mailbox server includes the following volumes:

Message Store (blob store): Mail message files are located in /opt/zimbra/store
Data Store: The MySQL database files are located in /opt/zimbra/db
Index Store: Index files are located in /opt/zimbra/index
Backup area: Full and incremental backups are located in /opt/zimbra/backup
Log files: Each component in ZCS has log files. Local logs are located in /opt/zimbra/log

Additional information about mailbox server design is provided in the Storage design
considerations for Zimbra mailbox servers section.

Solution architecture and design
Target ZCS mail user profile
The solution was validated with the ZCS mail user profile characteristics presented in
Table 1.

Table 1. Target ZCS user profile characteristics

User profile characteristic Value

Total users 10,000

Zimbra mailbox servers 2

Users per mailbox server 5,000

User mailbox size 500 MB

User workload message profile Heavy Enterprise

Message profile characteristics 21 received/hour/user, 7 sent/hour/user
(224 messages/user/8-hour day)
124 KB average message size
80% with 25 KB message body
20% with 20 KB message body and 500 KB
attachment

User type 90% SOAP⁴ users, 10% IMAP users

Concurrency 100%

Read/write ratio based on profile and workload type 40% reads/60% writes

Mail stores per server 1

Blob message store LUN size 3.5 TB per server

Architecture overview
To validate the performance and functionality of ZCS on EMC VNX series storage, we
configured two VMware vSphere ESXi servers (nodes) on a VMware vSphere
hypervisor platform and deployed multiple ZCS roles on each ESXi node. Two nodes
were configured for VMware high availability (HA) and VMware Distributed Resource
Scheduling (DRS) to achieve high availability and balanced performance. We
configured each ESXi host with sufficient resources to support an entire 10,000-user
ZCS environment. In the event of a vSphere cluster node failure, or during vSphere
cluster node maintenance, all virtual machines on the affected node were configured
to move automatically to the still-functioning node.

⁴ SOAP: Simple Object Access Protocol, an XML-based messaging protocol used for sending requests for Web
services. The Zimbra servers use SOAP for receiving and processing requests, which can come from Zimbra command
line tools or Zimbra user interfaces.

Architecture diagram
Figure 1 illustrates the reference architecture for the validated solution.

Figure 1. Reference architecture diagram for the solution as validated

Hardware components
Table 2 lists the hardware components used to validate the solution.
Table 2. Hardware components

Item Units Description


Storage platform 1 EMC VNX5700 (block), 36 GB system cache

Fabric switch 1 8 Gb/s FC switching module

vSphere host servers 2 4 sockets, 2.66 GHz Intel Xeon X7460 6-core
processors, 128 GB RAM

Host bus adapter (HBA) 4 8 Gb/s HBAs (2 per vSphere host)

Replication Manager server 1 4 × 2.99 GHz Intel Xeon quad-core processors, 32 GB RAM

vCenter server 1 4 × 2.99 GHz Intel Xeon quad-core processors, 32 GB RAM

Disks 37 32 for mailbox server content: 600 GB 10k rpm SAS (see Table 6)
5 for virtual machine OS: 2 TB 7.2k rpm NL-SAS

Software
Table 3 lists software components used in solution validation.

Table 3. Software components

Item Description
VMware ZCS Version 7.2 Network Edition (64-bit)

VNX5700 Operating Environment VNX Block software version 05.31.000.5.704


FAST Cache enabler
SnapView enabler

Multipathing software EMC PowerPath/VE 5.7

Hypervisor operating system VMware vSphere 5.0 (5.0.0, 623860)

vCenter Server Version 5.0 (5.0.0, 623373) on Windows 2008 Standard Edition R2 (64-bit)

Virtual machine operating system Red Hat Enterprise Linux Server 5.5 (64-bit)

VSI Plug-in for VMware vCenter Version 5.2

VSI Unified Storage Management Version 5.2

Replication Manager Server Version 5.4.1 on Windows 2008 Standard Edition R2 (64-bit)

Navisphere Admsnap Version 2.30

VMware vSphere design
As stated in Solution architecture and design, two VMware vSphere servers were
configured for VMware HA and VMware DRS to achieve high availability and balanced
performance. Each vSphere node in the cluster was configured with sufficient
resources to support an entire 10,000-user ZCS environment. In the event of a
vSphere cluster node failure, or during vSphere cluster node maintenance, all virtual
machines on the affected node were configured to move automatically to the still-
functioning node.

DRS affinity rules
Once all ZCS virtual machines were configured and operational, we distributed them
across two vSphere nodes and defined two DRS affinity rules to keep specific sets of
Zimbra virtual machines (each virtual machine had a different messaging role) on
different nodes. The VMware DRS load-balancing automation level was set to
Automatic with a Normal (default) threshold.

We defined "Virtual Machines to Hosts" (VM-Host)-type affinity rules. This rule type
was introduced in vSphere 4.1 for DRS clusters to augment the existing anti-affinity
rules, which are now known as VM-VM rules.

VM-Host affinity rules enable you to stipulate that a set of virtual machines either
should or must run on specific nodes within a cluster. Unlike a VM-VM rule, which
specifies anti-affinity among specific virtual machines, a VM-Host rule specifies an
affinity relationship between a set of virtual machines and a set of cluster nodes. A
VM-Host rule has either a Required (must) attribute or a Preferential (should)
attribute.

Mandatory rules apply to non-DRS operations even in a DRS cluster, such as manual
power-on operations, manual vMotion operations, and VMware HA host failover
events.

Because DRS honors VM-Host Preferential (should) rules during load balancing
operations and node maintenance operations, we chose VM-Host rules (instead of
VM-VM rules) to ensure automatic failback of virtual machines to the original node
following node maintenance.

We defined two VM-Host Preferential rules to stipulate that nodes Host 1 (R900-11)
and Host 2 (R900-12) should each host a specific Mailbox Server virtual machine,
MTA Server virtual machine, and LDAP Server virtual machine, as presented in
Table 4.

Table 4. DRS affinity rules defined for this solution

Rule: Keep virtual machines on Host 1 (R900-11)
Node: Host 1 (R900-11)
Should keep: DRS group "VMs on (R900-11)": Mailbox Server 1, MTA Server 1, LDAP Server 1

Rule: Keep virtual machines on Host 2 (R900-12)
Node: Host 2 (R900-12)
Should keep: DRS group "VMs on (R900-12)": Mailbox Server 2, MTA Server 2, LDAP Server 2

Figure 2 shows the DRS VM-Host affinity rules defined for this solution.

Figure 2. DRS affinity rules defined for this solution

Storage sizing and configuration
Sizing and configuring storage for use with enterprise email systems can be a
complicated process, driven by many variables and requirements, which vary from
organization to organization. Properly configured storage, combined with optimally
sized server and network infrastructures, can guarantee smooth enterprise email
operation.

Mailbox server building block design methodology
One of the methods that can be used to simplify the sizing and configuration of large
amounts of storage on EMC VNX series storage arrays for use with ZCS is to define a
unit of measure: a mailbox server building block.
A mailbox server building block represents the amount of storage and server
resources required to support a specific number of ZCS users. The amount of required
resources is derived from a specific user profile type, mailbox size, and disk
requirements. Using the building block approach simplifies the design and
implementation of ZCS.

Once the initial building block is designed, it can be easily reproduced to support the
required number of users in your enterprise. By using this approach, EMC customers
can now create their own building blocks that are based on their company's specific
email environment requirements. This approach is very helpful when future growth is
expected because it makes email environment expansion simple and straightforward.
EMC best practices involving the building block approach for storage design have
proven to be very successful in many customer implementations.

Building block design
Designing a building block that is appropriate for a specific customer's environment
involves three phases: collect the relevant requirements, design and build the
building block based on the requirements, and validate the design.

Phase 1: Collect the requirements


In Phase 1, an administrator identifies ZCS user requirements. To gather the
necessary information, answer the following questions. Accurate answers to these
questions are essential for properly sizing your ZCS servers and storage.
What is the total number of users in your environment?
What is the user concurrency?
What is the mailbox quota per user (MB)?
What is the total number of incoming and outgoing messages per day?
What is the average message size (KB)?
What percentage of messages will have attachments?
What is the average attachment size (KB)?
What is the average mailbox utilization percentage of quota?
What percentage of total users will be using Webmail vs. POP vs. IMAP vs.
Mobile vs. Outlook?
What is the message delivery peak period in minutes?
Will you be using the Zimbra AS/AV on Zimbra MTAs?

Zimbra professional services can help with sizing and will use the answers from these
questions as input into a Zimbra deployment worksheet. Figure 27 in
Appendix B: Zimbra deployment worksheet shows an example of the sizing tab in a
Zimbra deployment worksheet.

Note: The ZCS deployment worksheet is used only by Zimbra professional services.

Phase 2: Design the mailbox server building block


Phase 2 involves designing a mailbox server building block that satisfies the
requirements collected in Phase 1. The building block includes storage and server
resources.
Prior to designing your building block, use the following resources to ensure that you
have a thorough understanding of the relevant concepts and building block
components:
EMC VNX series storage and VMware vSphere design best practices
ZCS best practices (ZCS wiki)
Recommendations provided by the ZCS deployment worksheet

An example calculation of capacity requirements for ZCS is presented in Appendix A:


Example ZCS mailbox server storage space calculation.

Phase 3: Deploy the building block and validate its design


After the mailbox server building block is deployed, the block's design can be
validated with the Zimbra Soapgen testing tool. Soapgen was developed and is
maintained by the Zimbra performance lab. It is a comprehensive and flexible mail
server load generator designed to provide functionality similar to Microsoft Exchange
Loadgen. Soapgen can test many mail protocols and mailbox profiles. How Soapgen
was used to validate the mailbox server building block designed for this solution is
explained in the Performance validation and test results section.

Note: The Soapgen tool is available only from Zimbra professional services.

Storage design considerations for Zimbra mailbox servers
Storage design directly affects the performance of a ZCS implementation. You must
consider current and projected workloads when designing storage for your ZCS
deployment.

EMC VNX series storage systems can provide the necessary performance for your ZCS
mailbox servers. VNX storage offers advanced, performance-oriented enterprise array
features such as Fully Automated Storage Tiering (FAST), FAST Cache, thin
provisioning, hardware-based snapshots and clones, and many others.

ZCS I/O characteristics
Storage must be designed to work optimally with the ZCS application. To do this, ZCS
I/O characteristics, I/O patterns, and read/write ratios must be identified. Based on
testing performed to validate this solution (test results are presented in the
Performance validation and test results section), we identified the types and sizes of
I/O generated by ZCS. We observed that ZCS mailbox servers generate primarily
small, 4 KB to 32 KB random I/Os to the database. This observation enabled us to
size the test environment accurately for optimal performance and the best user
experience.

ZCS mailbox servers are read/write intensive. Even with appropriately configured RAM
on the relevant virtual machines, the message store generates a large amount of disk
activity. For the ZCS mailbox server, the majority of I/O activity is generated by these
three sources, in decreasing order of load generated:
Lucene search index managed by the Java mailbox process
MySQL instance that runs on each message store server and stores metadata
(folders, tags, flags, and so on)
Blob message store managed by the Java mailbox process

MySQL, the Lucene index, and blob stores generate random I/O and, therefore,
should be serviced by a fast disk subsystem. Blob message stores generate less I/O
than MySQL or the Lucene index but require more capacity, thus blob stores can be
deployed on slower disks in some cases.

Note: The target user profile and other customer-specific requirements directly affect
storage design recommendations for ZCS. Work closely with EMC and VMware
presales support to receive appropriate guidance.
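To confirm these characteristics in your own environment, the request size and read/write mix on each mailbox server volume can be watched with standard Linux tooling. A minimal sketch (iostat is part of the sysstat package; the device names are examples that correspond to the volumes listed later in Table 7):

    # Extended per-device statistics every 60 seconds, in kilobytes.
    # r/s and w/s give the read/write mix; avgrq-sz gives the average request
    # size in 512-byte sectors.
    iostat -xk sdd sde sdf 60

    # Map devices back to the Zimbra mount points when reading the output.
    df -h /opt/zimbra/db /opt/zimbra/index /opt/zimbra/store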

ZCS mailbox server partitions
In a multi-server ZCS deployment, eight major mailbox server partitions are typically
planned for use. However, the use of VNX-specific enterprise array features can
reduce the number of required partitions. Table 5 provides information about each
ZCS mailbox server partition and its function.

Table 5. ZCS mailbox server partitions

Mailbox server partition Function


/opt/zimbra Root partition for Zimbra application

/opt/zimbra/db MySQL database files (holds all metadata)

/opt/zimbra/store Message store (messages, attachments, etc.)

/opt/zimbra/index Message index partition for fast user searches

/opt/zimbra/redolog Zimbra redo log files (server transaction logs)

/opt/zimbra/logs General Zimbra log files

/opt/zimbra/backup Holds all backup data for the Zimbra mailbox server

/opt/zimbra/store02 Zimbra Hierarchical Storage Management (HSM)


Note: It is necessary to deploy a secondary store volume
only when the Zimbra Hierarchical Storage Management
(HSM) feature is used. HSM allows you to configure
storage volumes for older messages. To manage your
email storage resources, you can implement a different
HSM policy for each message server. Messages and
attachments are moved from a primary volume to the
current secondary volume based on the age of the
message. Archived messages are still accessible by the
user.
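If the HSM secondary store is used, the mounted volume is registered with the mailbox server before any HSM policy takes effect. A minimal sketch, run as the zimbra user (the volume name is an example; verify the zmvolume options against your ZCS release):

    # Register /opt/zimbra/store02 as a secondary message volume for HSM.
    zmvolume -a -n store02 -t secondaryMessage -p /opt/zimbra/store02

    # List the configured volumes to confirm the primary and secondary stores.
    zmvolume -l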

Mount points
For each LUN created on the VNX storage, we configured a mount point and then
created a filesystem on the mounted partition. Each LUN configured for a mailbox
server must provide enough performance and capacity based on user requirements
and recommendations from the ZCS deployment worksheet.

For this solution we configured five LUNs and mounted them to the first five partitions
listed in Table 5.

Note: If native Zimbra backups (zmbackup) are used, you need a sixth LUN and a
mount point (/opt/zimbra/backup). If Zimbra HSM is used, you need to create a
seventh LUN and a mount point for the secondary store (/opt/zimbra/store02).
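As an illustration of how one of these LUNs is prepared inside the guest, the sketch below aligns a partition, creates the filesystem, and mounts it persistently. It assumes the message-store LUN appears as /dev/sdf and uses ext3, which matches the RHEL 5.5 guests in this solution; the device name, filesystem type, mount options, and parted syntax are examples (older parted releases may need the unit set explicitly), and the alignment guidance in the Configuration guidelines and best practices section takes precedence.

    # Illustrative only; device name and options are examples.
    # Create a single partition aligned on a 1 MiB (2048-sector) boundary.
    parted -s /dev/sdf mklabel msdos
    parted -s /dev/sdf mkpart primary 2048s 100%

    # Create and label the filesystem.
    mkfs.ext3 -L zimbra_store /dev/sdf1

    # Create the mount point, make the mount persistent, and mount it.
    mkdir -p /opt/zimbra/store
    echo "/dev/sdf1  /opt/zimbra/store  ext3  defaults,noatime  0 0" >> /etc/fstab
    mount /opt/zimbra/store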

Message stores
A Zimbra Message Store holds all email messages for a single mailbox server,
including the message bodies and any file attachments. Messages are stored in MIME
format. The message store is located on each Zimbra server under /opt/zimbra/store.
Each mailbox has a dedicated directory named after its internal Zimbra mailbox ID.
Mailbox IDs are unique for each server; IDs are not unique system-wide.

Zimbra is designed with Single Copy Storage (known as Single Instance Storage
(SIS) in Microsoft Exchange versions prior to Exchange 2010) that allows messages
with multiple recipients to be stored only once in the filesystem. On Unix systems, the
mailbox directory for each user contains a hard link to the actual file.

For this solution, we deployed one primary message store for each mailbox server.
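Because single-copy storage is implemented with hard links, its effect can be observed directly on the message-store filesystem. A minimal sketch (the store path is the default described above):

    # Blob files with more than one hard link are messages shared by multiple
    # recipient mailboxes on this server.
    find /opt/zimbra/store -type f -links +1 | wc -l

    # Show the link count (second column of ls -l) for a few shared blobs.
    find /opt/zimbra/store -type f -links +1 | head -5 | xargs -r ls -l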

Redo logs
Each Zimbra server generates redo logs that contain every transaction processed by
that server. If an unexpected shutdown occurs on the server, the redo logs are used
for the following:
To ensure that no uncommitted transactions remain, the server rereads the
redo logs at startup
During restore, to recover data written since the last full backup in the event of
a server failure

When the current redo log file size reaches 100 MB, it rolls over to an archive
directory. At that point, the server starts a new redo log. All uncommitted transactions
from the previous redo log are preserved. In the case of a crash, when the server
restarts, the current redo log and the archived logs are read to reapply any
uncommitted transactions. When an incremental backup is run, the redo logs are
moved from the archive to the backup directory.

For this solution, we placed redo logs on 10k rpm SAS disks with RAID 1/0 protection.
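The rollover behavior is easy to observe on the redo log volume itself. A minimal sketch (the redo.log file name and the archive subdirectory are the typical ZCS defaults, so treat the exact paths as assumptions to confirm on your servers):

    # Current redo log (rolls over to the archive directory at around 100 MB).
    ls -lh /opt/zimbra/redolog/redo.log

    # Archived redo logs waiting to be moved by the next incremental backup.
    ls -lh /opt/zimbra/redolog/archive/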

ZCS mailbox server building block specifications
Having identified the requirements for storage (I/O, capacity, and bandwidth) and
servers (CPU, memory), we designed a ZCS mailbox server building block with the
specifications shown in Table 6. The building block meets the requirements
presented in Table 1 for 5,000 heavy users with a 500 MB mailbox size. Each mailbox
server ran in a virtual machine.

Table 6. Specifications of the mailbox server building block

Number of users per mailbox server: 5,000
CPUs per virtual machine: 4
Memory per virtual machine: 16 GB
Disk requirements per mailbox server virtual machine (based on a 500 MB mailbox size and heavy user profile): 16 disks
 10 x 600 GB 10k SAS in RAID 5 (4+1): Message store
 2 x 600 GB 10k SAS in RAID 1/0 (1+1): Index
 2 x 600 GB 10k SAS in RAID 1/0 (1+1): Database
 2 x 600 GB 10k SAS in RAID 1/0 (1+1): Redo logs, Zimbra root

All VMFS data stores housing the Red Hat Enterprise Linux Server 5.5 operating system for the Zimbra virtual machines were provisioned from a RAID group of five 2 TB 7.2k rpm NL-SAS drives with RAID 5 (4+1) protection.

Scaling to 10,000 users
To scale the configuration to 10,000 users (two building blocks/mailbox servers), the two SAS disks housing the Zimbra redo logs and Zimbra root partitions can be shared between the two mailbox servers (there is sufficient capacity for this). The second building block therefore requires two fewer disks than the first. This configuration was tested and caused no performance degradation (see the Performance validation and test results section).

Mailbox server disk layout
We configured six LUNs on the VNX array and five mount points on each mailbox server. We did not configure a backup volume because we used VNX SnapView snapshots to back up ZCS content.

Figure 3 shows the LUN configuration on the VNX5700 storage array for mailbox
server storage as viewed with EMC Unisphere:

Figure 3. LUN configuration for mailbox server storage viewed with Unisphere

Table 7 shows the devices, mount points, and disks configured for one ZCS mailbox
server in this solution. Volume sizes were based on the requirements presented in
Table 1 in conjunction with recommendations generated by the Zimbra deployment
worksheet.

Table 7. Linux volume configuration and mount points for one mailbox server

Filesystem Mounted on LUN Size


/dev/sdb1 /opt/zimbra 40 GB

/dev/sdd1 /opt/zimbra/db 50 GB

/dev/sde1 /opt/zimbra/index 500 GB

/dev/sdc1 /opt/zimbra/redolog 90 GB

/dev/sdf1 /opt/zimbra/store 3.5 TB

Figure 4 shows the same device, mount point, and disk configuration as viewed with
a CLI command issued on the mailbox server virtual machine running Red Hat Linux
5.5.

Figure 4. Mailbox server filesystem details

Table 8 presents the configuration profile of ZCS mailbox server virtual machines
deployed on vSphere.

Table 8. Configuration profile of ZCS mailbox server virtual machines

Role: ZCS mailbox server
vCPUs: 4
Memory: 16 GB (reserved)
Disks (VMDKs), disk type, and vSCSI controller:
 100 GB: OS (200 GB LUN)   VMFS   0:0 (LSI Logic SAS)
 40 GB: Zimbra root/home   RDM/P  0:1 (LSI Logic SAS)
 50 GB: MySQL DB           RDM/P  1:0 (LSI Logic SAS)
 500 GB: Index             RDM/P  1:2 (LSI Logic SAS)
 90 GB: Redo logs          RDM/P  2:0 (LSI Logic SAS)
 3.5 TB: Message store     RDM/P  3:0 (LSI Logic SAS)

Storage provisioning with the VSI plug-in
VSI automates and simplifies the task of provisioning storage for VMware virtual machines. VSI is a client-side plug-in that is installed with the VMware vSphere Client. Once VSI has been installed and enabled, a new icon is displayed in the vSphere Client under Solutions and Applications. In the VNX section in vCenter, enter the IP addresses of the SP A and SP B controllers and your login credentials.

Figure 5. VSI plug-in for VMware vSphere visible in vCenter under Solutions and
Applications

Figure 6. Enabled EMC VSI visible in Plug-in Manager

Storage provisioning process with the VSI plug-in
1. Using VSI, we right-clicked the VMware HA cluster to be configured and selected EMC > Unified Storage > Provision Storage.

Figure 7. EMC VSI: Provision new storage

2. We selected the Disk/LUN option so that we could provision storage on the VNX storage array deployed for this solution.

Figure 8. EMC VSI: Select Disk/LUN

3. We then selected one of the storage pools previously configured for this
solution in order to create a LUN.

Figure 9. EMC VSI: Select storage pool

4. For the data store, we selected the filesystem type VMFS-5.

Figure 10. EMC VSI: Select filesystem type

5. We then selected a volume type of RDM and specified the LUN properties.

Figure 11. EMC VSI: Specify RDM volume type and LUN properties

6. Finally, we clicked Finish and VSI created the requested LUN on the VNX array.

Notes
 If you choose VMFS Datastore instead of RDM Volume, the data store name you specify becomes the user-friendly name of the LUN when viewed with Unisphere.
 VSI is aware of the EMC FAST VP auto-tiering policy features. These features can be reviewed by clicking the Advanced button.
 When provisioning storage with VSI, storage access policies can be set on an individual-user basis.

For more information about the VSI plug-in for VMware vCenter, visit the EMC
Community Network.

EMC VSI Storage Viewer plug-in
As shown in Figure 12, the EMC Storage Viewer plug-in provides additional visibility into VNX storage from vCenter. By selecting the EMC VSI tab and then selecting a data store or an RDM LUN, you can view details about that volume. In Figure 12, Hard Disk 6 of mailbox server ZMBX04 is selected to display details about the VNX storage pool in which the LUN resides.

Figure 12. EMC VSI Storage Viewer Plug-in with VMware vCenter

Configuration guidelines and best practices
This section presents the key guidelines and best practices we discovered and
applied during the validation of this solution.

VNX storage configuration recommendations
Follow these recommendations when configuring VNX storage in the context of this or a similar solution:
 During the configuration of storage pools, RAID groups, and LUNs for Zimbra mailbox servers, consider I/O patterns when defining partitions. Separate random workloads from sequential workloads on different disks.
 Create dedicated storage pools or RAID groups for ZCS. If your array supports other applications, use different storage pools or RAID groups for those applications.
 Size storage not only for capacity but also for performance.
 To configure VNX for vSphere clustering, use Unisphere to create a single storage group containing all ESXi host cluster members.

Figure 13. EMC Unisphere: Storage group configuration for vSphere hosts

vSphere ESXi host configuration recommendations
Follow these recommendations when configuring vSphere ESXi hosts for this or a similar solution:
 Install EMC PowerPath/VE to improve performance and maximize host-to-storage I/O throughput.
 Configure each ESXi host with at least two HBAs connected to different fabrics for best performance and high availability.
 Make the number of CPU cores equal to or greater than the number of virtual CPUs you plan to run concurrently within the set of virtual machines on the ESXi host.
 Make physical memory equal to the sum of the memory used by the individual virtual machines plus the additional memory required for the ESXi host.
 Plan for host failures within a vSphere cluster so that a single host's resources can sustain the environment with the minimum required performance.
 Install the EMC VSI plug-in, which provides the vSphere administrator with a view of, and access to, all EMC storage.

Guest virtual machine configuration recommendations
Follow these recommendations when configuring guest virtual machines for this or a similar solution:
 Install the latest version of VMware Tools on guest virtual machines to optimize performance and enhance the user experience.
 Use multiple SCSI controllers when creating VMDKs for ZCS mailbox server virtual machines. Use separate SCSI bus IDs to spread the I/O load (sequential versus random) across different SCSI buses.
 Reserve memory for each ZCS mailbox server virtual machine.
 Allocate enough space for the swap file. ESXi creates a swap file for each virtual machine that is equal in size to the difference between the virtual machine's configured memory size and its memory reservation, as illustrated below. The default is to place the swap file on the same data store as the guest operating system.
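
As a worked example of that swap-file arithmetic, the sketch below uses the 16 GB mailbox server virtual machines from this solution; the variable names are illustrative and a full memory reservation is assumed, as recommended for ZCS mailbox servers:

# Swap file size = configured memory - memory reservation (values in MB)
CONFIGURED_MB=16384
RESERVED_MB=16384                               # full reservation assumed
SWAP_MB=$((CONFIGURED_MB - RESERVED_MB))
echo "Expected swap file size: ${SWAP_MB} MB"   # 0 MB when memory is fully reserved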

Linux filesystem alignment

It is necessary to align the Linux partitions used for ZCS mailbox servers. The fdisk command can be used to create a single, aligned partition on a Linux LUN (for example, sdb or sdc) so that all of the LUN's available capacity is utilized.

We used the following procedure to align the Linux filesystem. In this example, the partition is created on /dev/nativedevicename; a quick verification step follows the listing:
fdisk /dev/nativedevicename # sdb and sdc
n        # New partition
p        # Primary
1        # Partition 1
<Enter>  # 1st cylinder = 1
<Enter>  # Default for last cylinder
x        # Expert mode
b        # Starting block
1        # Partition 1
128      # Stripe element = 128
w        # Write
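
To confirm that the new partition actually starts at sector 128 (a 64 KB boundary with 512-byte sectors), the partition table can be listed in sector units; the device name here is just the example used above:

# Verify the starting sector of the aligned partition
fdisk -lu /dev/nativedevicename
# The Start column for partition 1 should read 128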

Linux filesystem alignment on very large LUNs
To support 5,000 users, each with a 500 MB mailbox, we configured a 3.5 TB
message store volume. To create an aligned partition larger than 2 TB, use the GUID
Partition Table (GPT) drive partitioning scheme. GPT provides a more flexible
mechanism for partitioning drives than the older Master Boot Record (MBR)
partitioning scheme.

By default, a GPT partition is misaligned by 34 blocks. In Linux, we used the parted utility to create and align a GPT partition. The following example shows how to create a partition larger than 2 TB. In this example, the partition is created on /dev/sdf. The command aligns a 2.35 TB partition to a 1 MB starting offset.
# parted /dev/sdf
GNU Parted 1.8.1
Using /dev/sdf
(parted) mklabel gpt
(parted) p
Disk geometry for /dev/sdf: 0.000-2461696.000 megabytes
Disk label type: gpt
Minor Start End Filesystem Name Flags
(parted) mkpart primary 1 2461696
(parted) p
Disk geometry for /dev/sdf: 0.000-2461696.000 megabytes
Disk label type: gpt
Minor Start End Filesystem Name Flags
1 1.000 2461695.983
(parted) q
# mkfs.ext3 /dev/sdf1 # Use mkfs to format the file system

Filesystem formatting
We formatted the filesystems with the following mke2fs parameters (the options are annotated below):
mke2fs -j -L <label_name> -O dir_index -m 2 -i 10240 -J size=400 -b 4096 -R stride=16
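
The annotated form below describes what each option does; the annotations assume an ext3 target, and the label and device names are placeholders used for illustration only:

# mke2fs options used above, spelled out:
#   -j              create an ext3 journal
#   -L <label_name> set the volume label (placeholder)
#   -O dir_index    enable hashed b-tree directory indexes for large directories
#   -m 2            reserve 2% of blocks for root (the default is 5%)
#   -i 10240        allocate one inode per 10,240 bytes of capacity
#   -J size=400     create a 400 MB journal
#   -b 4096         use a 4 KB block size
#   -R stride=16    RAID stride of 16 blocks (64 KB stripe element / 4 KB block size)
mke2fs -j -L zimbra_store -O dir_index -m 2 -i 10240 -J size=400 -b 4096 -R stride=16 /dev/sdf1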

Protection (backup and recovery) configuration
The native ZCS full backup process can be lengthy and I/O-intensive, and it can degrade performance. To simplify protection processes and keep maintenance windows short, this solution protects ZCS mailbox servers with EMC Replication Manager and EMC VNX SnapView snapshots.

The solution uses VNX SnapView snapshot technology to create the Replication
Manager replica. VNX SnapView can create or remove a snapshot in seconds,
regardless of the LUN size or activity, because it is a point-in-time copy.

Compared with the native ZCS backup process, the use of Replication Manager with
SnapView significantly reduced the time it took to back up all ZCS mailbox servers.

Configuring a VNX SnapView snapshot involves allocating a reserved LUN pool (RLP) with the proper number and size of LUNs (also known as a snapshot cache). For this solution, the reserved LUN pool consisted of fifty 20 GB LUNs in one RAID 5 (4+1) group. Note that the actual size of the RLP may differ, depending on the application change rate and recovery point objectives (RPOs).

Preparing ZCS virtual machines for Replication Manager/SnapView snapshots


Taking snapshots of ZCS storage requires the following EMC software to be installed
on all ZCS virtual machines:
EMC Replication Manager Agent
EMC Solutions Enabler
EMC Navisphere CLI
EMC Navisphere Admsnap

Once this software is installed, Replication Manager discovers the ZCS virtual
machines, as shown in Figure 14.

Figure 14. EMC Replication Manager: EMC software installed on ZCS mailbox server virtual
machine

Special consideration for Linux filesystems
To reduce snapshot restore time, it might be necessary to adjust one of two Linux filesystem configuration parameters. Linux sets a default value of 30 for the Maximum Mount Count parameter. When this value is reached, Linux performs a filesystem check on the disk, which can cause a significant mount delay on a large filesystem. For this solution, the message store volume was 3.5 TB. There are two ways to avoid this filesystem check on mount: the first is to reset the Mount Count value to 1; the second is to increase the Maximum Mount Count value. For this solution, we used the tune2fs utility to reset the Mount Count value to 1.

To reset the Mount Count to 1, run this command:
tune2fs -C 1 /dev/sdf1

To set the Maximum Mount Count (to 30, for example), run this command:
tune2fs -c 30 /dev/sdf1

Note the difference in the uppercase C versus the lowercase c in each of the two
commands.

After the values are reset, you can use the CLI to verify the new values.
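
For example, the current and maximum mount counts can be checked with tune2fs -l; the device shown is the message store partition used in this solution:

# Verify the mount-count settings after changing them
tune2fs -l /dev/sdf1 | grep -i 'mount count'
# Expect to see both the "Mount count" and "Maximum mount count" values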

Note: Zimbra administrators can monitor snapshot restore time and, if necessary,
change these settings as required. Alternatively, a custom script can be created and
then run during Replication Manager backup jobs to monitor and adjust settings
automatically.

Backup process with Replication Manager
The Replication Manager Linux File System Agent replicates databases and software applications that store their data in supported Linux filesystems. The filesystem remains mounted for replication. The agent flushes the filesystem I/O buffer immediately before creating a replica to ensure that all changes have been synchronized to disk. Pre- and post-replication scripts can be implemented to support other applications and databases besides those specifically supported by Replication Manager. The scripts can quiesce data to ensure consistency before a split by:
 Shutting down and starting up the application
 Putting the database or application into (and out of) an online backup mode, if such a mode is available

Note: To keep your ZCS environment fully consistent and to simplify the recovery process, perform mailbox server backups together with an LDAP server backup. Doing so avoids out-of-sync conditions in which users' properties are changed while their mailboxes are being backed up.

Figure 15 shows an example of the Replication Manager application set to which we
added volumes on the mailbox server housing the Zimbra files. A similar application
set should be created for the Zimbra LDAP server filesystem.

Figure 15. Properties of application set containing ZCS application LUNs

As shown in Figure 16, the Use Consistent Split option was selected to take
advantage of VNX consistent-split technology. Pre- and post-replication scripts were
implemented to ensure the consistency of ZCS data.

Figure 16. Use Consistent Split option selected for replication

Recovery process
The Replication Manager agent automatically unmounts the specified Zimbra filesystems before it starts a replica restore and automatically remounts them after the restore completes. For the automatic unmount to succeed, Zimbra services must be stopped, either manually or through an application callout script.

Application callout scripts allow you to add customized actions to Replication Manager at several points throughout the replication, mounting, and restore processes. Replication Manager calls these executable scripts based on the names of the scripts and their locations on the Replication Manager host.

The callout scripts must be located in the same directory as irccd on the host. The default location in the Linux environment is /opt/emc/rm/client/bin/.

The naming convention for callout scripts is as follows:

IR_CALLOUT_<application_set_name>_<job_name>_<n>

where:
 <application_set_name> is the name of the application set that contains the job that runs the script.
 <job_name> is the name of the job that runs the script within the application set.
 <n> is a number that determines when the script runs, as shown in Table 9.

Table 9. Replication Manager recovery process: Application callout scripts

Callout script number The script is called


100 At the beginning of the process

110 At the beginning of failover process (for Celerra iSCSI or VNXe iSCSI replica
promotion)

200 Before checking whether target devices are in the correct state

300 Before the application recovery process starts; the 300 callout is valid only for
mount operations in which some recovery occurs before filesystems are made
visible

400 After checking the application state to verify application recovery is in progress

500 After storage is recovered or mounted

510 After the failover process is complete (for Celerra replica promotion)

550 After the network files have been copied but before the database is recovered; use the 550 callout to make changes to the Oracle initialization file before the application starts

600 After application recovery is complete

In this solution, the following callout scripts were used (a sketch of the stop-services script is shown below):
 IR_CALLOUT_ZMBX01-Zimbra_ZMBX01_Snapshot_100 (to stop Zimbra services)
 IR_CALLOUT_ZMBX01-Zimbra_ZMBX01_Snapshot_600 (to start Zimbra services)
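
The following is a minimal sketch of what the stop-services callout could contain; it assumes the standard Zimbra zmcontrol utility and the default zimbra service account, and it is not the exact script used in this validation. The matching _600 script would run "zmcontrol start" instead:

#!/bin/sh
# IR_CALLOUT_ZMBX01-Zimbra_ZMBX01_Snapshot_100
# Stop Zimbra services so that the filesystems can be unmounted cleanly before restore.
su - zimbra -c "zmcontrol stop"
exit $?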

Figure 17 shows the Replication Manager GUI used to restore a mailbox server
replica.

Figure 17. Replication Manager: Object selection for restore

Performance results for ZCS backup and restore with Replication Manager and VNX
SnapView snapshots, and recommendations for calculating necessary storage space
for snapshots, are presented in the Performance validation and test results section.

Performance validation and test results
This section presents the methods used to validate this solution. The test results
themselves are presented in subsequent sections.

The ZCS test environment deployed in an EMC lab mimicked a typical 10,000-user enterprise email configuration with heavy user profiles and large mailbox sizes. This environment enabled us to observe how ZCS performs under extreme enterprise loads.

Disclaimer
Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated.

All performance data contained in this report was obtained in a rigorously controlled
environment. Results obtained in other operating environments may vary
significantly.

EMC Corporation does not warrant or represent that a user can or will achieve similar
performance expressed in transactions per minute.

Testing methods
To validate this solution, we used the Soapgen load-generation testing utility. Soapgen was developed and is maintained by the Zimbra performance lab. It is a comprehensive and flexible mail server load generator designed to provide functionality similar to Microsoft Exchange Loadgen. Soapgen can test many mail protocols and mailbox profiles.

Note: The Soapgen tool is available only from Zimbra professional services.

Zimbra Soapgen test tool


The Zimbra Soapgen utility enables you to test all server functions in a ZCS configuration and has the following features:
 A parser to interpret specific tests in XML
 A test task scheduler to schedule tasks for different test accounts; all tasks are submitted to a queue and wait to be picked for execution at the scheduled time
 Test tasks that simulate user interaction with Zimbra through different protocols (SOAP, HTML, IMAP, POP3, CalDAV, and BlackBerry synchronization)

We used two client virtual machines configured with Soapgen to generate a load of
5,000 heavy enterprise users against each of two mailbox servers (cumulatively,
10,000 users). We used a workload profile that is typical of an enterprise user. This
profile is presented in Table 10.

Table 10. Target ZCS user profile characteristics

User profile characteristic Value

Total users 10,000

Zimbra mailbox servers 2

Users per mailbox server 5,000

User mailbox size 500 MB

User workload message profile Heavy enterprise

Message profile characteristics:
 21 received/hour/user, 7 sent/hour/user (224 messages/user per 8-hour day)
 124 KB average message size
 80% with a 25 KB message body
 20% with a 20 KB message body and a 500 KB attachment

User type 90% SOAP users, 10% IMAP users

Concurrency 100%

Read/write ratio (based on profile and workload type) 40% reads / 60% writes

Mail stores per server 1

Blob message store LUN size 3.5 TB per mailbox server

Test scenarios
We conducted a series of five tests to validate the performance and scalability of the ZCS mailbox server building block, the benefits of using VNX FAST Cache, the performance of NL-SAS disks with ZCS data, and protection (backup and restore) for ZCS data using Replication Manager with VNX SnapView snapshots:
 Test 1: ZCS mailbox server building block performance
 Test 2: ZCS mailbox server building block scalability
 Test 3: Advanced protection for ZCS data using Replication Manager with VNX SnapView snapshots
 Test 4: Benefits of using VNX FAST Cache with ZCS
 Test 5: Performance of NL-SAS disks with ZCS data

Key performance indicators
For tests 1 through 4, we followed the relevant VMware recommendations for establishing performance targets for a well-performing ZCS environment that provides an optimal user experience. Following each test run, the following key performance indicators were evaluated against target values for each mailbox server.

Table 11. Key performance indicators for tests 1 through 4

Key performance indicator Target value


ZCS mailbox server CPU utilization 45% to 55% average, 85% maximum

Send mail latency (user experience) Less than 1,000 ms

Disk latency Less than 20 ms

Disk utilization Less than 65%

Disk throughput (KB/s) Higher throughput is better

Disk IOPS Higher number of IOPS is better

LMTP delivery rate Higher number of injected messages per second is better

Tests 1 through 4 examined messages sent/received, moves and deletes, and all
other functions performed by an enterprise email user on a regular basis. All tests ran
successfully. The Soapgen client performed consistently against the ZCS servers
without causing any corruption to any of the ZCS components.

Test 1 results: ZCS mailbox server building block performance
This test involved generating a load equivalent to 5,000 concurrent, heavy users on
one mailbox server to evaluate performance and user experience. Another goal for
this test was to characterize ZCS I/O types for the purpose of developing storage
design guidelines and best practices for deploying ZCS on VNX storage.

The VNX array I/O analysis showed that 90 percent of all I/O generated by ZCS was
small, 4 KB random I/O. Figure 18 shows a histogram of I/O types generated on the
VNX5700 array during two hours of Soapgen peak client load.

Figure 18. I/O types generated on the VNX5700 array during two hours of Soapgen client
load

The results of Test 1 demonstrated that the building block we designed provided
solid performance and a significant amount of headroom for additional load.

Table 12 presents the performance results for one building block (one ZCS mailbox server virtual machine) with a heavy workload of more than 200 messages per user per day. This building block was deployed using 10k SAS disks for all Zimbra volumes. Test results show excellent VNX storage performance with balanced distribution of the user load across ZCS volumes. Very low disk utilization with excellent throughput was also observed during this test.

The LMTP delivery rate was approximately 11.84 messages per second received (injected) and 28.41 messages per second delivered because of multiple recipients. This implied heavy MySQL writes.

Table 12. ZCS mailbox server building block performance details

Validation parameter                 Test results for 5,000 users per server
ZCS mailbox server CPU utilization   43%
Send mail latency                    184.89 ms
LMTP delivery rate                   Average received 11.84 messages/sec; average delivered 28.41 messages/sec

ZCS volume                                     Disk utilization   Disk throughput   Disk IOPS   Avg. disk latency (ms)
Zimbra root volume (/opt/zimbra)               0.21%              923 KB/sec        14          3.78
Zimbra redo logs volume (/opt/zimbra/redolog)  10%                3,890 KB/sec      172         0.41
MySQL DB volume (/opt/zimbra/db)               7%                 4,135 KB/sec      129         1.66
Index volume (/opt/zimbra/index)               8%                 798 KB/sec        39          3.25
Message store volume (/opt/zimbra/store)       41%                6,507 KB/sec      202         4.58
Total                                          n/a                16,253 KB/sec     556         n/a

Figure 19 shows throughput and utilization details for each Zimbra volume. VNX
storage easily handled ZCS application I/O and produced 16,254 KB/s throughput
with 556 IOPS across all disks supporting the 5000-user ZCS mailbox server building
block.

Figure 19. Throughput and utilization details for ZCS volumes in 5,000-user mailbox
server building block

Figure 20 shows the number of disk IOPS for each ZCS volume in the building block,
with a total of 556 IOPS across all volumes.

Figure 20. Number of disk IOPS for each Zimbra volume in 5,000-user mailbox server
building block

Test 2 results: ZCS mailbox server building block scalability
For this test, we scaled the environment to two building blocks, supporting 10,000 concurrent, heavy users on two mailbox servers, to evaluate performance and user experience.

Test results demonstrated excellent performance with only a slight increase in average send mail latency, from approximately 185 ms to 295 ms, which was still significantly below the 1,000 ms maximum target.

Table 13. Performance results for two building blocks (10,000 users): Server utilization and latencies

Validation parameter                 Mailbox server 1                        Mailbox server 2
ZCS mailbox server CPU utilization   45%                                     43%
Send mail latency                    296.06 ms                               294.44 ms
LMTP delivery rate                   Average received 10.83 messages/sec;    Average received 10.26 messages/sec;
                                     average delivered 27.82 messages/sec    average delivered 25.30 messages/sec

Table 14 presents details of disk throughput and IOPS achieved during the validation
of two ZCS mailbox server building blocks. The total throughput of 36,523 KB/s was
achieved with 1,445 IOPS across all ZCS volumes and disks.

Table 14. Performance results for two building blocks (10,000 users): Disk throughput and IOPS

ZCS volume                                     Disk throughput (KB/s)   Disk IOPS   Avg. disk latency (ms)
Zimbra root volume (/opt/zimbra)               1,944                    32          4.1
Zimbra redo logs volume (/opt/zimbra/redolog)  8,246                    406         0.78
MySQL DB volume (/opt/zimbra/db)               8,878                    188         2.1
Index volume (/opt/zimbra/index)               4,030                    84          4.2
Message store volume (/opt/zimbra/store)       13,425                   255         5.1
Total                                          36,523                   1,445       n/a

Figure 21 and Figure 22 provide graphical representations of disk throughput,
utilization, and IOPS for two 5,000-user mailbox server building blocks (10,000
users).

Figure 21. Disk throughput and utilization for two building blocks (10,000 users)

Figure 22. Test results: Disk IOPS for two building blocks (10,000 users)

Test 3 results: Advanced protection for ZCS data using Replication
Manager with VNX SnapView snapshots
The goals of this test were as follows:
Successfully back up and restore critical ZCS data by using Replication
Manager with VNX SnapView snapshots
Identify the durations required to back up (replicate) and recover the ZCS data
Determine snapshot space requirements in order to provide sizing guidelines

We successfully created a Replication Manager replica of the critical ZCS data. We then ran Soapgen to simulate a heavy enterprise user workload and evaluated the VNX SnapView snapshot space that was used for the replica.

The backup and restore times were very short and the operations were very efficient:
Replication Time: 2 minutes and 26 seconds
Restore Time: 2 minutes and 9 seconds

Table 15 shows the LUN size and actual data size of each ZCS application LUN that
was used in the replication and restore processes.

Table 15. ZCS replication data used for snapshot testing

ZCS application LUN LUN size Data size


/opt/zimbra 40 GB 3 GB

/opt/zimbra/db 50 GB 26 GB

/opt/zimbra/index 500 GB 160 GB

/opt/zimbra/store 3.5 TB 2.7 TB

/opt/zimbra/redolog 90 GB 22 GB

Total 4,180 GB 2,911 GB

Snapshot space calculations

Figure 23 shows VNX reserved LUN pool usage statistics. During testing, we observed that a typical heavy enterprise workload generates around 61 GB of data in the reserved LUN pool. This translates to around 12 MB per user (61 GB/5,000 users). Carefully consider the data change rate and how long the replicas need to be kept when determining the size of the reserved LUN pool.
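
As a rough sizing sketch based on the per-user change rate observed in this testing (about 12 MB per user over a two-hour heavy workload), the reserved LUN pool requirement can be estimated as users x change per user x replicas retained. The figures and variable names below are illustrative assumptions, not validated guidance:

# Rough reserved LUN pool (RLP) sizing estimate
USERS=5000
CHANGE_MB_PER_USER=12      # observed here for a two-hour heavy workload; measure your own rate
REPLICAS_RETAINED=2        # assumption: number of snapshot replicas kept at one time
RLP_GB=$(( USERS * CHANGE_MB_PER_USER * REPLICAS_RETAINED / 1024 ))
echo "Estimated RLP space: ${RLP_GB} GB (plus snapshot metadata overhead)"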

Figure 23. VNX SnapView reserved LUN pool usage following Soapgen test

To calculate the total snapshot space from the percentage data, we sum the space usage of all LUNs participating in the replication. For this solution, the total snapshot space was 61 GB in the SnapView reserved LUN pool:

Total snapshot space used =
(20 GB * 56%) + (20 GB * 38%) + (40 GB * 50%) + (40 GB * 53%) + (20 GB * 4%) =
11.2 GB + 7.6 GB + 20 GB + 21.2 GB + 0.8 GB ≈ 61 GB

Note that 61 GB also includes snapshot metadata that is usually 5% to 10% of the
total source LUN space. In the solution as validated, metadata was about 7%
(32 GB).

Another way to calculate the used snapshot space is to look at the writes to the reserved LUN pool in the SnapView session properties. For this solution, around 463,932 writes were produced during a two-hour heavy enterprise user load, which consumed 29 GB of space.

Figure 24. Writes before and after replication, in SnapView session properties

Total snapshot space used =
(Writes to RLP after Soapgen test - Writes to RLP before Soapgen test) * 64 KB SnapView chunk size + metadata =
(464,077 - 145) * 64 KB = 463,932 * 64 KB = 29 GB + metadata

The remaining 32 GB (7%) was used for snapshot metadata. The amount of metadata
is a percentage of the total source LUN size (4,180 GB in this case), which is allocated
for map entries.

Total snapshot space used = 29 GB + 32 GB = 61 GB

Test 4 results: Benefits of using EMC VNX FAST Cache with ZCS data
The goal of this test was to evaluate whether enabling EMC VNX FAST Cache for the
storage pool containing the Zimbra store volume (/opt/zimbra/store) would improve
the performance and user experience.

Because we observed low disk utilization during previous tests with a Soapgen user workload of 200 messages sent/received per user per day, there was no point in enabling FAST Cache with the same workload.

We doubled the Soapgen user workload from 200 messages to 400 messages sent/received per user per day and ran it on one mailbox server for two hours without enabling FAST Cache. We did not change either the CPU or memory configuration on the mailbox server virtual machine. After running this extreme workload (double the heavy user workload) for two hours, we observed that Zimbra mailbox server CPU utilization jumped to 85% and send mail latencies rose above the 1,000 ms target. This outcome was expected.

We then created a 200 GB FAST Cache on the VNX5700 array (made from two 200 GB SSD drives in RAID 1/0) and enabled it on the LUN configured for the ZCS store volume (/opt/zimbra/store). We then ran the same extreme workload for two hours. After a very short warm-up time, FAST Cache began to absorb most of the extra load. Mailbox server average CPU utilization fell to 51% and the average send mail latency fell below the 1,000 ms target. We ran this test several times and confirmed the repeatability of these results.

Thus, enabling FAST Cache on the Zimbra store volume permitted the VNX array to
handle twice the original workload (400 messages sent/received/user/day) without
reducing performance, degrading the user experience, or requiring additional server
CPU or memory resources on the ZCS mailbox server virtual machine. At the end of
the test only 2% of the 200 GB FAST Cache was used.

Figure 25 shows the effect of FAST Cache on the Zimbra mailbox server when running
an extreme user workload.

Figure 25. Effect of FAST Cache on Zimbra mailbox server with extreme user workload

Test 5 results: Performance of NL-SAS disks with ZCS data
The purpose of this test was to determine whether the use of 7.2k rpm NL-SAS disks
on the VNX5700 array for ZCS data would satisfy the key performance criteria for this
solution (see Table 11).

As of this paper's publication, the current Zimbra best practices published on the ZCS wiki discourage the use of SATA disks. This guideline, however, is based on older types of SATA drives and does not consider the advantages of NL-SAS (near-line serial-attached SCSI) disks on EMC VNX5700 storage. NL-SAS drives offer performance and capacity similar to SATA drives but use a SAS interface for I/O.

For this test, we reconfigured the message store for one of the ZCS mailbox servers and migrated the message store data from a SAS disk pool to an NL-SAS storage pool. The new NL-SAS storage pool had eight 2 TB 7.2k rpm NL-SAS disks in a RAID 1/0 (4+4) configuration. This configuration provided 7,323 GB of usable capacity and sufficient performance for future expansion.

We generated a load equivalent to 5,000 concurrent heavy users on each of the two mailbox servers simultaneously for two hours and monitored the performance. The performance on both servers was almost identical. Both servers successfully met all key performance criteria. NL-SAS disks on the VNX5700 storage system demonstrated excellent performance with minimal disk utilization and low latencies.

Based on our observations from this test, we can now advise customers to consider
using NL-SAS drives for ZCS on EMC VNX series storage provided the user workload
profile is similar to the one validated for this solution.

Figure 26 shows the results of these tests.

Figure 26. Performance of NL-SAS disks with ZCS data

Conclusion
The combination of ZCS with EMC VNX series storage provides an optimal collaboration infrastructure. Storage, compute, and network layers maintain high availability, while EMC's building-block sizing approach achieves predictable performance and a repeatable storage design.

Easy scaling with EMC's building-block approach to sizing
EMC's building-block approach to sizing accelerates your deployment of ZCS. Once deployed, the performance, management, and protection advantages of running ZCS on EMC VNX series storage are self-evident.
 Based on an EMC sizing building block of 5,000 users, your ZCS environment can be scaled in multiples of 5,000 seats. Two building blocks supporting a total of 10,000 heavy users were successfully validated.
 The results of testing demonstrated that the building block we designed provided solid performance and a significant amount of headroom for additional load.
 Test results showed excellent VNX storage performance with balanced distribution of the user load across ZCS volumes. Results showed very low disk utilization with excellent throughput.
 VNX storage easily handled ZCS application I/O and produced 16,254 KB/s throughput with 556 IOPS across all disks supporting the 5,000-user ZCS mailbox server building block.

Benefits of EMC VNX series FAST Cache
EMC VNX series FAST Cache accelerates performance to address unanticipated workload spikes.
 Enabling FAST Cache on the Zimbra store volume permitted the VNX array to handle twice the original workload (400 messages sent/received/user/day) without reducing performance, degrading the user experience, or requiring additional server CPU or memory resources on the ZCS mailbox server virtual machine. By the end of the test run, only 2% of the 200 GB FAST Cache was used.

NL-SAS disk performance with ZCS
NL-SAS disks on VNX5700 storage provide excellent performance with minimal disk utilization and low latencies.
 The performance was almost identical to 10k SAS disks; all key performance criteria were successfully met.
 Based on our observations from testing, we can now advise customers to consider using NL-SAS drives for ZCS on EMC VNX series storage, provided the user workload profile is similar to the one validated for this solution.

VMware HA clustering
VMware HA provides uniform high availability across the entire virtualized IT environment without the cost and complexity of failover solutions tied to either operating systems or specific applications.

Advanced protection (backup and recovery)
EMC Replication Manager and EMC VNX SnapView provide instant snapshot-based backup and recovery for ZCS data. Replication Manager automates the creation of snapshots of ZCS mailbox server volumes using SnapView technology. We determined snapshot space requirements in order to provide efficient sizing guidelines.
 Compared with the native ZCS backup process, the use of Replication Manager with SnapView significantly reduced the time it took to back up all ZCS mailbox servers.
 The backup and restore times were very short and the operations were very efficient:
   - The replication time was 2 minutes and 26 seconds.
   - The restore time was 2 minutes and 9 seconds.

References
White papers
For additional information, see the white papers listed below:
 Introduction to the EMC VNX Series
 Performance Tuning Guidelines for Large Deployments
 Zimbra Mail Server Performance on vSphere 5.0

Product documentation
For additional information, see the product documents listed below:
 Zimbra Collaboration Server Datasheet
 Zimbra Collaboration Server Documentation
 Zimbra Network Edition Multi-Server Installation Guide
 EMC SnapView
 EMC Replication Manager
 VMware vSphere
 EMC VSI Plug-in for VMware vCenter (EMC Community Network)

Appendix A: Example ZCS mailbox server storage space calculation
Proper storage sizing is essential for excellent ZCS application performance. You
must accurately identify the appropriate types, numbers, and sizes of disks, RAID
protection levels, and configuration of the ZCS store volume to satisfy application I/O
requirements.

The following storage space calculation example is based on a 500 MB mailbox size for each of 5,000 users.

(User data) + (MySQL data) + (ZCS binaries) + (ZCS logs) + (ZCS indexes) = Total space

User data: 5,000 users x 500 MB = 2,500 GB
MySQL data: 5% of 2,500 GB (user data) = 125 GB
ZCS binaries: 10 GB
ZCS logs: 20 GB
ZCS indexes: 25% of 2,500 GB (user data) = 625 GB

Total space without backups: 2,500 + 125 + 10 + 20 + 625 = 3,280 GB

In this solution we used storage snapshots. The space calculations for snapshots are
described in Snapshot space calculations.

If ZCS native backups are used, allocate an additional 160% of the space required.

Backups: 160% of subtotal: 3,280 GB * 160% = 5,248 GB

Total space with backups: 3,280 GB + 5,248 GB = 8,528 GB
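
The same arithmetic can be expressed as a short shell sketch. The values mirror the example above, the variable names are illustrative, and the 160 percent backup allowance applies only if ZCS native backups are used:

# Example ZCS mailbox server storage space calculation (per 5,000-user building block)
USERS=5000
MAILBOX_MB=500
USER_DATA_GB=$(( USERS * MAILBOX_MB / 1000 ))    # 2,500 GB of user data
MYSQL_GB=$(( USER_DATA_GB * 5 / 100 ))           # 125 GB (5% of user data)
BINARIES_GB=10
LOGS_GB=20
INDEX_GB=$(( USER_DATA_GB * 25 / 100 ))          # 625 GB (25% of user data)
SUBTOTAL_GB=$(( USER_DATA_GB + MYSQL_GB + BINARIES_GB + LOGS_GB + INDEX_GB ))   # 3,280 GB
BACKUP_GB=$(( SUBTOTAL_GB * 160 / 100 ))         # 5,248 GB, only if native zmbackup is used
echo "Without native backups: ${SUBTOTAL_GB} GB"
echo "With native backups:    $(( SUBTOTAL_GB + BACKUP_GB )) GB"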

Appendix B: Zimbra deployment worksheet
Accurate information about your environment is essential for proper sizing of your
ZCS servers and storage. Zimbra Professional Services has its own deployment
worksheet to assist in sizing your ZCS servers and storage. For some cells, the
worksheet supplies values recommended by VMware based on related input.
Figure 27 shows an example of the sizing tab in a Zimbra deployment worksheet.

Note: The deployment worksheet is used only by Zimbra Professional Services.

Figure 27. Example of sizing tab in a ZCS deployment worksheet
