HP CONVERGED STORAGE:
The explosion in the amount and types of data, coupled with new demands from the virtualization of servers and clients, has made storage increasingly inflexible and complicated to manage. These factors stand in the way of the kind of adaptability, agility and integrated management that the efficient enterprise requires. If organizations are to continue toward the goal of delivering ITaaS, they need to break down these barriers and lay the groundwork for a next-generation architecture.
TRADITIONAL STORAGE
The typical storage architecture was designed 20 years ago, when workloads were predictable and data was structured. But today companies are dealing with an unprecedented amount of information, including unstructured data such as audio and video, which requires massive capacities. Storage systems must accommodate many different types of workloads with different performance requirements. Add to the mix increasingly demanding applications, distributed data center environments, legacy business processes that must be supported and nonstandard infrastructure inherited through acquisitions, and you get a gerrymandered architecture comprising many discrete storage resources
that must be managed individually. Such an architecture is disruptive to scale, expensive to own and operate, and increasingly difficult and labor-intensive to manage. ITaaS requires a pool of storage that's flexible and fungible. The IT staff must be able to quickly configure storage for a particular need and then just as quickly reconfigure it so it can be used again elsewhere. The storage must be malleable so that capacity can be quickly expanded, data and applications can be easily and securely migrated, and workloads can be automatically rebalanced. Applications need to be online 24/7/365, so high availability is paramount. Finally, management of the entire storage pool, as well as coordination with virtualized servers and networking, should be streamlined and simplified.
MULTITENANCY: the ability to securely host many different applications in a single pool of storage, delivering the appropriate level of resources and performance for each application
the ability to distribute storage resources and move data among those resources without disrupting user access to that data
Deduplication makes replication to a remote disaster recovery site a practical alternative to using tape-based backups. Once a data set is established at the backup site, only changes to the data need to be replicated over the WAN. Similarly, data restores are much faster, since they're handled over the WAN and don't involve finding and transporting tapes. And the entire process can be automated and managed centrally via a single pane of glass. All of this dramatically increases the reliability of data backups. Automated backup and disaster recovery also means there's no need for operator intervention at remote sites, since data center staff can handle all tasks. Tape handling can also be eliminated, thus freeing up staffers' time at remote sites. Most backup applications can track where data is stored following a replication, making restores faster and easier. And by reducing data volumes, deduplication enables more data to remain on hand locally in near-term storage for even faster file restores. All of this once again results in lower costs by requiring less bandwidth for backups. Additionally, deduplication reduces restore complexity, because all data can be restored from the same backup device, regardless of the platform it came from.
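To make the "only changes cross the WAN" idea concrete, here is a minimal Python sketch of deduplicated replication. It uses assumed names (remote_index, send_chunk) and a fixed chunk size for illustration; it is not HP StoreOnce code, only a simplified view of the general technique.

import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # illustrative 4 MB chunk size

def chunk_fingerprints(path):
    # Yield (fingerprint, chunk) pairs for a local file.
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            yield hashlib.sha256(chunk).hexdigest(), chunk

def replicate(path, remote_index, send_chunk):
    # remote_index: set of fingerprints already stored at the DR site.
    # send_chunk:   callable that ships one (fingerprint, chunk) pair over the WAN.
    sent = skipped = 0
    for digest, chunk in chunk_fingerprints(path):
        if digest in remote_index:
            skipped += 1                   # duplicate: the DR site already has this chunk
        else:
            send_chunk(digest, chunk)      # only new or changed chunks cross the WAN
            remote_index.add(digest)
            sent += 1
    return sent, skipped

In this sketch, the first replication seeds remote_index with the full data set; every run after that transfers only chunks whose fingerprints are new, which is why WAN bandwidth requirements drop so sharply once the initial copy is in place.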
Control licensing costs by redeploying deduplication agents on application or backup servers, at no cost to existing customers with Advanced Backup to Disk licenses
Dedupe 2.0 products also help companies deal with increasing amounts of big data. The more data that's backed up, the faster restores need to be, and Dedupe 2.0 products can deliver restore speeds that are just as fast as backup speeds. Dedupe 2.0 products also deliver high availability, which is increasingly important in helping companies back up more data in the same or shortened backup windows. Under such circumstances, companies can't afford to have a backup process fail at 3 a.m. and require a restart. Some Dedupe 2.0 systems can now be configured to have a standby backup storage system kick in if the primary system fails, all without operator intervention. That means there's no single point of failure in the backup process, a crucial consideration for massive storage systems that have to back up hundreds or thousands of servers on a routine basis.
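The failover behavior described above can be pictured with a short Python sketch. The names here (primary, standby, write) are hypothetical and do not represent any specific vendor API; the point is simply that the backup job retries against the standby target with no operator intervention.

class BackupTargetUnavailable(Exception):
    # Raised when a backup appliance cannot accept the backup stream.
    pass

def run_backup(dataset, primary, standby):
    # Try each target in order; the first one that accepts the stream completes the job.
    for target in (primary, standby):
        try:
            target.write(dataset)      # stream the backup to this appliance
            return target.name         # report which target completed the job
        except BackupTargetUnavailable:
            continue                   # fail over automatically to the next target
    raise RuntimeError("backup failed: no backup target available")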
Source: Complete Storage and Data Protection Architecture for VMware vSphere, HP, 2011. http://h20195.www2.hp.com/v2/getdocument.aspx?docname=4AA3-5141ENW.pdf
Source: Top 10 Reasons Why You Should Choose HP StoreOnce solution brief, HP, 2010. http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA3-2347ENW.pdf
Suggested Reading
These additional resources include business white papers and previously published articles from IDG Enterprise.
4AA3-9132ENW