OpsCenter Analytics extends the reporting time window and adds the ability to create reports for trending, chargeback, and other needs. There is also a custom SQL report generator for those Admins who need that functionality. Ease of use, ease of creating reports, and the ability to send those reports to stakeholders automatically via email on a regular basis are all part of the OpsCenter (OC) and OpsCenter Analytics (OCA) product offering.
Backup Reporting – Top 10 Reports
In assembling a "Top 10" list of reports for OpsCenter and OpsCenter Analytics, we have taken the perspective of a typical NetBackup Admin who has received requirements from various business units in addition to day-to-day operational needs. We start with simple success rate/failure rate reporting, then move to infrastructure reporting, and finally look at the longer-term strategy of trending and forecasting. One of the key differences between OC and OCA is the ability to report back further than 60 days. This "Top 10" may not be "your" Top 10, but these reports should provide a good understanding of the capabilities of the OpsCenter and OpsCenter Analytics products. If you are interested in trying OCA, a 60-day evaluation key can be obtained from the Symantec Sales Team, and if you are a current Veritas Backup Reporter (VBR) customer, the upgrade to OpsCenter Analytics is free.
With the power of OpsCenter and OpsCenter Analytics, these reports are all customizable to your environment, and they are just a small portion of the canned reports available to run. All of the reports here use OpsCenter 7.0.1, which will be released for GA in August 2010; a number of formatting changes were made to the reports between 7.0 and 7.0.1.
Knowing the success rate, having a goal such as 97% successful, and being able to provide a daily or weekly report to management can be very valuable. However, it is important for each company to determine how "success rate" will be calculated and what constitutes a failure. If a machine is turned off at night and its backup fails, will that be counted as a failure? If there are open files that cannot be backed up, will that be counted? Each customer needs to settle these questions before the reports can be configured to be meaningful and provide correct data.
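As an illustration of those policy decisions, here is a minimal sketch (not OpsCenter code; the job records, the excusable-status set, and the `success_rate` helper are all hypothetical) of how a site might exclude powered-off clients and decide how skipped open files count:

```python
# Sketch of a site-specific success-rate calculation (hypothetical job
# records; OpsCenter computes this internally from NetBackup job data).
# Which statuses are "excusable" is a policy decision, e.g. status 58
# (can't connect to client -- machine powered off overnight) or
# status 1 (partially successful -- open files were skipped).

EXCUSABLE = {58}    # don't count powered-off clients at all
PARTIAL_OK = {1}    # count "skipped open files" as success? policy call

def success_rate(jobs, count_partial_as_success=True):
    """jobs: list of (client, exit_status) tuples."""
    counted = [s for _, s in jobs if s not in EXCUSABLE]
    if not counted:
        return 100.0
    ok = {0} | (PARTIAL_OK if count_partial_as_success else set())
    succeeded = sum(1 for s in counted if s in ok)
    return 100.0 * succeeded / len(counted)

jobs = [("db01", 0), ("web01", 0), ("web02", 1), ("lab05", 58), ("fs01", 6)]
print(success_rate(jobs))                                  # 4 jobs counted, 3 OK
print(success_rate(jobs, count_partial_as_success=False))  # 4 jobs counted, 2 OK
```

The same data yields two different success rates depending on the policy, which is exactly why the policy must be agreed on before the report is configured.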
OpsCenter and OpsCenter Analytics have several out-of-the-box, one-click reports that measure success rate over any time interval. Depending on the needs of the environment, one of these canned reports may be adequate as-is, or it may be customized to provide the needed insight into the success rate.
In the screen shot example below, the following report was used:
o Reports > Report Templates > Backup > Status and Success Rate > Success Rate > Advanced
Success Rate
o It was then filtered using Edit Report to choose a one week timeframe and a single Master.
Figure 1 - Success Rate Report
In this small environment, there is a 100% success rate. All backups completed successfully, and the report shows a rollup of the amount of data that was backed up. Creating this report weekly and emailing it to concerned parties is a good way to convey a great deal of information about the past week's backups.
In the screen shot example below, the following report was used:
o Report Templates > Backup > Status and Success Rate > Status > Skipped Files Summary
There are a number of other canned reports related to the success and failure of the
backups over the time period specified in each report.
In the example above, clicking on the hyperlink to the far right will show which files were skipped. In many cases a
single file would not be an issue, but seeing 20 skipped files would indicate that there may be an issue with open
files on the system and that critical files are being missed.
Based on the filtering ability of OpsCenter and OpsCenter Analytics, reports can be created for specific exit statuses and, better yet, to exclude exit statuses such as Status 150, where the operator has canceled the backup job. Another example would be a report that flags too many Status 96 errors, indicating that the library is getting full and it is time to swap out tapes.
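As a sketch of this kind of filtering (the job list is hypothetical; statuses 150 and 96 are real NetBackup exit codes), one might do:

```python
# Sketch of exit-status filtering (hypothetical status list). Status 150
# is "termination requested by administrator" (operator-cancelled);
# status 96 is "unable to allocate new media for backup" -- the library
# or scratch pool is running dry.

from collections import Counter

statuses = [0, 0, 96, 150, 96, 0, 96, 1, 150]

# Exclude operator-cancelled jobs from the failure report.
reportable = [s for s in statuses if s != 150]

# Alert when status 96 errors pass a threshold, i.e. time to swap tapes.
counts = Counter(reportable)
if counts[96] >= 3:
    print(f"WARNING: {counts[96]} media-allocation (96) errors -- check the library")
```

The threshold of three is arbitrary here; each site would pick a number that reflects its own tape rotation schedule.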
All in all, understanding why things have failed is critical to a well-run data protection solution, and reporting can be used to gather information about failures so that corrective action can be taken.
The Job Count within Backup Window report can show if jobs are finishing within a specified backup window.
Figure 3 - Backup Window Report
These reports are complemented with a graphical rendering of the backup window for a compelling visual presentation and analysis. One can quickly determine whether all backup activity is occurring within the defined window.
When spillovers outside of windows occur, there is also reporting to zoom in on why this is happening. Longer
running jobs, sub-optimal scheduling, data growth, and client growth are typical contributors to the out-of-window
condition. Window activity should be examined collectively in the context of number of jobs, number of clients,
and amount of data being backed up across each hour. Furthermore, similar to the drive utilization and throughput
reports, looking at window performance across broad timelines using intelligent averaging is also necessary. Missing a window once or twice doesn't necessarily point to broader systemic problems, so the averaging context needs to be examined alongside the actual daily context.
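The in-window check behind such a report can be sketched as follows (hypothetical job end times and a hypothetical 06:00 window close; the real report works from collected NetBackup job data):

```python
# Sketch of an in-window check: flag jobs that finish after the backup
# window closes. Window close time and job records are hypothetical.
from datetime import datetime, time

WINDOW_END = time(6, 0)   # jobs must finish by 06:00

jobs = [
    ("fs01",  datetime(2010, 8, 2, 4, 30)),   # finished 04:30 -- in window
    ("db01",  datetime(2010, 8, 2, 7, 15)),   # finished 07:15 -- spillover
    ("web01", datetime(2010, 8, 2, 5, 55)),   # finished 05:55 -- in window
]

spillovers = [(c, t) for c, t in jobs if t.time() > WINDOW_END]
rate = 100.0 * (len(jobs) - len(spillovers)) / len(jobs)
print(f"{rate:.1f}% of jobs finished inside the window; spillovers: "
      f"{[c for c, _ in spillovers]}")
```

Averaging this per-day rate over several weeks, as the text suggests, separates a one-off late job from a systemic scheduling or growth problem.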
The Drive Throughput report shown below can be found by using the following steps:
o Reports > Report Templates > Disk and Tape Device Activity > Drive Throughput
The output is a heat-map showing how fast data is moving through the drives in the environment. Since these Masters are all in test labs, they are not pushing production data, but the report is still a good indicator of how fast backups are running. When an LTO4 drive that can run at 120+ MB/sec shows only 5-6 MB/sec in this report, the drives are not even close to their maximum capability. Adding drives in an environment such as this would not improve the backup window. We see a lot of money being spent on new tape drives by customers who have trouble pushing the tape drives they already have; for example, if you cannot push an LTO2 drive, then upgrading to an LTO4 will not benefit you (unless you want the hardware encryption option, of course).
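The comparison described above can be sketched as follows (observed figures are hypothetical; the rated speeds are approximate native, uncompressed numbers for each drive generation):

```python
# Sketch: compare observed drive throughput against rated native speed
# to spot drives that are being starved by the feed, not by the drive.
# Approximate native (uncompressed) rates: LTO-2 ~40, LTO-3 ~80,
# LTO-4 ~120 MB/s. Drive names and measurements are hypothetical.

RATED_MB_S = {"LTO2": 40, "LTO3": 80, "LTO4": 120}

observed = [("drive_a", "LTO4", 5.5),
            ("drive_b", "LTO4", 95.0),
            ("drive_c", "LTO2", 38.0)]

for name, dtype, mb_s in observed:
    pct = 100.0 * mb_s / RATED_MB_S[dtype]
    flag = "  <-- investigate feed speed, not the drive" if pct < 25 else ""
    print(f"{name}: {mb_s:5.1f} MB/s ({pct:4.1f}% of {dtype} rated){flag}")
```

A drive at 95% of rated speed is a candidate for an upgrade; a drive at 5% points at the clients, network, or disk feeding it.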
In addition to the Drive Throughput report, the Drive Utilization report can quickly show where there are gaps in
the drive utilization that can be filled with backups. This is a very small environment; however, it shows that while many tape drives are used, they are only being used at 1% capacity, which leaves a lot of room for growth. In a production setting there would obviously be more throughput, but in this example it is obvious at a glance that
some drives are idle most of the time. Emailing this report to the Admins once per week would be a good way to
keep track of capacity and when it is getting close to the time to add new tape drives and/or media servers. These
reports also provide for smart filtering, which shows shared utilization in addition to physical utilization. The
reports can be aggregated and/or filtered by logical/physical drive, drive type, media server, and library. Thus the
drive utilization reports from an hourly and day-of-week type template can provide more than a dozen different
types of analyses of the tape drive infrastructure. And for the architect that must know the drive utilization metric,
a precise metric is produced.
To create the Drive Utilization report shown below, use the following steps:
o Reports > Report Templates > Disk and Tape Device Activity > Drive Utilization
The other reports in this area are very useful as well for making certain there is disk capacity for backups and for a
quick overview of performance if using the SAN Client. All in all, this section of canned reports is very useful whether using OpsCenter or OpsCenter Analytics. OCA allows you to go back further in time to see the
trending on things like the drive utilization and how fast the backup disk is filling up.
OpsCenter and OpsCenter Analytics provide the ability to report on the various forms of deduplication available
with NetBackup. It can provide reports that show things such as how much deduplication is being done at a remote
site before the data hits the WAN pipe. Using the Trending feature of OCA can also help to understand if the data
is growing faster than the hardware or when the WAN bandwidth can be expected to max out. This allows the
company to make educated decisions on when to add infrastructure proactively instead of waiting for things to
fail.
In the data center, deduplication reporting helps determine the actual deduplication rate occurring at each PureDisk storage unit. Reporting can be abstracted to view the protected size or stored size across all PureDisk environments, or drilled down into one specific area. Common file or data types can then be identified and matched with their ability to be deduplicated.
The report below is a simple dedupe report created using a Media Server Deduplication Pool. It is a very basic
report and shows how much data would have been backed up without deduplication enabled. In large
environments this would show the number of GB or TB that is NOT being consumed, thanks to deduplication. Other reports can contrast the dedupe savings against the actual deduplicated data stored; however, in this very small test environment this report presents the information more clearly.
Knowing how much storage space is being saved with deduplication is a good way to justify the cost of the dedupe
solution. In most cases, the cost of the software and support is much less than the amount of disk that would be
needed without the dedupe solution.
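The arithmetic behind that cost justification is simple; the sketch below uses hypothetical figures for a Media Server Deduplication Pool:

```python
# Sketch of the dedupe arithmetic behind such a report (hypothetical
# figures): "protected" is what would have been written to disk without
# deduplication, "stored" is the unique data actually stored.

protected_gb = 4800.0   # logical data backed up
stored_gb = 600.0       # unique data actually stored

ratio = protected_gb / stored_gb          # dedupe ratio, e.g. 8:1
saved_gb = protected_gb - stored_gb       # disk NOT consumed
savings_pct = 100.0 * saved_gb / protected_gb

print(f"dedupe ratio {ratio:.1f}:1, {saved_gb:.0f} GB saved ({savings_pct:.1f}%)")
```

Multiplying the saved capacity by the cost per GB of the disk that was never purchased gives the number that justifies the dedupe licensing.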
In addition, the information available in "Show Chart as Table" will need to be used to get a rollup of the total amount of data backed up; used correctly, it is very easy to determine how much data is being backed up daily, weekly, or monthly and to show it graphically in MB, GB, or TB.
Again, filters will be necessary in large environments. The environment reported on in this example is a fairly small
environment with minimal daily backups.
This report provides visibility into the amount of data being backed up on a per server basis. There are several
versions of this report—which should be in every technology manager’s portfolio—including a running total of
amount of data backed up by days for the last two weeks. This helps verify whether there are significant swings in
the amount of data crossing the wire. For environments in which there are well-defined schedules where
incremental backups occur during the week followed by full backups on weekends, a 2–3 week timeline will verify
the spikes on the weekends from the full backups. By zooming in on the amount of data being backed up on the
weekends, significant shifts in overall data volume can be observed. Another type of size-based report considers
data on a monthly basis and then restricts the totals to full backups only. This report provides significant insights
on whether data growth in general is being observed across the computing environment. An additional derivative
of this report is looking at size in a business context; for example, by geography, data center, business unit, and
application. The primary audiences for this report are the CIO, operations manager, and data protection
infrastructure architects. Just as we expect CEOs to be very familiar with numbers such as annual revenue, %
growth and margins, this is one of the metrics that CIOs and CTOs need to know.
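The growth metric such size reports surface can be sketched with hypothetical weekly full-backup totals:

```python
# Sketch of the data-growth metric (hypothetical weekly full-backup
# totals in TB). Week-over-week swings flag schedule or scope changes;
# the overall figure is the CIO-level growth number.

weekly_fulls_tb = [18.2, 18.9, 19.4, 20.1]

growth = [100.0 * (b - a) / a
          for a, b in zip(weekly_fulls_tb, weekly_fulls_tb[1:])]
overall = 100.0 * (weekly_fulls_tb[-1] / weekly_fulls_tb[0] - 1)

print([f"{g:+.1f}%" for g in growth], f"overall {overall:+.1f}%")
```

The same calculation restricted to one geography, data center, or business unit gives the business-context views described above.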
An excellent use case is the need to segregate the data relevant to the audience. For example, a DBA will expect to
only see the servers on which their databases reside and not have to sift through a report that includes servers
that are of no interest. While providing for a weekly summary, this report is quite versatile because you can drill
down to any server on any day and get the job level detail. The grouping requirement is more critical for service
providers because they need to ensure that each customer sees only their data. This is a perfect report for using
the automated report generation and email features where different report filters can be used to show specific
systems and then that data emailed to the relevant parties once a week. In large environments, care should be taken to apply filters: by default the report includes every Master and every Client, which can take a while to run.
Figure 8 - Week At A Glance Report
This report has been filtered to a single master server with twelve Clients. It shows that there were a number of failures, while also making clear at a glance that most of the backups completed successfully. If this were sent to a DBA, for example, with a single system or a handful of database systems selected, the DBA would be able to see whether backups had run each day of the week and the status of each backup. Again, this report can be automated and emailed weekly, which saves a great deal of time.
OpsCenter and OpsCenter Analytics provide three canned reports that look at Risk Analysis.
Client Risk Analysis – Takes a look at all of the Clients configured and determines if the client has had a full
successful backup in the time frame specified. If not, it will be called out as a Risk so that action can be taken to get
a successful full backup.
Client Coverage – This is a report where a .CSV file can be imported and compared to the existing backup solution
to determine if there are systems that are not being protected correctly.
Recovery Point Objective – Shows how long it has been since each system has had a successful backup and
therefore would have difficulty meeting a Recovery Point.
This report shows the hosts that have gone too long without a backup and therefore have a very high Recovery Point Objective. In larger environments, running this report on a weekly basis will quickly show where there is risk from systems that have not been backed up within a pre-determined amount of time. In the example below, no backups have been run for nearly 45 hours because the Master is not running in the test lab. This means that for the five systems on the report, the best RPO possible would be nearly 45 hours. If you have committed to a 48 hour RPO, then these systems need to be backed up ASAP to meet that SLA.
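The underlying check is straightforward; here is a sketch with hypothetical last-successful-backup times and the 48 hour RPO mentioned above:

```python
# Sketch of the RPO check (hypothetical hosts and last-backup times).
# A host whose last successful backup is older than the committed RPO
# is flagged as at risk.
from datetime import datetime, timedelta

RPO = timedelta(hours=48)
now = datetime(2010, 8, 6, 12, 0)

last_good = {
    "fs01": datetime(2010, 8, 5, 23, 0),   # 13 hours ago -- fine
    "db01": datetime(2010, 8, 4, 10, 0),   # 50 hours ago -- breach
}

at_risk = [host for host, t in last_good.items() if now - t > RPO]
print("RPO breach:", at_risk)
```

Run weekly and emailed automatically, a list like this becomes the work queue for getting at-risk systems backed up before the SLA is missed.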
Of the several forecasting reports, the "backup size" forecast is one that many customers have found very useful. It requires that OpsCenter Analytics has been purchased and has collected enough historical data for the forecast numbers to be meaningful. One of the nice things about OpsCenter Analytics is that the data is gathered regardless of whether Analytics is enabled with the key: data is collected even when using base OpsCenter, so when OCA is turned on with the purchase of a key, the data can be used immediately rather than having to collect additional information.
The report shows data from the last six months for two small Masters, with a one-week resolution for backups. It shows how much data has been backed up each day and the progression of data growth. Note that the environment used for this screen shot is fairly limited, which is why it appears the environment will soon be out of capacity.
An example of how this report can be beneficial: if backed-up data grows 30% over the next year, can the backup servers and media servers accommodate the additional load and still meet backup windows? Will tape drive performance become a bottleneck, requiring new drives to be procured? These are the kinds of questions the backup size forecast reports can shed light on. Beyond backup size,
other key forecast variables include number of clients, number of virtual machines, and file count. All of these
collectively provide important information on components that can impact performance and breach SLAs. Going
beyond the physical infrastructure components, the use of business-level views around this infrastructure provides even more powerful perspectives. Insights into whether data will grow faster in Europe or North
America, whether the finance business unit will require more tape inventory, or whether the Hong Kong data
center will double the amount of virtual machines, will become available as OCA is customized for the specific
environment.
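OpsCenter Analytics' forecasting model is internal to the product, but the general idea can be sketched with a simple least-squares trend over hypothetical monthly totals:

```python
# Sketch of a linear-trend forecast over historical backup sizes
# (hypothetical monthly totals in TB; OCA uses its own forecasting
# model -- this only illustrates the principle).

history_tb = [10.0, 10.8, 11.5, 12.4, 13.1, 13.9]   # last six months
n = len(history_tb)

# Least-squares fit y = a + b*x over x = 0..n-1.
mean_x = (n - 1) / 2
mean_y = sum(history_tb) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(history_tb))
sxx = sum((x - mean_x) ** 2 for x in range(n))
b = sxy / sxx                 # TB of growth per month
a = mean_y - b * mean_x

# Project the next three months.
forecast = [a + b * x for x in range(n, n + 3)]
print([f"{t:.1f} TB" for t in forecast])
```

Comparing the projected line against known capacity is what turns the trend into a purchase date.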
OpsCenter Analytics has a number of reports geared towards media trending to better help in the planning
process. For example, a report can be created that shows historical supply and demand of the tape media or the
amount of disk needed for backups. Since you can report on the last six months and see how fast the tapes, disk or
VTL is filling up, you should be better able to forecast when additional resources will be needed. This lets most customers plan purchases proactively instead of reacting when the disk is full and the purchase becomes an emergency, likely more expensive and not in the budget.
The screen shot below shows a forecast of capacity needed for the upcoming backups and the report was
generated by using the following steps:
o Reports > Report Templates > Backups > Planning Activity > Capacity Planning > Forecasted Size
The report shows that the amount of supply available exceeds the demand by a great deal. As these lines get
closer together, it would be time to add additional resources. The report can be filtered per Master and a number
of other filters can also be applied depending on the requirements in the environment. These reports are from a
very small test environment. In a production environment these trend lines would probably be closer together, with fewer peaks and valleys.
Figure 11 - Media Forecasting, Trending and Analysis Report
The screen shot below shows a bonus report covering virtual machine protection; it was generated by using the following steps:
o Reports > Report Templates > Client Reports > Virtual Client Summary
Figure 12- BONUS REPORT – Virtual Machine Protection
This report shows an example of an ESX Server with a number of VMs configured. The top four do not exist in any NetBackup policy and would need to be added to be protected. It also shows at a glance the last time each system was backed up. If you are running virtual machines, OpsCenter and OpsCenter Analytics are a "must have" reporting tool!
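The gap this report highlights is essentially a set difference; here is a sketch with hypothetical VM and policy-client names:

```python
# Sketch of the unprotected-VM check (hypothetical names): VMs known to
# the ESX server but absent from every NetBackup policy are unprotected.

esx_vms = {"vm-web1", "vm-web2", "vm-db1", "vm-test1"}
policy_clients = {"vm-web1", "vm-db1", "fileserver01"}

unprotected = sorted(esx_vms - policy_clients)
print("Not in any policy:", unprotected)
```

In the actual report the same comparison is done for you, with the last backup time shown alongside each VM.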
Summary
As data grows, backup operations are becoming increasingly complex; however, customers are benefiting from
great gains in operational productivity by leveraging backup reporting and service-level management disciplines. It
is very difficult to improve on a process that isn’t measurable, so the first step is defining, monitoring, and
analyzing data on an ongoing basis. Symantec’s OpsCenter and OpsCenter Analytics provide an excellent means for
doing so. The reporting tool provides many different types of canned reports that address many of the reporting
needs of the data protection environment. Add the ability to filter reports, automate report delivery, and create custom SQL queries, among other options, and it becomes an extremely powerful tool. To make it even more
attractive, OpsCenter is included at no charge with NetBackup 7 and OpsCenter Analytics can be purchased
through the sales channel. A 60 day evaluation key is available to see the benefits of OCA and current VBR
customers are upgraded for free.
About Symantec
information is available at www.symantec.com.

For specific country offices and contact numbers, please visit our Web site. For product information in the U.S., call toll-free 1 (800) 745 6054.

Symantec Corporation World Headquarters
20330 Stevens Creek Boulevard
Cupertino, CA 95014 USA
+1 (408) 517 8000
1 (800) 721 3934
www.symantec.com

This document is provided for informational purposes only and is intended for distribution only by Symantec employees to selected partners and customers. All warranties relating to the information in this document, either express or implied, are disclaimed to the maximum extent allowed by law. The information in this document is subject to change without notice. Copyright © 2010 Symantec Corporation. All rights reserved. Symantec, the Symantec logo and NetBackup are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.